Updates from: 02/03/2024 02:12:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).
+## January 2024
+
+### Updated articles
+
+- [Tutorial: Configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication](partner-nok-nok.md) - Updated Nok Nok instructions
+- [Configure Transmit Security with Azure Active Directory B2C for passwordless authentication](partner-bindid.md) - Updated Transmit Security instructions
+- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md) - Updated claim resolvers and user journey
+## December 2023

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates
- [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates
-## October 2023
-
-### Updated articles
-
-- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md) - Editorial updates
-- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml) - Editorial updates
-- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md) - Added breaking change on script tags
ai-services Concept Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-face-detection.md
[!INCLUDE [Gate notice](./includes/identity-gate-notice.md)]
+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data. Don't use these attributes for anti-spoofing. Instead, use Face Liveness detection. For more information, see [Tutorial: Detect liveness in faces](/azure/ai-services/computer-vision/tutorials/liveness).
+ This article explains the concepts of face detection and face attribute data. Face detection is the process of locating human faces in an image and optionally returning different kinds of face-related data. You use the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API to detect faces in an image. To get started using the REST API or a client SDK, follow a [quickstart](./quickstarts-sdk/identity-client-library.md). Or, for a more in-depth guide, see [Call the detect API](./how-to/identity-detect-faces.md).
Attributes are a set of features that can optionally be detected by the [Face -
>[!NOTE]
> The availability of each attribute depends on the detection model specified. The QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
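As a hedged illustration of that model pairing, the following request sketch asks for the qualityForRecognition attribute using detection_03 and recognition_04. The resource name, key, and image URL are placeholders.

```bash
curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/face/v1.0/detect?detectionModel=detection_03&recognitionModel=recognition_04&returnFaceAttributes=qualityForRecognition" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/photo.jpg"}'
```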
-> [!IMPORTANT]
-> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data.
-
## Input data

Use the following tips to make sure that your input images give the most accurate detection results:
ai-services Use Headpose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-headpose.md
In this guide, you'll see how you can use the HeadPose attribute of a detected face to enable some key scenarios.
+> [!IMPORTANT]
+> Face attributes are predicted through the use of statistical algorithms. They might not always be accurate. Use caution when you make decisions based on attribute data. Don't use these attributes for anti-spoofing. Instead, use Face Liveness detection. For more information, see [Tutorial: Detect liveness in faces](/azure/ai-services/computer-vision/tutorials/liveness).
+
## Rotate the face rectangle

The face rectangle, returned with every detected face, marks the location and size of the face in the image. By default, the rectangle is always aligned with the image (its sides are vertical and horizontal); this can be inefficient for framing angled faces. In situations where you want to programmatically crop faces in an image, it's better to rotate the rectangle to match the face's angle before cropping.
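As a generic sketch (not a specific SDK call), one way to do this is to rotate each rectangle corner $(x, y)$ by the HeadPose roll angle $\theta$ about the rectangle's center $(c_x, c_y)$:

$$
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x - c_x \\ y - c_y \end{pmatrix} +
\begin{pmatrix} c_x \\ c_y \end{pmatrix}
$$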
ai-services Gpt 4 V Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/gpt-4-v-prompt-engineering.md
+
+ Title: Introduction to prompt engineering with GPT-4 Turbo with Vision
+
+description: Learn how to better engineer prompts for GPT-4 Turbo with Vision.
+
+Last updated : 01/30/2024
+
+# Introduction to prompt engineering with GPT-4 Turbo with Vision
+
+To unlock the full potential of GPT-4 Turbo with Vision, it's essential to tailor the system prompt to your specific needs. Here are some guidelines to enhance the accuracy and efficiency of your prompts.
+
+## Fundamentals of crafting your prompt
+- **Contextual specificity:** Adding context to the scenario at hand gives the model a better understanding of an appropriate output. This level of specificity aids in focusing on relevant aspects and avoiding extraneous details.
+- **Task-oriented prompts:** Focusing on a specific task helps the model to develop the output while taking that perspective into consideration.
+- **Handling refusals:** When the model indicates an inability to perform a task, refining the prompt can be an effective solution. More specific prompts can guide the model towards a clearer understanding and better execution of the task. Some tips to keep in mind:
+
+ - Request explanations for generated responses to enhance transparency in the model's output
+ - If using a single-image prompt, place the image before the text
+ - Ask the model to describe the image in detail first, and then complete your specific task based on the description
+
+- **Adding examples:** Add examples that represent the type of responses you're looking for.
+- **Try prompt tuning:** Try prompt tuning techniques such as Chain of Thought, where you describe a sequence of steps leading to an outcome so that the model has a thought process to replicate.
+- **Break down requests:** Try breaking down complex requests step-by-step to create manageable sub-goals.
+- **Define output format:** Clearly mention the desired format for the output, such as markdown, JSON, HTML, etc. You can also suggest a specific structure, length, or specific attributes about the response (see the request sketch after this list).
+
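The following is a minimal sketch of how these fundamentals translate into an API call: a chat completions request that sets a contextual, task-oriented system prompt and places the image before the text in a single-image user message. The resource name, deployment ID, key, image URL, and api-version value are placeholders or assumptions, not fixed values.

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<deployment-id>/chat/completions?api-version=2023-12-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{
    "messages": [
      {"role": "system", "content": "Describe images for an outdoor hiking product catalog, focusing on enthusiasm and professionalism."},
      {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/tent.jpg"}},
        {"type": "text", "text": "Describe this image."}
      ]}
    ],
    "max_tokens": 300
  }'
```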
+## Example prompt inputs & outputs
+There are many ways to craft system prompts to tailor the output specifically to your needs. The following sample inputs and outputs showcase how adjusting your prompts can give you different results. Try the model out for yourself using these images and adjusting the system prompt in the [Azure AI Studio playground](https://ai.azure.com/).
+
+### Contextual specificity
+Context can help improve feedback from the model. For example, if you're working on image descriptions for a product catalog, ensure your prompt reflects this in a clear and concise way. A prompt like "Describe images for an outdoor hiking product catalog, focusing on enthusiasm and professionalism" guides the model to generate responses that are both accurate and contextually rich.
+
+|Basic prompt|Model output|
+|--|--|
| :::image type="content" source="../media/concepts/gpt-v/tent-chairs.jpg" lightbox="../media/concepts/gpt-v/tent-chairs.jpg" alt-text="Photograph of a camping tent and chairs in a forest."::: What is in this image? | The image shows a camping setup in a forest area. There is a yellow tent, two folding camping chairs, one red and one blue, and a portable camping table. The surrounding area is densely populated with trees, indicating that the campsite is in a wooded region. |
+
+|Updated prompt|Model output|
+|--|--|
+| :::image type="content" source="../media/concepts/gpt-v/tent-chairs.jpg" lightbox="../media/concepts/gpt-v/tent-chairs.jpg" alt-text="Photograph of a camping tent and chairs in a forest."::: Describe images for an outdoor hiking product catalog, focusing on enthusiasm and professionalism. | "Embark on your next outdoor adventure with our premium hiking and camping essentials! Featured here is our vibrant, spacious tent, designed to be your home away from home amidst the serenity of nature. Crafted with top-notch materials, it promises durability and comfort in any weather. Complementing the tent are our ultra-comfortable, lightweight folding chairs, available in eye-catching red and serene blue, perfect for those moments of relaxation after a day of exploration. Don't forget our sturdy, portable table, ideal for a delightful outdoor meal or a game of cards under the stars. Elevate your camping experience with our reliable products, where quality meets the great outdoors!" |
++
+### Task-oriented prompts
+Giving the model a specific task to follow is one way of sharpening the response. In this example, the prompt steers the model to concentrate on elements crucial for insurance assessments, thereby improving accuracy and relevancy.
+
+|Basic prompt|Model output|
+|--|--|
+| :::image type="content" source="../media/concepts/gpt-v/car-crash.jpg" lightbox="../media/concepts/gpt-v/car-crash.jpg" alt-text="Photograph of a car crashed into a pole."::: What is in this image? | The image shows a silver sedan that has been in an accident. The front end of the car is damaged, with the hood crumpled, and the bumper partially detached. The car is resting against a utility pole on the sidewalk, and there is visible smoke coming from the engine area, indicating a recent crash. The setting appears to be a residential street, as indicated by the presence of houses in the background, a neatly trimmed hedge, and a sidewalk. |
+
+|Updated prompt|Model output|
+|--|--|
| :::image type="content" source="../media/concepts/gpt-v/car-crash.jpg" lightbox="../media/concepts/gpt-v/car-crash.jpg" alt-text="Photograph of a car crashed into a pole."::: Analyze this car damage photo for an auto insurance report, focusing on identifying and detailing damage | In the image provided, we observe a silver sedan that has sustained significant front-end damage. The vehicle's hood is notably crumpled, and the front bumper has suffered substantial deformation, partially detaching from the chassis. The right headlamp assembly appears to be damaged or dislodged. There is also visible damage to the front grille area. The car is in contact with a utility pole, suggesting a frontal collision with the pole at a relatively low or moderate speed. Smoke is emanating from the engine compartment, suggesting potential mechanical or electrical damage that may need further inspection for a comprehensive assessment. The airbags do not appear to be deployed in this image. The vehicle will require a detailed inspection to evaluate the full extent of structural and mechanical damage for a complete insurance report. |
+
+### Handling refusals
+When you receive a refusal output from a model, there are several ways you can tweak the input to generate a more appropriate response. Below is one example of how to handle a refusal and move forward.
+
+|Basic prompt|Model output|
+|--|--|
+| :::image type="content" source="../media/concepts/gpt-v/dinner-spread.jpg" lightbox="../media/concepts/gpt-v/dinner-spread.jpg" alt-text="Photograph of a dinner table with several plates of food."::: Plan this meal | Sorry, I can't provide that information. |
+
+|Follow up prompt|Model output|
+|--|--|
+| What information do you need? | I apologize for the confusion earlier. To plan a meal based on the image provided, I would need to know the following: <br> The number of people dining to ensure there's enough food for everyone.<br>Any dietary restrictions or preferences to accommodate (e.g., vegetarian, gluten-free, allergies).<br>The occasion or type of meal being planned (e.g., casual lunch, dinner party).<br>Desired balance of the meal in terms of food groups and nutrition. <br>With this information, I could help suggest how to portion the dishes, create a balanced meal, and ensure the meal meets the needs of all diners.|
+
+### Adding examples
+
+|Basic prompt|Model output|
+|--|--|
| :::image type="content" source="../media/concepts/gpt-v/border-collie.jpg" lightbox="../media/concepts/gpt-v/border-collie.jpg" alt-text="Photograph of a border collie dog laying in a grass field."::: What is this dog breed and what is its average weight and height? | The dog in the image appears to be a Border Collie. Border Collies are known for their intelligence, agility, and herding abilities. The average weight for a Border Collie is typically around 30-45 pounds (14-20 kilograms). As for height, Border Collies generally stand about 18-22 inches (46-56 centimeters) tall at the shoulder. These figures can vary based on the individual dog's genetics, diet, and level of activity. Border Collies are medium-sized dogs with a well-proportioned body and a high level of energy. |
+
+|Updated prompt|Model output|
+|--|--|
+| Q: What is this dog breed and what is its average weight & height? :::image type="content" source="../media/concepts/gpt-v/pomeranian.jpg" lightbox="../media/concepts/gpt-v/pomeranian.jpg" alt-text="Photograph of a pomeranian dog sitting on a pool diving board above the water."::: <br> A: Breed: pomeranian; average weight: 3-7 lbs; average height: 8-14 inches <br> :::image type="content" source="../media/concepts/gpt-v/bulldog.jpg" lightbox="../media/concepts/gpt-v/bulldog.jpg" alt-text="Photograph of a bulldog laying in a grass field."::: A: Breed: English bulldog; average weight: 50-55 lbs; average height: 14-16 inches :::image type="content" source="../media/concepts/gpt-v/border-collie.jpg" lightbox="../media/concepts/gpt-v/border-collie.jpg" alt-text="Photograph of a border collie dog laying in a grass field."::: | A: Breed: Border Collie; average weight: 30-45 lbs; average height: 18-22 inches |
+
+### Try prompt tuning
+Using the chain of thought prompt technique below, you can guide the model to follow a thought process to achieve the correct answer.
+
+|Basic prompt|Model output|
+|--|--|
+| Q: Alex has 8 cookies. She buys 2 more boxes of 4 cookies each. How many cookies does she have? <br> A: 16 <br> Q: Alex has 20 balloons for a party, but 3 of them pop. She buys 10 more balloons. How many balloons does she have?| A: 27 |
+
+|Updated prompt|Model output|
+|--|--|
+| Q: Alex has 8 cookies. She buys 2 more boxes of 4 cookies each. How many cookies does she have? <br> A: Alex started with 8 cookies and bought two more boxes of 4 each. 2 x 4 = 8, and 8 + 8 = 16. <br> Q: Alex has 20 balloons for a party, but 3 of them pop. She buys 10 more balloons. How many balloons does she have? | A: Alex started with 20 balloons, but 3 popped, leaving her with 17 balloons. She then bought 10 more, so she has 17 + 10 = 27 balloons. |
+
+### Break down requests
+
+|Basic prompt|Model output|
+|--|--|
+| :::image type="content" source="../media/concepts/gpt-v/grandmas-recipe.jpg" lightbox="../media/concepts/gpt-v/grandmas-recipe.jpg" alt-text="Photograph of a recipe card detailing a recipe for Grandma's Apple Crumb Pie."::: What's in this image? | The image contains a recipe card titled Grandma's Apple Crumb Pie. It is noted as a Thanksgiving tradition. The ingredients listed are:<br><ul><li>5-7 apples, peeled </li> <li>1 9-inch pie crust</li> <li> 3/4 cups sugar </li> <li>1/2 teaspoon cinnamon </li> <li>A dash of salt </li><li> 3 tablespoons butter</li> </ul><br>Please note that the measurements for sugar, flour, and butter are not fully visible in the image. If needed, the standard measures for a crumb topping typically involve a combination of sugar, flour, and butter mixed together until crumbly|
+
+|Updated prompt|Model output|
+|-|--|
+| :::image type="content" source="../media/concepts/gpt-v/grandmas-recipe.jpg" lightbox="../media/concepts/gpt-v/grandmas-recipe.jpg" alt-text="Photograph of a recipe card detailing a recipe for Grandma's Apple Crumb Pie."::: You are an AI assistant that helps create shopping lists and instructions to prep food based on recipes. Please provide a list of ingredients to purchase, and then provide a list of instructions to prep the ingredients based on the recipe card. | Sure, here's a shopping list based on the recipe card: <br>Shopping List:</br><ol><li>5-7 apples (suitable for baking, such as Granny Smith or Honeycrisp)</li> <li>1 ready-made pie crust (9-inch) </li><li>Sugar (ensure you have enough for 1/2 cup)</li><li>All-purpose flour (ensure you have enough for 3/4 cup)</li><li>Ground cinnamon</li><li>Salt</li><li>Butter (ensure you have enough for 3 tablespoons)</li></ol><br>Prepping Instructions:</br> <ul><li>Peel the apples: Using a vegetable peeler or a paring knife, remove the skin from 5 to 7 apples.</li><li>Core and slice the apples: Cut the apples into quarters, remove the cores, and then slice the apple quarters into thin slices or chunks, depending on your preference for pie texture.</li><li>Measure the ingredients: Measure out 3/4 cups of sugar, 1/2 teaspoon of cinnamon, a dash of salt (about 1/8 teaspoon), and 3 tablespoons of butter.</li><li>Prepare the butter: Cut the butter into small pieces to be ready to use in the recipe. It's often easier to incorporate if it's cold, so keep it refrigerated until it's needed</li><li>Preheat your oven according to the pie recipe's instructions so it's ready for baking once your pie is assembled.</li><li>Remember to follow any additional preparation or baking instructions provided in the full recipe for your apple crumb pie.</li></ul> |
+
+### Define output format
+
+|Basic prompt|Model output|
+|-|--|
+|:::image type="content" source="../media/concepts/gpt-v/rental-listing.jpg" lightbox="../media/concepts/gpt-v/rental-listing.jpg" alt-text="Photograph of a white villa in a tropical region with a pool."::: Describe this image | The image shows a luxurious white villa with a traditional thatched roof on one section and a clear blue swimming pool in the foreground. There are sun loungers lined up along the poolside, inviting relaxation. The architecture includes elegant archways, a balcony with wooden railings, and large windows and doors that suggest a bright and airy interior. The villa is surrounded by lush tropical vegetation, including tall palm trees, which adds to the serene and exotic ambiance of the setting. |
+
+|Updated prompt|Model output|
+|-|--|
|:::image type="content" source="../media/concepts/gpt-v/rental-listing.jpg" lightbox="../media/concepts/gpt-v/rental-listing.jpg" alt-text="Photograph of a white villa in a tropical region with a pool."::: You are an AI assistant that writes detailed listings for vacation rental properties. Summarize in three sentences the optimal features of the listing, based on only what is visible in the image. Conclude with one sentence on why the customer should book today. | This idyllic villa showcases an inviting private pool with steps for easy access, framed by a beautifully tiled sun deck and comfortable loungers ideal for soaking up the sun or enjoying a good book under the shade of tropical palms. Architectural charm is evident in the villa's design, featuring elegant archways, a thatched-roof gazebo, and balconies that promise serene views and a perfect spot to savor your morning or evening beverages. The lush greenery surrounding the property not only provides privacy but also enhances the sense of an exclusive paradise retreat. Don't miss the opportunity to secure this perfect blend of luxury and tranquility – book today for a tropical escape like no other! |
++
+These guidelines and examples demonstrate how tailored system prompts can significantly enhance the performance of GPT-4 Turbo with Vision, ensuring that the responses are not only accurate but also perfectly suited to the specific context of the task at hand.
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
| gpt-4 (0314) | | East US <br> France Central <br> South Central US <br> UK South |
| gpt-4 (0613) | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | East US <br> East US 2 <br> Japan East <br> UK South |
| gpt-4 (1106-preview) | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | |
-| gpt-4 (vision-preview) | Sweden Central <br> West US| Switzerland North <br> Australia East |
+| gpt-4 (vision-preview) | Sweden Central <br> West US <br> Japan East| Switzerland North <br> Australia East |
### GPT-3.5 models
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
The **object grounding** integration brings a new layer to data analysis and use
> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S0) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.

> [!CAUTION]
-> Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges.
+> Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
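As a rough sketch of how an enhancements request is shaped, the following assumes the 2023-12-01-preview extensions endpoint; every name, key, and URL is a placeholder, and the exact field names should be verified against the current reference.

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<deployment-id>/extensions/chat/completions?api-version=2023-12-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{
    "enhancements": {"grounding": {"enabled": true}},
    "dataSources": [{
      "type": "AzureComputerVision",
      "parameters": {"endpoint": "<computer-vision-endpoint>", "key": "<computer-vision-key>"}
    }],
    "messages": [{"role": "user", "content": [
      {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
      {"type": "text", "text": "Describe the objects in this image."}
    ]}],
    "max_tokens": 300
  }'
```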
#### [REST](#tab/rest)
The chat responses you receive from the model should now include enhanced inform
"choices": [ {
- "finish_reason":"stop",
+ "finish_details": {
+ "type": "stop",
+ "stop": "<|fim_suffix|>"
+ },
"index": 0, "message": {
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/overview.md
Azure OpenAI Service provides REST API access to OpenAI's powerful language mode
| | |
| Models available | **GPT-4 series (including GPT-4 Turbo with Vision)** <br>**GPT-3.5-Turbo series**<br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
| Fine-tuning (preview) | `GPT-3.5-Turbo` (0613) <br> `babbage-002` <br> `davinci-002`.|
-| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
+| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) <br> For details on GPT-4 Turbo with Vision, see the [special pricing information](../openai/concepts/gpt-with-vision.md#special-pricing-information).|
| Virtual network support & private link support | Yes, unless using [Azure OpenAI on your data](./concepts/use-your-data.md). |
| Managed Identity| Yes, via Microsoft Entra ID |
| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine-tuning |
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
-- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
-- `2023-09-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
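As a quick, hedged illustration of how the `api-version` query parameter selects one of these versions (the resource name, deployment ID, and key are placeholders):

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<deployment-id>/completions?api-version=2023-05-15" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{"prompt": "Say hello.", "max_tokens": 16}'
```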
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)
-- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-03-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)
-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` (required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)

**Request body**
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |

**Supported versions**

-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+

#### Example request

You can make requests using [Azure AI Search](./concepts/use-your-data.md?tabs=ai-search#ingesting-your-data), [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db#ingesting-your-data), [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning), [Pinecone](https://www.pinecone.io/), and [Elasticsearch](https://www.elastic.co/).
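A hedged sketch of such a request against an Azure AI Search index follows; all names and keys are placeholders, and the `dataSources` schema varies by api-version, so verify it against the matching Swagger spec.

```bash
curl "https://<your-resource-name>.openai.azure.com/openai/deployments/<deployment-id>/extensions/chat/completions?api-version=2023-08-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{
    "dataSources": [{
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "https://<search-name>.search.windows.net",
        "key": "<search-admin-key>",
        "indexName": "<index-name>"
      }
    }],
    "messages": [{"role": "user", "content": "What does my data say about renewable energy?"}]
  }'
```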
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub
**Supported versions**

-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
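For orientation, a minimal sketch of a submit call is shown below (all names and keys are placeholders). The response includes an operation ID that you poll with the GET operation described next.

```bash
curl -X POST "https://<your-resource-name>.openai.azure.com/openai/images/generations:submit?api-version=2023-12-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: <your-api-key>" \
  -d '{"prompt": "A lighthouse at sunrise, photographic style", "n": 1, "size": "1024x1024"}'
```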
**Request body**
GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{oper
**Supported versions**

-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{o
**Supported versions**

-- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
-- `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
-- `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-09-01-preview`
+- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions**

-- `2023-09-01-preview`
+- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
**Request body**
ai-services Whisper Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whisper-quickstart.md
echo export AZURE_OPENAI_ENDPOINT="REPLACE_WITH_YOUR_ENDPOINT_HERE" >> /etc/envi
::: zone-end

+++

## Clean up resources

If you want to clean up and remove an OpenAI resource, you can delete the resource. Before deleting the resource, you must first delete any deployed models.
ai-services Get Started Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-to-text.md
Title: "Speech to text quickstart - Speech service"
-description: In this quickstart, learn how to convert speech to text with recognition from a microphone or .wav file.
+description: In this quickstart, learn how to use the Speech service to convert speech to text with recognition from a microphone or .wav file.
Previously updated : 08/24/2023
Last updated : 01/30/2024
ms.devlang: cpp
# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services
keywords: speech to text, speech to text software
+#customer intent: As a developer, I want to use Speech services to convert speech to text, from either a microphone or a file, using my choice of technology.
# Quickstart: Recognize and convert speech to text
keywords: speech to text, speech to text software
[!INCLUDE [CLI include](includes/quickstarts/speech-to-text-basics/cli.md)]
::: zone-end
-## Next steps
+## Next step
> [!div class="nextstepaction"]
> [Learn more about speech recognition](how-to-recognize-speech.md)
ai-services Get Started Stt Diarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md
Title: "Real-time diarization quickstart - Speech service"
-description: In this quickstart, you convert speech to text continuously from a file. The service transcribes the speech and identifies one or more speakers.
+description: In this quickstart, you convert speech to text continuously from a file. The Speech service transcribes the speech and identifies one or more speakers.
Previously updated : 7/27/2023
Last updated : 01/30/2024
zone_pivot_groups: programming-languages-speech-services
keywords: speech to text, speech to text software
+#customer intent: As a developer, I want to create speech to text applications that use diarization to improve readability of multiple person conversations.
-# Quickstart: Real-time diarization (Preview)
+# Quickstart: Create real-time diarization (Preview)
::: zone pivot="programming-language-csharp"
[!INCLUDE [C# include](includes/quickstarts/stt-diarization/csharp.md)]
keywords: speech to text, speech to text software
[!INCLUDE [CLI include](includes/quickstarts/stt-diarization/cli.md)]
::: zone-end
-
-## Next steps
+## Next step
> [!div class="nextstepaction"]
> [Learn more about speech recognition](how-to-recognize-speech.md)
ai-services Get Started Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-text-to-speech.md
Title: "Text to speech quickstart - Speech service"
-description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio formats, and custom configuration options.
+description: In this quickstart, you create an app that converts text to speech. Learn about supported audio formats and custom configuration options.
Previously updated : 08/25/2023
Last updated : 01/29/2024
ms.devlang: cpp
# ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
zone_pivot_groups: programming-languages-speech-services
keywords: text to speech
+#customer intent: As a user, I want to create speech output from text by using my choice of technologies that fit into my current processes.
# Quickstart: Convert text to speech
keywords: text to speech
[!INCLUDE [CLI include](includes/quickstarts/text-to-speech-basics/cli.md)]
::: zone-end
-## Next steps
+## Next step
> [!div class="nextstepaction"]
> [Learn more about speech synthesis](how-to-speech-synthesis.md)
ai-studio Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md
Create a custom connection that stores all your LLM API KEY or other required cr
1. Go to Prompt flow in your workspace, then go to the **connections** tab.
2. Select **Create** and select **Custom**.
-1. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
-
+3. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**.
+4. In addition to your key-value pairs, add the following metadata to the connection:
+ - azureml.flow.connection_type: Custom
+ - azureml.flow.module: promptflow.connections
+
+ :::image type="content" source="./media/python-tool/custom-connection-meta.png" alt-text="Screenshot that shows adding extra metadata to a custom connection in AI Studio." lightbox = "./media/python-tool/custom-connection-meta.png":::
+
+
> [!NOTE]
> - You can mark a key-value pair as secret by selecting **is secret**; secret values are encrypted when stored.
> - Make sure at least one key-value pair is set as secret; otherwise, the connection won't be created successfully.
ai-studio Serp Api Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/serp-api-tool.md
Create a Serp connection:
1. Sign in to [Azure AI Studio](https://studio.azureml.net/).
1. Go to **Settings** > **Connections**.
1. Select **+ New connection**.
-1. Complete all steps in the **Create a new connection** dialog box.
+1. Create a custom connection with the following details:
+ - azureml.flow.connection_type: Serp
+ - azureml.flow.module: promptflow.connections
+ - api_key: Your_Serp_API_key (mark it as a secret)
+
+ :::image type="content" source="./media/serp-api-tool/serp-connection-meta.png" alt-text="Screenshot that shows adding extra metadata to a custom connection in AI Studio." lightbox = "./media/serp-api-tool/serp-connection-meta.png":::
The connection is the model used to establish a connection with the Serp API. Get your API key from the SerpAPI account dashboard.
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Previously updated : 11/06/2023
Last updated : 02/02/2024
#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
Migration is only supported between `loadBalancer`, `managedNATGateway` (if usin
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-### Install the `aks-preview` Azure CLI extension
-
-`aks-preview` version 0.5.113 is required.
-
-* Install and update the `aks-preview` extension.
-
-```azurecli
-# Install aks-preview extension
-az extension add --name aks-preview
-# Update aks-preview extension
-az extension update --name aks-preview
-```
-
-### Register the `AKS-OutBoundTypeMigrationPreview` feature flag
-
-1. Register the `AKS-OutBoundTypeMigrationPreview` feature flag using the [`az feature register`][az-feature-register] command. It takes a few minutes for the status to show *Registered*.
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
-```
-
-2. Verify the registration status using the [`az feature show`][az-feature-show] command.
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
-```
-
-3. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ### Update cluster to use a new outbound type
+> [!NOTE]
+> You must use Azure CLI version 2.56 or later to migrate the outbound type. Use `az upgrade` to update to the latest version of the Azure CLI.
+
* Update the outbound configuration of your cluster using the [`az aks update`][az-aks-update] command.

### Update cluster from loadbalancer to managedNATGateway
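As a minimal sketch under the assumption of placeholder resource names, the migration itself is a single update call; depending on the target outbound type, additional flags (for example, NAT gateway IP counts) might be required.

```azurecli
az aks update \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --outbound-type managedNATGateway
```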
app-service Deploy Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md
To get started:
* **azureSubscription**: Name of the authorized service connection to your Azure subscription. * **appName**: Name of your existing app.
- * **package**: Fike path to the package or a folder containing your app service contents. Wildcards are supported.
+ * **package**: File path to the package or a folder containing your app service contents. Wildcards are supported.
# [Classic](#tab/classic/)
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate your App Service Environment to App Service Environment v3
-description: Learn how to migrate your App Service Environment to App Service Environment v3 using the migration feature
+description: Learn how to migrate your App Service Environment to App Service Environment v3 by using the migration feature.
Last updated 1/16/2024
zone_pivot_groups: app-service-cli-portal
+
# Use the migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3
-An App Service Environment v1 and v2 can be automatically migrated to an [App Service Environment v3](overview.md) using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [Migration to App Service Environment v3 Overview](migrate.md).
+You can automatically migrate App Service Environment v1 and v2 to [App Service Environment v3](overview.md) by using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the migration feature](migrate.md).
> [!IMPORTANT]
-> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
->
+> We recommend that you use this feature for development environments before migrating any production environments, to avoid unexpected problems. Please provide any feedback related to this article or the feature by using the buttons at the bottom of the page.
## Prerequisites
-Ensure you understand how migrating to an App Service Environment v3 affects your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-migration-feature) to understand the process timeline and where and when you need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which can answer some questions you currently have.
+Ensure that you understand how migrating to App Service Environment v3 affects your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-migration-feature) to understand the process timeline and where and when you need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which can answer some of your questions.
-Ensure there are no locks on your virtual network, resource group, resource, or subscription. Locks block platform operations during migration.
+Ensure that there are no locks on your virtual network, resource group, resource, or subscription. Locks block platform operations during migration.
-Ensure there are no Azure Policies that block actions required for the migration including subnet modifications and App Service resource creations. Policies that block resource modifications and creations can cause migration to get stuck or fail.
+Ensure that no Azure policies are blocking actions that are required for the migration, including subnet modifications and Azure App Service resource creations. Policies that block resource modifications and creations can cause migration to get stuck or fail.
::: zone pivot="experience-azcli"
-The recommended experience for the migration feature is using the [Azure portal](how-to-migrate.md?pivots=experience-azp). If you decide to use the Azure CLI to carry out the migration, you should follow the steps described here in order and as written since you're making Azure REST API calls. The recommended way for making these API calls is by using the [Azure CLI](/cli/azure/). For information about other methods, see [Getting Started with Azure REST](/rest/api/azure/).
+We recommend that you use the [Azure portal](how-to-migrate.md?pivots=experience-azp) for the migration experience. If you decide to use the [Azure CLI](/cli/azure/) for the migration, follow the steps described here in order and as written, because you're making Azure REST API calls. We recommend that you use the Azure CLI to make these API calls. For information about other methods, see [Azure REST API reference](/rest/api/azure/).
-For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use the [Azure Cloud Shell](https://shell.azure.com/).
+For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use [Azure Cloud Shell](https://shell.azure.com/).
## 1. Get your App Service Environment ID
-Run these commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for name and resource groups with your values for the App Service Environment you want to migrate. "ASE_RG" and "VNET_RG" are the same if your virtual network and App Service Environment are in the same resource group.
+Run the following commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for the name and resource groups with your values for the App Service Environment that you want to migrate. `ASE_RG` and `VNET_RG` are the same if your virtual network and App Service Environment are in the same resource group.
```azurecli
ASE_NAME=<Your-App-Service-Environment-name>
ASE_RG=<Your-ASE-Resource-Group>
VNET_RG=<Your-VNet-Resource-Group>
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)
```
-## 2. Validate migration is supported
+## 2. Validate that migration is supported
-The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the potential error messages you can get. If your environment [isn't supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
+The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
```azurecli
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
```
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=vali
If there are no errors, your migration is supported, and you can continue to the next step.
-## 3. Generate IP addresses for your new App Service Environment v3
+## 3. Generate IP addresses for your new App Service Environment v3 resource
-Run the following command to create the new IPs. This step takes about 15 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
+Run the following command to create new IP addresses. This step takes about 15 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
```azurecli
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=premigration"
```
-Run the following command to check the status of this step.
+Run the following command to check the status of this step:
```azurecli
az rest --method get --uri "${ASE_ID}?api-version=2021-02-01" --query properties.status
```
-If it's in progress, you get a status of "Migrating". Once you get a status of "Ready", run the following command to view your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+If the step is in progress, you get a status of `Migrating`. After you get a status of `Ready`, run the following command to view your new IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
```azurecli
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01"
```
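If you want to pull out just the inbound or outbound addresses, you can add a JMESPath query. The property names in this sketch (`externalInboundIpAddresses`, `windowsOutboundIpAddresses`) are assumptions about what the networking configuration exposes; verify them against the full response from the preceding command:

```azurecli
# Property names are assumptions; confirm them against the full response
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01" --query properties.externalInboundIpAddresses
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01" --query properties.windowsOutboundIpAddresses
```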
## 4. Update dependent resources with new IPs
-Using the new IPs, update any of your resources or networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80. Don't migrate until you complete this step.
+By using the new IPs, update any of your resources or networking components to ensure that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
+
+This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3. These changes include the port change for Azure Load Balancer, which now uses port 80. Don't migrate until you complete this step.
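For example, if a network security group controls inbound traffic to your App Service Environment subnet, you might need a rule like the following sketch to account for the load balancer's move to port 80. The rule name, priority, and network security group name are placeholders; adjust them to your environment:

```azurecli
# Placeholder names and priority; adapt to your network security group
az network nsg rule create --resource-group $VNET_RG --nsg-name <nsg-name> \
  --name Allow-LoadBalancer-Port80 --priority 200 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes AzureLoadBalancer --destination-port-ranges 80
```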
## 5. Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You need to confirm your subnet is delegated properly and update the delegation if needed before migrating. You can update the delegation either by running the following command or by navigating to the subnet in the [Azure portal](https://portal.azure.com).
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You need to confirm that your subnet is delegated properly and update the delegation (if necessary) before migrating. You can update the delegation either by running the following command or by going to the subnet in the [Azure portal](https://portal.azure.com).
```azurecli
az network vnet subnet update --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
```
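To confirm the delegation afterward, you can query the subnet; a single delegation to `Microsoft.Web/hostingEnvironments` should appear in the output:

```azurecli
# Verify that the subnet has the required delegation
az network vnet subnet show --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --query delegations
```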
## 6. Confirm there are no locks on the virtual network
-Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. The locks can be readded if needed once migration is complete. Locks can exist at three different scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription, resource group, or resource scope, they need to be removed before the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. If necessary, you can add back the locks after migration is complete.
-Use the following command to check if your virtual network has any locks.
+Locks can exist at three scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription, resource group, or resource scope, you need to remove them before the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+
+Use the following command to check if your virtual network has any locks:
```azurecli
az lock list --resource-group $VNET_RG --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
```
-Delete any existing locks using the following command.
+Delete any existing locks by using the following command:
```azurecli
az lock delete --resource-group $VNET_RG --name <lock-name> --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
```
-For related commands to check if your subscription or resource group has locks, see [Azure CLI reference for locks](../../azure-resource-manager/management/lock-resources.md#azure-cli).
+For related commands to check if your subscription or resource group has locks, see the [Azure CLI reference for locks](../../azure-resource-manager/management/lock-resources.md#azure-cli).
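As a quick sketch, the same command without a resource argument lists locks at the broader scopes; omit `--resource-group` entirely to check the subscription scope:

```azurecli
# Locks inherited from the resource group or subscription also block migration
az lock list --resource-group $VNET_RG
az lock list
```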
## 7. Prepare your configurations
-You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy can be configured by setting the `zoneRedundant` property to "true". Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). If you don't want to configure zone redundancy, don't include the `zoneRedundant` parameter.
+You can make your new App Service Environment v3 resource zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). You can configure zone redundancy by setting the `zoneRedundant` property to `true`.
+
+Zone redundancy is an optional configuration. You can set it only during the creation of your new App Service Environment v3 resource. You can't remove it at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). If you don't want to configure zone redundancy, don't include the `zoneRedundant` parameter.
-If your existing App Service Environment uses a custom domain suffix, you need to [configure one for your new App Service Environment v3 during the migration process](./migrate.md#choose-your-app-service-environment-v3-configurations). Migration fails if you don't configure a custom domain suffix and are using one currently. Migration also fails if you attempt to add a custom domain suffix during migration to an environment that doesn't have one configured currently. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
+If your existing App Service Environment uses a custom domain suffix, you need to [configure one for your new App Service Environment v3 resource during the migration process](./migrate.md#choose-your-app-service-environment-v3-configurations). Migration fails if you don't configure a custom domain suffix and are using one currently. Migration also fails if you try to add a custom domain suffix during migration to an environment that doesn't have one configured. For more information on App Service Environment v3 custom domain suffixes, including requirements, step-by-step instructions, and best practices, see [Custom domain suffix for App Service Environments](./how-to-custom-domain-suffix.md).
> [!NOTE]
-> If you're configuring a custom domain suffix, when adding the network permissions on your Azure Key Vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated during the IP address generation in step 3.
->
+> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated in step 3.
If your migration doesn't include a custom domain suffix and you aren't enabling zone redundancy, you can move on to migration.
-In order to set these configurations, create a file called "parameters.json" with the following details based on your scenario. Don't include the custom domain suffix properties if this feature doesn't apply to your migration. Be sure to pay attention to the value of the `zoneRedundant` property as this configuration is irreversible after migration. Ensure the value of the `kind` property is set based on your existing App Service Environment version. Accepted values for the `kind` property are "ASEV1" and "ASEV2".
+To set these configurations, create a file called *parameters.json* with the following details based on your scenario. Don't include the properties for a custom domain suffix if this feature doesn't apply to your migration. Pay attention to the value of the `zoneRedundant` property, because this configuration is irreversible after migration. Set the value of the `kind` property based on your existing App Service Environment version. Accepted values for the `kind` property are `ASEV1` and `ASEV2`.
-If you're migrating without a custom domain suffix and are enabling zone redundancy:
+If you're migrating without a custom domain suffix and you're enabling zone redundancy, use this code:
```json
{
}
```
-If you're using a user assigned managed identity for your custom domain suffix configuration and **are enabling zone redundancy**:
+If you're using a user-assigned managed identity for your custom domain suffix configuration and you're enabling zone redundancy, use this code:
```json
{
}
```
-If you're using a system assigned managed identity for your custom domain suffix configuration and **aren't enabling zone redundancy**:
+If you're using a system-assigned managed identity for your custom domain suffix configuration and you're *not* enabling zone redundancy, use this code:
```json
{
}
```
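The bodies of the preceding sample files are elided in this changelog view. As a rough sketch of the first scenario only (zone redundancy enabled, no custom domain suffix), using the `kind` and `zoneRedundant` properties named earlier and assuming the standard ARM layout of a top-level `kind` with a `properties` object, the file might look like the following; confirm the exact shape against [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations):

```json
{
  "kind": "ASEV2",
  "properties": {
    "zoneRedundant": true
  }
}
```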
-## 8. Migrate to App Service Environment v3
+## 8. Migrate to App Service Environment v3 and check status
-Only start this step once you complete all premigration actions listed previously and understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what happens during this time. This step takes three to six hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. During that time, there's about one hour of application downtime. Scaling, deployments, and modifications to your existing App Service Environment are blocked during this step.
+After you complete all of the preceding steps, you can start the migration. Make sure that you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3).
-Only include the "body" parameter in the command if you're enabling zone redundancy and/or are configuring a custom domain suffix. If neither of those configurations apply to your migration, you can remove the parameter from the command.
+This step takes three to six hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations, depending on the environment size. During that time, there's about one hour of application downtime. Scaling, deployments, and modifications to your existing App Service Environment are blocked during this step.
+
+Include the `body` parameter in the following command if you're enabling zone redundancy and/or configuring a custom domain suffix. If neither of those configurations applies to your migration, you can remove the parameter from the command.
```azurecli
az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=fullmigration" --body @parameters.json
```
Run the following commands to check the detailed status of your migration. For information on the statuses, see the [migration status descriptions](migrate.md#migrate-to-app-service-environment-v3).
-The first command gets the operation ID for the migration. Copy the value of the "ID" property.
+The first command gets the operation ID for the migration. Copy the value of the `ID` property.
```azurecli
az rest --method get --uri "${ASE_ID}/operations?api-version=2022-03-01"
```
-Replace the placeholder for the operation ID in the following command with the value you copied in the previous step. This command returns the detailed status of your migration. You can run this command as often as needed to get the latest status.
+Replace the placeholder for the operation ID in the following command with the value that you copied. This command returns the detailed status of your migration. You can run this command as often as needed to get the latest status.
```azurecli
az rest --method get --uri "${ASE_ID}/operations/<operation-id>/details/default?api-version=2022-09-01"
```
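If you prefer to poll rather than rerun the command by hand, a small shell loop works; the five-minute interval here is an arbitrary choice:

```azurecli
# Poll the detailed migration status every 5 minutes (interval is arbitrary)
while true; do
  az rest --method get --uri "${ASE_ID}/operations/<operation-id>/details/default?api-version=2022-09-01"
  sleep 300
done
```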
-Once you get a status of "Ready", migration is done, and you have an App Service Environment v3. Your apps are now running in your new environment.
+After you get a status of `Ready`, migration is done, and you have an App Service Environment v3 resource. Your apps are now running in your new environment.
-Get the details of your new environment by running the following command or by navigating to the [Azure portal](https://portal.azure.com).
+Get the details of your new environment by running the following command or by going to the [Azure portal](https://portal.azure.com).
```azurecli
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
```
::: zone pivot="experience-azp"
-## 1. Validate migration is supported
+## 1. Validate that migration is supported
+
+In the [Azure portal](https://portal.azure.com), go to the **Migration** page for the App Service Environment that you're migrating. You can get to the **Migration** page by selecting the banner at the top of the **Overview** page for your App Service Environment, or by selecting the **Migration** item on the left menu.
-From the [Azure portal](https://portal.azure.com), navigate to the **Migration** page for the App Service Environment you're migrating. You can get to the migration page by clicking on the banner at the top of the **Overview** page for your App Service Environment or by clicking the **Migration** item on the left-hand side.
+On the **Migration** page, the platform validates if migration is supported for your App Service Environment. Select **Validate**, and then confirm that you want to proceed with the validation. The validation process takes a few seconds.
-On the migration page, the platform validates if migration is supported for your App Service Environment. Select "Validate" and then confirm that you want to proceed with the validation. The validation process takes a few seconds to complete.
+If your environment isn't supported for migration, a banner appears at the top of the page and includes an error message with a reason. For descriptions of the error messages that can appear if you aren't eligible for migration, see [Troubleshooting](migrate.md#troubleshooting).
-If your environment isn't supported for migration, a banner appears at the top of the page and includes an error message with a reason. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the error messages you can see if you aren't eligible for migration. If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you can't use the migration feature. If your environment [isn't supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
+If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you can't use the migration feature. If your environment [isn't supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
-If migration is supported for your App Service Environment, you'll be able to proceed to the next step in the process. The migration page guides you through the series of steps to complete the migration.
+If migration is supported for your App Service Environment, proceed to the next step in the process. The **Migration** page guides you through the series of steps to complete the migration.
-## 2. Generate IP addresses for your new App Service Environment v3
+## 2. Generate IP addresses for your new App Service Environment v3 resource
-Under **Get new IP addresses**, confirm you understand the implications and start the process. This step takes about 15 minutes to complete. You can't scale or make changes to your existing App Service Environment during this time.
+Under **Get new IP addresses**, confirm that you understand the implications and select the **Start** button. This step takes about 15 minutes to complete. You can't scale or make changes to your existing App Service Environment during this time.
## 3. Update dependent resources with new IPs
-When the previous step finishes, you're shown the IP addresses for your new App Service Environment v3. Using the new IPs, update any resources and networking components to ensure your new environment functions as intended once migration is complete. It's your responsibility to make any necessary updates. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80. Don't move on to the next step until you confirm that you made these updates.
+When the previous step finishes, the IP addresses for your new App Service Environment v3 resource appear. Use the new IPs to update any resources and networking components so that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
+
+This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes in moving to App Service Environment v3. These changes include the port change for Azure Load Balancer, which now uses port 80. Don't move to the next step until you confirm that you made these updates.
:::image type="content" source="./media/migration/ip-sample.png" alt-text="Screenshot that shows sample IPs generated during premigration.":::

## 4. Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You need to confirm your subnet is delegated properly and/or update the delegation if needed before migrating. A link to your subnet is given so that you can confirm and update as needed.
+App Service Environment v3 requires the subnet that it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You need to confirm that your subnet is delegated properly and update the delegation (if necessary) before migrating. The portal displays a link to your subnet so that you can confirm and update as needed.
## 5. Acknowledge instance size changes
-Your App Service plans are converted from Isolated to the corresponding Isolated v2 SKU. For example, I2 is converted to I2v2. Your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You have the opportunity to scale your environment as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
+Your App Service plans are converted from Isolated to the corresponding Isolated v2 tier. For example, I2 is converted to I2v2. Your apps might be overprovisioned after the migration, because the Isolated v2 tier has more memory and CPU per corresponding instance size. You have the opportunity to scale your environment as needed after migration is complete. For more information, review the [pricing details](https://azure.microsoft.com/pricing/details/app-service/windows/).
-## 6. Confirm there are no locks on the virtual network
+## 6. Confirm that the virtual network has no locks
-Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. The locks can be readded if needed once migration is complete. Locks can exist at three different scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription, resource group, or resource scope, they need to be removed before the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. If necessary, you can add back the locks after migration is complete.
+
+Locks can exist at three scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription, resource group, or resource scope, you need to remove them before the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
For details on how to check if your subscription or resource group has locks, see [Configure locks](../../azure-resource-manager/management/lock-resources.md#configure-locks).
## 7. Choose your configurations
-You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations). Select **Enabled** if you'd like to configure zone redundancy.
+You can make your new App Service Environment v3 resource zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy is an optional configuration. You can set it only during the creation of your new App Service Environment v3 resource. You can't remove it at a later time. For more information, see [Choose your App Service Environment v3 configurations](./migrate.md#choose-your-app-service-environment-v3-configurations).
+
+Select the **Enabled** checkbox if you want to configure zone redundancy.
-If your environment is in a region that doesn't support zone redundancy, the checkbox is disabled. If you need a zone redundant App Service Environment v3, use one of the manual migration options and create your new App Service Environment v3 in one of the regions that supports zone redundancy.
+If your environment is in a region that doesn't support zone redundancy, the checkbox is unavailable. If you need a zone-redundant App Service Environment v3 resource, use one of the manual migration options and create the resource in one of the regions that supports zone redundancy.
-If your existing App Service Environment uses a [custom domain suffix](./migrate.md#choose-your-app-service-environment-v3-configurations), you're required to configure one for your new App Service Environment v3. You're shown the custom domain suffix configuration options if this situation applies to you. You can't migrate until you provide the required information. If you'd like to use a custom domain suffix but don't currently have one configured, you can configure one once migration is complete. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md).
+If your existing App Service Environment uses a [custom domain suffix](./migrate.md#choose-your-app-service-environment-v3-configurations), you must configure one for your new App Service Environment v3 resource. The configuration options for a custom domain suffix appear if this situation applies to you. You can't migrate until you provide the required information.
+
+If you want to use a custom domain suffix but don't currently have one configured, you can configure one after migration is complete. For more information on App Service Environment v3 custom domain suffixes, including requirements, step-by-step instructions, and best practices, see [Custom domain suffix for App Service Environments](./how-to-custom-domain-suffix.md).
> [!NOTE]
-> If you're configuring a custom domain suffix, when adding the network permissions on your Azure Key Vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated during the IP address generation in step 2.
->
+> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated in step 2.
-After you add your custom domain suffix details, the "Migrate" button will be enabled.
+After you add the details for your custom domain suffix, the **Migrate** button is available.
## 8. Migrate to App Service Environment v3
-Once you complete all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what happens during this time. This step takes three to six hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. Scaling and modifications to your existing App Service Environment are blocked during this step.
+After you complete all of the preceding steps, you can start the migration. Make sure that you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3), including what happens during this time.
+
+This step takes three to six hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations, depending on the environment size. Scaling and modifications to your existing App Service Environment are blocked during this step.
> [!NOTE]
-> In rare cases, you might see a notification in the portal that says "Migration to App Service Environment v3 failed" after you start migration. There's a known bug that might trigger this notification even if the migration is progressing. Check the activity log for the App Service Environment to determine the validity of this error message.
->
-> :::image type="content" source="./media/migration/migration-error.png" alt-text="Screenshot that shows the potential error notification after starting migration.":::
+> In rare cases, you might see a notification in the portal that says "Migration to App Service Environment v3 failed" after you start the migration. There's a known bug that might trigger this notification even if the migration is progressing. Check the activity log for the App Service Environment to determine the validity of this error message.
>
+> :::image type="content" source="./media/migration/migration-error.png" alt-text="Screenshot that shows the potential error notification after migration starts.":::
-Detailed migration statuses are only available when using the Azure CLI at this time. For more information, see the CLI guidance under the Azure CLI section for Migrate to App Service Environment v3.
+At this time, detailed migration statuses are available only when you're using the Azure CLI. For more information, see the [Azure CLI section for migrating to App Service Environment v3](#8-migrate-to-app-service-environment-v3-and-check-status).
-When migration is complete, you have an App Service Environment v3, and all of your apps are running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
+When migration is complete, you have an App Service Environment v3 resource, and all of your apps are running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
-If your migration included a custom domain suffix, the domain was shown in the **Essentials** section of the **Overview** page of the portal for App Service Environment v1/v2, but it's no longer shown there in App Service Environment v3. Instead, for App Service Environment v3, go to the **Custom domain suffix** page where you can confirm your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously.
+If your migration included a custom domain suffix, the domain appeared in the **Essentials** section of the **Overview** page of the portal for App Service Environment v1/v2, but it no longer appears there in App Service Environment v3. Instead, for App Service Environment v3, go to the **Custom domain suffix** page to confirm that your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously.
::: zone-end

## Next steps

> [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
+> [Use an App Service Environment v3 resource](using.md)
> [!div class="nextstepaction"]
-> [App Service Environment v3 Networking](networking.md)
+> [App Service Environment v3 networking](networking.md)
> [!div class="nextstepaction"]
> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
The following are limitations when using the migration feature:
- Your new App Service Environment v3 is in the existing subnet that was used for your old environment.
- You can't change the region your App Service Environment is located in.
-- ELB App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
+- ELB App Service Environment can't be migrated to ILB App Service Environment v3 and vice versa.
- If your existing App Service Environment uses a custom domain suffix, you have to configure a custom domain suffix for your App Service Environment v3 during the migration process.
- If you no longer want to use a custom domain suffix, you can remove it once the migration is complete.
Migration requires a three to six hour service window for App Service Environment v2 to v3 migrations and up to six hours for v1 to v3 migrations, depending on environment size. During migration:
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3.
- All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 SKU.
- All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.
- - If you can't support downtime, see [migration-alternatives](migration-alternatives.md#guidance-for-manual-migration).
+ - If you can't support downtime, see [migration-alternatives](migration-alternatives.md#migrate-manually).
- The public addresses that are used by the App Service Environment change to the IPs generated during the IP generation step.

The following statuses are available during the migration process:
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
-description: How to migrate your applications to App Service Environment v3
+description: Learn how to migrate your applications to App Service Environment v3.
Last updated 01/30/2024
+
# Migrate to App Service Environment v3
-> [!NOTE]
-> The App Service Environment v3 [migration feature](migrate.md) is now available for a set of supported environment configurations in certain regions. Consider that feature which provides an automated migration path to [App Service Environment v3](overview.md).
->
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs.
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [migration feature](migrate.md) if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios).
+The App Service Environment v3 [migration feature](migrate.md) provides an automated migration path to App Service Environment v3. Consider using the migration feature if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios).
-If your App Service Environment [isn't supported for migration](migrate.md#migration-feature-limitations) with the migration feature, you must use one of the manual methods to migrate to App Service Environment v3.
+If your App Service Environment [isn't supported for the migration feature](migrate.md#migration-feature-limitations), you must use one of the manual methods to migrate to App Service Environment v3.
## Prerequisites
-Scenario: An existing app running on an App Service Environment v1 or App Service Environment v2 and you need that app to run on an App Service Environment v3.
+Scenario: You have an app that runs on App Service Environment v1 or App Service Environment v2, and you need that app to run on App Service Environment v3.
+
+For any migration method that doesn't use the [migration feature](migrate.md), you need to [create the App Service Environment v3 resource](creation.md) and a new subnet by using the method of your choice.
+
+[Networking changes](networking.md) between App Service Environment v1/v2 and App Service Environment v3 involve new (and for internet-facing environments, additional) IP addresses. You need to update any infrastructure that relies on these IPs. Be sure to account for inbound dependency changes, such as the Azure Load Balancer port.
-For any migration method that doesn't use the [migration feature](migrate.md), you need to [create the App Service Environment v3](creation.md) and a new subnet using the method of your choice. There are [feature differences](overview.md#feature-differences) between App Service Environment v1/v2 and App Service Environment v3 as well as [networking changes](networking.md) that involve new (and for internet-facing environments, additional) IP addresses. You need to update any infrastructure that relies on these IPs as well as account for inbound dependency changes such as the Azure Load Balancer port.
+Multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3 resource, you must delete the existing App Service Environment before you create a new one. For this scenario, we recommend that you [back up your apps and then restore them](#evaluate-backup-and-restore) in the new environment after you create and configure the environment. This process causes application downtime because of the time it takes to:
-Multiple App Service Environments can't exist in a single subnet. If you need to use your existing subnet for your new App Service Environment v3, you need to delete the existing App Service Environment before you create a new one. For this scenario, the recommended migration method is to [back up your apps and then restore them](#back-up-and-restore) in the new environment after it gets created and configured. There is application downtime during this process because of the time it takes to delete the old environment, create the new App Service Environment v3, configure any infrastructure and connected resources to work with the new environment, and deploy your apps onto the new environment.
+- Delete the old environment.
+- Create the App Service Environment v3 resource.
+- Configure any infrastructure and connected resources to work with the new environment.
+- Deploy your apps onto the new environment.
### Checklist before migrating apps

-- [Create an App Service Environment v3](creation.md)
-- After creating the new environment, update any networking dependencies with the IP addresses associated with the new environment
-- Plan for downtime (if applicable)
-- Decide on a process for recreating your apps in your new environment
+- [Create an App Service Environment v3](creation.md) resource.
+- Update any networking dependencies with the IP addresses associated with the new environment.
+- Plan for downtime (if applicable).
+- Decide on a process for re-creating your apps in your new environment.
-## Isolated v2 App Service plans
+## Size and scale the environment
-App Service Environment v3 uses Isolated v2 App Service plans that are priced and sized differently than those from Isolated plans. Review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how you're new environment needs to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
+App Service Environment v3 uses Isolated v2 Azure App Service plans that are priced and sized differently from Isolated plans. Review the [pricing details](https://azure.microsoft.com/pricing/details/app-service/windows/) to understand how your new environment needs to be sized and scaled to ensure appropriate capacity. There's no difference in how you create App Service plans for App Service Environment v3 compared to previous versions.
-## Back up and restore
+## Evaluate backup and restore
-The [back up and restore](../manage-backup.md) feature allows you to keep your app configuration, file content, and database connected to your app when migrating to your new environment. Make sure you review the [details](../manage-backup.md#automatic-vs-custom-backups) of this feature.
+You can use the [back up and restore](../manage-backup.md) feature to keep your app configuration, file content, and database connected to your app when you're migrating to the new environment.
-> [!IMPORTANT]
-> You must configure custom backups for your apps in order to restore them to an App Service Environment v3. Automatic backup doesn't support restoration on different App Service Environment versions. For more information on custom backups, see [Automatic vs custom backups](../manage-backup.md#automatic-vs-custom-backups).
->
+You must configure custom backups for your apps in order to restore them to App Service Environment v3. Automatic backup doesn't support restoration on different App Service Environment versions. For more information on custom backups, see [Automatic vs. custom backups](../manage-backup.md#automatic-vs-custom-backups).
-You can select a custom backup and restore it to an App Service in your App Service Environment v3. You must create the App Service you'll restore to before restoring the app. You can choose to restore the backup to the production slot, an existing slot, or a newly created slot that you can create during the restoration process.
+You can select a custom backup and restore it to an app in your App Service Environment v3 resource. You must create the App Service app that you'll restore to before you restore the backup. You can choose to restore the backup to the production slot, an existing slot, or a new slot that you create during the restoration process.
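As a sketch of the backup side of this flow, the Azure CLI can trigger and list custom backups. The resource names are placeholders, and the container URL must be a storage container SAS URL with write access:

```azurecli
# Placeholder values; the container URL must include a SAS token with write access
az webapp config backup create --resource-group <old-rg> --webapp-name <app-name> \
  --backup-name pre-migration --container-url "<storage-container-sas-url>"
az webapp config backup list --resource-group <old-rg> --webapp-name <app-name>
```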
|Benefits |Limitations |
|---|---|
-|Quick - should only take 5-10 minutes per app |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
-|Multiple apps can be restored at the same time (restoration needs to be configured for each app individually) |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
-|In-app MySQL databases are automatically backed up without any configuration |Backups can be up to 10 GB of app and database content, up to 4 GB of which can be the database backup. If the backup size exceeds this limit, you get an error. |
-|Can restore the app to a snapshot of a previous state |Using a [firewall enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported |
-|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Using a [private endpoint enabled storage account](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported |
-|Can create empty web apps to restore to in your new environment before you start restoring to speed up the process | Only supports custom backups |
+|Quick - should take only 5 to 10 minutes per app. |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups). |
+|You can restore multiple apps at the same time. (You need to configure restoration for each app individually.) |The old environment, the new environment, and supporting resources (for example, apps, databases, storage accounts, and containers) must all be in the same subscription. |
+|In-app MySQL databases are automatically backed up without any configuration. |Backups can be up to 10 GB of app and database content. Up to 4 GB of that content can be the database backup. If the backup size exceeds this limit, you get an error. |
+|You can restore the app to a snapshot of a previous state. |Using a [firewall-enabled storage account](../../storage/common/storage-network-security.md) as the destination for your backups isn't supported. |
+|You can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments. |Using a [storage account with private endpoints](../../storage/common/storage-private-endpoints.md) for backup and restore isn't supported. |
+|You can create empty web apps to restore to in your new environment before you start restoring, to speed up the process. | Only custom backups are supported. |
-## Clone your app to an App Service Environment v3
+## Clone your app to App Service Environment v3
-[Cloning your apps](../app-service-web-app-cloning.md) is another feature that can be used to get your **Windows** apps onto your App Service Environment v3. There are limitations with cloning apps. These limitations are the same as those for the App Service Backup feature, see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
+[Cloning your apps](../app-service-web-app-cloning.md) is another feature that you can use to get your *Windows* apps onto App Service Environment v3. The limitations for cloning apps are the same as those for the App Service backup feature. For more information, see [Back up an app in Azure App Service](../manage-backup.md#whats-included-in-an-automatic-backup).
> [!NOTE]
-> Cloning apps is supported on Windows App Service only.
->
+> Cloning apps is supported for App Service plans on Windows only.
-This solution is recommended for users that are using Windows App Service and can't migrate using the [migration feature](migrate.md). You need to set up your new App Service Environment v3 before cloning any apps. Cloning an app can take up to 30 minutes to complete. Cloning can be done using PowerShell as described in the [documentation](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment) or using the Azure portal.
+We recommend this solution for users who are using App Service on Windows and can't migrate by using the [migration feature](migrate.md). You need to set up your new App Service Environment v3 resource before you clone any apps. Cloning an app can take up to 30 minutes to complete.
-To clone an app using the [Azure portal](https://portal.azure.com), navigate to your existing App Service and select **Clone App** under **Development Tools**. Fill in the required fields using the details for your new App Service Environment v3.
+To clone an app by using PowerShell, see the [instructions](../app-service-web-app-cloning.md#cloning-an-existing-app-to-an-app-service-environment).
-1. Select an existing or create a new **Resource Group**.
-1. Give your app a **Name**. This name can be the same as the old app, but note the site's default URL using the new environment will be different. You need to update any custom DNS or connected resources to point to the new URL.
-1. Use your App Service Environment v3 name for **Region**.
-1. Choose whether or not to clone your deployment source.
-1. You can use an existing Windows **App Service plan** from your new environment if you created one already, or create a new one. The available Windows App Service plans in your new App Service Environment v3, if any, are listed in the dropdown.
-1. Modify **SKU and size** as needed using one of the Isolated v2 options if creating a new App Service plan. Note App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plan. For more information, see [App Service Environment v3 SKU details](overview.md#pricing).
+To clone an app by using the Azure portal:
+1. In the [Azure portal](https://portal.azure.com), go to your existing App Service app. Under **Development Tools**, select **Clone App**.
+1. Fill in the required fields by using the details for your new App Service Environment v3 resource:
+
+ 1. For **Resource Group**, select an existing resource group or create a new one.
+ 1. For **Name**, give your app a name. This name can be the same as the old app, but the site's default URL for the new environment will be different. You need to update any custom DNS or connected resources to point to the new URL.
+ 1. For **Region**, use your App Service Environment v3 name.
+ 1. If you want to clone your deployment source, select the **Clone deployment source** checkbox.
+ 1. For **Windows Plan**, you can use an existing App Service plan from your new environment if you created one already, or you can create a new plan. The available App Service plans in your new App Service Environment v3 resource appear in the dropdown list.
+ 1. For **Sku and size**, modify the memory and CPU as needed by using one of the Isolated v2 options if you're creating a new App Service plan. App Service Environment v3 uses Isolated v2 plans, which have more memory and CPU per corresponding instance size compared to the Isolated plans. For more information, see the [App Service Environment v3 pricing details](overview.md#pricing).
+
|Benefits |Limitations |
|---|---|
-|Can be automated using PowerShell |Only supported on Windows App Service |
-|Multiple apps can be cloned at the same time (cloning needs to be configured for each app individually or using a script) |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups) |
-|Can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments |Old and new environments as well as supporting resources (for example apps, databases, storage accounts, and containers) must all be in the same subscription |
+|You can automate cloning by using PowerShell. |Supported only for App Service plans on Windows. |
+|You can clone multiple apps at the same time. (Cloning needs to be configured for each app individually or through a script.) |Support is limited to [certain database types](../manage-backup.md#automatic-vs-custom-backups). |
+|You can integrate with [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md) and [Azure Application Gateway](../../application-gateway/overview.md) to distribute traffic across old and new environments. |The old environment, the new environment, and supporting resources (for example, apps, databases, storage accounts, and containers) must all be in the same subscription. |
-## Manually create your apps on an App Service Environment v3
+## Manually create your apps in App Service Environment v3
-If the above feature doesn't support your apps or you're looking to take a more manual route, you have the option of deploying your apps following the same process you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
+If the migration feature doesn't support your apps or you want to take a more manual route, you can deploy your apps by following the same process that you used for your existing App Service Environment. You don't need to make updates when you deploy your apps to your new environment.
-You can export [Azure Resource Manager (ARM) templates](../../azure-resource-manager/templates/overview.md) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just your app, navigate to your App Service and go to **Export template** under **Automation**.
+You can export [Azure Resource Manager templates](../../azure-resource-manager/templates/overview.md) (ARM templates) of your existing apps, App Service plans, and any other supported resources and deploy them in or with your new environment. To export a template for just an app, go to the app in the portal. Under **Automation**, select **Export template**.
-You can also export templates for multiple resources directly from your resource group by going to your resource group, selecting the resources you want a template for, and then selecting **Export template**.
+You can also export templates for multiple resources directly from your resource group. Go to your resource group, select the resources that you want a template for, and then select **Export template**.
-The following initial changes to your Azure Resource Manager templates are required to get your apps onto your App Service Environment v3:
+The following initial changes to your ARM templates are required to get your apps onto App Service Environment v3:
-- Update SKU parameters for App Service plan to an Isolated v2 plan:
+- Update `sku` parameters for an App Service plan to an Isolated v2 plan:
```json "type": "Microsoft.Web/serverfarms",
},
```
-- Update App Service plan (serverfarm) parameter the app is to be deployed into to the plan associated with the App Service Environment v3
-- Update hosting environment profile (hostingEnvironmentProfile) parameter to the new App Service Environment v3 resource ID
-- An Azure Resource Manager template export includes all properties exposed by the resource providers for the given resources. Remove all nonrequired properties such as those which point to the domain of the old app. For example, you `sites` resource could be simplified to the following sample:
+- Update the App Service plan (`serverfarm`) parameter that the app will be deployed into to the plan that's associated with App Service Environment v3.
+- Update the hosting environment profile (`hostingEnvironmentProfile`) parameter to the new App Service Environment v3 resource ID.
+- An ARM template export includes all properties that the resource providers expose for the resources. Remove all nonrequired properties, such as properties that point to the domain of the old app. For example, you could simplify the `sites` resource to the following sample:
```json "type": "Microsoft.Web/sites",
}
```
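The two JSON fragments above are elided in this changelog view. As a hedged illustration of the kinds of edits described, a `serverfarms` resource updated to an Isolated v2 `sku` and a pared-down `sites` resource might look like the following; the names, API versions, and the I1v2 size are assumptions to adapt to your environment:

```json
{
  "type": "Microsoft.Web/serverfarms",
  "apiVersion": "2021-02-01",
  "name": "<plan-name>",
  "location": "<region>",
  "sku": {
    "name": "I1v2",
    "tier": "IsolatedV2"
  }
}
```

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2021-02-01",
  "name": "<app-name>",
  "location": "<region>",
  "properties": {
    "serverFarmId": "<app-service-plan-resource-id>",
    "hostingEnvironmentProfile": {
      "id": "<app-service-environment-v3-resource-id>"
    }
  }
}
```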
-Other changes may be required depending on how your app is configured.
+Other changes might be required, depending on how you configured your app.
+
+You can [deploy ARM templates](../deploy-complex-application-predictably.md) by using the Azure portal, the Azure CLI, or PowerShell.
-Azure Resource Manager templates can be [deployed](../deploy-complex-application-predictably.md) using multiple methods including using the Azure portal, Azure CLI, or PowerShell.
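For the CLI route, a minimal sketch (the resource group and file names are placeholders) is:

```azurecli
# Deploy the exported and edited template into the resource group of the new environment
az deployment group create --resource-group <new-rg> --template-file template.json --parameters parameters.json
```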
+## Migrate manually
-## Guidance for manual migration
+The [migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend that you use one of the manual options to re-create your apps in App Service Environment v3.
-The [migration feature](migrate.md) automates the migration to App Service Environment v3 and at the same time transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If you're in a position where you can't have any downtime, the recommendation is to use one of the manual options to recreate your apps in an App Service Environment v3.
+You can distribute traffic between your old and new environments by using [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an internal load balancer (ILB) App Service Environment, [create an Azure Application Gateway instance](integrate-with-application-gateway.md) with an extra back-end pool to distribute traffic between your environments. For information about ILB App Service Environments and internet-facing App Service Environments, see [Application Gateway integration](../overview-app-gateway-integration.md).
-You can distribute traffic between your old and new environment using an [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an Internal Load Balancer (ILB) App Service Environment, see the [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-an-ilb-app-service-environment) and [create an Azure Application Gateway](integrate-with-application-gateway.md) with an extra backend pool to distribute traffic between your environments. For internet facing App Service Environments, see these [considerations](../networking/app-gateway-with-service-endpoints.md#considerations-for-an-external-app-service-environment). You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network (CDN)](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and allows you to move to your new environment at your own pace.
+You can also use services like [Azure Front Door](../../frontdoor/quickstart-create-front-door.md), [Azure Content Delivery Network](../../cdn/cdn-add-to-web-app.md), and [Azure Traffic Manager](../../cdn/cdn-traffic-manager.md) to distribute traffic between environments. Using these services allows for testing of your new environment in a controlled manner and helps you move to your new environment at your own pace.
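As an illustration, adding an extra back-end pool for the new environment to an existing Application Gateway with the Azure CLI might look like this sketch; all names are placeholders:

```azurecli
az network application-gateway address-pool create \
  --gateway-name <gateway-name> \
  --resource-group <resource-group-name> \
  --name asev3-pool \
  --servers <app-name>.<asev3-domain>
```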
-Once your migration and any testing with your new environment is complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You continue to be charged for any resources that haven't been deleted.
+After your migration and any testing with your new environment are complete, delete your old App Service Environment, the apps that are on it, and any supporting resources that you no longer need. You continue to be charged for any resources that you don't delete.
## Frequently asked questions

- **Will I experience downtime during the migration?**
- Downtime is dependent on your migration process. If you have a different App Service Environment that you can point traffic to while you migrate or if you can use a different subnet to create your new environment, you won't have downtime. However, if you must use the same subnet, there is downtime resulting from the time it takes to delete the old environment, create the App Service Environment v3, create the new App Service plans, re-create the apps, and update any resources that need to know about the new IP addresses.
+ Downtime is dependent on your migration process. If you have a different App Service Environment that you can point traffic to while you migrate, or if you can use a different subnet to create your new environment, you won't have downtime. If you must use the same subnet, there's downtime while you delete the old environment, create the App Service Environment v3 resource, create the new App Service plans, re-create the apps, and update any resources that use the new IP addresses.
- **Do I need to change anything about my apps to get them to run on App Service Environment v3?**
- No, apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3. If you're using IP SSL, you must remove the IP SSL bindings before migrating.
+ No. Apps that run on App Service Environment v1 and v2 shouldn't need any modifications to run on App Service Environment v3. If you're using IP SSL, you must remove the IP SSL bindings before migrating.
- **What if my App Service Environment has a custom domain suffix?**
- The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
-- **What if my App Service Environment v2 is zone pinned?**
- Zone pinning isn't a supported feature on App Service Environment v3. You can choose to enable zone redundancy when creating your App Service Environment v3.
+ The migration feature supports this [migration scenario](./migrate.md#supported-scenarios). You can migrate by using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 resource or any time after.
+- **What if my App Service Environment v2 resource is zone pinned?**
+ Zone pinning isn't a supported feature on App Service Environment v3. You can choose to enable zone redundancy when creating your App Service Environment v3 resource.
- **What properties of my App Service Environment will change?**
- You'll now be on App Service Environment v3 so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For ILB App Service Environment, you keep the same ILB IP address. For internet facing App Service Environment, the public IP address and the outbound IP address change. Note for internet facing App Service Environment, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+ Review the [feature differences](overview.md#feature-differences) between App Service Environment v3 and previous versions. For ILB App Service Environments, you keep the same ILB IP address. For internet-facing App Service Environments, the public IP address and the outbound IP address change.
+
+ For internet-facing App Service Environments, previously there was a single IP for both inbound and outbound. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses).
- **Is backup and restore supported for moving apps from App Service Environment v2 to v3?**
- The [back up and restore](../manage-backup.md) feature supports restoring apps between App Service Environment versions as long as a custom backup is used for the restoration. Automatic backup doesn't support restoration to different App Service Environment versions.
-- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
- After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
+ The [back up and restore](../manage-backup.md) feature supports restoring apps between App Service Environment versions as long as you use a custom backup for the restoration. Automatic backup doesn't support restoration to different App Service Environment versions.
+- **What will happen to my App Service Environment v1 and v2 resources after August 31, 2024?**
+ After August 31, 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1 and v2 resources and the apps deployed in them will no longer be available.
+
+ App Service Environment v1 and v2 are hosted on App Service scale units that run on [Azure Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture. Because this architecture will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/), App Service Environment v1 and v2 [won't be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running, or save or back up any resources or data that you need to maintain.
## Next steps

> [!div class="nextstepaction"]
-> [App Service Environment v3 Networking](networking.md)
+> [App Service Environment v3 networking](networking.md)
> [!div class="nextstepaction"]
-> [Using an App Service Environment v3](using.md)
+> [Use an App Service Environment v3 resource](using.md)
> [!div class="nextstepaction"]
-> [Integrate your ILB App Service Environment with the Azure Application Gateway](integrate-with-application-gateway.md)
+> [Integrate your ILB App Service Environment with Azure Application Gateway](integrate-with-application-gateway.md)
> [!div class="nextstepaction"]
-> [Migrate to App Service Environment v3 using the migration feature](migrate.md)
+> [Migrate to App Service Environment v3 by using the migration feature](migrate.md)
> [!div class="nextstepaction"]
> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
description: Take the first steps toward upgrading to App Service Environment v3
Previously updated : 1/31/2024 Last updated : 2/2/2024 # Upgrade to App Service Environment v3
Last updated 1/31/2024
> [!IMPORTANT]
> If you're currently using App Service Environment v1 or v2, you must migrate your workloads to [App Service Environment v3](overview.md). [App Service Environment v1 and v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Failure to migrate by that date will result in loss of the environments, running applications, and all application data.
>
+> As of [29 January 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/), you can no longer create new App Service Environment v1 and v2 resources by using any of the available methods, including ARM/Bicep templates, the Azure portal, the Azure CLI, or the REST API.
+>
This page is your one-stop shop for guidance and resources to help you upgrade successfully with minimal downtime. Follow the guidance to plan and complete your upgrade as soon as possible. This page will be updated with the latest information as it becomes available.
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
To run the application locally:
1. Create a virtual environment for the app:
- [!INCLUDE [Virtual environment setup](<./includes/quickstart-python/virtual-environment-setup.md>)]
+ [!INCLUDE [Virtual environment setup](./includes/quickstart-python/virtual-environment-setup.md)]
1. Install the dependencies:
Having issues? [Let us know](https://aka.ms/PythonAppServiceQuickstartFeedback).
1. Create a virtual environment for the app:
- [!INCLUDE [Virtual environment setup](<./includes/quickstart-python/virtual-environment-setup.md>)]
+ [!INCLUDE [Virtual environment setup](./includes/quickstart-python/virtual-environment-setup.md)]
1. Install the dependencies:
To host your application in Azure, you need to create Azure App Service web app
### [Azure CLI](#tab/azure-cli)

### [VS Code](#tab/vscode-aztools)
code .
| Instructions | Screenshot |
|:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240-px.png" alt-text="A Screenshot of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
-| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240-px.png" alt-text="A screenshot of the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
-| [!INCLUDE [Create app service step 3](<./includes/quickstart-python/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select Create a new Web App." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to enter the globally unique name for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
-| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-6-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime stack for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-6.png"::: |
-| [!INCLUDE [Create app service step 6](<./includes/quickstart-python/create-app-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to a pricing tier for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7.png"::: |
-| [!INCLUDE [Create app service step 7](<./includes/quickstart-python/create-app-service-visual-studio-code-7.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7b-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to start deploy to new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7b.png"::: |
-| [!INCLUDE [Create app service step 8](<./includes/quickstart-python/create-app-service-visual-studio-code-8.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7c-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the folder to deploy as the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7c.png"::: |
-| [!INCLUDE [Create app service step 9](<./includes/quickstart-python/create-app-service-visual-studio-code-9.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-8-240-px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-8.png"::: |
-| [!INCLUDE [Create app service step 10](<./includes/quickstart-python/create-app-service-visual-studio-code-10.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-9-240-px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-9.png"::: |
+| [!INCLUDE [Create app service step 1](./includes/quickstart-python/create-app-service-visual-studio-code-1.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240-px.png" alt-text="A Screenshot of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
+| [!INCLUDE [Create app service step 2](./includes/quickstart-python/create-app-service-visual-studio-code-2.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240-px.png" alt-text="A screenshot of the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
+| [!INCLUDE [Create app service step 3](./includes/quickstart-python/create-app-service-visual-studio-code-3.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select Create a new Web App." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: |
+| [!INCLUDE [Create app service step 4](./includes/quickstart-python/create-app-service-visual-studio-code-4.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to enter the globally unique name for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
+| [!INCLUDE [Create app service step 5](./includes/quickstart-python/create-app-service-visual-studio-code-5.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-6-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime stack for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-6.png"::: |
+| [!INCLUDE [Create app service step 6](./includes/quickstart-python/create-app-service-visual-studio-code-6.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to a pricing tier for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7.png"::: |
+| [!INCLUDE [Create app service step 7](./includes/quickstart-python/create-app-service-visual-studio-code-7.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7b-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to start deploy to new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7b.png"::: |
+| [!INCLUDE [Create app service step 8](./includes/quickstart-python/create-app-service-visual-studio-code-8.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7c-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the folder to deploy as the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7c.png"::: |
+| [!INCLUDE [Create app service step 9](./includes/quickstart-python/create-app-service-visual-studio-code-9.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-8-240-px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-8.png"::: |
+| [!INCLUDE [Create app service step 10](./includes/quickstart-python/create-app-service-visual-studio-code-10.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-9-240-px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-9.png"::: |
### [Azure portal](#tab/azure-portal)
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
| Instructions | Screenshot |
|:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
-| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
-| [!INCLUDE [Create app service step 3](<./includes/quickstart-python/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
-| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
+| [!INCLUDE [Create app service step 1](./includes/quickstart-python/create-app-service-azure-portal-1.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot of how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/quickstart-python/create-app-service-azure-portal-1.png"::: |
+| [!INCLUDE [Create app service step 2](./includes/quickstart-python/create-app-service-azure-portal-2.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Create button on the App Services page in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-2.png"::: |
+| [!INCLUDE [Create app service step 3](./includes/quickstart-python/create-app-service-azure-portal-3.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot of how to fill out the form to create a new App Service in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-3.png"::: |
+| [!INCLUDE [Create app service step 4](./includes/quickstart-python/create-app-service-azure-portal-4.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of how to select the basic app service plan in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-4.png"::: |
+| [!INCLUDE [Create app service step 5](./includes/quickstart-python/create-app-service-azure-portal-5.md)] | :::image type="content" source="./media/quickstart-python/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the location of the Review plus Create button in the Azure portal." lightbox="./media/quickstart-python/create-app-service-azure-portal-5.png"::: |
Azure App service supports multiple methods to deploy your application code to A
### [Deploy using VS Code](#tab/vscode-deploy)

### [Deploy using Azure CLI](#tab/azure-cli-deploy)

### [Deploy using Local Git](#tab/local-git-deploy)

### [Deploy using a ZIP file](#tab/zip-deploy)
The contents of the App Service diagnostic logs can be reviewed in the Azure por
| Instructions | Screenshot |
|:-|--:|
-| [!INCLUDE [Stream logs from Azure portal 1](<./includes/quickstart-python/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-1-240px.png" alt-text="A screenshot of the location in the Azure portal where to enable streaming logs." lightbox="./media/quickstart-python/stream-logs-azure-portal-1.png"::: |
-| [!INCLUDE [Stream logs from Azure portal 2](<./includes/quickstart-python/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-2-240px.png" alt-text="A screenshot of how to view logs in the Azure portal." lightbox="./media/quickstart-python/stream-logs-azure-portal-2.png"::: |
+| [!INCLUDE [Stream logs from Azure portal 1](./includes/quickstart-python/stream-logs-azure-portal-1.md)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-1-240px.png" alt-text="A screenshot of the location in the Azure portal where to enable streaming logs." lightbox="./media/quickstart-python/stream-logs-azure-portal-1.png"::: |
+| [!INCLUDE [Stream logs from Azure portal 2](./includes/quickstart-python/stream-logs-azure-portal-2.md)] | :::image type="content" source="./media/quickstart-python/stream-logs-azure-portal-2-240px.png" alt-text="A screenshot of how to view logs in the Azure portal." lightbox="./media/quickstart-python/stream-logs-azure-portal-2.png"::: |
### [VS Code](#tab/vscode-aztools)

| Instructions | Screenshot |
|:-|--:|
-| [!INCLUDE [Stream logs from VS Code 1](<./includes/quickstart-python/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-1-240px.png" alt-text="A screenshot of how to start streaming logs with the VS Code extension." lightbox="./media/quickstart-python/stream-logs-vs-code-1.png"::: |
-| [!INCLUDE [Stream logs from VS Code 2](<./includes/quickstart-python/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-2-240px.png" alt-text="A screenshot of an example of streaming logs in the VS Code Output window." lightbox="./media/quickstart-python/stream-logs-vs-code-2.png"::: |
+| [!INCLUDE [Stream logs from VS Code 1](./includes/quickstart-python/stream-logs-visual-studio-code-1.md)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-1-240px.png" alt-text="A screenshot of how to start streaming logs with the VS Code extension." lightbox="./media/quickstart-python/stream-logs-vs-code-1.png"::: |
+| [!INCLUDE [Stream logs from VS Code 2](./includes/quickstart-python/stream-logs-visual-studio-code-2.md)] | :::image type="content" source="./media/quickstart-python/stream-logs-vs-code-2-240px.png" alt-text="A screenshot of an example of streaming logs in the VS Code Output window." lightbox="./media/quickstart-python/stream-logs-vs-code-2.png"::: |
### [Azure CLI](#tab/azure-cli)

First, you need to configure Azure App Service to output logs to the App Service filesystem using the [az webapp log config](/cli/azure/webapp/log#az-webapp-log-config) command. To stream logs, use the [az webapp log tail](/cli/azure/webapp/log#az-webapp-log-tail) command.
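A combined sketch of those two commands; the app and resource group names are placeholders:

```azurecli
# Send web server logs to the App Service file system.
az webapp log config \
  --name <app-name> \
  --resource-group <resource-group-name> \
  --web-server-logging filesystem

# Stream the logs to your terminal.
az webapp log tail \
  --name <app-name> \
  --resource-group <resource-group-name>
```

Refresh the home page in the app or attempt other requests to generate some log messages. The output should look similar to the following.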
Follow these steps while signed-in to the Azure portal to delete a resource grou
| Instructions | Screenshot |
|:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](<./includes/quickstart-python/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot of how to search for and navigate to a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-1.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 2](<./includes/quickstart-python/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Delete Resource Group button in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-2.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 3](<./includes/quickstart-python/remove-resource-group-azure-portal-3.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-3.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 1](./includes/quickstart-python/remove-resource-group-azure-portal-1.md)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot of how to search for and navigate to a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-1.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 2](./includes/quickstart-python/remove-resource-group-azure-portal-2.md)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot of the location of the Delete Resource Group button in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-2.png"::: |
+| [!INCLUDE [Remove resource group Azure portal 3](./includes/quickstart-python/remove-resource-group-azure-portal-3.md)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-azure-portal-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/quickstart-python/remove-resource-group-azure-portal-3.png"::: |
### [VS Code](#tab/vscode-aztools)

| Instructions | Screenshot |
|:-|--:|
-| [!INCLUDE [Remove resource group VS Code 1](<./includes/quickstart-python/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-1-240px.png" alt-text="A screenshot of how to delete a resource group in VS Code using the Azure Tools extension." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-1.png"::: |
-| [!INCLUDE [Remove resource group VS Code 2](<./includes/quickstart-python/remove-resource-group-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-2-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group from VS Code." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-2.png"::: |
+| [!INCLUDE [Remove resource group VS Code 1](./includes/quickstart-python/remove-resource-group-visual-studio-code-1.md)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-1-240px.png" alt-text="A screenshot of how to delete a resource group in VS Code using the Azure Tools extension." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-1.png"::: |
+| [!INCLUDE [Remove resource group VS Code 2](./includes/quickstart-python/remove-resource-group-visual-studio-code-2.md)] | :::image type="content" source="./media/quickstart-python/remove-resource-group-visual-studio-code-2-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group from VS Code." lightbox="./media/quickstart-python/remove-resource-group-visual-studio-code-2.png"::: |
### [Azure CLI](#tab/azure-cli)
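Deleting the resource group with the Azure CLI might look like this sketch; the group name is a placeholder:

```azurecli
# Delete the resource group and everything in it.
# --no-wait returns immediately; --yes skips the confirmation prompt.
az group delete --name <resource-group-name> --no-wait --yes
```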
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
SECRET_KEY=<secret-key>
Create a virtual environment for the app: Install the dependencies:
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
What you learned:
## <a name="nextsteps"></a> Next steps

* [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md)
* [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
-* [Analyze app Logs in HDInsight](https://gallery.technet.microsoft.com/scriptcenter/Analyses-Windows-Azure-web-0b27d413)
+* [Analyze app Logs in HDInsight](/azure/hdinsight/hdinsight-hadoop-oms-log-analytics-tutorial)
* [Tutorial: Run a load test to identify performance bottlenecks in a web app](../load-testing/tutorial-identify-bottlenecks-azure-portal.md)
application-gateway How To Url Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md
RESOURCE_GROUP='<resource group name of the Application Gateway For Containers r
RESOURCE_NAME='alb-test'
RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
-FRONTEND_NAME='frontend'
+FRONTEND_NAME='test-frontend'
```

2. Create a Gateway
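A sketch of the Gateway manifest for this step, following the Gateway API pattern for Application Gateway for Containers. It assumes the `RESOURCE_ID` and `FRONTEND_NAME` values set earlier (typically applied via a `kubectl apply` heredoc so the shell variables expand); the resource names, API version, and listener settings are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: gateway-01
  annotations:
    # Associates this Gateway with the Application Gateway for Containers resource.
    alb.networking.azure.io/alb-id: $RESOURCE_ID
spec:
  gatewayClassName: azure-alb-external
  listeners:
  - name: http-listener
    port: 80
    protocol: HTTP
  addresses:
  # References the frontend created above.
  - type: alb.networking.azure.io/alb-frontend
    value: $FRONTEND_NAME
```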
application-gateway Key Vault Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/key-vault-certs.md
Previously updated : 02/01/2024 Last updated : 02/02/2024
Define access policies to use the user-assigned managed identity with your Key V
If you're using **Azure role-based access control** follow the article [Assign a managed identity access to a resource](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) and assign the user-assigned managed identity the **Key Vault Secrets User** role to the Azure Key Vault.
-> [!NOTE]
-> If you have Key Vaults for your HTTPS listener that use different identities, creating or updating the listener requires checking the certificates associated with each identity. In order for the operation to be successful, you must [grant permission](../key-vault/general/rbac-guide.md) to all identities.
-
### Verify Firewall Permissions to Key Vault

As of March 15, 2021, Key Vault recognizes Application Gateway as a trusted service by leveraging User Managed Identities for authentication to Azure Key Vault. With the use of service endpoints and enabling the trusted services option for Key Vault's firewall, you can build a secure network boundary in Azure. You can deny access to traffic from all networks (including internet traffic) to Key Vault but still make Key Vault accessible for an Application Gateway resource under your subscription.
azure-arc Backup Controller Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/backup-controller-database.md
description: Explains how to back up the controller database for Azure Arc-enabl
--++ Last updated 04/26/2023
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md
Title: Configure SQL Managed Instance enabled by Azure Arc description: Configure SQL Managed Instance enabled by Azure Arc.--++ Last updated 12/05/2023
spec:
If you already have an existing SQL managed instance enabled by Azure Arc, you can run `kubectl edit sqlmi <sqlminame> -n <namespace>` and add the above options into the spec.
-Sample Azure Arc-enabled SQL Managed Instance YAML file:
+Example YAML file:
```yml
apiVersion: sql.arcdata.microsoft.com/v13
azure-arc Connect Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connect-managed-instance.md
description: Connect to SQL Managed Instance enabled by Azure Arc
--++ Last updated 07/30/2021
azure-arc Create Data Controller Direct Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-azure-portal.md
Title: Deploy Azure Arc data controller from Azure portal| Direct connect mode description: Explains how to deploy the data controller in direct connect mode from Azure portal. --++
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
Title: Create Azure Arc data controller | Direct connect mode description: Explains how to create the data controller in direct connect mode. --++
azure-arc Create Data Controller Direct Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-prerequisites.md
Title: Prerequisites | Direct connect mode description: Prerequisites to deploy the data controller in direct connect mode. --++
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md
--++ Last updated 11/03/2021
azure-arc Create Sql Managed Instance Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-azure-data-studio.md
description: Create SQL Managed Instance enabled by Azure Arc using Azure Data S
--++ Last updated 06/16/2021
azure-arc Create Sql Managed Instance Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance-using-kubernetes-native-tools.md
Title: Create a SQL Managed Instance using Kubernetes tools
-description: Deploy Azure Arc-enabled SQL Managed Instance using Kubernetes tools.
+ Title: Deploy a new SQL Managed Instance enabled by Azure Arc using Kubernetes tools
+description: Describes how to use Kubernetes tools to deploy SQL Managed Instance enabled by Azure Arc.
--++ Last updated 02/28/2022
-# Create Azure Arc-enabled SQL Managed Instance using Kubernetes tools
+# Deploy SQL Managed Instance enabled by Azure Arc using Kubernetes tools
This article demonstrates how to deploy Azure SQL Managed Instance for Azure Arc with Kubernetes tools.
A Kubernetes secret is stored as a base64 encoded string - one for the username
> For optimum security, using the value `sa` is not allowed for the login.
> Follow the [password complexity policy](/sql/relational-databases/security/password-policy#password-complexity).
-You can use an online tool to base64 encode your desired username and password or you can use built in CLI tools depending on your platform.
+You can use an online tool to base64 encode your desired username and password, or you can use CLI tools, depending on your platform.
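For example, on Linux or macOS, a minimal sketch:

```bash
# -n prevents a trailing newline from being included in the encoded value.
echo -n '<your-username>' | base64
echo -n '<your-password>' | base64
```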
PowerShell
Requirements for resource limits and requests:

- The cores limit and request must be a positive integer value, if specified.
- The minimum of 1 core is required for the cores request, if specified.
- The memory value format follows the Kubernetes notation.
-- A minimum of 2 Gi is required for memory request, if specified.
+- A minimum of 2 GB is required for memory request, if specified.
- As a general guideline, you should have 4 GB of RAM for each 1 core for production use cases. A sketch of these settings is shown after this list.
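A minimal sketch of those settings in a SQL Managed Instance spec. This assumes the `spec.scheduling.default.resources` layout used by the Arc data services custom resource; the values are illustrative:

```yml
spec:
  scheduling:
    default:
      resources:
        requests:
          cpu: "2"
          memory: 8Gi   # roughly 4 GB of RAM per core, per the guideline above
        limits:
          cpu: "2"
          memory: 8Gi
```

### Customizing service type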
If you encounter any troubles with the deployment, please see the [troubleshooti
## Related content
-[Connect to Azure Arc-enabled SQL Managed Instance](connect-managed-instance.md)
+[Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md)
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
Title: Create a SQL Server Managed Instance enabled by Azure Arc
-description: Deploy SQL Server Managed Instance enabled by Azure Arc
+ Title: Create a SQL Managed Instance enabled by Azure Arc
+description: Deploy SQL Managed Instance enabled by Azure Arc
--++ Last updated 07/30/2021
-# Create a SQL Server Managed Instance enabled by Azure Arc
+# Deploy a SQL Managed Instance enabled by Azure Arc
[!INCLUDE [azure-arc-common-prerequisites](../../../includes/azure-arc-common-prerequisites.md)] -
-## Create an Azure Arc-enabled SQL Managed Instance
-
-To view available options for the create command for Azure Arc-enabled SQL Managed Instance, use the following command:
+To view available options for the create command for SQL Managed Instance enabled by Azure Arc, use the following command:
```azurecli
az sql mi-arc create --help
```
-To create an Azure Arc-enabled SQL Managed Instance, use `az sql mi-arc create`. See the following examples for different connectivity modes:
+To create a SQL Managed Instance enabled by Azure Arc, use `az sql mi-arc create`. See the following examples for different connectivity modes:
> [!NOTE]
> A ReadWriteMany (RWX) capable storage class needs to be specified for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).
-If no storage class is specified for backups, the default storage class in Kubernetes is used and if this is not RWX capable, the Azure Arc-enabled SQL Managed Instance installation may not succeed.
+If no storage class is specified for backups, the default storage class in Kubernetes is used. If that class isn't RWX capable, the installation of SQL Managed Instance enabled by Azure Arc might not succeed.
### [Directly connected mode](#tab/directly-connected-mode)
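For directly connected mode, the create command might look like this sketch; the flag values are placeholders, and exact flags can vary by `arcdata` CLI extension version:

```azurecli
az sql mi-arc create \
  --name sqldemo \
  --resource-group <resource-group> \
  --location <location> \
  --custom-location <custom-location> \
  --storage-class-backups <rwx-capable-storage-class>
```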
az sql mi-arc create -n sqldemo --storage-class-backups mybackups --k8s-namespac
---

> [!NOTE]
> Names must be less than 60 characters in length and conform to [DNS naming conventions](https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#rfc-1035-label-names).
-> When specifying memory allocation and vCore allocation use this formula to ensure your performance is acceptable: for each 1 vCore you should have at least 4GB of RAM of capacity available on the Kubernetes node where the Azure Arc-enabled SQL managed instance pod will run.
-> If you want to automate the creation of Azure Arc-enabled SQL Managed Instances and avoid the interactive prompt for the admin password, you can set the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables to the desired username and password prior to running the `az sql mi-arc create` command.
-> If you created the data controller using AZDATA_USERNAME and AZDATA_PASSWORD in the same terminal session, then the values for AZDATA_USERNAME and AZDATA_PASSWORD will be used to create the Azure Arc-enabled SQL Managed Instance too.
+> When specifying memory allocation and vCore allocation, use this formula to ensure that your performance is acceptable: for each 1 vCore, you should have at least 4 GB of RAM capacity available on the Kubernetes node where the SQL Managed Instance enabled by Azure Arc pod will run.
+> If you want to automate the creation of SQL Managed Instance enabled by Azure Arc and avoid the interactive prompt for the admin password, you can set the `AZDATA_USERNAME` and `AZDATA_PASSWORD` environment variables to the desired username and password prior to running the `az sql mi-arc create` command.
+> If you created the data controller using AZDATA_USERNAME and AZDATA_PASSWORD in the same terminal session, then the values for AZDATA_USERNAME and AZDATA_PASSWORD will be used to create the SQL Managed Instance enabled by Azure Arc too.
> [!NOTE]
-> If you are using the indirect connectivity mode, creating Azure Arc-enabled SQL Managed Instance in Kubernetes will not automatically register the resources in Azure. Steps to register the resource are in the following articles:
+> If you are using the indirect connectivity mode, creating SQL Managed Instance enabled by Azure Arc in Kubernetes will not automatically register the resources in Azure. Steps to register the resource are in the following articles:
> - [Upload billing data to Azure and view it in the Azure portal](view-billing-data-in-azure.md) >
To view the instance, use the following command:
```azurecli
az sql mi-arc list --k8s-namespace <namespace> --use-k8s
```
-You can copy the external IP and port number from here and connect to Azure Arc-enabled SQL Managed Instance using your favorite tool for connecting to eg. SQL Server or Azure SQL Managed Instance such as Azure Data Studio or SQL Server Management Studio.
+You can copy the external IP and port number from here and connect to SQL Managed Instance enabled by Azure Arc by using your favorite tool for connecting to SQL Server or Azure SQL Managed Instance, such as Azure Data Studio or SQL Server Management Studio.
[!INCLUDE [use-insider-azure-data-studio](includes/use-insider-azure-data-studio.md)]

## Related content

- [Connect to SQL Managed Instance enabled by Azure Arc](connect-managed-instance.md)
- [Register your instance with Azure and upload metrics and logs about your instance](upload-metrics-and-logs-to-azure-monitor.md)
-
+- [Create SQL Managed Instance enabled by Azure Arc using Azure Data Studio](create-sql-managed-instance-azure-data-studio.md)
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md
Title: Delete a SQL Server Managed Instance enabled by Azure Arc
-description: Learn how to delete a SQL Server Managed Instance enabled by Azure Arc and optionally, reclaim associated Kubernetes persistent volume claims (PVCs).
+ Title: Delete a SQL Managed Instance enabled by Azure Arc
+description: Learn how to delete a SQL Managed Instance enabled by Azure Arc and optionally, reclaim associated Kubernetes persistent volume claims (PVCs).
--++ Last updated 07/30/2021
-# Delete a SQL Server Managed Instance enabled by Azure Arc
+# Delete a SQL Managed Instance enabled by Azure Arc
In this how-to guide, you'll find and then delete a SQL Managed Instance enabled by Azure Arc. Optionally, after deleting managed instances, you can reclaim associated Kubernetes persistent volume claims (PVCs).
azure-arc Deploy Active Directory Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance-cli.md
Title: Deploy Active Directory integrated SQL Server Managed Instance enabled by Azure Arc using Azure CLI
-description: Explains how to deploy Active Directory integrated SQL Server Managed Instance enabled by Azure Arc using Azure CLI
+ Title: Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI
+description: Explains how to deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI
Last updated 10/11/2022
-# Deploy Active Directory integrated SQL Server Managed Instance enabled by Azure Arc using Azure CLI
+# Deploy Active Directory integrated SQL Managed Instance enabled by Azure Arc using Azure CLI
This article explains how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory (AD) authentication using Azure CLI.
azure-arc Deploy Active Directory Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-sql-managed-instance.md
Title: Deploy Active Directory-integrated SQL Server Managed Instance enabled by Azure Arc
-description: Learn how to deploy SQL Server Managed Instance enabled by Azure Arc with Active Directory authentication.
+ Title: Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc
+description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with Active Directory authentication.
Last updated 10/11/2022
-# Deploy Active Directory-integrated SQL Server Managed Instance enabled by Azure Arc
+# Deploy Active Directory-integrated SQL Managed Instance enabled by Azure Arc
In this article, learn how to deploy Azure Arc-enabled Azure SQL Managed Instance with Active Directory authentication.
azure-arc Limitations Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/limitations-managed-instance.md
Title: Limitations of SQL Server Managed Instance enabled by Azure Arc
-description: Limitations of SQL Server Managed Instance enabled by Azure Arc
+ Title: Limitations of SQL Managed Instance enabled by Azure Arc
+description: Limitations of SQL Managed Instance enabled by Azure Arc
--++ Last updated 09/07/2021
-# Limitations of SQL Server Managed Instance enabled by Azure Arc
+# Limitations of SQL Managed Instance enabled by Azure Arc
This article describes limitations of SQL Managed Instance enabled by Azure Arc.
azure-arc Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md
--++ Last updated 03/31/2022
azure-arc Managed Instance Business Continuity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-business-continuity-overview.md
--++ Last updated 01/27/2022
azure-arc Managed Instance Disaster Recovery Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-cli.md
--++ Last updated 08/02/2023
azure-arc Managed Instance Disaster Recovery Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-portal.md
description: Describes how to configure disaster recovery for SQL Managed Instan
--++ Last updated 08/02/2023
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
description: Describes disaster recovery for SQL Managed Instance enabled by Azu
--++ Last updated 08/02/2023
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md
Title: Features and Capabilities of Azure Arc-enabled SQL Managed Instance
-description: Features and Capabilities of Azure Arc-enabled SQL Managed Instance
+ Title: Features and Capabilities of SQL Managed Instance enabled by Azure Arc
+description: Features and Capabilities of SQL Managed Instance enabled by Azure Arc
--++ Last updated 07/30/2021
-# Features and Capabilities of Azure Arc-enabled SQL Managed Instance
+# Features and Capabilities of SQL Managed Instance enabled by Azure Arc
-Azure Arc-enabled SQL Managed Instance share a common code base with the latest stable version of SQL Server. Most of the standard SQL language, query processing, and database management features are identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are:
+SQL Managed Instance enabled by Azure Arc shares a common code base with the latest stable version of SQL Server. Most of the standard SQL language, query processing, and database management features are identical. The features that are common between SQL Server and SQL Database or SQL Managed Instance are:
- Language features - [Control of flow language keywords](/sql/t-sql/language-elements/control-of-flow), [Cursors](/sql/t-sql/language-elements/cursors-transact-sql), [Data types](/sql/t-sql/data-types/data-types-transact-sql), [DML statements](/sql/t-sql/queries/queries), [Predicates](/sql/t-sql/queries/predicates), [Sequence numbers](/sql/relational-databases/sequence-numbers/sequence-numbers), [Stored procedures](/sql/relational-databases/stored-procedures/stored-procedures-database-engine), and [Variables](/sql/t-sql/language-elements/variables-transact-sql). - Database features - [Automatic tuning (plan forcing)](/sql/relational-databases/automatic-tuning/automatic-tuning), [Change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server), [Database collation](/sql/relational-databases/collations/set-or-change-the-database-collation), [Contained databases](/sql/relational-databases/databases/contained-databases), [Contained users](/sql/relational-databases/security/contained-database-users-making-your-database-portable), [Data compression](/sql/relational-databases/data-compression/data-compression), [Database configuration settings](/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql), [Online index operations](/sql/relational-databases/indexes/perform-index-operations-online), [Partitioning](/sql/relational-databases/partitions/partitioned-tables-and-indexes), and [Temporal tables](/sql/relational-databases/tables/temporal-tables) ([see getting started guide](/sql/relational-databases/tables/getting-started-with-system-versioned-temporal-tables)).
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
## <a name="RDBMSHA"></a> RDBMS High Availability
-|Feature|Azure Arc-enabled SQL Managed Instance|
+|Feature|SQL Managed Instance enabled by Azure Arc|
|-|-|
|Always On failover cluster instance<sup>1</sup>| Not Applicable. Similar capabilities available.|
|Always On availability groups<sup>2</sup>|Business Critical service tier.|
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
|Backup compression|Yes|
|Backup mirror |Yes|
|Backup encryption|Yes|
-|Backup to Azure to (backup to URL)|Yes|
+|Back up to Azure (back up to URL)|Yes|
|Database snapshot|Yes|
|Fast recovery|Yes|
|Hot add memory and CPU|Yes|
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
## <a name="RDBMSSP"></a> RDBMS Scalability and Performance
-| Feature | Azure Arc-enabled SQL Managed Instance |
+| Feature | SQL Managed Instance enabled by Azure Arc |
|--|--|
| Columnstore | Yes |
| Large object binaries in clustered columnstore indexes | Yes |
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
## <a name="RDBMSS"></a> RDBMS Security
-| Feature | Azure Arc-enabled SQL Managed Instance |
+| Feature | SQL Managed Instance enabled by Azure Arc |
|--|--|
| Row-level security | Yes |
| Always Encrypted | Yes |
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
## <a name="RDBMSM"></a> RDBMS Manageability
-| Feature | Azure Arc-enabled SQL Managed Instance |
+| Feature | SQL Managed Instance enabled by Azure Arc |
|--|--|
-| Dedicated admin connection | Yes |
+| Dedicated administrator connection | Yes |
| PowerShell scripting support | Yes |
| Support for data-tier application component operations - extract, deploy, upgrade, delete | Yes |
| Policy automation (check on schedule and change) | Yes |
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
| Standard performance reports | Yes |
| Plan guides and plan freezing for plan guides | Yes |
| Direct query of indexed views (using NOEXPAND hint) | Yes |
-| Automatic indexed views maintenance | Yes |
+| Automatically maintain indexed views | Yes |
| Distributed partitioned views | Yes |
| Parallel indexed operations | Yes |
| Automatic use of indexed view by query optimizer | Yes |
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
### <a name="Programmability"></a> Programmability
-| Feature | Azure Arc-enabled SQL Managed Instance |
+| Feature | SQL Managed Instance enabled by Azure Arc |
|--|--|
| JSON | Yes |
| Query Store | Yes |
Azure Arc-enabled SQL Managed Instance share a common code base with the latest
### Tools
-Azure Arc-enabled SQL Managed Instance supports various data tools that can help you manage your data.
+SQL Managed Instance enabled by Azure Arc supports various data tools that can help you manage your data.
-| **Tool** | Azure Arc-enabled SQL Managed Instance|
+| **Tool** | SQL Managed Instance enabled by Azure Arc|
| -- | -- |
| Azure portal | Yes |
| Azure CLI | Yes |
Azure Arc-enabled SQL Managed Instance supports various data tools that can help
### <a name="Unsupported"></a> Unsupported Features & Services
-The following features and services are not available for Azure Arc-enabled SQL Managed Instance.
+The following features and services are not available for SQL Managed Instance enabled by Azure Arc.
| Area | Unsupported feature or service |
|--|--|
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
Title: SQL Managed Instance enabled by Azure Arc high availability-
-description: Learn how to deploy SQL Server Managed Instance enabled by Azure Arc with high availability.
--+
+description: Learn how to deploy SQL Managed Instance enabled by Azure Arc with high availability.
++ Last updated 07/30/2021
-# High Availability with SQL Server Managed Instance enabled by Azure Arc
+# High Availability with SQL Managed Instance enabled by Azure Arc
-SQL Managed Instance enabled by Azure Arc is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in health monitoring, failure detection, and failover mechanisms to maintain service health. For increased reliability, you can also configure SQL Managed Instance enabled by Azure Arc to deploy with extra replicas in a high availability configuration. Monitoring, failure detection, and automatic failover are managed by the Arc data services data controller. Arc-enabled data service provides this service is provided without user intervention. The service sets up the availability group, configures database mirroring endpoints, adds databases to the availability group, and coordinates failover and upgrade. This document explores both types of high availability.
+SQL Managed Instance enabled by Azure Arc is deployed on Kubernetes as a containerized application. It uses Kubernetes constructs such as stateful sets and persistent storage to provide built-in:
+
+- Health monitoring
+- Failure detection
+- Automatic failover to maintain service health
+
+For increased reliability, you can also configure SQL Managed Instance enabled by Azure Arc to deploy with extra replicas in a high availability configuration. The Arc data services data controller manages:
+
+- Monitoring
+- Failure detection
+- Automatic failover
+
+Arc-enabled data services provide this service without user intervention. The service:
+
+- Sets up the availability group
+- Configures database mirroring endpoints
+- Adds databases to the availability group
+- Coordinates failover and upgrade
+
+This document explores both types of high availability.
SQL Managed Instance enabled by Azure Arc provides different levels of high availability depending on whether the SQL managed instance was deployed as a *General Purpose* service tier or *Business Critical* service tier.

## High availability in General Purpose service tier
-In the General Purpose service tier, there is only one replica available, and the high availability is achieved via Kubernetes orchestration. For instance, if a pod or node containing the managed instance container image crashes, then Kubernetes will attempt to stand up another pod or node, and attach to the same persistent storage. During this time, the SQL managed instance is unavailable to the applications. Applications will need to reconnect and retry the transaction when the new pod is up. If `load balancer` is the service type used, then applications can reconnect to the same primary endpoint and Kubernetes will redirect the connection to the new primary. If the service type is `nodeport` then the applications will need to reconnect to the new IP address.
+In the General Purpose service tier, there's only one replica available, and high availability is achieved via Kubernetes orchestration. For instance, if a pod or node containing the managed instance container image crashes, Kubernetes attempts to stand up another pod or node, and attach to the same persistent storage. During this time, the SQL managed instance is unavailable to the applications. Applications need to reconnect and retry the transaction when the new pod is up. If `load balancer` is the service type used, then applications can reconnect to the same primary endpoint and Kubernetes will redirect the connection to the new primary. If the service type is `nodeport` then the applications will need to reconnect to the new IP address.
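A quick way to tell which service type you're dealing with is to inspect the instance's external service. A minimal sketch, assuming an instance named `sql1` in the `arc` namespace:

```console
# The TYPE column reports LoadBalancer or NodePort for the external endpoint
kubectl get service sql1-external-svc -n arc
```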
### Verify built-in high availability
-To verify the build-in high availability provided by Kubernetes, you can delete the pod of an existing managed instance and verify that Kubernetes recovers from this action by bootstrapping another pod and attaching the persistent storage.
+To verify the built-in high availability provided by Kubernetes, you can:
+
+1. Delete the pod of an existing managed instance
+1. Verify that Kubernetes recovers from this action
+
+During recovery, Kubernetes bootstraps another pod and attaches the persistent storage.
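As a concrete sketch of that check (the instance name `sql1` and namespace `arc` are assumptions; substitute your own):

```console
# Delete the managed instance pod, then watch Kubernetes schedule a replacement
kubectl delete pod sql1-0 -n arc
kubectl get pods -n arc --watch
```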
### Prerequisites

-- Kubernetes cluster must have [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration)
+- Kubernetes cluster requires [shared, remote storage](storage-configuration.md#factors-to-consider-when-choosing-your-storage-configuration)
- A SQL Managed Instance enabled by Azure Arc deployed with one replica (default)
To verify the built-in high availability provided by Kubernetes, you can delete
sql1-0 2/3 Running 0 22s
```
-After all containers within the pod have recovered, you can connect to the managed instance.
+After all containers within the pod recover, you can connect to the managed instance.
## High availability in Business Critical service tier
-In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, Azure SQL Managed Instance for Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology. It provides higher levels of availability. SQL Managed Instance enabled by Azure Arc deployed with *Business Critical* service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other. With contained availability groups, any pod crashes or node failures are transparent to the application as there is at least one other pod that has the instance that has all the data from the primary and is ready to take on connections.
+In the Business Critical service tier, in addition to what is natively provided by Kubernetes orchestration, SQL Managed Instance enabled by Azure Arc provides a contained availability group. The contained availability group is built on SQL Server Always On technology. It provides higher levels of availability. An instance in the *Business Critical* service tier can be deployed with either 2 or 3 replicas. These replicas are always kept in sync with each other.
+
+With contained availability groups, any pod crashes or node failures are transparent to the application. The contained availability group provides at least one other pod that has all the data from the primary and is ready to take on connections.
## Contained availability groups
SQL Managed Instance enabled by Azure Arc takes this concept of contained availa
Capabilities that contained availability groups enable:

-- When deployed with multiple replicas, a single availability group named with the same name as the Arc enabled SQL managed instance is created. By default, contained AG has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. Additional availability groups cannot be created in an instance.
+- When deployed with multiple replicas, a single availability group named with the same name as the Arc enabled SQL managed instance is created. By default, contained AG has three replicas, including primary. All CRUD operations for the availability group are managed internally, including creating the availability group or joining replicas to the availability group created. You can't create more availability groups in an instance.
- All databases are automatically added to the availability group, including all user and system databases like `master` and `msdb`. This capability provides a single-system view across the availability group replicas. Notice both `containedag_master` and `containedag_msdb` databases if you connect directly to the instance. The `containedag_*` databases represent the `master` and `msdb` inside the availability group.
- An external endpoint is automatically provisioned for connecting to databases within the availability group. This endpoint `<managed_instance_name>-external-svc` plays the role of the availability group listener (see the sketch after this list).
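To see the endpoint Kubernetes provisioned for the listener role, you can query the service directly. A sketch, assuming an instance named `sql1` in the `arc` namespace and a LoadBalancer service type:

```console
# Print the external IP assigned to the availability group listener endpoint
kubectl get service sql1-external-svc -n arc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```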
-### Deploy SQL Server Managed Instance enabled by Azure Arc with multiple replicas using Azure portal
+### Deploy SQL Managed Instance enabled by Azure Arc with multiple replicas using Azure portal
-From Azure portal, on the create Azure Arc-enabled SQL Managed Instance page:
+From the Azure portal, on the create SQL Managed Instance enabled by Azure Arc page:
1. Select **Configure Compute + Storage** under Compute + Storage. The portal shows advanced settings.
2. Under Service tier, select **Business Critical**.
3. Check **For development use only** if the instance is for development purposes.
From Azure portal, on the create Azure Arc-enabled SQL Managed Instance page:
### Deploy with multiple replicas using Azure CLI
-When a SQL Managed Instance enabled by Azure Arc is deployed in Business Critical service tier, this enables multiple replicas to be created. The setup and configuration of contained availability groups among those instances is automatically done during provisioning.
+When a SQL Managed Instance enabled by Azure Arc is deployed in the Business Critical service tier, the deployment can create multiple replicas. The setup and configuration of contained availability groups among those instances is automatically done during provisioning.
For instance, the following command creates a managed instance with 3 replicas.
Example:
az sql mi-arc create --name sqldemo --resource-group rg --location westus2 --subscription xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx --custom-location private-location --tier BusinessCritical --replicas 3
```
-By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance will be synchronously replicated to each of the secondary instances.
+By default, all the replicas are configured in synchronous mode. This means any updates on the primary instance are synchronously replicated to each of the secondary instances.
## View and monitor high availability status
Example output:
}
```
-You can connect to the above primary endpoint using SQL Server Management Studio and verify using DMVs as:
+You can connect to the primary endpoint with SQL Server Management Studio and verify the replica states by querying DMVs:
```sql
SELECT * FROM sys.dm_hadr_availability_replica_states
```
And the Contained Availability Dashboard:
Unlike SQL Server Always On availability groups, the contained availability group is a managed high availability solution. Hence, the failover modes are limited compared to the typical modes available with SQL Server Always On availability groups.
-Deploy Business Critical service tier SQL managed instances in either two-replica configuration or three replica configuration. The effects of failures and the subsequent recoverability is different with each configuration. A three replica instance provides a much higher level of availability and recovery, than a two replica instance.
+Deploy Business Critical service tier SQL managed instances in either a two-replica or a three-replica configuration. The effects of failures and the subsequent recoverability are different with each configuration. A three-replica instance provides a higher level of availability and recovery than a two-replica instance.
-In a two replica configuration, when both the node states are `SYNCHRONIZED`, if the primary replica becomes unavailable, the secondary replica is automatically promoted to primary. When the failed replica becomes available, it will be updated with all the pending changes. If there are connectivity issues between the replicas, then the primary replica may not commit any transactions as every transaction needs to be committed on both replicas before a success is returned back on the primary.
+In a two-replica configuration, when both the node states are `SYNCHRONIZED`, if the primary replica becomes unavailable, the secondary replica is automatically promoted to primary. When the failed replica becomes available, it is updated with all the pending changes. If there are connectivity issues between the replicas, then the primary replica may not commit any transactions, because every transaction needs to be committed on both replicas before a success is returned on the primary.
-In a three replica configuration, a transaction needs to commit in at least 2 of the 3 replicas before returning a success message back to the application. In the event of a failure, one of the secondaries is automatically promoted to primary while Kubernetes attempts to recover the failed replica. When the replica becomes available it is automatically joined back with the contained availability group and pending changes are synchronized. If there are connectivity issues between the replicas, and more than 2 replicas are out of sync, primary replica will not commit any transactions.
+In a three-replica configuration, a transaction needs to commit in at least 2 of the 3 replicas before returning a success message back to the application. In the event of a failure, one of the secondaries is automatically promoted to primary while Kubernetes attempts to recover the failed replica. When the replica becomes available, it is automatically joined back with the contained availability group and pending changes are synchronized. If there are connectivity issues between the replicas, and more than 2 replicas are out of sync, the primary replica won't commit any transactions.
> [!NOTE]
> To achieve near-zero data loss, deploy a Business Critical SQL Managed Instance in a three-replica configuration rather than a two-replica configuration.
Additional steps are required to restore a database into an availability group.
The query returns the pod that hosts the primary replica.
- Create the Kubernetes service to the primary instance by running the command below if your Kubernetes cluster uses nodePort services. Replace `podName` with the name of the server returned at previous step, `serviceName` with the preferred name for the Kubernetes service created.
Create the Kubernetes service to the primary instance by running the following command if your Kubernetes cluster uses `NodePort` services. Replace `<podName>` with the name of the server returned in the previous step, and `<serviceName>` with the preferred name for the Kubernetes service to create.
```console
kubectl -n <namespaceName> expose pod <podName> --port=1533 --name=<serviceName> --type=NodePort
```
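Once the service exists, a hedged sketch of finding the assigned node port and connecting (the node IP, login, and password are placeholders you supply; `sqlcmd` must be installed locally):

```console
# Look up the node port Kubernetes assigned to the service
kubectl -n <namespaceName> get service <serviceName>

# Connect to the replica through the node port
sqlcmd -S <nodeIpAddress>,<nodePort> -U <login> -P <password>
```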
azure-arc Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-overview.md
description: SQL Managed Instance enabled by Azure Arc Overview
--++ Last updated 07/19/2023
azure-arc Migrate To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-to-managed-instance.md
Title: Migrate a database from SQL Server to SQL Server Managed Instance enabled by Azure Arc
-description: Migrate database from SQL Server to SQL Server Managed Instance enabled by Azure Arc
+ Title: Migrate a database from SQL Server to SQL Managed Instance enabled by Azure Arc
+description: Migrate database from SQL Server to SQL Managed Instance enabled by Azure Arc
--++ Last updated 07/30/2021
-# Migrate: SQL Server to SQL Server Managed Instance enabled by Azure Arc
+# Migrate: SQL Server to SQL Managed Instance enabled by Azure Arc
This scenario walks you through the steps for migrating a database from a SQL Server instance to Azure SQL managed instance in Azure Arc via two different backup and restore methods.
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
description: This article explains the considerations for planning an Azure Arc-
--++ Last updated 07/19/2023
azure-arc Pod Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/pod-scheduling.md
description: Describes how pods are scheduled for Azure Arc-enabled data service
--++ Last updated 07/07/2023
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
Title: Restore a database in SQL Managed Instance enabled by Azure Arc to a previous point-in-time
description: Explains how to restore a database to a specific point-in-time on SQL Managed Instance enabled by Azure Arc.
--++
azure-arc Privacy Data Collection And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/privacy-data-collection-and-reporting.md
Title: Data collection and reporting | Azure Arc-enabled data services
description: Explains the type of data that is transmitted by Azure Arc-enabled Data services to Microsoft.
--++
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This preview release was published on July 13, 2021.
- Kubernetes native deployment templates have been modified for data controller, bootstrapper, & SQL Managed Instance. Update your .yaml templates. [Sample yaml files](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml)
-#### New Azure CLI extension for data controller and SQL Server Managed Instance enabled by Azure Arc
+#### New Azure CLI extension for data controller and SQL Managed Instance enabled by Azure Arc
This release introduces the `arcdata` extension to the Azure CLI. To add the extension, run the following command:
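```console
# Adds the arcdata extension to the Azure CLI
az extension add --name arcdata
```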
azure-arc Reprovision Replica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reprovision-replica.md
Title: Reprovision replica
-description: This article explains how to rebuild a broken SQL Server Managed Instance enabled by Azure Arc replica. A replica may break due to storage corruption, for example.
+description: This article explains how to rebuild a broken SQL Managed Instance enabled by Azure Arc replica. A replica may break due to storage corruption, for example.
Last updated 10/05/2022
-# Reprovision replica - SQL Server Managed Instance enabled by Azure Arc
+# Reprovision replica - SQL Managed Instance enabled by Azure Arc
-This article describes how to provision a new replica to replace an existing replica in SQL Server Managed Instance enabled by Azure Arc.
+This article describes how to provision a new replica to replace an existing replica in SQL Managed Instance enabled by Azure Arc.
-When you reprovision a replica, you rebuild a new managed instance replica for a SQL Server Managed Instance enabled by Azure Arc deployment. Use this task to replace a replica that is failing to synchronize, for example, due to corruption of the data on the persistent volumes (PV) for that instance, or due to some recurring SQL issue.
+When you reprovision a replica, you rebuild a new managed instance replica for a SQL Managed Instance enabled by Azure Arc deployment. Use this task to replace a replica that is failing to synchronize, for example, due to corruption of the data on the persistent volumes (PV) for that instance, or due to some recurring SQL issue.
You can reprovision a replica [via `az` CLI](#via-az-cli) or [via `kubectl`](#via-kubectl). You can't reprovision a replica from the Azure portal.
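As a sketch of the CLI path (the instance name `sql1`, the replica ordinal `2`, and the namespace `arc` are assumptions for illustration):

```console
# Rebuild the third replica of the instance in place
az sql mi-arc reprovision-replica -n sql1-2 -k arc --use-k8s
```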
azure-arc Reserved Capacity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/reserved-capacity-overview.md
Title: Save costs with reserved capacity
-description: Learn how to buy SQL Server Managed Instance enabled by Azure Arc reserved capacity to save costs.
+description: Learn how to buy SQL Managed Instance enabled by Azure Arc reserved capacity to save costs.
Last updated 10/27/2021
-# Reserved capacity - SQL Server Managed Instance enabled by Azure Arc
+# Reserved capacity - SQL Managed Instance enabled by Azure Arc
Save money with SQL Managed Instance enabled by Azure Arc by committing to a reservation for Azure Arc services instead of paying pay-as-you-go prices. With reserved capacity, you make a commitment for SQL Managed Instance enabled by Azure Arc use for one or three years to get a significant discount on the service fee. To purchase reserved capacity, you need to specify the Azure region, deployment type, performance tier, and term.
azure-arc Resize Persistent Volume Claim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resize-persistent-volume-claim.md
description: Explains how to resize a persistent volume claim for a volume used
--++ Last updated 07/19/2023
azure-arc Resource Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/resource-sync.md
description: Synchronize resources for Azure Arc-enabled data services in direct
--++ Last updated 07/14/2022
azure-arc Rotate Customer Managed Keytab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-customer-managed-keytab.md
Last updated 05/05/2023
-# Rotate SQL Server Managed Instance enabled by Azure Arc customer-managed keytab
+# Rotate SQL Managed Instance enabled by Azure Arc customer-managed keytab
This article describes how to rotate customer-managed keytabs for SQL Managed Instance enabled by Azure Arc. These keytabs are used to enable Active Directory logins for the managed instance.
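Rotation starts from a new keytab staged in a Kubernetes secret that the instance can reference. A minimal sketch under assumptions (the secret name, file name, and the `keytab` data key are illustrative, not the article's exact values):

```console
# Store the regenerated keytab in a new secret in the instance's namespace
kubectl create secret generic sql1-keytab-v2 --from-file=keytab=./mssql-updated.keytab -n arc
```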
azure-arc Rotate Sql Managed Instance Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-sql-managed-instance-credentials.md
Last updated 03/06/2023
-# Rotate SQL Server Managed Instance enabled by Azure Arc service-managed credentials (preview)
+# Rotate SQL Managed Instance enabled by Azure Arc service-managed credentials (preview)
This article describes how to rotate service-managed credentials for SQL Managed Instance enabled by Azure Arc. Arc data services generate various service-managed credentials like certificates and SQL logins used for Monitoring, Backup/Restore, High Availability etc. These credentials are considered custom resource credentials managed by Azure Arc data services.
azure-arc Rotate User Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/rotate-user-tls-certificate.md
Title: Rotate user-provided TLS certificate in indirectly connected SQL Server Managed Instance enabled by Azure Arc
-description: Rotate user-provided TLS certificate in indirectly connected SQL Server Managed Instance enabled by Azure Arc
+ Title: Rotate user-provided TLS certificate in indirectly connected SQL Managed Instance enabled by Azure Arc
+description: Rotate user-provided TLS certificate in indirectly connected SQL Managed Instance enabled by Azure Arc
Last updated 12/15/2021
-# Rotate certificate SQL Server Managed Instance enabled by Azure Arc (indirectly connected)
+# Rotate certificate SQL Managed Instance enabled by Azure Arc (indirectly connected)
This article describes how to rotate a user-provided Transport Layer Security (TLS) certificate for SQL Managed Instance enabled by Azure Arc in indirectly connected mode by using Azure CLI or `kubectl` commands.
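As a hedged sketch of staging the replacement certificate as a Kubernetes secret before rotating (the secret name, file names, and data keys are assumptions; check the article for the exact key names the instance expects):

```console
# Package the new certificate and private key into a secret in the instance's namespace
kubectl create secret generic sql1-certificate-secret \
  --from-file=certificate.pem=./cert.pem \
  --from-file=privatekey.pem=./key.pem \
  -n arc
```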
azure-arc Service Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/service-tiers.md
--++ Last updated 07/19/2023
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-guide.md
description: Introduction to troubleshooting resources
--++ Last updated 07/07/2022
azure-arc Troubleshoot Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance.md
Title: Troubleshoot connection to failover group - SQL Server Managed Instance enabled by Azure Arc
+ Title: Troubleshoot connection to failover group - SQL Managed Instance enabled by Azure Arc
description: Describes how to troubleshoot issues with connections to failover group resources in Azure Arc-enabled data services
Last updated 03/15/2023
-# Troubleshoot SQL Server Managed Instance enabled by Azure Arc deployments
+# Troubleshoot SQL Managed Instance enabled by Azure Arc deployments
This article identifies potential issues, and describes how to diagnose root causes for these issues for deployments of Azure Arc-enabled data services.
-## Connection to SQL Server Managed Instance enabled by Azure Arc failover group
+## Connection to SQL Managed Instance enabled by Azure Arc failover group
This section describes how to troubleshoot issues connecting to a failover group.
On each side, there are two replicas for one failover group. Check the value of
If one of `connectedState` isn't equal to `CONNECTED`, see the instructions under [Check parameters](#check-parameters).
-If one of `synchronizationState` isn't equal to `HEALTHY`, focus on the instance which `synchronizationState` isn't equal to `HEALTHY`". Refer to [Can't connect to SQL Server Managed Instance enabled by Azure Arc](#cant-connect-to-sql-server-managed-instance-enabled-by-azure-arc).
+If one of `synchronizationState` isn't equal to `HEALTHY`, focus on the instance whose `synchronizationState` isn't equal to `HEALTHY`. Refer to [Can't connect to SQL Managed Instance enabled by Azure Arc](#cant-connect-to-sql-managed-instance-enabled-by-azure-arc).
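One way to read those fields is straight from the failover group custom resource. A sketch, assuming a failover group named `fog1` in namespace `arc` and that the data controller registers the `fog` short name:

```console
# Print the status block, which includes connectedState and synchronizationState for each side
kubectl get fog fog1 -n arc -o jsonpath='{.status}'
```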
### Check parameters
kubectl exec -ti -n $nameSpace $sqlmiName-0 -c arc-sqlmi -- /opt/mssql-tools/bin
If SQL Server can use the external TDS endpoint, there's a good chance it can reach the external mirroring endpoint, because both are defined and activated in the same service, specifically `$sqlmiName-external-svc`.
-## Can't connect to SQL Server Managed Instance enabled by Azure Arc
+## Can't connect to SQL Managed Instance enabled by Azure Arc
This section identifies specific steps you can take to troubleshoot connections to SQL Managed Instance enabled by Azure Arc.
azure-arc Uninstall Azure Arc Data Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/uninstall-azure-arc-data-controller.md
--++ Last updated 07/28/2022
azure-arc Update Service Principal Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/update-service-principal-credentials.md
description: Update credential for a service principal
--++ Last updated 07/30/2021
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
--++ Last updated 07/07/2022
azure-arc Upgrade Data Controller Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-portal.md
description: Article describes how to upgrade a directly connected Azure Arc dat
--++ Last updated 07/07/2022
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
--++ Last updated 07/07/2022
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
description: Article describes how to upgrade an indirectly connected data contr
--++ Last updated 07/07/2022
azure-arc Upgrade Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-overview.md
description: Explains how to upgrade Azure Arc-enabled data controller, and othe
--++ Last updated 08/15/2022
azure-arc Upgrade Sql Managed Instance Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-auto.md
description: Article describes how to enable automatic upgrades for Azure SQL Ma
--++ Last updated 05/27/2022
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
--++ Last updated 10/11/2022
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
--++ Last updated 10/11/2022
azure-arc Upgrade Sql Managed Instance Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-portal.md
--++ Last updated 10/11/2022
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
--++ Last updated 10/11/2022
azure-arc Upload Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-logs.md
--++ Last updated 05/27/2022
azure-arc Upload Usage Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-usage-data.md
description: Upload usage Azure Arc-enabled data services data to Azure
--++ Last updated 05/27/2022
azure-cache-for-redis Cache How To Manage Redis Cache Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-manage-redis-cache-powershell.md
The following table contains Azure PowerShell properties and descriptions for co
| maxmemory-reserved |Configures the [memory reserved](cache-configure.md#memory-policies) for non-cache processes |Standard and Premium |
| maxmemory-policy |Configures the [eviction policy](cache-configure.md#memory-policies) for the cache |All pricing tiers |
| notify-keyspace-events |Configures [keyspace notifications](cache-configure.md#keyspace-notifications-advanced-settings) |Standard and Premium |
-| hash-max-ziplist-entries |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
-| hash-max-ziplist-value |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
-| set-max-intset-entries |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
-| zset-max-ziplist-entries |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
-| zset-max-ziplist-value |Configures [memory optimization](https://redis.io/topics/memory-optimization) for small aggregate data types |Standard and Premium |
+| hash-max-ziplist-entries |Configures [memory optimization](https://redis.io/docs/management/optimization/memory-optimization/) for small aggregate data types |Standard and Premium |
+| hash-max-ziplist-value |Configures [memory optimization](https://redis.io/docs/management/optimization/memory-optimization/) for small aggregate data types |Standard and Premium |
+| set-max-intset-entries |Configures [memory optimization](https://redis.io/docs/management/optimization/memory-optimization/) for small aggregate data types |Standard and Premium |
+| zset-max-ziplist-entries |Configures [memory optimization](https://redis.io/docs/management/optimization/memory-optimization/) for small aggregate data types |Standard and Premium |
+| zset-max-ziplist-value |Configures [memory optimization](https://redis.io/docs/management/optimization/memory-optimization/) for small aggregate data types |Standard and Premium |
| databases |Configures the number of databases. This property can be configured only at cache creation. |Standard and Premium |

## To create an Azure Cache for Redis
azure-fluid-relay Use Audience In Fluid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/use-audience-in-fluid.md
In this tutorial, you'll learn about using the Fluid Framework [Audience](https://fluidframework.com/docs/build/audience/) with [React](https://reactjs.org/) to create a visual demonstration of users connecting to a container. The audience object holds information related to all users connected to the container. In this example, the Azure Client library will be used to create the container and audience.
-To jump ahead into the finished demo, check out the [Audience demo in our FluidExamples repo](https://github.com/microsoft/FluidExamples/tree/main/audience-demo).
-
The following image shows ID buttons and a container ID input field. Leaving the container ID field blank and clicking a user ID button will create a new container and join as the selected user. Alternatively, the end-user can input a container ID and choose a user ID to join an existing container as the selected user.

:::image type="content" source="../images/container-select-user.png" alt-text="A screenshot of a browser with buttons for selecting a user.":::
The next image shows multiple users connected to a container represented by boxe
### Set up state variables and component view
-1. Open the file `\src\App.js` in the code editor. Delete all the default `import` statements. Then delete all the markup from the `return` statement. Then add import statements for components and React hooks. Note that we will be implementing the imported **AudienceDisplay** and **UserIdSelection** components in the later steps. The file should look like the following:
+1. Open the file `\src\App.js` in the code editor. Delete all the default `import` statements. Then delete all the markup from the `return` statement. Then add import statements for components and React hooks. Note that we'll be implementing the imported **AudienceDisplay** and **UserIdSelection** components in the later steps. The file should look like the following:
```js
import { useState, useCallback } from "react";
```
The next image shows multiple users connected to a container represented by boxe
You can use a helper function to get the Fluid data, from the Audience object, into the view layer (the React state). The `tryGetAudienceObject` method is called when the view component loads after a user ID is selected. The returned value is assigned to a React state property.
-1. Replace `TODO 1` with the following code. Note that the values for `userId` `userName` `containerId` will be passed in from the **App** component. If there is no `containerId`, a new container is created. Also, note that the `containerId` is stored on the URL hash. A user entering a session from a new browser may copy the URL from an existing session browser or navigate to `localhost:3000` and manually input the container ID. With this implementation, we want to wrap the `getContainer` call in a try catch in the case that the user inputs a container ID which does not exist. Visit the [React demo](https://fluidframework.com/docs/recipes/react/) and [Containers](../concepts/architecture.md#container) documentation for more information.
+1. Replace `TODO 1` with the following code. Note that the values for `userId`, `userName`, and `containerId` will be passed in from the **App** component. If there's no `containerId`, a new container is created. Also, note that the `containerId` is stored on the URL hash. A user entering a session from a new browser may copy the URL from an existing session browser or navigate to `localhost:3000` and manually input the container ID. With this implementation, we want to wrap the `getContainer` call in a try catch in case the user inputs a container ID that doesn't exist. Visit the [Containers](../concepts/architecture.md#container) documentation for more information.
```js
const userConfig = {
```
Now that we've defined how to get the Fluid audience, we need to tell React to c
const [currentMember, setCurrentMember] = useState();
```
-1. Replace `TODO 3` with the following code. This will call the `tryGetAudienceObject` when the component is mounted and set the returned audience members to `fluidMembers` and `currentMember`. Note, we check if an audience object is returned in case a user inputs a containerId which does not exist and we need to return them to the **UserIdSelection** view (`props.onContainerNotFound()` will handle switching the view). Also, it is good practice to deregister event handlers when the React component dismounts by returning `audience.off`.
+1. Replace `TODO 3` with the following code. This will call the `tryGetAudienceObject` when the component is mounted and set the returned audience members to `fluidMembers` and `currentMember`. Note that we check if an audience object is returned in case a user inputs a containerId which doesn't exist and we need to return them to the **UserIdSelection** view (`props.onContainerNotFound()` will handle switching the view). Also, it's good practice to deregister event handlers when the React component unmounts by returning `audience.off`.
```js
useEffect(() => {
```
azure-monitor Agent Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-data-sources.md
Data source configurations are delivered to agents that are directly connected t
For System Center Operations Manager agents in a connected management group, data source configurations are translated into management packs and delivered to the management group every 5 minutes by default. The agent downloads the management pack like any other and collects the specified data. Depending on the data source, the data will either be sent to a management server, which forwards the data to Azure Monitor, or the agent will send the data to Azure Monitor without going through the management server.
-For more information, see [Data collection details for monitoring solutions in Azure](../monitor-reference.md). You can read about details of connecting Operations Manager and Azure Monitor and modifying the frequency that configuration is delivered at [Configure integration with System Center Operations Manager](./om-agents.md).
+For more information, see [Data collection in Azure Monitor](../essentials/data-collection.md). You can read about details of connecting Operations Manager and Azure Monitor and modifying the frequency that configuration is delivered at [Configure integration with System Center Operations Manager](./om-agents.md).
If the agent is unable to connect to Azure Monitor or Operations Manager, it will continue to collect data that it will deliver when it establishes a connection. Data can be lost if the amount of data reaches the maximum cache size for the client, or if the agent can't establish a connection within 24 hours.
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| December 2023 |**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
+| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, so 1.29.5 won't install on Arc-enabled servers. A fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change that enabled multiple IIS subscriptions to use the same filter. The feature will be redeployed once the memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fixed error messages intended for mdsd.err being logged to mdsd.warn instead (1.29.4 only). Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Fixed syslog timestamp parsing where an incorrect timezone offset might be applied</li></ul> | 1.23.0 | 1.29.5 |
+| December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, so 1.29.4 won't install on Arc-enabled servers. A fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions cause a memory leak. The feature was reverted in 1.23.0.</li></ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog<ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul></li><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000</li><li>Reliability and performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ui>|1.19.0| None |
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
To learn how to design and deploy a Windows Server 2016 network load balancing c
To learn how to design and deploy an Azure Load Balancer, see [What is Azure Load Balancer?](../../load-balancer/load-balancer-overview.md). To deploy a basic load balancer, follow the steps outlined in this [quickstart](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) excluding the steps outlined in the section **Create back-end servers**.

> [!NOTE]
-> Configuring the Azure Load Balancer using the **Basic SKU**, requires that Azure virtual machines belong to an Availability Set. To learn more about availability sets, see [Manage the availability of Windows virtual machines in Azure](../../virtual-machines/availability.md). To add existing virtual machines to an availability set, refer to [Set Azure Resource Manager VM Availability Set](https://gallery.technet.microsoft.com/Set-Azure-Resource-Manager-f7509ec4).
+> Configuring the Azure Load Balancer using the **Basic SKU**, requires that Azure virtual machines belong to an Availability Set. To learn more about availability sets, see [Manage the availability of Windows virtual machines in Azure](../../virtual-machines/availability.md). To add existing virtual machines to an availability set, refer to [Set Azure Resource Manager VM Availability Set](/troubleshoot/azure/virtual-machines/allocation-failure#resize-a-vm-or-add-vms-to-an-existing-availability-set).
> After the load balancer is created, a backend pool needs to be created, which distributes traffic to one or more gateway servers. Follow the steps described in the quickstart article section [Create resources for the load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
This section explains how to install the Log Analytics agent on different types
## Data collected
-The following table lists the types of data you can configure a Log Analytics workspace to collect from all connected agents. For a list of insights and solutions that use the Log Analytics agent to collect other kinds of data, see [What is monitored by Azure Monitor?](../monitor-reference.md).
+The following table lists the types of data you can configure a Log Analytics workspace to collect from all connected agents.
| Data Source | Description |
|:--|:--|
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
### [Python](#tab/python)

-- Python Application using Python 3.8+
+- Python Application using Python 3.7+
azure-monitor Opentelemetry Python Opencensus Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md
page to onboard onto the Azure Monitor OpenTelemetry Distro.
The following changes and limitations may be encountered when migrating from OpenCensus to OpenTelemetry.
-### Python < 3.8 support
+### Python < 3.7 support
-OpenTelemetry's Python-based monitoring solutions only support Python 3.8 and greater, excluding the previously supported Python versions 2.7, 3.4, 3.5, 3.6, and 3.7 from OpenCensus. We suggest upgrading for users who are on the older versions of Python since, as of writing this document, those versions have already reached [end of life](https://devguide.python.org/versions/). Users who are adamant about not upgrading may still use the OpenTelemetry solutions, but may find unexpected or breaking behavior that is unsupported. In any case, the last supported version of [opencensus-ext-azure](https://pypi.org/project/opencensus-ext-azure/) always exists, and stills work for those versions, but no new releases are made for that project.
+OpenTelemetry's Python-based monitoring solutions only support Python 3.7 and greater, excluding the previously supported Python versions 2.7, 3.4, 3.5, and 3.6 from OpenCensus. We suggest upgrading for users who are on older versions of Python since, as of writing this document, those versions have already reached [end of life](https://devguide.python.org/versions/). Users who are adamant about not upgrading may still use the OpenTelemetry solutions, but may find unexpected or breaking behavior that is unsupported. In any case, the last supported version of [opencensus-ext-azure](https://pypi.org/project/opencensus-ext-azure/) always exists and still works for those versions, but no new releases are made for that project.
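To move to the OpenTelemetry path on a supported interpreter, the distro installs from PyPI. A minimal sketch:

```console
# Confirm the interpreter meets the version floor, then install the Azure Monitor OpenTelemetry Distro
python --version
pip install azure-monitor-opentelemetry
```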
### Configurations
azure-monitor Best Practices Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-data-collection.md
Insights provide a specialized monitoring experience for a particular service. T
There's no cost for insights, but you might be charged for any data they collect.
-See [What is monitored by Azure Monitor?](monitor-reference.md) for a list of available insights and solutions in Azure Monitor. See the documentation for each for any unique configuration or pricing information.
+See [Azure Monitor Insights overview](insights/insights-overview.md) for a list of available insights and solutions in Azure Monitor. See the documentation for each for any unique configuration or pricing information.
> [!IMPORTANT]
> The following insights are much more complex than others and have more guidance for their configuration:
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
### Use advanced configuration settings

az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.resources.daemonset.limits.cpu=150m amalogs.resources.daemonset.limits.memory=600Mi amalogs.resources.deployment.limits.cpu=1 amalogs.resources.deployment.limits.memory=750Mi
-### On Azure Stack Edge
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=/home/data/docker
+### With custom mount path for container stdout & stderr logs
+### A custom mount path isn't required for Azure Stack Edge version > 2318. The custom mount path must be /home/data/docker for Azure Stack Edge clusters with version <= 2318
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.logsettings.custommountpath=<customMountPath>
```
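One way to sanity-check the extension afterward (the `ama-logs` pod naming is an assumption that tracks current agent releases; older releases used `omsagent`):

```console
# List the agent pods the extension deploys
kubectl get pods -n kube-system | grep ama-logs
```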
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
Since a usage export has both the number of units of usage and their cost, you c
You can also see these data benefits in the Log Analytics Usage and estimated costs page. If the workspace is receiving these benefits, there will be a sentence below the cost estimate table that gives the data volume of the benefits used over the last 31 days.

+
+
### Query benefits from the Operation table

The [Operation](/azure/azure-monitor/reference/tables/operation) table contains daily events which give the amount of benefit used from the [Defender for Servers data allowance](logs/cost-logs.md#workspaces-with-microsoft-defender-for-cloud) and the [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5, and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/). The `Detail` column for these events always has the format `Benefit amount used 1.234 GB`, and the type of benefit is in the `OperationKey` column. Here is a query that charts the benefits used in the last 31 days:
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
Some of these data sources use the [new data ingestion pipeline](essentials/data
## Application tiers
-Sources of monitoring data from Azure applications can be organized into tiers, the highest tiers being your application itself and the lower tiers being components of Azure platform. The method of accessing data from each tier varies. The application tiers are summarized in the table below, and the sources of monitoring data in each tier are presented in the following sections. See [Monitoring data locations in Azure](monitor-reference.md) for a description of each data location and how you can access its data.
+Sources of monitoring data from Azure applications can be organized into tiers, the highest tiers being your application itself and the lower tiers being components of Azure platform. The method of accessing data from each tier varies. The application tiers are summarized in the table below, and the sources of monitoring data in each tier are presented in the following sections.
:::image type="content" source="media/overview/overview-simple-20230707-opt.svg" alt-text="Diagram that shows an overview of Azure Monitor with data sources on the left sending data to a central data platform and features of Azure Monitor on the right that use the collected data." border="false" lightbox="media/overview/overview-blowout-20230707-opt.svg":::
When you enable Application Insights for an application by installing an instrum
| | Debug snapshot data that is captured for a subset of exceptions is stored in Azure Storage. Use Application Insights in the Azure portal to download for local analysis. | [How snapshots work](app/snapshot-debugger.md#how-snapshots-work) |

## Insights
-[Insights](monitor-reference.md) collect data to provide additional insights into the operation of a particular service or application. They may address resources in different application tiers and even multiple tiers.
+[Insights](insights/insights-overview.md) collect data to provide additional insights into the operation of a particular service or application. They may address resources in different application tiers and even multiple tiers.
### Container insights
Other services in Azure write data to the Azure Monitor data platform. This allo
## Next steps

- Learn more about the [types of monitoring data collected by Azure Monitor](data-platform.md) and how to view and analyze this data.
-- List the [different locations where Azure resources store data](monitor-reference.md) and how you can access it.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
# Azure Monitor Metrics overview
-Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from [monitored resources](../monitor-reference.md) into a time-series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
+Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from monitored resources into a time-series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time.
> [!NOTE]
> Azure Monitor Metrics is one half of the data platform that supports Azure Monitor. The other half is [Azure Monitor Logs](../logs/data-platform-logs.md), which collects and organizes log and performance data. You can analyze that data by using a rich query language.
Azure Monitor collects metrics from the following sources. After these metrics a
- **Custom metrics**: You can define metrics in addition to the standard metrics that are automatically available. You can [define custom metrics in your application](../app/api-custom-events-metrics.md) that's monitored by Application Insights. You can also create custom metrics for an Azure service by using the [custom metrics API](./metrics-store-custom-rest-api.md).
- **Kubernetes clusters**: Kubernetes clusters typically send metric data to a local Prometheus server that you must maintain. [Azure Monitor managed service for Prometheus](prometheus-metrics-overview.md) provides a managed service that collects metrics from Kubernetes clusters and stores them in Azure Monitor Metrics.
-For a complete list of data sources that can send data to Azure Monitor Metrics, see [What is monitored by Azure Monitor?](../monitor-reference.md).
+## REST API
-## REST API
Azure Monitor provides REST APIs that allow you to get data in and out of Azure Monitor Metrics.

- **Custom metrics API** - [Custom metrics](./metrics-custom-overview.md) allow you to load your own metrics into the Azure Monitor Metrics database. Those metrics can then be used by the same analysis tools that process Azure Monitor platform metrics.
- **Azure Monitor Metrics REST API** - Allows you to access Azure Monitor platform metrics definitions and values. For more information, see [Azure Monitor REST API](/rest/api/monitor/metrics/list). For information on how to use the API, see the [Azure monitoring REST API walkthrough](./rest-api-walkthrough.md).
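For a feel of the read path, the Azure CLI wraps the same metrics API. A sketch, with the resource ID and metric name as placeholders:

```console
# Retrieve recent values of a platform metric through the Azure Monitor metrics API
az monitor metrics list --resource <resource-id> --metric "Percentage CPU" --interval PT5M
```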
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
# Diagnostic settings in Azure Monitor
-This article provides details on creating and configuring diagnostic settings to send Azure platform metrics and logs to different destinations.
-
-[Platform metrics](./metrics-supported.md) are sent automatically to [Azure Monitor Metrics](./data-platform-metrics.md) by default and without configuration.
-
-[Platform logs](./platform-logs-overview.md) provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on.
-- **Resource logs** aren't collected until they're routed to a destination.
-- **Activity logs** exist on their own but can be routed to other locations.
+This article provides details on creating and configuring diagnostic settings to send Azure platform metrics, resource logs, and the activity log to different destinations.
Each Azure resource requires its own diagnostic setting, which defines the following criteria:
Information on these newer features is included in this article.
There are three sources for diagnostic information: -- Metrics-- Resource logs-- Activity logs
+- **[Platform metrics](./metrics-supported.md)** are sent automatically to [Azure Monitor Metrics](./data-platform-metrics.md) by default and without configuration.
+- **[Platform logs](./platform-logs-overview.md)** provide detailed diagnostic and auditing information for Azure resources and the Azure platform they depend on.
+ - **Resource logs** aren't collected until they're routed to a destination.
+ - The **Activity log** provides information about resources from outside the resource, such as when the resource was created or deleted. Entries exist on their own but can be routed to other locations.
### Metrics
The **AllMetrics** setting routes a resource's platform metrics to other destina
### Resource logs
-With logs, you can select the log categories you want to route individually or choose a category group.
+With resource logs, you can select the log categories you want to route individually or choose a category group.
+**Category groups**
> [!NOTE] > Category groups don't apply to all metric resource providers. If a provider doesn't have them available in the diagnostic settings in the Azure portal, then they also won't be available via Azure Resource Manager templates.
-You can use *category groups* to dynamically collect resource logs based on predefined groupings instead of selecting individual log categories. Microsoft defines the groupings to help monitor specific use cases across all Azure services.
-
-Over time, the categories in the group might be updated as new logs are rolled out or as assessments change. When log categories are added or removed from a category group, your log collection is modified automatically without you having to update your diagnostic settings.
+You can use *category groups* to dynamically collect resource logs based on predefined groupings instead of selecting individual log categories. Microsoft defines the groupings to help monitor specific use cases across all Azure services. Over time, the categories in the group might be updated as new logs are rolled out or as assessments change. When log categories are added or removed from a category group, your log collection is modified automatically without you having to update your diagnostic settings.
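As an illustration, a diagnostic setting that collects the `allLogs` category group can be created with the Azure CLI. A sketch, assuming a CLI version that accepts `categoryGroup` in the `--logs` payload; all resource IDs are placeholders:

```azurecli
# Sketch: route all resource logs (via the allLogs category group)
# to a Log Analytics workspace. The IDs below are placeholders.
az monitor diagnostic-settings create \
  --name "send-all-logs" \
  --resource "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vaultName>" \
  --workspace "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>" \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```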
When you use category groups, you:
To ensure the security of data in transit, all destination endpoints are configu
The activity log uses a diagnostic setting but has its own user interface because it applies to the whole subscription rather than individual resources. The destination information listed here still applies. For more information, see [Azure activity log](activity-log.md). + ## Requirements and limitations This section discusses requirements and limitations.
The following table provides unique requirements for each destination including
| Destination | Requirements | |:|:| | Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | Don't use an existing storage account that has other, nonmonitoring data stored in it so that you can better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To send the data to immutable storage, set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](../../storage/common/storage-account-overview.md#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) (locally redundant storage) storage accounts aren't supported as a log or metric destination.|
+| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it. Splitting the types of data up allows you to better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To prevent modification of the data, send it to immutable storage. Set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blob writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](../../storage/common/storage-account-overview.md#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) (locally redundant storage) storage accounts aren't supported as a log or metric destination.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
To learn how to create a diagnostic setting, see [Collect and analyze resource l
## Insights
-The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
+The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../insights/insights-overview.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features.
For a list of insights that are available and links to their documentation, see [Insights](../insights/insights-overview.md) and [core solutions](/previous-versions/azure/azure-monitor/insights/solutions). <!-- convertborder later -->
azure-monitor Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md
This section provides answers to common questions.
### How do I enable Azure Monitor?
-Azure Monitor is enabled the moment that you create a new Azure subscription, and [activity log](./essentials/platform-logs-overview.md) and platform [metrics](essentials/data-platform-metrics.md) are automatically collected. Create [diagnostic settings](essentials/diagnostic-settings.md) to collect more detailed information about the operation of your Azure resources, and add [monitoring solutions](/previous-versions/azure/azure-monitor/insights/solutions) and [insights](./monitor-reference.md) to provide extra analysis on collected data for particular services.
+Azure Monitor is enabled the moment that you create a new Azure subscription, and [activity log](./essentials/platform-logs-overview.md) and platform [metrics](essentials/data-platform-metrics.md) are automatically collected. Create [diagnostic settings](essentials/diagnostic-settings.md) to collect more detailed information about the operation of your Azure resources, and add monitoring solutions to provide extra analysis on collected data for particular services.
### How do I access Azure Monitor?
azure-monitor Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/insights-overview.md
The following table lists the available curated visualizations and information a
## Next steps - Reference some of the insights listed above to review their functionality-- Understand [what Azure Monitor can monitor](../monitor-reference.md)
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
Title: Azure Monitor Logs
-description: Learn the basics of Azure Monitor Logs, which is used for advanced analysis of monitoring data.
+description: Learn the basics of Azure Monitor Logs, which are used for advanced analysis of monitoring data.
Last updated 09/14/2023 # Azure Monitor Logs overview
-Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from [monitored resources](../monitor-reference.md). Several features of Azure Monitor store their data in Logs and present this data in various ways to assist you in monitoring the performance and availability of your cloud and hybrid applications and their supporting components.
+Azure Monitor Logs is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources. Several features of Azure Monitor store their data in Logs and present this data in various ways to assist you in monitoring the performance and availability of your cloud and hybrid applications and their supporting components.
Along with using existing Azure Monitor features, you can analyze Logs data by using a sophisticated query language that's capable of quickly analyzing millions of records. You might perform a simple query that retrieves a specific set of records or perform sophisticated data analysis to identify critical patterns in your monitoring data. Work with log queries and their results interactively by using Log Analytics, use them in alert rules to be proactively notified of issues, or visualize their results in a workbook or dashboard.
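The same query language can also be run from the command line. A quick sketch, assuming the `log-analytics` CLI extension is installed and that the workspace collects the `Heartbeat` table:

```azurecli
# Sketch: run a Kusto query against a workspace (the GUID is a placeholder).
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Heartbeat | summarize count() by Computer | top 5 by count_"
```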
This configuration will be different depending on the data source. For example:
> [!IMPORTANT] > Most data collection in Logs will incur ingestion and retention costs. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you enable any data collection.
-For a complete list of data sources that you can configure to send data to Azure Monitor Logs, see [What is monitored by Azure Monitor?](../monitor-reference.md).
- ## Log Analytics workspaces Azure Monitor Logs stores the data that it collects in one or more [Log Analytics workspaces](./workspace-design.md). You must create at least one workspace to use Azure Monitor Logs. For a description of Log Analytics workspaces, see [Log Analytics workspace overview](log-analytics-workspace-overview.md).
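Creating a workspace is a one-line operation in the Azure CLI; for example (the resource group, name, and location are placeholders):

```azurecli
# Sketch: create a Log Analytics workspace to receive log data.
az monitor log-analytics workspace create \
  --resource-group <rg> \
  --workspace-name <workspaceName> \
  --location eastus
```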
The experience of using Log Analytics to work with Azure Monitor queries in the
[Security monitoring](../best-practices-plan.md#security-monitoring) in Azure is performed by [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md).
-These services store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor. For more information, see [Product integrations](../monitor-reference.md#product-integrations) and [Other services](../data-sources.md#other-services).
+These services store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor.
### Learn more
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
This section provides answers to common questions.
### Does my agent traffic use my Azure ExpressRoute connection? Traffic to Azure Monitor uses the Microsoft peering ExpressRoute circuit. See [ExpressRoute documentation](../../expressroute/expressroute-faqs.md#supported-services) for a description of the different types of ExpressRoute traffic.-
-## Next steps
-* [See the different kinds of data that you can collect in Azure Monitor](../monitor-reference.md).
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/monitor-reference.md
- Title: What is monitored by Azure Monitor
-description: Reference of all services and other resources monitored by Azure Monitor.
---- Previously updated : 09/08/2022---
-# What is monitored by Azure Monitor?
-
-This article is a reference of the different applications and services that are monitored by Azure Monitor.
-
-Azure Monitor data is collected and stored based on resource provider namespaces. Each resource in Azure has a unique ID. The resource provider namespace is part of all unique IDs. For example, a key vault resource ID would be similar to `/subscriptions/d03b04c7-d1d4-eeee-aaaa-87b6fcb38b38/resourceGroups/KeyVaults/providers/Microsoft.KeyVault/vaults/mysafekeys ` . *Microsoft.KeyVault* is the resource provider namespace. *Microsoft.KeyVault/vaults/* is the resource provider.
-
-For a list of Azure resource provider namespaces, see [Resource providers for Azure services](../azure-resource-manager/management/azure-services-resource-providers.md).
-
-For a list of resource providers that support Azure Monitor
--- **Metrics** - See [Supported metrics in Azure Monitor](essentials/metrics-supported.md).-- **Metric alerts** - See [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md).-- **Prometheus metrics** - See [Prometheus metrics overview](essentials/prometheus-metrics-overview.md#enable).-- **Resource logs** - See [Supported categories for Azure Monitor resource logs](essentials/resource-logs-categories.md).-- **Activity log** - All entries in the activity log are available for query, alerting and routing to Azure Monitor Logs store regardless of resource provider.-
-## Services that require agents
-
-Azure Monitor can't see inside a service running its own application, operating system, or container. That type of service requires one or more agents to be installed. The agent runs alongside the service to collect metrics, logs, traces, and changes and forwards them to Azure Monitor. The following services require agents for this reason.
--- [Azure Cloud Services](../cloud-services-extended-support/index.yml)-- [Azure Virtual Machines](../virtual-machines/index.yml)-- [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/index.yml) -- [Azure Service Fabric](../service-fabric/index.yml) -
-In addition, applications also require either the Application Insights SDK or auto-instrumentation (via an agent) to collect information and write it to the Azure Monitor data platform.
-
-## Services with Insights
-
-Some services have curated monitoring experiences called "insights". Insights are meant to be a starting point for monitoring a service or set of services. Some insights may also automatically pull additional data that's not captured or stored in Azure Monitor. For more information on monitoring insights, see [Insights Overview](insights/insights-overview.md).
-
-## Product integrations
-
-The services and [older monitoring solutions](/previous-versions/azure/azure-monitor/insights/solutions) in the following table store their data in Azure Monitor Logs so that it can be analyzed with other log data collected by Azure Monitor.
-
-| Product/Service | Description |
-|:|:|
-| [Azure Automation](../automation/index.yml) | Manage operating system updates and track changes on Windows and Linux computers. See [Change tracking](../automation/change-tracking/overview.md) and [Update management](../automation/update-management/overview.md). |
-| [Azure Information Protection](/azure/information-protection/) | Classify and optionally protect documents and emails. See [Central reporting for Azure Information Protection](/azure/information-protection/reports-aip#configure-a-log-analytics-workspace-for-the-reports). |
-| [Defender for the Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) | Collect and analyze security events and perform threat analysis. See [Data collection in Defender for the Cloud](../defender-for-cloud/monitoring-components.md). |
-| [Microsoft Sentinel](../sentinel/index.yml) | Connect to different sources including Office 365 and Amazon Web Services Cloud Trail. See [Connect data sources](../sentinel/connect-data-sources.md). |
-| [Microsoft Intune](/intune/) | Create a diagnostic setting to send logs to Azure Monitor. See [Send log data to storage, Event Hubs, or log analytics in Intune (preview)](/intune/fundamentals/review-logs-using-azure-monitor). |
-| Network [Traffic Analytics](../network-watcher/traffic-analytics.md) | Analyze Network Watcher network security group flow logs to provide insights into traffic flow in your Azure cloud. |
-| [System Center Operations Manager](/system-center/scom) | Collect data from Operations Manager agents by connecting their management group to Azure Monitor. See [Connect Operations Manager to Azure Monitor](agents/om-agents.md).<br> Assess the risk and health of your System Center Operations Manager management group with the [Operations Manager Assessment](insights/scom-assessment.md) solution. |
-| [Microsoft Teams Rooms](/microsoftteams/room-systems/azure-monitor-deploy) | Integrated, end-to-end management of Microsoft Teams Rooms devices. |
-| [Visual Studio App Center](/appcenter/) | Build, test, and distribute applications and then monitor their status and usage. See [Start analyzing your mobile app with App Center and Application Insights](https://github.com/Microsoft/appcenter). |
-| Windows | [Windows Update Compliance](/windows/deployment/update/update-compliance-get-started) - Assess your Windows desktop upgrades.<br>[Desktop Analytics](/configmgr/desktop-analytics/overview) - Integrates with Configuration Manager to provide insight and intelligence to make more informed decisions about the update readiness of your Windows clients. |
-| **The following solutions also integrate with parts of Azure Monitor. Note that solutions, which are based on Azure Monitor Logs and Log Analytics, are no longer under active development. Use [Insights](insights/insights-overview.md) instead.** | |
-| Network - [Network Performance Monitor solution](/previous-versions/azure/azure-monitor/insights/network-performance-monitor) |
-| Network - [Azure Application Gateway solution](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics#application-gateway-analytics) |
-| [Office 365 solution](/previous-versions/azure/azure-monitor/insights/solution-office-365) | Monitor your Office 365 environment. Updated version with improved onboarding available through Microsoft Sentinel. |
-| [SQL Analytics solution](/previous-versions/azure/azure-monitor/insights/azure-sql) | Use SQL Insights instead. |
-| [Surface Hub solution](/previous-versions/azure/azure-monitor/insights/surface-hubs) | |
-
-## Third-party integration
-
-| Integration | Description |
-|:|:|
-| [ITSM](alerts/itsmc-overview.md) | The IT Service Management (ITSM) Connector allows you to connect Azure and a supported ITSM product/service. |
-| [Azure Monitor Partners](./partners.md) | A list of partners that integrate with Azure Monitor in some form. |
-| [Azure Monitor Partner integrations](../partner-solutions/overview.md)| Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms if you've already built on them. Examples include Datadog and Elastic.|
-
-## Resources outside of Azure
-
-Azure Monitor can collect data from resources outside of Azure by using the methods listed in the following table.
-
-| Resource | Method |
-|:|:|
-| Applications | Monitor web applications outside of Azure by using Application Insights. See [What is Application Insights?](./app/app-insights-overview.md). |
-| Virtual machines | Use agents to collect data from the guest operating system of virtual machines in other cloud environments or on-premises. See [Overview of Azure Monitor agents](agents/agents-overview.md). |
-| REST API Client | Separate APIs are available to write data to Azure Monitor Logs and Metrics from any REST API client. See [Send log data to Azure Monitor with the HTTP Data Collector API](logs/data-collector-api.md) for Logs. See [Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API](essentials/metrics-store-custom-rest-api.md) for Metrics. |
-
-## Next steps
--- Read more about the [Azure Monitor data platform that stores the logs and metrics collected by insights and solutions](data-platform.md).-- Complete a [tutorial on monitoring an Azure resource](essentials/tutorial-resource-logs.md).-- Complete a [tutorial on writing a log query to analyze data in Azure Monitor Logs](essentials/tutorial-resource-logs.md).-- Complete a [tutorial on creating a metrics chart to analyze data in Azure Monitor Metrics](essentials/tutorial-metrics.md).
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Previously updated : 10/20/2023 Last updated : 01/16/2024
The standard storage with cool access feature provides options for the “coolne
* This feature is available only at the **Standard** service level. It's not supported for the Ultra or Premium service level. * Although cool access is available for the Standard service level, how you're billed for using the feature differs from the Standard service level charges. See the [Billing section](cool-access-introduction.md#billing) for details and examples. * You can convert an existing Standard service-level capacity pool into a cool-access capacity pool to create cool access volumes. However, once the capacity pool is enabled for cool access, you can't convert it back to a non-cool-access capacity pool.
-* A cool-access capacity pool can contain both volumes with cool access enabled and volumes with cool access disabled.
+* A cool-access capacity pool can contain both volumes with cool access enabled and volumes with cool access disabled.
+* Antivirus and file scanning are not recommended for files that are archived to cool access.
* After the capacity pool is configured with the option to support cool access volumes, the setting can't be disabled at the _capacity pool_ level. However, you can turn on or turn off the cool access setting at the volume level anytime. Turning off the cool access setting at the _volume_ level stops further tiering of data. * Standard storage with cool access is supported only on capacity pools of the **auto** QoS type. * An auto QoS capacity pool enabled for standard storage with cool access cannot be converted to a capacity pool using manual QoS.
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file
description: Describes the configuration file for your Bicep deployments Previously updated : 09/27/2023 Last updated : 02/02/2024 # Configure your Bicep environment
-Bicep supports an optional configuration file named `bicepconfig.json`. Within this file, you can add values that customize your Bicep development experience.
-
-To customize configuration, create this file in the same directory, or a parent directory of your Bicep files. If multiple parent directories contain `bicepconfig.json` files, Bicep uses configuration from the nearest one. If a configuration file is not found, Bicep uses default values.
+Bicep supports an optional configuration file named `bicepconfig.json`. Within this file, you can add values that customize your Bicep development experience. This file is merged with the [default configuration file](https://github.com/Azure/bicep/blob/main/src/Bicep.Core/Configuration/bicepconfig.json). For more information, see [Understand the merge process](#understand-the-merge-process). To customize configuration, create a configuration file in the same directory or a parent directory of your Bicep files. If there are multiple parent directories containing `bicepconfig.json` files, Bicep uses the configuration from the nearest one. For more information, see [Understand the file resolution process](#understand-the-file-resolution-process).
To configure Bicep extension settings, see [VS Code and Bicep extension](./install.md#visual-studio-code-and-bicep-extension).
The Bicep extension for Visual Studio Code supports intellisense for your `bicep
:::image type="content" source="./media/bicep-config/bicep-linter-configure-intellisense.png" alt-text="Screenshot of the intellisense support in configuring bicepconfig.json.":::
+## Understand the merge process
+
+The `bicepconfig.json` file undergoes a recursive bottom-up merging process with the default configuration file. During the merging process, Bicep examines each path in both configurations. If a path isn't present in the default configuration, the path and its associated value are added in the final result. Conversely, if a path exists in the default configuration with a different value, the value from `bicepconfig.json` takes precedence in the merged result.
+
+Consider a scenario where the default configuration is defined as follows:
+
+```json
+{
+ "cloud": {
+ ...
+ "credentialPrecedence": [
+ "AzureCLI",
+ "AzurePowerShell"
+ ]
+ },
+ "moduleAliases": {
+ "ts": {},
+ "br": {
+ "public": {
+ "registry": "mcr.microsoft.com",
+ "modulePath": "bicep"
+ }
+ }
+ },
+ ...
+}
+```
+
+And the `bicepconfig.json` is defined as follows:
+
+```json
+{
+ "cloud": {
+ "credentialPrecedence": [
+ "AzurePowerShell",
+ "AzureCLI"
+ ]
+ },
+ "moduleAliases": {
+ "br": {
+ "ContosoRegistry": {
+ "registry": "contosoregistry.azurecr.io"
+ },
+ "CoreModules": {
+ "registry": "contosoregistry.azurecr.io",
+ "modulePath": "bicep/modules/core"
+ }
+ }
+ }
+}
+```
+
+The resulting merged configuration would be:
+
+```json
+{
+ "cloud": {
+ ...
+ "credentialPrecedence": [
+ "AzurePowerShell",
+ "AzureCLI"
+ ]
+ },
+ "moduleAliases": {
+ "ts": {},
+ "br": {
+ "public": {
+ "registry": "mcr.microsoft.com",
+ "modulePath": "bicep"
+ },
+ "ContosoRegistry": {
+ "registry": "contosoregistry.azurecr.io"
+ },
+ "CoreModules": {
+ "registry": "contosoregistry.azurecr.io",
+ "modulePath": "bicep/modules/core"
+ }
+ }
+ },
+ ...
+}
+```
+
+In the preceding example, the value of `cloud.credentialPrecedence` is replaced, while the values of `moduleAliases.br.ContosoRegistry` and `moduleAliases.br.CoreModules` are appended in the merged configuration.
+
+## Understand the file resolution process
+
+The `bicepconfig.json` file can be placed in the same directory or a parent directory of your Bicep files. If there are multiple parent directories containing `bicepconfig.json` files, Bicep uses the configuration file from the nearest one. For instance, in the given folder structure where each folder has a `bicepconfig.json` file:
+<!-- Image: folder tree with `root`, `parent`, and `child` folders, each containing a `bicepconfig.json` file; `main.bicep` sits in the `child` folder. -->
+If you compile `main.bicep` in the `child` folder, the `bicepconfig.json` file in the `child` folder is used. The configuration files in the `parent` folder and the `root` folder are ignored. If the `child` folder doesn't contain a configuration file, Bicep searches for a configuration in the `parent` folder and then the `root` folder. If no configuration file is found in any of the folders, Bicep defaults to using the [default values](https://github.com/Azure/bicep/blob/main/src/Bicep.Core/Configuration/bicepconfig.json).
+
+In the context of a Bicep file invoking multiple modules, each module undergoes compilation using the nearest `bicepconfig.json`. Then, the main Bicep file is compiled with its corresponding `bicepconfig.json`. In the following scenario, `modA.bicep` is compiled using the `bicepconfig.json` located in the `A` folder, `modB.bicep` is compiled with the `bicepconfig.json` in the `B` folder, and finally, `main.bicep` is compiled using the `bicepconfig.json` in the `root` folder.
+<!-- Image: folder tree with a `root` folder containing `main.bicep`, plus folders `A` (with `modA.bicep`) and `B` (with `modB.bicep`); each folder has its own `bicepconfig.json`. -->
+In the absence of a `bicepconfig.json` file in the `A` and `B` folders, all three Bicep files are compiled using the `bicepconfig.json` found in the `root` folder. If `bicepconfig.json` isn't present in any of the folders, the compilation process defaults to using the [default values](https://github.com/Azure/bicep/blob/main/src/Bicep.Core/Configuration/bicepconfig.json).
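A quick way to see which configuration wins is to compile from the command line; a sketch, assuming the `root`/`parent`/`child` layout described above:

```azurecli
# Sketch: Bicep picks up the nearest bicepconfig.json relative to the
# file being compiled, so this build uses child/bicepconfig.json and
# ignores the configuration files in the parent and root folders.
az bicep build --file root/parent/child/main.bicep
```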
+ ## Configure Bicep modules When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. You can also configure cloud profile and credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function. For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best prac
You can enable experimental features by adding the following section to your `bicepconfig.json` file.
-Here is an example of enabling features 'compileTimeImports' and 'userDefinedFunctions`.
+Here's an example of enabling the features `compileTimeImports` and `userDefinedFunctions`.
```json {
azure-resource-manager Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/modules.md
Title: Bicep modules
description: Describes how to define a module in a Bicep file, and how to use module scopes. Previously updated : 10/13/2023 Last updated : 02/02/2024 # Bicep modules
To share modules with other people in your organization, create a [template spec
> - Content in the Bicep module registry can only be deployed from another Bicep file. Template specs can be deployed directly from the API, Azure PowerShell, Azure CLI, and the Azure portal. You can even use [`UiFormDefinition`](../templates/template-specs-create-portal-forms.md) to customize the portal deployment experience. > - Bicep has some limited capabilities for embedding other project artifacts (including non-Bicep and non-ARM-template files. For example, PowerShell scripts, CLI scripts and other binaries) by using the [`loadTextContent`](./bicep-functions-files.md#loadtextcontent) and [`loadFileAsBase64`](./bicep-functions-files.md#loadfileasbase64) functions. Template specs can't package these artifacts.
-Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template).
-Bicep modules are converted into a single Azure Resource Manager template with [nested templates](../templates/linked-templates.md#nested-template). For more information about how Bicep resolves configuration files and how Bicep merges the user-defined configuration file with the default configuration file, see [Configuration file resolution process](./bicep-config.md#understand-the-file-resolution-process) and [Configuration file merge process](./bicep-config.md#understand-the-merge-process).
### Training resources
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
When Arc appliance is successfully deployed on your private cloud, you can do th
- Discover your VMware vSphere infrastructure resources and project them to Azure by navigating, **Private cloud > Arc vCenter resources > Virtual Machines**. - Similar to VMs, customers can enable networks, templates, resource pools, and data-stores in Azure.
-## Enable resource pools, clusters, hosts, datastores, networks, and VM templates in Azure
+## Enable virtual machines, resource pools, clusters, hosts, datastores, networks, and VM templates in Azure
-Once you connected your Azure VMware Solution private cloud to Azure, you can browse your vCenter inventory from the Azure portal. This section shows you how to enable resource pools, networks, and other non-VM resources in Azure.
+Once you've connected your Azure VMware Solution private cloud to Azure, you can browse your vCenter inventory from the Azure portal. This section shows you how to make these resources Azure-enabled.
> [!NOTE] > Enabling Azure Arc on a VMware vSphere resource is a read-only operation on vCenter. It doesn't make changes to your resource in vCenter.
Once you connected your Azure VMware Solution private cloud to Azure, you can br
2. Select the resource(s) you want to enable, then select **Enable in Azure**. 3. Select your Azure **Subscription** and **Resource Group**, then select **Enable**.
- The enable action starts a deployment and creates a resource in Azure, creating representations for your VMware vSphere resources. It allows you to manage who can access those resources through Role-based access control granularly.
+ The enable action starts a deployment and creates a resource in Azure, creating representative objects in Azure for your VMware vSphere resources. It allows you to manage who can access those resources through Role-based access control granularly.
-4. Repeat the previous steps for one or more network, resource pool, and VM template resources.
+1. Repeat the previous steps for one or more virtual machine, network, resource pool, and VM template resources.
+
+Additionally, for virtual machines there's a **VM extensions** section to configure. Enabling guest management here allows additional Azure extensions to be installed on the VM. The steps to enable it are:
+
+1. Select **Enable guest management**.
+
+1. Choose a __Connectivity Method__ for the Arc agent.
+
+1. Provide an Administrator/Root access username and password for the VM.
+
If you choose to enable guest management as a separate step, or if you have issues with the VM extension install steps, review the prerequisites and steps discussed in the following section.
## Enable guest management and extension installation
You need to enable guest management on the VMware VM before you can install an e
1. Select **Configuration** from the left navigation for a VMware VM. 1. Verify **Enable guest management** is now checked.
-### Install the LogAnalytics extension
-
-1. Go to Azure portal.
-1. Find the Arc-enabled Azure VMware Solution VM that you want to install an extension on and select the VM name.
-1. Locate **Extensions** from the left navigation and select **Add**.
-1. Select the extension you want to install.
- 1. Based on the extension, you need to provide details. For example, `workspace Id` and `key` for LogAnalytics extension.
-1. When you're done, select **Review + create**.
-
-When the extension installation steps are completed, they trigger deployment and install the selected extension on the VM.
+From here, additional extensions can be installed. See the [VM extensions overview](/azure/azure-arc/servers/manage-vm-extensions) for a list of current extensions.
-## Supported extensions and management services
+### Next steps
-Perform VM operations on VMware VMs through Azure using [supported extensions and management services](/azure/azure-arc/vmware-vsphere/perform-vm-ops-through-azure#supported-extensions-and-management-services)
+To manage Arc-enabled Azure VMware Solution, go to: [Manage Arc-enabled Azure VMware private cloud - Azure VMware Solution](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution)
To remove Arc-enabled Azure VMware Solution resources from Azure, go to: [Remove Arc-enabled Azure VMware Solution vSphere resources from Azure - Azure VMware Solution](/azure/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure)
azure-vmware Deploy Disaster Recovery Using Jetstream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-jetstream.md
For full details, refer to the article: [Disaster Recovery with Azure NetApp Fil
- [Attach Azure NetApp Files datastores to Azure VMware Solution hosts](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md) - [Disaster Recovery with Azure NetApp Files, JetStream DR, and Azure VMware Solution](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/disaster-recovery-with-azure-netapp-files-jetstream-dr-and-avs-azure-vmware-solution/)
-For more on-premises JetStream DR prerequisites, see the [JetStream Pre-Installation Guide](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/pre-installation-guidelines/).
+For more on-premises JetStream DR prerequisites, see the [JetStream Pre-Installation Guide](https://jetstreamsoft.com/portal/online-docs/jsdr-admin_4.2/Pre-Installation.html).
## Install JetStream DR on Azure VMware Solution You can follow these steps for both supported scenarios.
-1. In your on-premises data center, install JetStream DR following the [JetStream documentation](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/installing-jetstream-dr-software/).
+1. In your on-premises data center, install JetStream DR following the [JetStream documentation](https://jetstreamsoft.com/portal/online-docs/jsdr-admin_4.2/Installation.html).
1. In your Azure VMware Solution private cloud, install JetStream DR using a Run command. From the [Azure portal](https://portal.azure.com), select **Run command** > **Packages** > **JSDR.Configuration**.
Once JetStream DR MSA and JetStream VIB are installed on the Azure VMware Soluti
1. [Add an external storage site](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/add-a-storage-site/).
-1. [Deploy a JetStream DRVA appliance](https://www.jetstreamsoft.com/portal/jetstream-knowledge-base/deploy-a-dr-virtual-appliance/).
+1. [Deploy a JetStream DRVA appliance](https://jetstreamsoft.com/portal/online-docs/jsdr-admin_4.2/DeployaDRVA.html).
1. Create a JetStream replication log store volume using one of the datastores available to the Azure VMware Solution cluster.
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
VMware and Microsoft support teams engage each other as needed to troubleshoot V
- [Operational Limits of vSphere Replication 8.3](https://docs.vmware.com/en/vSphere-Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-E114BAB8-F423-45D4-B029-91A5D551AC47.html) - [Calculate bandwidth for vSphere Replication](https://docs.vmware.com/en/vSphere-Replication/8.3/com.vmware.vsphere.replication-admin.doc/GUID-4A34D0C9-8CC1-46C4-96FF-3BF7583D3C4F.html) - [SRM installation and configuration](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-B3A49FFF-E3B9-45E3-AD35-093D896596A0.html)-- [vSphere Replication administration](https://docs.vmware.com/en/vSphere-Replication/8.2/com.vmware.vsphere.replication-admin.doc/GUID-35C0A355-C57B-430B-876E-9D2E6BE4DDBA.html)
+- [vSphere Replication administration](https://docs.vmware.com/en/vSphere-Replication/index.html)
- [Prerequisites and Best Practices for SRM installation](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html) - [Network ports for SRM](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-499D3C83-B8FD-4D4C-AE3D-19F518A13C98.html) - [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)
azure-vmware Ecosystem Back Up Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-back-up-vms.md
Back up network traffic between Azure VMware Solution VMs and the backup reposit
You can find more information on these backup solutions here: - [Cohesity](https://www.cohesity.com/blogs/expanding-cohesitys-support-for-microsofts-ecosystem-azure-stack-and-azure-vmware-solution/) - [Commvault](https://documentation.commvault.com/v11/essential/azure_vmware_solution.html)-- [Dell Technologies](https://www.delltechnologies.com/resources/en-us/asset/briefs-handouts/solutions/dell-emc-data-protection-for-avs.pdf)
+- [Dell Technologies](https://www.delltechnologies.com/asset/en-us/products/data-protection/briefs-summaries/cyber-recovery-with-powerprotect-for-multi-cloud-solution-brief.pdf)
- [Rubrik](https://www.rubrik.com/en/products/cloud-data-management) - [Veeam](https://www.veeam.com/kb4012) - [Veritas](https://vrt.as/nb4avs)
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
The following table compares the features available with each product.
| [Token authentication](cdn-token-auth.md) | | |**&#x2713;**| | [DDOS protection](https://www.cisa.gov/news-events/news/understanding-denial-service-attacks) | **&#x2713;** |**&#x2713;** |**&#x2713;** | | [Bring your own certificate](cdn-custom-ssl.md?tabs=option-2-enable-https-with-your-own-certificate#tlsssl-certificates) |**&#x2713;** | **&#x2713;** | **&#x2713;** |
-| Supported TLS Versions | TLS 1.2, TLS 1.0/1.1 - [Configurable](/rest/api/cdn/custom-domains/enable-custom-https#usermanagedhttpsparameters) | TLS 1.2 | TLS 1.2 |
+| Supported TLS Versions | TLS 1.2, TLS 1.0/1.1 - [Configurable](/rest/api/cdn/custom-domains/enable-custom-https#usermanagedhttpsparameters) | TLS 1.2, TLS 1.3 | TLS 1.2, TLS 1.3 |
|||| | **Analytics and reporting** | **Standard Microsoft** | **Standard Edgio** | **Premium Edgio** | | [Azure diagnostic logs](cdn-azure-diagnostic-logs.md) | **&#x2713;** |**&#x2713;** |**&#x2713;** |
chaos-studio Chaos Studio Private Link Agent Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-link-agent-service.md
This guide explains the steps needed to configure Private Link for a Chaos Studi
<br/>
-3. Ensure that the `Microsoft.Resources/EUAPParticipation` feature flag is enabled for your subscription. Previous Chaos Studio users may already have the feature flag enabled if you ran your first experiment via the Azure portal in the past.
-
-<br/>
-
-The feature flag can be enabled using Azure CLI. Here's an example:
-
-```AzCLI
-az feature register --namespace Microsoft.Resources --name "EUAPParticipation" --subscription <subscription id>
-```
- ## Limitations - You'll need to use our **2023-10-27-preview REST API** to create and use private link for agent-based experiments ONLY. There's **no** support for private link for agent-based experiments in our GA-stable REST API until H1 2024.
az feature register --namespace Microsoft.Resources --name "EUAPParticipation" -
- The entire end-to-end for this flow requires some use of the CLI. The end-to-end experience can't currently be done from the Azure portal. - The **Chaos Studio Private Accesses (CSPA)** resource type has a **strict 1:1 mapping of Chaos Target:CSPA resource (abstraction for private endpoint)**. We only allow **5 CSPA resources to be created per subscription** to maintain the expected experience for all of our customers. -
-## Step 1: Make sure you allowlist Microsoft.Network/AllowPrivateEndpoints in your subscription
-
-The first step is to ensure that your desired subscription allows the Networking Resource Provider to operate.
-
-Ensure that the `Microsoft.Network/AllowPrivateEndpoints` feature flag is enabled for your subscription.
-
-<br/>
-
-The feature flag can be enabled using Azure CLI. Here's an example:
-
-```AzCLI
-az feature register --namespace Microsoft.Network --name "AllowPrivateEndpoints" --subscription <subscription id>
-```
-
-> [!NOTE]
-> If you are going to be using private endpoints using manual requests across multiple subscriptions, you'll need to ensure you register the Microsoft.Network Resource Provider (RP) in your respective tenants/subscriptions. See [Register RP](../azure-resource-manager/management/resource-providers-and-types.md) for more info about this.
->
->
-> This step is not needed if you are using the same subscription across both the Chaos and Networking Resource Providers.
-## Step 2: Create a Chaos Studio Private Access (CSPA) resource
+## Step 1: Create a Chaos Studio Private Access (CSPA) resource
To use Private endpoints for agent-based chaos experiments, you need to create a new resource type called **Chaos Studio Private Accesses**. CSPA is the resource against which the private endpoints are created.
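The article's full `az rest` command shows the exact call; as a rough sketch only, the PUT might look like the following. The `Microsoft.Chaos/privateAccesses` resource path, the API version, and the body shape are assumptions here, not a confirmed contract:

```azurecli
# Hypothetical sketch of the CSPA PUT call; the resource type path,
# API version, and body are assumptions based on the preview REST API.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Chaos/privateAccesses/<cspaName>?api-version=2023-10-27-preview" \
  --body '{"location": "<resourceLocation>"}'
```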
az rest --verbose --skip-authorization-header --header "Authorization=Bearer $ac
|resourceLocation|True|String|Location you want the resource to be hosted (must be a support region by Chaos Studio)|
-## Step 3: Create your Virtual Network, Subnet, and Private Endpoint
+## Step 2: Create your Virtual Network, Subnet, and Private Endpoint
[Set up your desired Virtual Network, Subnet, and Endpoint](../private-link/create-private-endpoint-portal.md) for the experiment if you haven't already.
Make sure you attach it to the same VM's VNET. Screenshots provide examples of c
[![Screenshot of VNET tab of private endpoint creation.](images/resource-vnet-cspa.png)](images/resource-vnet-cspa.png#lightbox)
-## Step 4: Map the agent host VM to the CSPA resource
+## Step 3: Map the agent host VM to the CSPA resource
Find the Target "Resource ID" by making a GetTarget call:
az rest --verbose --skip-authorization-header --header "Authorization=Bearer $ac
``` > [!NOTE]
-> The PrivateAccessID should exactly match the "resourceID" used to create the CSPA resource in Step 2.
+> The PrivateAccessID should exactly match the "resourceID" used to create the CSPA resource in Step 1.
-## Step 5: Update host VM to map the communications endpoint to the private endpoint
+## Step 4: Update host VM to map the communications endpoint to the private endpoint
During the Preview of this feature, customers need to update the Agent VM extensions settings to point to the communication endpoint that supports traffic over a private network. Customers need to update the host entry on the actual VM to map the communication endpoint to the private IP generated during the private endpoint creation. You can get the IP address from the "DNS Configuration" tab in the Private Endpoint resource seen in the following screenshot:
Example of what the "hosts" file should look like. The IP address and Azure regi
Save and close the file.
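On a Linux host, for example, the mapping can be appended in one command. A sketch; both values are placeholders you replace with your private endpoint IP (from the Private Endpoint's DNS Configuration tab) and the regional communication endpoint FQDN:

```azurecli
# Sketch: map the communication endpoint FQDN to the private endpoint IP.
# Both values below are placeholders, not real endpoint names.
echo "<private-endpoint-ip>  <communication-endpoint-fqdn>" | sudo tee -a /etc/hosts
```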
-## Step 6: Update the communication endpoint in agentSettings and agentInstanceConfig JSON files
+## Step 5: Update the communication endpoint in agentSettings and agentInstanceConfig JSON files
In this step, you need to continue to edit files on the host VM machine. You need to update the "agentSettings.json" and "agentInstanceConfig.json" files to include the communication endpoint based on the region in which the VM targets were created in the previous steps.
Example of updated agentInstanceConfig.json:
[![Screenshot of agentInstanceConfig JSON.](images/agent-instance-config-json.png)](images/agent-instance-config-json.png#lightbox)
-## Step 6.5: Disable CRL verification in agentSettings.JSON
+## Step 5.5: Disable CRL verification in agentSettings.JSON
**IF** you blocked outbound access to Microsoft Certificate Revocation List (CRL) verification endpoints, then you need to update agentSettings.JSON to disable CRL verification check in the agent.
The final agentSettings.JSON should appear as shown:
If outbound access to Microsoft CRL verification endpoints is not blocked, then you can ignore this step.
-## Step 7: Restart the Azure Chaos Agent service in the VM
+## Step 6: Restart the Azure Chaos Agent service in the VM
After making all the required changes to the host, restart the Azure Chaos Agent Service in the VM
systemctl restart azure-chaos-agent
[![Screenshot of restarting Linux VM.](images/restart-linux-vm.png)](images/restart-linux-vm.png#lightbox)
-## Step 8: Run your Agent-based experiment using private endpoints
+## Step 7: Run your Agent-based experiment using private endpoints
After the restart, the Chaos agent should be able to communicate with the Agent Communication data plane service and the agent registration to the data plane should be successful. After successful registration, the agent will be able to heartbeat its status and you can go ahead and run the chaos agent-based experiments using private endpoints!
cloud-services Automation Manage Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/automation-manage-cloud-services.md
[!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)]
-This guide will introduce you to the Azure Automation service, and how it can be used to simplify management of your Azure cloud services.
+This guide introduces you to the Azure Automation service, and how it can be used to simplify management of your Azure cloud services.
## What is Azure Automation? [Azure Automation](https://azure.microsoft.com/services/automation/) is an Azure service for simplifying cloud management through process automation. Using Azure Automation, long-running, manual, error-prone, and frequently repeated tasks can be automated to increase reliability, efficiency, and time to value for your organization.
-Azure Automation provides a highly-reliable and highly-available workflow execution engine that scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off manually, by 3rd-party systems, or at scheduled intervals so that tasks happen exactly when needed.
+Azure Automation provides a highly reliable and highly available workflow execution engine that scales to meet your needs as your organization grows. In Azure Automation, processes can be kicked off manually, by third-party systems, or at scheduled intervals so that tasks happen exactly when needed.
Lower operational overhead and free up IT / DevOps staff to focus on work that adds business value by moving your cloud management tasks to be run automatically by Azure Automation. ## How can Azure Automation help manage Azure cloud services?
-Azure cloud services can be managed in Azure Automation by using the PowerShell cmdlets that are available in the [Azure PowerShell tools](/powershell/). Azure Automation has these cloud service PowerShell cmdlets available out of the box, so that you can perform all of your cloud service management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services, to automate complex tasks across Azure services and 3rd party systems.
-
-Some example uses of Azure Automation to manage Azure Cloud Services include:
-
-* [Continuous deployment of a Cloud Service whenever cscfg or cspkg is updated in Azure Blob storage](https://gallery.technet.microsoft.com/scriptcenter/Continuous-Deployment-of-A-eeebf3a6)
-* [Rebooting Cloud Service instances in parallel, one upgrade domain at a time](https://gallery.technet.microsoft.com/scriptcenter/Reboot-Cloud-Service-PaaS-b337a06d)
+Azure cloud services can be managed in Azure Automation by using the PowerShell cmdlets that are available in the [Azure PowerShell tools](/powershell/). Azure Automation has these cloud service PowerShell cmdlets available out of the box, so that you can perform all of your cloud service management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets for other Azure services to automate complex tasks across Azure services and third-party systems.
## Next Steps Now that you've learned the basics of Azure Automation and how it can be used to manage Azure cloud services, follow these links to learn more about Azure Automation. * [Azure Automation Overview](../automation/automation-intro.md)
-* [My first runbook](../automation/learn/powershell-runbook-managed-identity.md)
+* [My first runbook](../automation/learn/powershell-runbook-managed-identity.md)
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
To create the `.cer` certificate, execute:
openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer ```
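To sanity-check the converted certificate, you can read it back in DER form; for example:

```
openssl x509 -inform der -in mycert.cer -noout -subject -dates
```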
-For more information about Azure certificates, see [Certificates overview for Azure Cloud Services](cloud-services-certs-create.md). For a complete description of OpenSSL parameters, see the documentation at [https://www.openssl.org/docs/apps/openssl.html](https://www.openssl.org/docs/apps/openssl.html).
+For more information about Azure certificates, see [Certificates overview for Azure Cloud Services](cloud-services-certs-create.md). For a complete description of OpenSSL parameters, see the documentation at [https://www.openssl.org/docs/manmaster/man1/req.html](https://www.openssl.org/docs/manmaster/man1/req.html).
After you create these files, upload the `.cer` file to Azure. In the [Azure portal], on the **Settings** tab, select **Upload**. Note where you saved the `.pem` file.
communication-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/notifications.md
Learn more about [event handling in Azure Communication Services](../../event-gr
## Deliver push notifications via Azure Notification Hubs > [!IMPORTANT]
-> Azure Notification Hubs has announced that Firebase Cloud Messaging (FCM) legacy API will be deprecated in July 2024. This will affect all Azure Communiation Services Calling applications who send Android push notifications. Customers impacted will need to migrate their registrations from FCM legacy to FCM v1, and can start doing so in March 2024. For more information including migration steps, please see [Notifcation Hub documentation](../../notification-hubs/notification-hubs-gcm-to-fcm.md).
+> Azure Notification Hubs has announced that Firebase Cloud Messaging (FCM) legacy API will be deprecated in July 2024. This will affect all Azure Communication Services Calling and Chat applications that send Android push notifications. Impacted customers will need to migrate their registrations from FCM legacy to FCM v1, and can start doing so in March 2024. For more information, including migration steps, see the [Notification Hubs documentation](../../notification-hubs/notification-hubs-gcm-to-fcm.md).
You can connect an Azure Notification Hub to your Communication Services resource in order to automatically send push notifications to a user's mobile device when they receive an incoming call or to notify them about missed chat activity. You should use these push notifications to wake up your application from the background and display UI that lets the user accept or decline the call or read the newly received chat message.
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
The health of an SBC connection now exposed in Azure portal. It takes in account
### Possible values of each health indicator TLS Status - Status of the TLS connections of a Trunk: -- Unknown - Indicates an unknown status.
+- Unknown - Indicates that the SBC hasn't attempted a TLS handshake in the last 15 minutes.
- Active - Indicates that TLS connection is established. - CertExpiring - Indicates that SBC certificate is expiring. - CertExpired - Indicates that SBC certificate is expired. SIP OPTIONS (Ping) - Status of SIP OPTIONS messages exchange: -- Unknown - Indicates an unknown status.
+- Unknown - Indicates that the SBC hasn't sent any SIP OPTIONS messages.
- Active - Indicates that OPTIONS messages are being sent and received.
-- Expired - Indicates that status is expired.
+- Expired - Indicates that the SBC was sending SIP OPTIONS, but we haven't received any OPTIONS messages in the last 15 minutes.
- Error - Indicates an error in the OPTIONS exchange.

Status - The overall health status of a Trunk:
- Online - Indicates that the SBC connection is healthy.
- Warning - Indicates that TLS or Ping is expired.
+> [!NOTE]
+> If you've just configured a new trunk, it can take up to 15 minutes for the status to update.
+
> [!IMPORTANT]
> Before placing or receiving calls, make sure that the SBC status is *Online*.
communication-services Direct Routing Sip Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-sip-specification.md
Use of an IP address isn't supported in either Record-Route or Contact. The only
- If the FQDN doesn't match the value of the Common or Subject Alternative Name in the presented certificate, the call fails.
+## Call context headers
+
+Call context headers are currently available only in the Call Automation SDK, which supports the User-To-User header and up to five custom SIP headers. These headers are supported in the INVITE and REFER methods.
+
+### User-To-User header
+
+The SIP User-To-User (UUI) header is an industry standard for passing contextual information during call setup. The maximum length of the UUI header key is 64 characters, and the maximum length of the UUI header value is 256 characters.
+
+### Custom header
+
+Azure Communication Services also supports up to five custom SIP headers. A custom SIP header key must start with the mandatory `X-MS-Custom-` prefix. The maximum length of a SIP header key is 64 characters, including the `X-MS-Custom-` prefix, and the maximum length of a SIP header value is 256 characters.
+
+For implementation details, see [How to pass contextual data between calls](../../how-tos/call-automation/custom-context.md).
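As a rough illustration (the header values, the endpoint, and the hex encoding parameter below are hypothetical, not taken from the article), contextual headers on an outbound INVITE might look like this:

```
INVITE sip:+14255550123@sbc.contoso.com SIP/2.0
User-to-User: 56a390f3d2b7310023a2;encoding=hex
X-MS-Custom-orderId: 12345
X-MS-Custom-region: emea
```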
+
## Inbound call: SIP dialog description

Here are the details of how SIP Proxy processes inbound calls.
An Azure Communication Services identity might be used in multiple endpoints (ap
## Replaces option
-The SBC must support Invite with Replaces.
+The SBC must support INVITE with Replaces.
## Size of SDP considerations
communication-services Subscribe To Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/events/subscribe-to-events.md
+
+ Title: Quickstart - Subscribe to Azure Communication Services Events
+
+description: In this quickstart, you learn how to subscribe to events from Azure Communication Services.
+
+Last updated : 01/26/2024
+
+zone_pivot_groups: acs-plat-azp-azcli-net-ps
+
+ms.devlang: azurecli
+
+# Quickstart: Subscribe to Azure Communication Services events
+
+In this quickstart, you learn how to subscribe to events from Azure Communication Services through the Azure portal, the Azure CLI, PowerShell, and the .NET SDK.
+
+You can set up event subscriptions for Communication Services resources through the [Azure portal](https://portal.azure.com), the Azure CLI, PowerShell, or the Azure [Event Grid Management SDK](https://www.nuget.org/packages/Azure.ResourceManager.EventGrid/).
+
+For this quickstart, we walk through the process of setting up a webhook as a subscriber for SMS events from Azure Communication Services. For a full list of events, see this [page](/azure/event-grid/event-schema-communication-services).
++++
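As a hedged sketch of the Azure CLI path (the resource IDs, webhook endpoint, and event types shown are placeholders and assumptions, not values from the quickstart), an event subscription for SMS events might be created like this:

```azurecli
az eventgrid event-subscription create \
  --name sms-events-to-webhook \
  --source-resource-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Communication/communicationServices/<RESOURCE_NAME>" \
  --endpoint "https://<your-site>/api/events" \
  --included-event-types Microsoft.Communication.SMSReceived Microsoft.Communication.SMSDeliveryReportReceived
```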
communications-gateway Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connectivity.md
Last updated 11/20/2023
# Connectivity for Azure Communications Gateway
-Each Azure region in your deployment connects to your core network. You need to choose the type of connection (for example, MAPS Voice or Express Route) and route traffic between your network and Azure Communications Gateway with specific IP addresses and domain names.
+Each Azure region in your deployment connects to your core network. You must choose the type of connection (for example, Microsoft Azure Peering Service Voice). You must use specific IP addresses and domain names to route traffic between your network and Azure Communications Gateway.
This article describes:
Azure Communications Gateway supports multiple types of connection to your netwo
- We strongly recommend using Microsoft Azure Peering Service Voice (also called MAPS Voice or MAPSV).
- If you can't use MAPS Voice, we recommend ExpressRoute Microsoft Peering.
-The following table lists all the available connection types and whether they're supported for each communications service. The connection types are in the order that we recommend (with recommended services first).
+The following table lists all the available connection types and whether they're supported for each communications service. The connection types are in the order that we recommend (with recommended types first).
|Connection type | Operator Connect / Teams Phone Mobile | Microsoft Teams Direct Routing | Zoom Phone Cloud Peering | Notes |
|---|---|---|---|---|
-| MAPS Voice |✅ |✅|✅|- Best media quality because of prioritization with Microsoft network<br>- No extra costs<br>- See [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md)|
+| MAPS Voice |✅ |✅|✅|- Best media quality because of prioritization with Microsoft network<br>- No extra costs<br>- See [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md)|
|ExpressRoute Microsoft Peering |✅|✅|✅|- Easy to deploy<br>- Extra cost<br>- Consult with your onboarding team and ensure that it's available in your region<br>- See [Using ExpressRoute for Microsoft PSTN services](/azure/expressroute/using-expressroute-for-microsoft-pstn)|
|Public internet |❌|✅|✅|- No extra setup<br>- Not recommended for production|
Azure Communications Gateway provides multiple FQDNs:
You must decide whether you want these FQDNs to be `*.commsgw.azure.com` domain names or subdomains of a domain you already own, using [domain delegation with Azure DNS](../dns/dns-domain-delegation.md).
-Domain delegation provides topology hiding and might increase customer trust, but requires giving us full control over the subdomain that you delegate. For Microsoft Teams Direct Routing, choose domain delegation if you don't want customers to see an `*.commsgw.azure.com` in their Microsoft 365 admin centers.
+Domain delegation provides topology hiding and might increase customer trust, but requires giving us full control over the subdomain that you delegate. For Microsoft Teams Direct Routing, choose domain delegation if you don't want customers to see a `*.commsgw.azure.com` address in their Microsoft 365 admin centers.
## Related content
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
When your resource has been provisioned, you can connect Azure Communications Ga
</ApplicationServer>
</InitialFilterCriteria>
```
-1. Configure your routers and peering connection to ensure all traffic to Azure Communications Gateway is through Azure Internet Peering for Communications Services (also known as Microsoft Azure Peering Service for Voice and MAPS Voice) or ExpressRoute Microsoft Peering.
+1. Configure your routers and peering connection to ensure all traffic to Azure Communications Gateway is through Microsoft Azure Peering Service Voice (also known as MAPS Voice) or ExpressRoute Microsoft Peering.
1. Enable Bidirectional Forwarding Detection (BFD) on your on-premises edge routers to speed up link failure detection.
    - The interval must be 150 ms (or 300 ms if you can't use 150 ms).
    - With MAPS Voice, BFD must bring up the BGP peer for each Private Network Interface (PNI).
communications-gateway Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md
This article summarizes the steps and documentation that you need.
> [!IMPORTANT]
> You must fully understand the onboarding process for your chosen communications service and any dependencies introduced by the onboarding process. For advice, ask your onboarding team.
>
-> Some steps in the deployment and integration process can require days or weeks to complete. For example, you might need to arrange Microsoft Azure Peering Service for Voice (MAPS Voice) connectivity before you can deploy, wait for onboarding, or wait for a specific date to launch your service. We recommend that you read through any documentation from your onboarding team and the procedures in [Deploy Azure Communications Gateway](#deploy-azure-communications-gateway) and [Integrate with your chosen communications services](#integrate-with-your-chosen-communications-services) before you start deploying.
+> Some steps in the deployment and integration process can require days or weeks to complete. For example, you might need to arrange Microsoft Azure Peering Service Voice (MAPS Voice) connectivity before you can deploy, wait for onboarding, or wait for a specific date to launch your service. We recommend that you read through any documentation from your onboarding team and the procedures in [Deploy Azure Communications Gateway](#deploy-azure-communications-gateway) and [Integrate with your chosen communications services](#integrate-with-your-chosen-communications-services) before you start deploying.
## Learn about and plan for Azure Communications Gateway
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
Wait for confirmation that Azure Communications Gateway is enabled before moving
Decide how Azure Communications Gateway should connect to your network. You must choose:

-- The type of connection you want to use (for example, MAPS Voice (recommended) or ExpressRoute).
+- The type of connection you want to use: for example, Microsoft Azure Peering Service Voice (recommended; sometimes called MAPS Voice).
- The form of domain names Azure Communications Gateway uses towards your network: an autogenerated `*.commsgw.azure.com` domain name or a subdomain of a domain you already own (using [domain delegation with Azure DNS](../dns/dns-domain-delegation.md)). For more information about your options, see [Connectivity for Azure Communications Gateway](connectivity.md).
If you plan to route emergency calls through Azure Communications Gateway, read
- [Operator Connect and Teams Phone Mobile](emergency-calls-operator-connect.md)
- [Zoom Phone Cloud Peering](emergency-calls-zoom.md)
-## Configure MAPS Voice or ExpressRoute
+## Configure Microsoft Azure Peering Service Voice or ExpressRoute
Connect your network to Azure:

-- To configure MAPS Voice, follow the instructions in [Azure Internet peering for Communications Services walkthrough](../internet-peering/walkthrough-communications-services-partner.md).
+- To configure Microsoft Azure Peering Service Voice (sometimes called MAPS Voice), follow the instructions in [Internet peering for Peering Service Voice walkthrough](../internet-peering/walkthrough-communications-services-partner.md).
- To configure ExpressRoute Microsoft Peering, follow the instructions in [Tutorial: Configure peering for ExpressRoute circuit](../../articles/expressroute/expressroute-howto-routing-portal-resource-manager.md).

## Next step
connectors Enable Stateful Affinity Built In Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md
In Standard logic app workflows, the following built-in, service provider-based
- Azure Service Bus
- SAP
+- IBM MQ
To run these connector operations in stateful mode, you must enable this capability. This how-to guide shows how to enable stateful mode for these connectors.
Resource scale-in events might cause the loss of context for built-in connectors
- [Connect to Azure Service Bus](connectors-create-api-servicebus.md)
- [Connect to SAP](../logic-apps/logic-apps-using-sap-connector.md)
+- [Connect to IBM MQ](connectors-create-api-mq.md)
container-apps Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md
Previously updated : 10/10/2023 Last updated : 02/01/2024
Key characteristics of revisions include:
- **Scoped changes**: While revisions remain static, [application-scope](#change-types) changes can affect all revisions, while [revision-scope](#change-types) changes create a new revision.
-- **Historical record**: Azure Container Apps allow you to retain up to 100 revisions. This history gives you a comprehensive historical record of your app's updates.
+- **Historical record**: By default, you have access to 100 inactive revisions, but you can [adjust this threshold manually](#change-inactive-revision-limit).
- **Multiple revisions**: You can run multiple revisions concurrently. This feature is especially beneficial when you need to manage different versions of your app simultaneously.
After a container app is successfully provisioned, a revision enters its operati
Revisions can also enter an inactive state. These revisions don't possess provisioning or running states. However, Azure Container Apps maintains a list of these revisions, accommodating up to 100 inactive entries. You can activate a revision at any time.
+### Change inactive revision limit
+
+You can use the `--max-inactive-revisions` parameter with the `containerapp create` or `containerapp update` commands to control the number of inactive revisions tracked by Container Apps.
+
+This example demonstrates how to create a new container app that tracks 50 inactive revisions:
+
+```azurecli
+az containerapp create \
+  --name <CONTAINER_APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --environment <ENVIRONMENT_NAME> \
+  --max-inactive-revisions 50
+```
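To change the threshold on an existing container app, the `update` command takes the same parameter (a sketch; the name and resource group placeholders are illustrative):

```azurecli
az containerapp update \
  --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --max-inactive-revisions 50
```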
+
## Revision modes

Azure Container Apps support two revision modes. Your choice of mode determines how many revisions of your app are simultaneously active.
cosmos-db Postgres Migrate Cosmos Db Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/postgres-migrate-cosmos-db-kafka.md
This article will demonstrate how to use a combination of Kafka connectors to se
Here is a high-level overview of the end-to-end flow presented in this article.
-Data in PostgreSQL table will be pushed to Apache Kafka using the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html), which is a Kafka Connect **source** connector. Inserts, updates, or deletion to records in the PostgreSQL table will be captured as `change data` events and sent to Kafka topic(s). The [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html) (Kafka Connect **sink** connector), forms the second part of the pipeline. It will synchronize the change data events from Kafka topic to Azure Cosmos DB for Apache Cassandra tables.
+Data in the PostgreSQL table will be pushed to Apache Kafka using the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/stable/connectors/postgresql.html), which is a Kafka Connect **source** connector. Inserts, updates, or deletions to records in the PostgreSQL table will be captured as `change data` events and sent to Kafka topic(s). The [DataStax Apache Kafka connector](https://docs.datastax.com/en/kafka/doc/kafka/kafkaIntro.html) (a Kafka Connect **sink** connector) forms the second part of the pipeline. It synchronizes the change data events from the Kafka topic to Azure Cosmos DB for Apache Cassandra tables.
> [!NOTE]
> Using specific features of the DataStax Apache Kafka connector allows us to push data to multiple tables. In this example, the connector will help us persist change data records to two Cassandra tables that can support different query requirements.
cp <path_to_debezium_connector>/*.jar <KAFKA_HOME>/libs
cp <path_to_cassandra_connector>/*.jar <KAFKA_HOME>/libs
```
-> For details, please refer to the [Debezium](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-deploying-a-connector) and [DataStax](https://docs.datastax.com/en/kafka/doc/) documentation.
+> For details, please refer to the [Debezium](https://debezium.io/documentation/reference/stable/connectors/postgresql.html) and [DataStax](https://docs.datastax.com/en/kafka/doc/) documentation.
## Configure Kafka Connect and start data pipeline
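As a hedged sketch of the source half of this pipeline (connection placeholders are illustrative, and exact property names depend on your Debezium version), a Debezium PostgreSQL connector registration might resemble the following; the `myserver` logical server name and `retail.orders_info` table yield the `myserver.retail.orders_info` topic used later in this walkthrough:

```json
{
    "name": "pg-orders-source",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "<postgres host>",
        "database.port": "5432",
        "database.user": "<user>",
        "database.password": "<password>",
        "database.dbname": "<database>",
        "database.server.name": "myserver",
        "table.include.list": "retail.orders_info"
    }
}
```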
select * from retail.orders_info;
Check the change data capture events in the Kafka topic
-> [!NOTE]
-> Note that the topic name is `myserver.retail.orders_info` which as per the [connector convention](https://debezium.io/documentation/reference/1.3/connectors/postgresql.html#postgresql-topic-names)
-
```bash
cd <KAFKA_HOME>/bin
cosmos-db Configure Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/configure-synapse-link.md
except exceptions.CosmosResourceExistsError:
    print('A container with this name already exists')
```
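For context, the snippet above is the tail of a container-creation call. A self-contained sketch using the `azure-cosmos` Python SDK might look like this (the account endpoint, key, and names are placeholders, and enabling analytical store via `analytical_storage_ttl` is shown as one plausible configuration):

```python
from azure.cosmos import CosmosClient, PartitionKey, exceptions

client = CosmosClient("<account-endpoint>", "<account-key>")
database = client.create_database_if_not_exists(id="my-database")

try:
    # analytical_storage_ttl=-1 keeps data in the analytical store indefinitely
    container = database.create_container(
        id="my-container",
        partition_key=PartitionKey(path="/id"),
        analytical_storage_ttl=-1,
    )
except exceptions.CosmosResourceExistsError:
    print('A container with this name already exists')
```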
-## Optional - Disable analytical store
-
-Analytical store can be disabled in SQL API containers or in MongoDB API collections, using Azure CLI or PowerShell. It is done by setting `analytical TTL` to `0`.
-
-> [!NOTE]
-> Please note that currently this action can't be undone. If analytical store is disabled in a container, it can never be re-enabled.
-
-> [!NOTE]
-> Please note that currently it is not possible to disable Synapse Link from a database account.
-
## <a id="connect-to-cosmos-database"></a> Connect to a Synapse workspace

Use the instructions in [Connect to Azure Synapse Link](../synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md) on how to access an Azure Cosmos DB database from Azure Synapse Analytics Studio with Azure Synapse Link.
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
If you assign a throughput of 18,000 request units per second (RU/s), then each
Azure Cosmos DB transparently and automatically manages the placement of logical partitions on physical partitions to efficiently satisfy the scalability and performance needs of the container. As the throughput and storage requirements of an application increase, Azure Cosmos DB moves logical partitions to automatically spread the load across a greater number of physical partitions. You can learn more about [physical partitions](partitioning-overview.md#physical-partitions).
-Azure Cosmos DB uses hash-based partitioning to spread logical partitions across physical partitions. Azure Cosmos DB hashes the partition key value of an item. The hashed result determines the physical partition. Then, Azure Cosmos DB allocates the key space of partition key hashes evenly across the physical partitions.
+Azure Cosmos DB uses hash-based partitioning to spread logical partitions across physical partitions. Azure Cosmos DB hashes the partition key value of an item. The hashed result determines the logical partition. Then, Azure Cosmos DB allocates the key space of partition key hashes evenly across the physical partitions.
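To make the mechanism concrete, here's a toy sketch of hash-based placement; Cosmos DB's actual hash function and key-space allocation are internal to the service, so this is purely illustrative:

```python
import hashlib

PHYSICAL_PARTITIONS = 4  # illustrative; the service manages the real count


def physical_partition_for(partition_key: str) -> int:
    # Hash the partition key value; the hashed key space is divided
    # evenly across the available physical partitions.
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    hashed = int.from_bytes(digest[:8], "big")
    return hashed % PHYSICAL_PARTITIONS


for pk in ["user-1", "user-2", "user-3"]:
    print(pk, "->", physical_partition_for(pk))
```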
Transactions (in stored procedures or triggers) are allowed only against items in a single logical partition.
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the API for T
#### [PowerShell](#tab/azure-powershell)

#### [Portal](#tab/azure-portal)
This quickstart will create a single Azure Cosmos DB account using the API for T
#### [Azure CLI](#tab/azure-cli)

#### [PowerShell](#tab/azure-powershell)

#### [Portal](#tab/azure-portal)
dotnet add package Azure.Data.Tables
### Configure environment variables

## Code examples
data-catalog Data Catalog Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-get-started.md
To set up Data Catalog, you must be the owner or co-owner of an Azure subscripti
## Create a data catalog
-You can create only one data catalog per organization (Microsoft Entra domain). Therefore, if the owner or co-owner of an Azure subscription who belongs to this Microsoft Entra domain has already created a catalog, then you can't create a catalog again even if you have multiple Azure subscriptions. To test whether a data catalog has been created by a user in your Microsoft Entra domain, go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and verify whether you see the catalog. If a catalog has already been created for you, skip the following procedure and go to the next section.
+You can create only one data catalog per organization (Microsoft Entra domain). Therefore, if the owner or co-owner of an Azure subscription who belongs to this Microsoft Entra domain has already created a catalog, then you can't create a catalog again even if you have multiple Azure subscriptions. To test whether a data catalog has been created by a user in your Microsoft Entra domain, go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com) and verify whether you see the catalog. If a catalog has already been created for you, skip the following procedure and go to the next section.
1. Go to the [Azure portal](https://portal.azure.com) > **Create a resource** and select **Data Catalog**.
You can create only one data catalog per organization (Microsoft Entra domain).
1. Specify a **name** for the data catalog, the **subscription** you want to use, the **location** for the catalog, and the **pricing tier**. Then select **Create**.
-1. Go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and select **Publish Data**.
+1. Go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com) and select **Publish Data**.
:::image type="content" source="media/data-catalog-get-started/data-catalog-publish-data.png" alt-text="On the data catalog homepage, the Publish Data button is selected.":::
data-catalog Register Data Assets Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/register-data-assets-tutorial.md
The database name we're using in this tutorial is *RLSTest*.
You can now register data assets from the database sample by using Azure Data Catalog.
-1. Go to the [Azure Data Catalog home page](http://azuredatacatalog.com) and select **Publish Data**.
+1. Go to the [Azure Data Catalog home page](https://www.azuredatacatalog.com) and select **Publish Data**.
:::image type="content" source="media/register-data-assets-tutorial/data-catalog-publish-data.png" alt-text="The data catalog is open with the Publish Data button selected.":::
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
Set "authenticationType" property to **UserAuthentication**, and specify the fol
| Property | Description | Required |
|: |: |: |
-| clientId | ID of the application used to generate the refresh token. | No |
-| clientSecret | Secret of the application used to generate the refresh token. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
-| refreshToken | The refresh token obtained from Google used to authorize access to BigQuery. Learn how to get one from [Obtaining OAuth 2.0 access tokens](https://developers.google.com/identity/protocols/OAuth2WebServer#obtainingaccesstokens) and [this community blog](https://jpd.ms/getting-your-bigquery-refresh-token-for-azure-datafactory-f884ff815a59). Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
+| clientId | ID of the application used to generate the refresh token. | Yes |
+| clientSecret | Secret of the application used to generate the refresh token. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| refreshToken | The refresh token obtained from Google used to authorize access to BigQuery. Learn how to get one from [Obtaining OAuth 2.0 access tokens](https://developers.google.com/identity/protocols/OAuth2WebServer#obtainingaccesstokens) and [this community blog](https://jpd.ms/getting-your-bigquery-refresh-token-for-azure-datafactory-f884ff815a59). Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
The minimum scope required to obtain an OAuth 2.0 refresh token is `https://www.googleapis.com/auth/bigquery.readonly`. If you plan to run a query that might return large results, other scope might be required. For more information, refer to this [article](https://cloud.google.com/bigquery/docs/writing-results#large-results).
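Putting the properties above together, a UserAuthentication linked service definition might look like the following sketch, which follows the linked service JSON pattern used by the other connector articles in this digest (the `project` value and all placeholders are illustrative assumptions):

```json
{
    "name": "GoogleBigQueryLinkedService",
    "properties": {
        "type": "GoogleBigQuery",
        "typeProperties": {
            "project": "<project ID>",
            "authenticationType": "UserAuthentication",
            "clientId": "<client ID>",
            "clientSecret": {
                "type": "SecureString",
                "value": "<client secret>"
            },
            "refreshToken": {
                "type": "SecureString",
                "value": "<refresh token>"
            }
        }
    }
}
```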
Set "authenticationType" property to **ServiceAuthentication**, and specify the
| Property | Description | Required |
|: |: |: |
| email | The service account email ID that is used for ServiceAuthentication. It can be used only on Self-hosted Integration Runtime. | No |
-| keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address. | No |
+| keyFilePath | The full path to the `.p12` or `.json` key file that is used to authenticate the service account email address. | Yes |
| trustedCertPath | The full path of the .pem file that contains trusted CA certificates used to verify the server when you connect over TLS. This property can be set only when you use TLS on Self-hosted Integration Runtime. The default value is the cacerts.pem file installed with the integration runtime. | No |
| useSystemTrustStore | Specifies whether to use a CA certificate from the system trust store or from a specified .pem file. The default value is **false**. | No |
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md
Previously updated : 01/05/2024 Last updated : 01/18/2024
+
# Copy data from MariaDB using Azure Data Factory or Synapse Analytics

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
For a list of data stores that are supported as sources/sinks by the copy activi
The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector.
-This connector currently supports MariaDB of version 10.0 to 10.5.
+This connector currently supports MariaDB versions 10.x and 11.x with the recommended new driver version v2, and versions 10.0 to 10.5 with the legacy driver version.
## Prerequisites
Use the following steps to create a linked service to MariaDB in the Azure porta
2. Search for Maria and select the MariaDB connector.
- :::image type="content" source="media/connector-mariadb/mariadb-connector.png" alt-text="Screenshot of the MariaDB connector.":::
-
+ :::image type="content" source="media/connector-mariadb/mariadb-connector.png" alt-text="Screenshot of the MariaDB connector.":::
1. Configure the service details, test the connection, and create the new linked service.
The following sections provide details about properties that are used to define
## Linked service properties
-The following properties are supported for MariaDB linked service:
+If you use the recommended driver version, the following properties are supported for the MariaDB linked service:
| Property | Description | Required |
|: |: |: |
| type | The type property must be set to: **MariaDB** | Yes |
-| connectionString | An ODBC connection string to connect to MariaDB. <br/>You can also put password in Azure Key Vault and pull the `pwd` configuration out of the connection string. Refer to the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
+| driverVersion | The driver version when you select the recommended driver version. The value is v2. | Yes |
+| server | The name of your MariaDB Server. | Yes |
+| port | The port number to connect to the MariaDB server. | No |
+| database | Your MariaDB database name. | Yes |
+| username | Your user name. | Yes |
+| password | The password for the user name. Mark this field as SecureString to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |

**Example:**
The following properties are supported for MariaDB linked service:
"properties": { "type": "MariaDB", "typeProperties": {
- "connectionString": "Server=<host>;Port=<port>;Database=<database>;UID=<user name>;PWD=<password>"
+ "server": "<server>",
+ "port": "<port>",
+ "database": "<database>",
+ "username": "<username>",
+ "password": {
+ "type": "SecureString",
+ "value": "<password>"
+ },
+ "driverVersion": "v2"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
The following properties are supported for MariaDB linked service:
"properties": { "type": "MariaDB", "typeProperties": {
- "connectionString": "Server=<host>;Port=<port>;Database=<database>;UID=<user name>;",
- "pwd": { 
- "type": "AzureKeyVaultSecret", 
- "store": { 
- "referenceName": "<Azure Key Vault linked service name>", 
- "type": "LinkedServiceReference" 
- }, 
- "secretName": "<secretName>" 
- }
+ "server": "<server>",
+ "port": "<port>",
+ "database": "<database>",
+ "username": "<username>",
+ "password": {
+ "type": "AzureKeyVaultSecret",
+ "store": {
+ "referenceName": "<Azure Key Vault linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "secretName": "<secretName>"
+ },
+ "driverVersion": "v2"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+If you use the legacy driver version, the following properties are supported:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to: **MariaDB** | Yes |
+| connectionString | An ODBC connection string to connect to MariaDB. <br/>You can also put password in Azure Key Vault and pull the `pwd` configuration out of the connection string. Refer to the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |
+
+**Example:**
+
+```json
+{
+ "name": "MariaDBLinkedService",
+ "properties": {
+ "type": "MariaDB",
+ "typeProperties": {
+ "connectionString": "Server=<host>;Port=<port>;Database=<database>;UID=<user name>;PWD=<password>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
To copy data from MariaDB, set the source type in the copy activity to **MariaDB
]
```
+## Data type mapping for MariaDB
+
+When copying data from MariaDB, the following mappings are used from MariaDB data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how copy activity maps the source schema and data type to the sink.
+
+| MariaDB data type | Interim service data type | Interim service data type (for the legacy driver version) |
+|: |: |: |
+| `bigint` |`Int64` |`Int64` |
+| `bigint unsigned` |`Decimal` |`Decimal` |
+| `bit(1)` |`UInt64` |`Boolean` |
+| `bit(M), M>1`|`UInt64`|`Byte[]`|
+| `blob` |`Byte[]` |`Byte[]` |
+| `bool` |`Boolean` <br/>(If TreatTinyAsBoolean=false, it is mapped as `SByte`. TreatTinyAsBoolean is true by default.) |`Int16` |
+| `char` |`String` |`String` |
+| `date` |`Datetime` |`Datetime` |
+| `datetime` |`Datetime` |`Datetime` |
+| `decimal` |`Decimal` |`Decimal, String` |
+| `double` |`Double` |`Double` |
+| `double precision` |`Double` |`Double` |
+| `enum` |`String` |`String` |
+| `float` |`Single` |`Single` |
+| `int` |`Int32` |`Int32` |
+| `int unsigned` |`Int64`|`Int64`|
+| `integer` |`Int32` |`Int32` |
+| `integer unsigned` |`Int64` |`Int64` |
+| `JSON` |`String` |-|
+| `long varbinary` |`Byte[]` |`Byte[]` |
+| `long varchar` |`String` |`String` |
+| `longblob` |`Byte[]` |`Byte[]` |
+| `longtext` |`String` |`String` |
+| `mediumblob` |`Byte[]` |`Byte[]` |
+| `mediumint` |`Int32` |`Int32` |
+| `mediumint unsigned` |`Int64` |`Int64` |
+| `mediumtext` |`String` |`String` |
+| `numeric` |`Decimal` |`Decimal` |
+| `real` |`Double` |`Double` |
+| `set` |`String` |`String` |
+| `smallint` |`Int16` |`Int16` |
+| `smallint unsigned` |`Int32` |`Int32` |
+| `text` |`String` |`String` |
+| `time` |`TimeSpan` |`TimeSpan` |
+| `timestamp` |`Datetime` |`Datetime` |
+| `tinyblob` |`Byte[]` |`Byte[]` |
+| `tinyint` |`SByte` |`Int16` |
+| `tinyint unsigned` |`Int16` |`Int16` |
+| `tinytext` |`String` |`String` |
+| `varchar` |`String` |`String` |
+| `year` |`Int` |`Int` |
+
## Lookup activity properties

To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
+## Upgrade the MariaDB driver version
+
+Here are steps that help you upgrade your MariaDB driver version:
+
+1. Create a new MariaDB linked service and configure it by referring to [Linked service properties](connector-mariadb.md#linked-service-properties).
+
+1. The data type mapping for the latest MariaDB linked service is different from that for the legacy version. To learn the latest data type mapping, see [Data type mapping for MariaDB](connector-mariadb.md#data-type-mapping-for-mariadb).
+
+1. More MariaDB versions are supported for the latest driver version. For more information, see [Supported capabilities](connector-mariadb.md#supported-capabilities).
## Related content
+
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
Previously updated : 01/08/2024 Last updated : 02/01/2024

# Copy and transform data in Microsoft Fabric Lakehouse using Azure Data Factory or Azure Synapse Analytics

[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Microsoft Fabric Lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location. In order to achieve seamless data access across all compute engines in Microsoft Fabric, go to [Lakehouse and Delta Tables](/fabric/data-engineering/lakehouse-and-delta-tables) to learn more.
+Microsoft Fabric Lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location. In order to achieve seamless data access across all compute engines in Microsoft Fabric, go to [Lakehouse and Delta Tables](/fabric/data-engineering/lakehouse-and-delta-tables) to learn more. By default, data is written to Lakehouse Table in V-Order, and you can go to [Delta Lake table optimization and V-Order](/fabric/data-engineering/delta-optimization-and-v-order?tabs=sparksql#what-is-v-order) for more information.
This article outlines how to use Copy activity to copy data from and to Microsoft Fabric Lakehouse and use Data Flow to transform data in Microsoft Fabric Lakehouse. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
To copy data to Microsoft Fabric Lakehouse using Microsoft Fabric Lakehouse Tabl
| : | :-- | :- |
| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSink**. | Yes |
+> [!NOTE]
+> Data is written to Lakehouse Table in V-Order by default. For more information, go to [Delta Lake table optimization and V-Order](/fabric/data-engineering/delta-optimization-and-v-order?tabs=sparksql#what-is-v-order).
**Example:**

```json
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
Previously updated : 10/20/2023 Last updated : 01/16/2024
This MySQL connector is supported for the following capabilities:
For a list of data stores that are supported as sources/sinks by the copy activity, see the [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats) table.
-Specifically, this MySQL connector supports MySQL **version 5.6, 5.7 and 8.0**.
+This connector supports MySQL versions 5.5, 5.6, 5.7, 8.0, 8.1, and 8.2 with the recommended new driver version v2, and versions 5.6, 5.7, and 8.0 with the legacy driver version.
## Prerequisites
The following sections provide details about properties that are used to define
## Linked service properties
-The following properties are supported for MySQL linked service:
+If you use the recommended driver version, the following properties are supported for the MySQL linked service:
| Property | Description | Required |
|: |: |: |
| type | The type property must be set to: **MySql** | Yes |
-| connectionString | Specify information needed to connect to the Azure Database for MySQL instance.<br/> You can also put password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
+| driverVersion | The driver version when you select the recommended driver version. The value is v2. | Yes |
+| server | The name of your MySQL Server. | Yes |
+| port | The port number to connect to the MySQL server. |No|
+| database | Your MySQL database name. |Yes|
+| username | Your user name.|Yes|
+| password | The password for the user name. Mark this field as SecureString to store it securely. Or, you can [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| sslMode | This option specifies whether the driver uses TLS encryption and verification when connecting to MySQL. E.g., `SSLMode=<0/1/2/3/4>`.<br/>Options: DISABLED (0) / PREFERRED (1) **(Default)** / REQUIRED (2) / VERIFY_CA (3) / VERIFY_IDENTITY (4) | Yes |
+| useSystemTrustStore | This option specifies whether to use a CA certificate from the system trust store, or from a specified PEM file. E.g. `UseSystemTrustStore=<0/1>`;<br/>Options: Enabled (1) / Disabled (0) **(Default)** | No |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. |No |
-A typical connection string is `Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>`. More properties you can set per your case:
-
-| Property | Description | Options | Required |
-|: |: |: |: |
-| SSLMode | This option specifies whether the driver uses TLS encryption and verification when connecting to MySQL. E.g., `SSLMode=<0/1/2/3/4>`.| DISABLED (0) / PREFERRED (1) **(Default)** / REQUIRED (2) / VERIFY_CA (3) / VERIFY_IDENTITY (4) | No |
-| SSLCert | The full path and name of a .pem file containing the SSL certificate used for proving the identity of the client. <br/> To specify a private key for encrypting this certificate before sending it to the server, use the `SSLKey` property.| | Yes, if using two-way SSL verification. |
-| SSLKey | The full path and name of a file containing the private key used for encrypting the client-side certificate during two-way SSL verification.| | Yes, if using two-way SSL verification. |
-| UseSystemTrustStore | This option specifies whether to use a CA certificate from the system trust store, or from a specified PEM file. E.g. `UseSystemTrustStore=<0/1>;`| Enabled (1) / Disabled (0) **(Default)** | No |
-
**Example:**

```json
A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
"properties": { "type": "MySql", "typeProperties": {
- "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>"
+ "server": "<server>",
+ "port": 3306,
+ "database": "<database>",
+ "username": "<username>",
+ "password": {
+ "type": "SecureString",
+ "value": "<password>"
+ },
+ "sslmode": <sslmode>,
+ "usesystemtruststore": <UseSystemTrustStore>,
+ "driverVersion": "v2"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
"properties": { "type": "MySql", "typeProperties": {
- "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;",
+ "server": "<server>",
+ "port": 3306,
+ "database": "<database>",
+ "username": "<username>",
+ "sslmode": <sslmode>,
+ "usesystemtruststore": <UseSystemTrustStore>,
"password": {  "type": "AzureKeyVaultSecret",  "store": { 
A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
"type": "LinkedServiceReference"  },  "secretName": "<secretName>" 
- }
+ },
+ "driverVersion": "v2"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
A typical connection string is `Server=<server>;Port=<port>;Database=<database>;
}
```
-If you were using MySQL linked service with the following payload, it is still supported as-is, while you are suggested to use the new one going forward.
+If you use the legacy driver version, the following properties are supported:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to: **MySql** | Yes |
+| connectionString | Specify information needed to connect to the Azure Database for MySQL instance.<br/> You can also put password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to the following samples and [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) article with more details. | Yes |
+| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to be used to connect to the data store. Learn more from [Prerequisites](#prerequisites) section. If not specified, it uses the default Azure Integration Runtime. | No |
+
+A typical connection string is `Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>`. More properties you can set per your case:
+
+| Property | Description | Required |
+|: |: |: |
+| sslMode | This option specifies whether the driver uses TLS encryption and verification when connecting to MySQL. E.g., `SSLMode=<0/1/2/3/4>`.<br/>Options: DISABLED (0) / PREFERRED (1) **(Default)** / REQUIRED (2) / VERIFY_CA (3) / VERIFY_IDENTITY (4) | Yes |
+| SSLCert | The full path and name of a .pem file containing the SSL certificate used for proving the identity of the client. <br/> To specify a private key for encrypting this certificate before sending it to the server, use the `SSLKey` property.| Yes, if using two-way SSL verification. |
+| SSLKey | The full path and name of a file containing the private key used for encrypting the client-side certificate during two-way SSL verification.| Yes, if using two-way SSL verification. |
+| useSystemTrustStore | This option specifies whether to use a CA certificate from the system trust store, or from a specified PEM file. E.g. `UseSystemTrustStore=<0/1>`;<br/>Options: Enabled (1) / Disabled (0) **(Default)** | No |
-**Previous payload:**
+**Example:**
```json
{
If you were using MySQL linked service with the following payload, it is still s
"properties": { "type": "MySql", "typeProperties": {
- "server": "<server>",
- "database": "<database>",
- "username": "<username>",
- "password": {
- "type": "SecureString",
- "value": "<password>"
- }
+ "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
If you were using `RelationalSource` typed source, it is still supported as-is,
When copying data from MySQL, the following mappings are used from MySQL data types to interim data types used by the service internally. See [Schema and data type mappings](copy-activity-schema-and-type-mapping.md) to learn about how copy activity maps the source schema and data type to the sink.
-| MySQL data type | Interim service data type |
-|: |: |
-| `bigint` |`Int64` |
-| `bigint unsigned` |`Decimal` |
-| `bit(1)` |`Boolean` |
-| `bit(M), M>1`|`Byte[]`|
-| `blob` |`Byte[]` |
-| `bool` |`Int16` |
-| `char` |`String` |
-| `date` |`Datetime` |
-| `datetime` |`Datetime` |
-| `decimal` |`Decimal, String` |
-| `double` |`Double` |
-| `double precision` |`Double` |
-| `enum` |`String` |
-| `float` |`Single` |
-| `int` |`Int32` |
-| `int unsigned` |`Int64`|
-| `integer` |`Int32` |
-| `integer unsigned` |`Int64` |
-| `json` |`String` |
-| `long varbinary` |`Byte[]` |
-| `long varchar` |`String` |
-| `longblob` |`Byte[]` |
-| `longtext` |`String` |
-| `mediumblob` |`Byte[]` |
-| `mediumint` |`Int32` |
-| `mediumint unsigned` |`Int64` |
-| `mediumtext` |`String` |
-| `numeric` |`Decimal` |
-| `real` |`Double` |
-| `set` |`String` |
-| `smallint` |`Int16` |
-| `smallint unsigned` |`Int32` |
-| `text` |`String` |
-| `time` |`TimeSpan` |
-| `timestamp` |`Datetime` |
-| `tinyblob` |`Byte[]` |
-| `tinyint` |`Int16` |
-| `tinyint unsigned` |`Int16` |
-| `tinytext` |`String` |
-| `varchar` |`String` |
-| `year` |`Int` |
-
+| MySQL data type | Interim service data type | Interim service data type (for the legacy driver version) |
+|: |: |: |
+| `bigint` |`Int64` |`Int64` |
+| `bigint unsigned` |`Decimal` |`Decimal` |
+| `bit(1)` |`UInt64` |`Boolean` |
+| `bit(M), M>1`|`UInt64`|`Byte[]`|
+| `blob` |`Byte[]` |`Byte[]` |
+| `bool` |`Boolean` <br/>(If TreatTinyAsBoolean=false, it is mapped as `SByte`. TreatTinyAsBoolean is true by default.) |`Int16` |
+| `char` |`String` |`String` |
+| `date` |`Datetime` |`Datetime` |
+| `datetime` |`Datetime` |`Datetime` |
+| `decimal` |`Decimal` |`Decimal, String` |
+| `double` |`Double` |`Double` |
+| `double precision` |`Double` |`Double` |
+| `enum` |`String` |`String` |
+| `float` |`Single` |`Single` |
+| `int` |`Int32` |`Int32` |
+| `int unsigned` |`Int64`|`Int64`|
+| `integer` |`Int32` |`Int32` |
+| `integer unsigned` |`Int64` |`Int64` |
+| `JSON` |`String` |-|
+| `long varbinary` |`Byte[]` |`Byte[]` |
+| `long varchar` |`String` |`String` |
+| `longblob` |`Byte[]` |`Byte[]` |
+| `longtext` |`String` |`String` |
+| `mediumblob` |`Byte[]` |`Byte[]` |
+| `mediumint` |`Int32` |`Int32` |
+| `mediumint unsigned` |`Int64` |`Int64` |
+| `mediumtext` |`String` |`String` |
+| `numeric` |`Decimal` |`Decimal` |
+| `real` |`Double` |`Double` |
+| `set` |`String` |`String` |
+| `smallint` |`Int16` |`Int16` |
+| `smallint unsigned` |`Int32` |`Int32` |
+| `text` |`String` |`String` |
+| `time` |`TimeSpan` |`TimeSpan` |
+| `timestamp` |`Datetime` |`Datetime` |
+| `tinyblob` |`Byte[]` |`Byte[]` |
+| `tinyint` |`SByte` |`Int16` |
+| `tinyint unsigned` |`Int16` |`Int16` |
+| `tinytext` |`String` |`String` |
+| `varchar` |`String` |`String` |
+| `year` |`Int` |`Int` |
## Lookup activity properties

To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
+## Upgrade the MySQL driver version
+
+Here are steps that help you upgrade your MySQL driver version:
+
+1. Create a new MySQL linked service and configure it by referring to [Linked service properties](connector-mysql.md#linked-service-properties).
+
+1. The data type mapping for the latest MySQL linked service is different from that for the legacy version. To learn the latest data type mapping, see [Data type mapping for MySQL](connector-mysql.md#data-type-mapping-for-mysql).
+
+1. More MySQL versions are supported for the latest driver version. For more information, see [Supported capabilities](connector-mysql.md#supported-capabilities).
## Related content
+
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
databox-online Azure Stack Edge Gpu Prepare Device Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-prepare-device-failure.md
The following third-party data protection solutions can provide a backup solutio
|--||
| Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
| Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
-| Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+| Veritas | [https://www.veritas.com/partners/microsoft-azure](https://www.veritas.com/partners/microsoft-azure) <br> For details, contact Veritas. |
| Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |
Azure Stack Edge works with Azure Backup and other third-party data protection s
## Next steps

-- Learn how to [Recover from a failed Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-recover-device-failure.md).
+- Learn how to [Recover from a failed Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-recover-device-failure.md).
databox-online Azure Stack Edge Gpu Recover Device Failure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-recover-device-failure.md
To prepare for a potential device failure, you may have deployed one the followi
|--||
| Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
| Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
-| Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+| Veritas | [https://www.veritas.com/partners/microsoft-azure](https://www.veritas.com/partners/microsoft-azure) <br> For details, contact Veritas. |
| Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |

After the replacement device is fully configured, enable the device for local storage.
databox-online Azure Stack Edge Migrate Fpga Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
Data in Edge local shares stays on the device. Do these steps on your *source* d
|--||
| Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
| Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
- | Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+ | Veritas | [https://www.veritas.com/partners/microsoft-azure](https://www.veritas.com/partners/microsoft-azure) <br> For details, contact Veritas. |
| Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |
Data in Edge local shares stays on the device. Do these steps on your *source* d
|--||
| Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
| Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
- | Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+ | Veritas | [https://www.veritas.com/partners/microsoft-azure](https://www.veritas.com/partners/microsoft-azure) <br> For details, contact Veritas. |
| Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |
Follow these steps to recover the data from local shares:
|--||
| Cohesity | [https://www.cohesity.com/solution/cloud/azure/](https://www.cohesity.com/solution/cloud/azure/) <br> For details, contact Cohesity. |
| Commvault | [https://www.commvault.com/azure](https://www.commvault.com/azure) <br> For details, contact Commvault. |
- | Veritas | [http://veritas.com/azure](http://veritas.com/azure) <br> For details, contact Veritas. |
+ | Veritas | [https://www.veritas.com/partners/microsoft-azure](https://www.veritas.com/partners/microsoft-azure) <br> For details, contact Veritas. |
| Veeam | [https://www.veeam.com/kb4041](https://www.veeam.com/kb4041) <br> For details, contact Veeam. |

### 3. Redeploy IoT Edge workloads
After the data migration is complete, erase local data and return the source dev
## Next steps
-[Learn how to deploy IoT Edge workloads on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-compute-module-simple.md)
+[Learn how to deploy IoT Edge workloads on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-compute-module-simple.md)
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
If you're already familiar with Event Grid, you might be aware of the endpoint v
If you're using Visual Studio or Visual Studio Code, and C# programming language to develop functions, make sure that you're using the latest [Microsoft.Azure.WebJobs.Extensions.EventGrid](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.EventGrid/) NuGet package (version **3.3.1** or above).
-In Visual Studio, use the **Tools** -> **NuGet Package Manager** -> **Package Manager Console**, and run the `Install-Package` command (`Install-Package Microsoft.Azure.WebJobs.Extensions.EventGrid -Version 3.2.1`). Alternatively, right-click the project in the Solution Explorer window, and select **Manage NuGet Packages** menu to browse for the NuGet package, and install or update it to the latest version.
+In Visual Studio, use the **Tools** -> **NuGet Package Manager** -> **Package Manager Console**, and run the `Install-Package` command (`Install-Package Microsoft.Azure.WebJobs.Extensions.EventGrid -Version 3.3.1`). Alternatively, right-click the project in the Solution Explorer window, and select **Manage NuGet Packages** menu to browse for the NuGet package, and install or update it to the latest version.
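Equivalently, from a terminal you can use the .NET CLI (a convenience alternative, not a step from the article itself):

```bash
dotnet add package Microsoft.Azure.WebJobs.Extensions.EventGrid --version 3.3.1
```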
In VS Code, update the version number for the **Microsoft.Azure.WebJobs.Extensions.EventGrid** package in the **csproj** file for your Azure Functions project.
In VS Code, update the version number for the **Microsoft.Azure.WebJobs.Extensio
    <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
- <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.EventGrid" Version="3.2.1" />
+ <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.EventGrid" Version="3.3.1" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" /> </ItemGroup> <ItemGroup>
event-grid Event Schema Storage Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-storage-actions.md
+
+ Title: Azure Storage Actions as Event Grid source
+description: Describes the properties that are provided for Azure Storage Actions events with Azure Event Grid.
+
+Last updated : 08/30/2023
+
+# Azure Storage Actions as an Event Grid source
+
+This article provides the properties and schema for Azure Storage Actions events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). To learn more about Azure Storage Actions, see [What is Azure Storage Actions?](../storage-actions/overview.md).
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../storage-actions/overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> To enroll, see \<sign-up form link here\>.
+
+## Available event types
+
+## Storage Actions events
+
+These events are triggered when a storage task is queued and when a storage task run completes.
+
+ |Event name |Description|
+ |-|--|
+ | [Microsoft.StorageActions.StorageTaskQueued](#microsoftstorageactionsstoragetaskqueued-event) | Triggered when a storage task assignment run is queued. This event provides the status of the assignment run, such as when the assignment is queued, and the corresponding execution ID for tracking purposes. |
+ |[Microsoft.StorageActions.StorageTaskCompleted](#microsoftstorageactionsstoragetaskcompleted-event) | Triggered when a storage task assignment run is completed. This event provides the status of the assignment run, such as when the assignment is completed, the assignment's status, which task is associated with the assignment, and the link to the summary report file. |
+
+### Example events
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+### Microsoft.StorageActions.StorageTaskQueued event
+
+```json
+[{
+ "topic":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.StorageActions/storageAccounts/my-storage-account",
+ "subject":"DataManagement/StorageTasks",
+ "eventType":"Microsoft.StorageActions.StorageTaskQueued",
+ "id":"8eb4656c-5c4a-4541-91e0-685558acbb1d",
+ "data":{
+ "queuedDateTime":"2023-08-07T21:35:23Z",
+ "taskExecutionId":"testdelete-2023-08-07T21:35:16.9494934Z_2023-08-07T21:35:17.5432186Z"
+ },
+ "dataVersion":"1.0",
+ "metadataVersion":"1",
+ "eventTime":"2023-08-07T21:35:23Z"
+}]
+
+```
+
+### Microsoft.StorageActions.StorageTaskCompleted event
+
+```json
+[{
+ "topic":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.StorageActions/storageAccounts/my-storage-account",
+ "subject":"DataManagement/StorageTasks",
+ "eventType":"Microsoft.StorageActions.StorageTaskCompleted",
+ "id":"dee33d3b-0b39-42f2-b2be-76f2fb94b852",
+ "data":{
+ "status":"Succeeded",
+ "completedDateTime":"2023-08-07T21:35:34Z",
+ "taskExecutionId":"testdelete-2023-08-07T21:35:16.9494934Z_2023-08-07T21:35:17.5432186Z",
+ "taskName":"deleteallcentraleu",
+ "summaryReportBlobUrl":"https://my-storage-account.blob.core.windows.net/result-container/deleteallcentraleu_testdelete_2023-08-07T21:35:23/SummaryReport.json"
+ },
+ "dataVersion":"1.0",
+ "metadataVersion":"1",
+ "eventTime":"2023-08-07T21:35:34Z"
+}]
+```
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+### Microsoft.StorageActions.StorageTaskQueued event
+
+```json
+[{
+ "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.StorageActions/storageAccounts/my-storage-account",
+ "subject": "DataManagement/StorageTasks",
+ "type": "Microsoft.StorageActions.StorageTaskQueued",
+ "time": "2023-08-07T21:35:23Z",
+ "id": "8eb4656c-5c4a-4541-91e0-685558acbb1d",
+ "data": {
+ "queuedDateTime":"2023-08-07T21:35:23Z",
+ "taskExecutionId":"testdelete-2023-08-07T21:35:16.9494934Z_2023-08-07T21:35:17.5432186Z"
+ },
+ "specversion": "1.0"
+}]
+```
+
+### Microsoft.StorageActions.StorageTaskCompleted event
+
+```json
+[{
+ "source": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/BlobInventory/providers/Microsoft.StorageActions/storageAccounts/my-storage-account",
+ "subject": "DataManagement/StorageTasks",
+ "type": "Microsoft.StorageActions.StorageTaskCompleted",
+ "time": "2023-08-07T21:35:34Z",
+ "id": "dee33d3b-0b39-42f2-b2be-76f2fb94b852",
+ "data": {
+ "status":"Succeeded",
+ "completedDateTime":"2023-08-07T21:35:34Z",
+ "taskExecutionId":"testdelete-2023-08-07T21:35:16.9494934Z_2023-08-07T21:35:17.5432186Z",
+ "taskName":"deleteallcentraleu",
+ "summaryReportBlobUrl":"https://my-storage-account.blob.core.windows.net/result-container/deleteallcentraleu_testdelete_2023-08-07T21:35:23/SummaryReport.json"
+ },
+ "specversion": "1.0"
+}]
+```
+---
+## Event properties
+
+# [Event Grid event schema](#tab/event-grid-event-schema)
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
+| `subject` | string | Publisher-defined path to the event subject. |
+| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
+| `eventType` | string | One of the registered event types for this event source. |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Storage task event data. |
+| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
+| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
+
+# [Cloud event schema](#tab/cloud-event-schema)
+
+An event has the following top-level data:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `source` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
+| `subject` | string | Publisher-defined path to the event subject. |
+| `type` | string | One of the registered event types for this event source. |
+| `id` | string | Unique identifier for the event. |
+| `data` | object | Storage task event data. |
+| `specversion` | string | CloudEvents schema specification version. |
+---
+The data object has the following properties:
+
+| Property | Type | Description |
+| -- | - | -- |
+| `queuedDateTime` | string | The time that the storage task assignment is queued. |
+| `status` | string | The storage task assignment completion status (`Succeeded` or `Failed`). |
+| `completedDateTime` | string | The time that the storage task assignment completed. |
+| `taskExecutionId` | string | The unique ID that is associated with the storage task assignment. |
+| `taskName` | string | The storage task that is associated with the storage task assignment. |
+| `summaryReportBlobUrl` | string | The link to the storage task assignment summary report file. |
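+
+To consume these events, you can create an event subscription that filters on the two event types. The following is a sketch using the Azure CLI; the storage account resource ID and the webhook URL are placeholders that you'd replace with your own values:
+
+```shell
+# Subscribe a webhook to storage task events, delivered in the CloudEvents schema.
+# <storage-account-resource-id> and the endpoint URL below are placeholders.
+az eventgrid event-subscription create \
+  --name storage-task-events \
+  --source-resource-id <storage-account-resource-id> \
+  --endpoint https://contoso.example.com/api/events \
+  --included-event-types Microsoft.StorageActions.StorageTaskQueued Microsoft.StorageActions.StorageTaskCompleted \
+  --event-delivery-schema cloudeventschemav1_0
+```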
+
+## Next steps
+
+- For an introduction to Azure Event Grid, see [What is Event Grid?](overview.md)
+- For more information about creating an Azure Event Grid subscription, see [Event Grid subscription schema](subscription-creation-schema.md).
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
Title: Azure Event Hubs Firewall Rules | Microsoft Docs
description: Use Firewall Rules to allow connections from specific IP addresses to Azure Event Hubs. Previously updated: 02/15/2023. Last updated: 02/02/2024.

# Allow access to Azure Event Hubs namespaces from specific IP addresses or ranges
-By default, Event Hubs namespaces are accessible from internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
+By default, Event Hubs namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With the IP firewall, you can restrict access further to only a set of IPv4 and IPv6 addresses or address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
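For example, an allowed list might combine a single IPv4 address with IPv4 and IPv6 ranges; the following values are illustrative placeholders only:

```text
10.1.1.1
10.1.0.0/16
2001:db8::/32
```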
-This feature is helpful in scenarios in which Azure Event Hubs should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Event Hubs with [Azure Express Route][express-route], you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses.
+This feature is helpful in scenarios in which Azure Event Hubs should be accessible only from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 and IPv6 addresses. For example, if you use Event Hubs with [Azure Express Route][express-route], you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses.
## IP firewall rules

You specify IP firewall rules at the Event Hubs namespace level, so the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Event Hubs namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
This section shows you how to use the Azure portal to create IP firewall rules for an Event Hubs namespace.

- **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, the event hub accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range.

1. To restrict access to **specific IP addresses**, select the **Selected networks** option, and then follow these steps:
   1. In the **Firewall** section, select the **Add your client IP address** option to give your current client IP address access to the namespace.
- 3. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation.
-
- To restrict access to **specific virtual networks**, see [Allow access from specific networks](event-hubs-service-endpoints.md).
+ 3. For **address range**, enter specific IPv4 or IPv6 addresses or address ranges in CIDR notation.
+
+ > [!IMPORTANT]
+ > When the service begins to support IPv6 connections and clients automatically switch to IPv6, clients that are allowed only through IPv4 rules lose access. Therefore, we recommend that you add your clients' IPv6 addresses to the list of allowed IP addresses now so that they keep working when the service eventually switches to IPv6.
1. Specify whether you want to **allow trusted Microsoft services to bypass this firewall**. See [Trusted Microsoft services](#trusted-microsoft-services) for details.

   :::image type="content" source="./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png" lightbox="./media/event-hubs-firewall/firewall-selected-networks-trusted-access-disabled.png" alt-text="Firewall section highlighted in the Public access tab of the Networking page.":::
Set-AzEventHubNetworkRuleSet -ResourceGroupName myResourceGroup -NamespaceName m
### REST API
-The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network (VNet) rules. That is, if you didn't have any IP filters or VNet rules, it's treated as `Allow`.
+The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network rules. That is, if you don't have any IP filters or virtual network rules, the traffic is treated as `Allow`.
-From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and VNet rules are enforced. If the default action is set to `Allow`, IP filters and VNet rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
+From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and virtual network rules are enforced. If the default action is set to `Allow`, IP filters and virtual network rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
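One quick way to inspect the `defaultAction` and `publicNetworkAccess` values on an existing namespace is to call the REST API directly. The following is a sketch using the Azure CLI's generic `az rest` command; the subscription ID, resource group, and namespace names are placeholders:

```shell
# Retrieve the network rule set for a namespace to inspect defaultAction,
# publicNetworkAccess, ipRules, and virtualNetworkRules.
az rest --method get \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/networkRuleSets/default?api-version=2021-11-01"
```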
For more information about these properties, see [Create or Update Network Rule
### Azure portal
-Azure portal always uses the latest API version to get and set properties. If you had configured your namespace using **2021-01-01-preview and earlier** with `defaultAction` set to `Deny`, and specified zero IP filters and VNet rules, the portal would have previously checked **Selected Networks** on the **Networking** page of your namespace. Now, it checks the **All networks** option.
+The Azure portal always uses the latest API version to get and set properties. If you had configured your namespace using **2021-01-01-preview and earlier** with `defaultAction` set to `Deny`, and specified zero IP filters and virtual network rules, the portal would have previously checked **Selected Networks** on the **Networking** page of your namespace. Now, it checks the **All networks** option.
:::image type="content" source="./media/event-hubs-firewall/firewall-all-networks-selected.png" lightbox="./media/event-hubs-firewall/firewall-all-networks-selected.png" alt-text="Screenshot that shows the Public access page with the All networks option selected.":::
event-hubs Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/network-security.md
Title: Network security for Azure Event Hubs
-description: This article describes how to configure access from private endpoints
+description: This article describes how to configure access from private endpoints.
Last updated 04/13/2022
This article describes how to use the following security features with Azure Event Hubs:

## Service tags

A service tag represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. For more information about service tags, see [Service tags overview](../virtual-network/service-tags-overview.md).
-You can use service tags to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, **EventHub**) in the appropriate *source* or *destination* field of a rule, you can allow or deny the traffic for the corresponding service.
+You can use service tags to define network access controls on [network security groups](../virtual-network/network-security-groups-overview.md#security-rules) or [Azure Firewall](../firewall/service-tags.md). Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, `EventHub`) in the appropriate *source* or *destination* field of a rule, you can allow or deny the traffic for the corresponding service.
| Service tag | Purpose | Can use inbound or outbound? | Can be regional? | Can use with Azure Firewall? |
| --- | --- | :---: | :---: | :---: |
-| **EventHub** | Azure Event Hubs. | Outbound | Yes | Yes |
+| `EventHub` | Azure Event Hubs. | Outbound | Yes | Yes |
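
For example, here's a sketch of an outbound network security group rule that uses the service tag with the Azure CLI. The resource group and NSG names are placeholders, and the ports shown are the HTTPS and AMQP ports that Event Hubs clients commonly use:

```shell
# Allow outbound traffic from a subnet to Event Hubs by using the EventHub service tag.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowEventHubsOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes EventHub \
  --destination-port-ranges 443 5671 5672
```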
> [!NOTE]
> The Azure Event Hubs service tag contains some of the IP addresses that Azure Service Bus uses, for historical reasons.

## IP firewall
-By default, Event Hubs namespaces are accessible from internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
+By default, Event Hubs namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With the IP firewall, you can restrict access further to only a set of IPv4 or IPv6 addresses or address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
-This feature is helpful in scenarios in which Azure Event Hubs should be only accessible from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 addresses. For example, if you use Event Hubs with [Azure Express Route](../expressroute/expressroute-faqs.md#supported-services), you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses.
+This feature is helpful in scenarios in which Azure Event Hubs should be accessible only from certain well-known sites. Firewall rules enable you to configure rules to accept traffic originating from specific IPv4 or IPv6 addresses. For example, if you use Event Hubs with [Azure Express Route](../expressroute/expressroute-faqs.md#supported-services), you can create a **firewall rule** to allow traffic from only your on-premises infrastructure IP addresses.
-The IP firewall rules are applied at the Event Hubs namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that does not match an allowed IP rule on the Event Hubs namespace is rejected as unauthorized. The response does not mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
+The IP firewall rules are applied at the Event Hubs namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Event Hubs namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
-For more information, see [How to configure IP firewall for an event hub](event-hubs-ip-filtering.md)
+For more information, see [How to configure IP firewall for an event hub](event-hubs-ip-filtering.md).
## Network service endpoints
-The integration of Event Hubs with [Virtual Network (VNet) Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) enables secure access to messaging capabilities from workloads such as virtual machines that are bound to virtual networks, with the network traffic path being secured on both ends.
+The integration of Event Hubs with [virtual network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) enables secure access to messaging capabilities from workloads such as virtual machines that are bound to virtual networks, with the network traffic path being secured on both ends.
Once configured to be bound to at least one virtual network subnet service endpoint, the respective Event Hubs namespace no longer accepts traffic from anywhere but the authorized subnets in virtual networks. From the virtual network perspective, binding an Event Hubs namespace to a service endpoint configures an isolated networking tunnel from the virtual network subnet to the messaging service.
-The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. There is an exception to this behavior. Enabling a service endpoint, by default, enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hub public endpoint.
+The result is a private and isolated relationship between the workloads bound to the subnet and the respective Event Hubs namespace, even though the observable network address of the messaging service endpoint is in a public IP range. There's an exception to this behavior. When you enable a service endpoint, by default, the service enables the `denyall` rule in the [IP firewall](event-hubs-ip-filtering.md) associated with the virtual network. You can add specific IP addresses in the IP firewall to enable access to the Event Hubs public endpoint.
> [!IMPORTANT]
> This feature isn't supported in the **basic** tier.
-### Advanced security scenarios enabled by VNet integration
+### Advanced security scenarios enabled by virtual network integration
Solutions that require tight and compartmentalized security, and where virtual network subnets provide the segmentation between the compartmentalized services, still need communication paths between services residing in those compartments.
Binding an Event Hubs namespace to a virtual network is a two-step process. You first create a virtual network service endpoint on a subnet, and then bind the namespace to it with a virtual network rule.
The virtual network rule is an association of the Event Hubs namespace with a virtual network subnet. While the rule exists, all workloads bound to the subnet are granted access to the Event Hubs namespace. Event Hubs itself never establishes outbound connections, doesn't need to gain access, and is therefore never granted access to your subnet by enabling this rule.
-For more information, see [How to configure virtual network service endpoints for an event hub](event-hubs-service-endpoints.md)
+For more information, see [How to configure virtual network service endpoints for an event hub](event-hubs-service-endpoints.md).
## Private endpoints

[Azure Private Link service](../private-link/private-link-overview.md) enables you to access Azure Services (for example, Azure Event Hubs, Azure Storage, and Azure Cosmos DB) and Azure hosted customer/partner services over a **private endpoint** in your virtual network.
-A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
+A private endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. The private endpoint uses a private IP address from your virtual network, effectively bringing the service into your virtual network. All traffic to the service can be routed through the private endpoint, so no gateways, NAT devices, ExpressRoute or VPN connections, or public IP addresses are needed. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating exposure to the public internet. You can connect to an instance of an Azure resource, giving you the highest level of granularity in access control.
> [!IMPORTANT]
> This feature isn't supported in the **basic** tier.
-For more information, see [How to configure private endpoints for an event hub](private-link-service.md)
+For more information, see [How to configure private endpoints for an event hub](private-link-service.md).
## Next steps
event-hubs Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/troubleshooting-guide.md
Last updated 12/15/2022
# Troubleshoot connectivity issues - Azure Event Hubs
-There are various reasons for client applications not able to connect to an event hub. The connectivity issues that you experience may be permanent or transient. If the issue happens all the time (permanent), you may want to check the connection string, your organization's firewall settings, IP firewall settings, network security settings (service endpoints, private endpoints, etc.), and more. For transient issues, upgrading to latest version of the SDK, running commands to check dropped packets, and obtaining network traces may help with troubleshooting the issues. This article provides tips for troubleshooting connectivity issues with Azure Event Hubs.
+There are various reasons for client applications not being able to connect to an event hub. The connectivity issues might be permanent or transient. If the issue happens all the time (permanent), you might want to check the connection string, your organization's firewall settings, IP firewall settings, network security settings (service endpoints, private endpoints, and so on), and more. For transient issues, upgrading to the latest version of the SDK, running commands to check dropped packets, and obtaining network traces might help with troubleshooting. This article provides tips for troubleshooting connectivity issues with Azure Event Hubs.
## Troubleshoot permanent connectivity issues

If the application isn't able to connect to the event hub at all, follow the steps in this section to troubleshoot the issue.
For Kafka clients, verify that the producer.config or consumer.config files are configured properly.
### Verify that the Event Hubs service tag is allowed in your network security groups

If your application is running inside a subnet and there's an associated network security group, confirm whether internet outbound traffic is allowed or the Event Hubs service tag (`EventHub`) is allowed. See [Virtual network service tags](../virtual-network/service-tags-overview.md) and search for `EventHub`.
-### Check if the application needs to be running in a specific subnet of a vnet
+### Check if the application needs to be running in a specific subnet of a virtual network
Confirm that your application is running in a virtual network subnet that has access to the namespace. If it's not, run the application in a subnet that has access to the namespace, or add the IP address of the machine on which the application is running to the [IP firewall](event-hubs-ip-filtering.md). When you create a virtual network service endpoint for an event hub namespace, the namespace accepts traffic only from the subnet that's bound to the service endpoint. There's an exception to this behavior. You can add specific IP addresses in the IP firewall to enable access to the event hub's public endpoint. For more information, see [Network service endpoints](event-hubs-service-endpoints.md).
### Check the IP firewall settings for your namespace

Check that the public IP address of the machine on which the application is running isn't blocked by the IP firewall.
-By default, Event Hubs namespaces are accessible from internet as long as the request comes with valid authentication and authorization. With IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
+By default, Event Hubs namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With the IP firewall, you can restrict access further to only a set of IPv4 or IPv6 addresses or address ranges in [CIDR (Classless Inter-Domain Routing)](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) notation.
The IP firewall rules are applied at the Event Hubs namespace level. Therefore, the rules apply to all connections from clients using any supported protocol. Any connection attempt from an IP address that doesn't match an allowed IP rule on the Event Hubs namespace is rejected as unauthorized. The response doesn't mention the IP rule. IP filter rules are applied in order, and the first rule that matches the IP address determines the accept or reject action.
An example of a **failure error message**:

If you're experiencing intermittent connectivity issues, go through the following sections for troubleshooting tips.

### Use the latest version of the client SDK
If you're experiencing intermittent connectivity issues, go through the following sections for troubleshooting tips. ### Use the latest version of the client SDK
-Some of the transient connectivity issues may have been fixed in the later versions of the SDK than what you are using. Ensure that you're using the latest version of client SDKs in your applications. SDKs are continuously improved with new/updated features and bug fixes, so always test with latest package. Check the release notes for issues that are fixed and features added/updated.
+Some of the transient connectivity issues might have been fixed in SDK versions later than the one you're using. Ensure that you're using the latest version of the client SDKs in your applications. SDKs are continuously improved with new and updated features and bug fixes, so always test with the latest package. Check the release notes for issues that are fixed and features that are added or updated.
For information about client SDKs, see the [Azure Event Hubs - Client SDKs](sdks.md) article.

### Run the command to check dropped packets
-When there are intermittent connectivity issues, run the following command to check if there are any dropped packets. This command will try to establish 25 different TCP connections every 1 second with the service. Then, you can check how many of them succeeded/failed and also see TCP connection latency. You can download the `psping` tool from [here](/sysinternals/downloads/psping).
+When there are intermittent connectivity issues, run the following command to check if there are any dropped packets. This command tries to establish 25 different TCP connections with the service, one every second. Then, you can check how many of them succeeded or failed, and also see the TCP connection latency. You can download the `psping` tool from [Sysinternals](/sysinternals/downloads/psping).
```shell
.\psping.exe -n 25 -i 1 -q <yournamespacename>.servicebus.windows.net:5671 -nobanner
```
You can use equivalent commands if you're using other tools such as `tnc` and `ping`.
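For example, on Linux or macOS, a similar reachability check could use netcat; replace the namespace placeholder with your own:

```shell
# Check TCP reachability of the namespace on the AMQP-over-TLS port (5671).
nc -zv <yournamespacename>.servicebus.windows.net 5671
```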
Obtain a network trace if the previous steps don't help and analyze it using tools such as [Wireshark](https://www.wireshark.org/). Contact [Microsoft Support](https://support.microsoft.com/) if needed.

### Service upgrades/restarts
-Transient connectivity issues may occur because of backend service upgrades and restarts. When they occur, you may see the following symptoms:
+Transient connectivity issues might occur because of backend service upgrades and restarts. When they occur, you might see the following symptoms:
-- There may be a drop in incoming messages/requests.-- The log file may contain error messages.-- The applications may be disconnected from the service for a few seconds.-- Requests may be momentarily throttled.
+- There might be a drop in incoming messages/requests.
+- The log file might contain error messages.
+- The applications might be disconnected from the service for a few seconds.
+- Requests might be momentarily throttled.
-If the application code utilizes SDK, the retry policy is already built in and active. The application will reconnect without significant impact to the application/workflow. Catching these transient errors, backing off and then retrying the call will ensure that your code is resilient to these transient issues.
+If the application code utilizes an SDK, the retry policy is already built in and active. The application reconnects without significant impact to the application or workflow. Catching these transient errors, backing off, and then retrying the call ensures that your code is resilient to these transient issues.
## Next steps

See the following articles:
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
Zone-redundant gateways use specific new gateway SKUs for ExpressRoute gateway.
The new gateway SKUs also support other deployment options to best match your needs. When creating a virtual network gateway using the new gateway SKUs, you can deploy the gateway in a specific zone. This type of gateway is referred to as a zonal gateway. When you deploy a zonal gateway, all the instances of the gateway are deployed in the same Availability Zone.
+## VNet to VNet and VNet to Virtual WAN connectivity
+
+By default, VNet to VNet and VNet to Virtual WAN connectivity is disabled through an ExpressRoute circuit for all gateway SKUs. To enable this connectivity, you must configure the ExpressRoute virtual network gateway to allow this traffic. For more information, see the guidance about [virtual network connectivity over ExpressRoute](virtual-network-connectivity-guidance.md). To enable this traffic, see [Enable VNet to VNet or VNet to Virtual WAN connectivity through ExpressRoute](expressroute-howto-add-gateway-portal-resource-manager.md#enable-or-disable-vnet-to-vnet-or-vnet-to-virtual-wan-traffic-through-expressroute).
+ ## <a name="fastpath"></a>FastPath

ExpressRoute virtual network gateway is designed to exchange network routes and route network traffic. FastPath is designed to improve the data path performance between your on-premises network and your virtual network. When enabled, FastPath sends network traffic directly to virtual machines in the virtual network, bypassing the gateway.
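If you script your deployments, FastPath can be enabled when you create the connection between the gateway and the circuit. The following is a sketch using the Azure CLI; all names are placeholders, and the `--express-route-gateway-bypass` flag is what turns FastPath on:

```shell
# Create the connection between an ExpressRoute gateway and a circuit with FastPath enabled.
az network vpn-connection create \
  --resource-group <resource-group> \
  --name <connection-name> \
  --vnet-gateway1 <expressroute-gateway-name> \
  --express-route-circuit2 <circuit-name-or-id> \
  --express-route-gateway-bypass true
```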
expressroute Expressroute Howto Add Gateway Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-portal-resource-manager.md
The steps for this tutorial use the values in the following configuration reference.
1. Select **Review + Create**, and then **Create** to begin creating the gateway. The settings are validated and the gateway deploys. Creating a virtual network gateway can take up to 45 minutes to complete.
+## Enable or disable VNet to VNet or VNet to Virtual WAN traffic through ExpressRoute
+
+By default, VNet to VNet and VNet to Virtual WAN traffic is disabled through ExpressRoute. You can enable this traffic by using the following steps.
+
+1. In the Azure portal, navigate to the ExpressRoute virtual network gateway.
+
+1. In the **Settings** section, select **Configuration**.
+
+1. In the **Configuration** section, select the checkbox for **Allow traffic from remote virtual networks**. You can also enable traffic from remote Virtual WAN networks by selecting the checkbox for **Allow traffic from remote Virtual WAN network**.
+
+ :::image type="content" source="./media/expressroute-howto-add-gateway-portal-resource-manager/allow-remote-virtual-network-wan.png" alt-text="Screenshot that shows how to enable VNet to VNet and VNet to WAN traffic.":::
+
+Complete these steps on all virtual networks that need to communicate with each other through ExpressRoute.
## Clean up resources

If you no longer need the ExpressRoute gateway, locate the gateway in the virtual network resource group and select **Delete**. Ensure the gateway doesn't have any connections to a circuit.
expressroute Virtual Network Connectivity Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/virtual-network-connectivity-guidance.md
Virtual networks that are connected to an ExpressRoute circuit are established b
ExpressRoute connectivity gets managed by a pair of Microsoft Enterprise Edge (MSEE) devices located at [ExpressRoute peering locations](expressroute-locations-providers.md#expressroute-locations). ExpressRoute peering locations are physically separate from Azure regions. When virtual network to virtual network connectivity is enabled using ExpressRoute, traffic from the virtual network leaves the origin Azure region and passes through the MSEE devices at the peering location. Then that traffic goes through Microsoft's global network to reach the destination Azure region. With VNet peering, traffic flows from the origin Azure region directly to the destination Azure region using Microsoft's global network, without the extra hop of the MSEE devices. Since the extra hop is no longer in the data path, you see lower latency and an overall better experience with your applications and network traffic.
+## Enable VNet to VNet or VNet to Virtual WAN connectivity through ExpressRoute
+
+By default, VNet to VNet and VNet to Virtual WAN connectivity is disabled through an ExpressRoute circuit. To enable this connectivity, you must configure the ExpressRoute virtual network gateway to allow this traffic. For more information, see [Enable VNet to VNet or VNet to Virtual WAN connectivity through ExpressRoute](expressroute-howto-add-gateway-portal-resource-manager.md#enable-or-disable-vnet-to-vnet-or-vnet-to-virtual-wan-traffic-through-expressroute).
## Next steps

* Learn more about [Designing for high availability](designing-for-high-availability-with-expressroute.md).
external-attack-surface-management Modifying Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/modifying-inventory-assets.md
+
+Title: Modify inventory assets
+description: This article outlines how to update assets with customized text labels to categorize and make use of inventory data.
+Last updated: 3/1/2022
+
+# Asset modification overview
+
+This article outlines how to modify inventory assets. You can change the state of an asset, assign an external ID, or apply labels to help provide context and make use of inventory data. You can also remove assets from your inventory in bulk based on the method with which they were discovered; for instance, you can remove a seed from a discovery group and elect to remove any assets that were discovered through a connection to this seed. This article describes all modification options available in Defender EASM and outlines how to update assets and track any updates with the Task Manager.
+
+### Label assets
+
+Labels help you organize your attack surface and apply business context in a customizable way. You can apply any text label to a subset of assets to group assets and make better use of your inventory. Customers commonly categorize assets that:
+
+- Have recently come under your organization's ownership through a merger or acquisition.
+- Require compliance monitoring.
+- Are owned by a specific business unit in your organization.
+- Are affected by a specific vulnerability that requires mitigation.
+- Relate to a particular brand owned by the organization.
+- Were added to your inventory within a specific time range.
+
+
+Labels are free form text fields, so you can create a label for any use case that applies to your organization.
+
+[![Screenshot that shows an inventory list view with a filtered Labels column.](media/labels-1a.png)](media/labels-1a.png#lightbox)
++
+### Change the state of an asset
+
+Users can also change the state of an asset. States help categorize your inventory based on each asset's role in your organization. Users can switch between the following states:
+- **Approved Inventory**: A part of your owned attack surface; an item that you're directly responsible for.
+- **Dependency**: Infrastructure that's owned by a third party but is part of your attack surface because it directly supports the operation of your owned assets. For example, you might depend on an IT provider to host your web content. While the domain, hostname, and pages would be part of your "Approved Inventory," you might want to treat the IP address running the host as a "Dependency."
+- **Monitor Only**: An asset that's relevant to your attack surface but is neither directly controlled nor a technical dependency. For example, independent franchisees or assets belonging to related companies might be labeled as "Monitor Only" rather than "Approved Inventory" to separate the groups for reporting purposes.
+- **Candidate**: An asset that has some relationship to your organization's known seed assets but doesn't have a strong enough connection to immediately label it as "Approved Inventory." These candidate assets must be manually reviewed to determine ownership.
+- **Requires Investigation**: A state similar to the "Candidate" state, but this value is applied to assets that require manual investigation to validate. The state is determined based on internally generated confidence scores that assess the strength of detected connections between assets. It doesn't indicate the infrastructure's exact relationship to the organization as much as it denotes that the asset requires additional review to determine how it should be categorized.
++
+### Apply an External ID
+
+Users can also apply an external ID to an asset. This capability is useful when you employ multiple solutions for asset tracking, remediation activities, or ownership monitoring; seeing external IDs within Defender EASM helps you align this disparate asset information. External ID values can be numeric or alphanumeric and must be entered in text format. External IDs are also displayed within the Asset Details section.
++
+## How to modify assets
+
+You can modify assets from both the inventory list and asset details pages. You can make changes to a single asset from the asset details page. You can make changes to a single asset or multiple assets from the inventory list page. The following sections describe how to apply changes from the two inventory views depending on your use case.
+
+### Inventory list page
+
+You should modify assets from the inventory list page if you want to update numerous assets at once. You can refine your asset list based on filter parameters to identify assets that should be categorized with the label, external ID, or state change that you want. To modify assets from this page:
+
+1. On the leftmost pane of your Microsoft Defender External Attack Surface Management (Defender EASM) resource, select **Inventory**.
+
+2. Apply filters to produce your intended results. In this example, we're looking for domains that expire within 30 days that require renewal. The applied label helps you more quickly access any expiring domains to simplify the remediation process. You can apply as many filters as necessary to obtain the specific results that are needed. For more information on filters, see [Inventory filters overview](inventory-filters.md).
+
+ ![Screenshot that shows the inventory list view with the Add filter dropdown opened to display the query editor.](media/labels-2.png)
+
+3. After your inventory list is filtered, select the dropdown by the checkbox next to the **Asset** table header. This dropdown gives you the option to select all results that match your query or the results on that specific page (up to 25). The **None** option clears all assets. You can also choose to select only specific results on the page by selecting the individual check marks next to each asset.
+
+ ![Screenshot that shows the inventory list view with the bulk selection dropdown opened.](media/labels-14.png)
+
+4. Select **Modify assets**.
+
+5. On the **Modify Assets** pane that opens on the right side of your screen, you can quickly change the state of the selected assets. For this example, you create a new label. Select **Create a new label**.
+
+6. Determine the label name and display text values. The label name can't be changed after you initially create the label, but the display text can be edited at a later time. The label name is used to query for the label in the product interface or via API, so edits are disabled to ensure these queries work properly. To edit a label name, you need to delete the original label and create a new one.
+
+ Select a color for your new label and select **Add**. This action takes you back to the **Modify Assets** screen.
+
+ ![Screenshot that shows the Add label pane that displays the configuration fields.](media/labels-4.png)
+
+7. Apply your new label to the assets. Click inside the **Add labels** text box to view a full list of available labels. Or you can type inside the box to search by keyword. After you select the labels you want to apply, select **Update**.
+
+ ![Screenshot that shows the Modify Asset pane with the newly created label applied.](media/labels-5.png)
+
+8. Allow a few moments for the labels to be applied. After the process is finished, you see a "Completed" notification. The page automatically refreshes and displays your asset list with the labels visible. A banner at the top of the screen confirms that your labels were applied.
+
+ [![Screenshot that shows the inventory list view with the selected assets now displaying the new label.](media/labels-6.png)](media/labels-6.png#lightbox)
++
+### Asset details page
+
+You can also modify a single asset from the asset details page. This option is ideal for situations when assets need to be thoroughly reviewed before a label or state change is applied.
+
+1. On the leftmost pane of your Defender EASM resource, select **Inventory**.
+
+2. Select the specific asset you want to modify to open the asset details page.
+
+3. On this page, select **Modify asset**.
+
+ ![Screenshot that shows the asset details page with the Modify asset button highlighted.](media/labels-7a.png)
+
+4. Follow steps 5 to 7 in the "Inventory list page" section.
+
+5. The asset details page refreshes and displays the newly applied label or state change. A banner indicates that the asset was successfully updated.
++
+## Modify, remove, or delete labels
+
+Users can remove a label from an asset by accessing the same **Modify asset** pane from either the inventory list or asset details view. From the inventory list view, you can select multiple assets at once and then add or remove the desired label in one action.
+
+To modify the label itself or delete a label from the system:
+
+1. On the leftmost pane of your Defender EASM resource, select **Labels (Preview)**.
+
+ [![Screenshot that shows the Labels (Preview) page that enables label management.](media/labels-8a.png)](media/labels-8a.png#lightbox)
+
+ This page displays all the labels within your Defender EASM inventory. Labels on this page might exist in the system but not be actively applied to any assets. You can also add new labels from this page.
+
+2. To edit a label, select the pencil icon in the **Actions** column of the label you want to edit. A pane opens on the right side of your screen where you can modify the name or color of a label. Select **Update**.
+
+3. To remove a label, select the trash can icon from the **Actions** column of the label you want to delete. Select **Remove Label**.
+
+ ![Screenshot that shows the Confirm Remove option on the Labels management page.](media/labels-9a.png)
+
+The **Labels** page automatically refreshes. The label is removed from the list and also removed from any assets that had the label applied. A banner confirms the removal.
++
+## Task Manager and notifications
+
+After a task is submitted, a notification confirms that the update is in progress. From any page in Azure, select the notification (bell) icon to see more information about recent tasks.
+
+![Screenshot that shows the Task submitted notification.](media/labels-12.png)
+![Screenshot that shows the Notifications pane that displays recent task status.](media/labels-13.png)
+
+The Defender EASM system can take seconds to update a handful of assets or minutes to update thousands. You can use the Task Manager to check on the status of any modification tasks in progress. This section outlines how to access the Task Manager and use it to better understand the completion of submitted updates.
+
+1. On the leftmost pane of your Defender EASM resource, select **Task Manager**.
+
+ ![Screenshot that shows the Task Manager page with appropriate section in navigation pane highlighted.](media/labels-11a.png)
+
+2. This page displays all your recent tasks and their status. Tasks are listed as **Completed**, **Failed**, or **In Progress**. A completion percentage and progress bar also appear. To see more details about a specific task, select the task name. A pane opens on the right side of your screen that provides more information.
+
+3. Select **Refresh** to see the latest status of all items in the Task Manager.
++
+## Filter for labels
+
+After you label assets in your inventory, you can use inventory filters to retrieve a list of all assets with a specific label applied.
+
+1. On the leftmost pane of your Defender EASM resource, select **Inventory**.
+
+2. Select **Add filter**.
+
+3. Select **Labels** from the **Filter** dropdown list. Select an operator and choose a label from the dropdown list of options. The following example shows how to search for a single label. You can use the **In** operator to search for multiple labels. For more information on filters, see the [inventory filters overview](inventory-filters.md).
+
+ ![Screenshot that shows the query editor used to apply filters, displaying the Labels filter with possible label values in a dropdown list.](media/labels-10.png)
+
+4. Select **Apply**. The inventory list page reloads and displays all assets that match your criteria.
+++
+## Asset chain-based management
+
+In some instances, you may wish to remove multiple assets at once based on the means with which they were discovered. For example, you may determine that a particular seed within a discovery group has pulled in assets that are not relevant to your organization, or you may need to remove assets that relate to a subsidiary that is no longer under your purview. For this reason, Defender EASM offers the ability to remove the source entity and any assets "downstream" in the discovery chain. You can delete linked assets with the following three methods:
+
+- **Seed-based management:** users can delete a seed that was previously included in a discovery group, removing all assets that were introduced to inventory through an observed connection to the specified seed. This method is useful when you can determine that a specific manually inputted seed resulted in undesired assets being added to inventory.
+- **Discovery chain management**: users can identify an asset within a discovery chain and delete it, simultaneously removing any assets discovered by that entity. Discovery is a recursive process; it scans seeds to identify new assets directly associated to these designated seeds, then continues to scan the newly discovered entities to unveil more connections. This deletion approach is useful when your discovery group is properly configured, but you need to remove a newly discovered asset and any asset(s) brought into inventory by association to that entity. Consider your discovery group settings and designated seeds to be the "top" of your discovery chain; this deletion approach allows you to remove assets from the middle.
+- **Discovery group management:** users can remove entire discovery groups and all assets that were introduced to inventory through those groups. This approach is useful when an entire discovery group is no longer applicable to your organization. For instance, you may have a discovery group that specifically searches for assets related to a subsidiary. If this subsidiary is no longer relevant to your organization, you can use asset chain-based management to delete all assets brought into inventory through that discovery group.
+
+ You can still view removed assets in Defender EASM; simply filter your inventory list for assets in the "Archived" state.
++
+### Seed-based deletion
+
+You may decide that one of your initially designated discovery seeds should no longer be included in a discovery group. The seed may no longer be relevant to your organization, or it might be bringing in more false positives than legitimate owned assets. In this situation, you can remove the seed from your Discovery Group to prevent it from being used in future discovery runs while simultaneously removing any assets that have been brought to inventory through the designated seed in the past.
+
+To perform a bulk removal based on a seed, go to the appropriate discovery group details page and select **Edit discovery group**. Follow the prompts to reach the Seeds page, and remove the problematic seed from the list. When you select **Review + Update**, you see a warning indicating that all assets discovered through the designated seed will be removed as well. Select **Update** or **Update & Run** to complete the deletion.
+
+![Screenshot that shows the Edit Discovery Group page with a warning indicating the removal of a seed and any assets discovered through that seed.](media/chain-1.png)
+++
+### Discovery chain-based deletion
+
+In the following example, imagine that you've discovered an insecure login form on your Attack Surface Summary dashboard. Your investigation routes you to a host that does not appear to be owned by your organization. You view the asset details page for more information; upon reviewing the Discovery chain, you learn that the host was brought into inventory because the corresponding domain was registered using an employee's corporate email address that was also used to register approved business entities.
+
+![Screenshot that shows the Asset Details page with the Discovery Chain section highlighted.](media/chain-2.png)
+
+In this situation, the initial discovery seed (the corporate domain) is still legitimate, so we instead need to remove a problematic asset from the discovery chain. While we could perform the chain deletion from the contact email, we'll instead elect to remove everything associated with the personal domain registered to this employee so that Defender EASM alerts us to any other domains registered to that email address in the future. From the discovery chain, select this personal domain to view the asset details page. From this view, select **Remove from discovery chain** to remove the asset from your inventory, along with all assets brought into inventory due to an observed connection to the personal domain. You're asked to confirm the removal of the asset and all downstream assets, and you're presented with a summarized list of the other assets that will be removed with this action. Select **Remove discovery chain** to confirm the bulk removal.
+
+![Screenshot that shows the box that prompts users to confirm the removal of the current asset and all downstream assets, with a summary of the other assets that will be removed with this action.](media/chain-3.png)
++
+### Discovery group deletion
+
+You may need to delete an entire discovery group and all the assets discovered through the group. For instance, your company may have sold a subsidiary that no longer needs to be monitored. Users can delete discovery groups from the Discovery management page. To remove a discovery group and all related assets, select the trash bin icon next to the appropriate group in the list. You receive a warning that lists a summary of the assets that will be removed with this action. To confirm the deletion of the discovery group and all related assets, select **Remove discovery group**.
+
+![Screenshot that shows the Discovery management page, with the warning box that appears after electing to delete a group highlighted.](media/chain-4.png)
+++
+
+## Next steps
+
+- [Inventory filters overview](inventory-filters.md)
+- [Understand inventory assets](understanding-inventory-assets.md)
+- [Understand asset details](understanding-asset-details.md)
frontdoor Blue Green Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/blue-green-deployment.md
+
+Title: Blue/Green deployments using Azure Front Door
+description: Learn how to use Azure Front Door to implement a blue/green deployment strategy for your web applications.
+Last updated: 01/29/2024
+
+# Blue/green deployments using Azure Front Door
+
+*Blue/Green deployment* is a software release methodology that gradually introduces application enhancements to a small subset of end users. If the enhancements are successful, the number of users on the new deployment is slowly increased until all users are on the new deployment. If there are any issues, requests are routed to the old backend with the previous application version. This method is a safer way to introduce code changes than suddenly pointing all users to the new enhancements.
+
+Azure Front Door is Microsoft's modern cloud Content Delivery Network (CDN) that provides fast, reliable, and secure access between your users and your applications' static and dynamic web content across the globe. This article explains how to use Azure Front Door's global load balancing capabilities to set up a blue/green deployment model for your backends.
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Create an Azure Front Door profile
+
+1. Sign in to the [Azure portal](https://portal.azure.com/?WT.mc_id=A261C142F).
+
+1. Select **Create a resource** from the home page, search for *Front Door and CDN profiles*, and select **Create**.
+
+1. Select **Custom create** on the *Compare offerings* page, and then select **Continue to create a Front Door**.
+
+1. On the **Basics** tab, enter or select the following information:
+
+ | Settings | Values |
+ | -- | -- |
+ | Subscriptions | Select your subscription. |
+ | Resource group | Select **Create new** and enter myAFDResourceGroup into the text box. |
+ | Resource group location | Select **East US**. |
+ | Name | Enter a unique name for your Front Door profile. |
+ | Tier | Select **Standard**. |
+
+1. Select the **Endpoints** tab, and then select **Add endpoint**. Enter a globally unique name for your endpoint, and then select **Add**. You can create more endpoints after the deployment.
+
+1. Select **+ Add a route** to configure routing to your Web App origin.
+
+ :::image type="content" source="./media/blue-green-deployment/endpoint.png" alt-text="Screenshot of the add a new endpoint for a new Azure Front Door profile.":::
+
+1. Provide a name for the route and configure the route settings based on the needs of your application. For more information, see [Create a Front Door for your application](create-front-door-portal.md#create-a-front-door-for-your-application).
+
+ :::image type="content" source="./media/blue-green-deployment/add-a-route.png" alt-text="Screenshot of the added route page for a new Azure Front Door profile.":::
+
+1. To create a new origin group, select **Add a new origin group** and enter *myOriginGroup* as the name.
+
+1. Select **+ Add** to add an origin to the origin group. Enter the following information for the existing version of the application:
+
+ :::image type="content" source="./media/blue-green-deployment/add-current-origin.png" alt-text="Screenshot of the adding the first origin in an origin group for a new Azure Front Door profile.":::
+
+ | Settings | Values |
+ | -- | -- |
+ | Name | Enter **CurrentWebApp** for the name. |
+ | Origin type | Select *App Service* from the dropdown. |
+ | Hostname | Enter the hostname of your Web App. This example uses *webapp-current.azurewebsites.net*. |
+ | Priority | Enter **1**. |
+ | Weight | Enter **75**. |
+ | Status | Select the check box for **Enable this origin**. |
+
+1. Select **+ Add** to add another origin to the origin group. Enter the following information for the new version of the application:
+
+ :::image type="content" source="./media/blue-green-deployment/add-new-origin.png" alt-text="Screenshot of the adding the second origin in an origin group for a new Azure Front Door profile.":::
+
+ | Settings | Values |
+ | -- | -- |
+ | Name | Enter **NewWebApp** for the name. |
+ | Origin type | Select *App Service* from the dropdown. |
+ | Hostname | Enter the hostname of your Web App. This example uses *webapp-new.azurewebsites.net*. |
+ | Priority | Enter **1**. |
+ | Weight | Enter **25**. |
+ | Status | Leave **Enable this origin** unchecked. |
+
+ > [!NOTE]
+ > Initially, you want to set the weight of the current origin higher than the new origin so that most of the traffic is routed to the current origin. As you test the new origin, you can gradually increase its weight and decrease the weight of the current origin. The total weight doesn't have to add up to 100, although keeping it at 100 helps you visualize the traffic distribution. The example sets the existing origin to receive three times as much traffic as the new origin. A sketch of this weighted configuration using the Azure CLI appears after these steps.
+
+1. Enable session affinity if your application requires it. For more information, see [Session affinity](routing-methods.md#session-affinity).
+
+ > [!NOTE]
+ > *Session affinity* determines whether the end user gets routed to the same origin after the first request goes through Azure Front Door. Whether you enable this feature depends on your application and the type of enhancements being rolled out. For a major revision, you might want to enable session affinity so that users who were initially routed to the new codebase continue to use it. If the enhancement is relatively minor, for example, involving a single page with no dependencies on other parts of the application, you can potentially leave session affinity disabled. When in doubt, enable session affinity.
+
+1. Health probe settings can be left at the default values. You can adjust the probe settings based on the needs of your application. For more information, see [Health probes](health-probes.md).
+
+1. Under **Load balancing settings**, enter the following information:
+
+ :::image type="content" source="./media/blue-green-deployment/configure-origin-group-settings.png" alt-text="Screenshot of configuring the origin group settings.":::
+
+ | Settings | Values |
+ | -- | -- |
+ | Sample size | Enter **4**. |
+ | Successful samples required | Enter **3**. |
+ | Latency sensitivity (in milliseconds) | Enter **500**. |
+
+ > [!NOTE]
+ > We recommend setting the latency sensitivity to 500 milliseconds (half a second) or higher. This setting ensures that both origins get used, because one origin might be faster than the other.
+
+1. Select **Add** to add the origin group. Then select **Review + create** to review the settings of your Front Door profile. Select **Create** to create the profile.
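+
+If you prefer to script this setup, you can create the same origin group and weighted origins with the Azure CLI. The following is a minimal sketch, not a verbatim equivalent of the portal steps: it assumes a profile named *myAFDProfile* in *myAFDResourceGroup*, and the profile, endpoint, and route still need to be created separately.
+
+```azurecli
+az afd origin-group create --resource-group myAFDResourceGroup --profile-name myAFDProfile \
+    --origin-group-name myOriginGroup --probe-request-type GET --probe-protocol Https \
+    --probe-interval-in-seconds 100 --probe-path / --sample-size 4 \
+    --successful-samples-required 3 --additional-latency-in-milliseconds 500
+
+# Current application version: enabled, weight 75.
+az afd origin create --resource-group myAFDResourceGroup --profile-name myAFDProfile \
+    --origin-group-name myOriginGroup --origin-name CurrentWebApp \
+    --host-name webapp-current.azurewebsites.net --origin-host-header webapp-current.azurewebsites.net \
+    --priority 1 --weight 75 --enabled-state Enabled
+
+# New application version: disabled for now, weight 25.
+az afd origin create --resource-group myAFDResourceGroup --profile-name myAFDProfile \
+    --origin-group-name myOriginGroup --origin-name NewWebApp \
+    --host-name webapp-new.azurewebsites.net --origin-host-header webapp-new.azurewebsites.net \
+    --priority 1 --weight 25 --enabled-state Disabled
+```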
+
+## Start the blue/green deployment
+
+When you're ready to begin the blue/green deployment, start by enabling the new origin. Traffic then begins flowing to the new origin, while you can still revert to the old origin if needed.
+
+1. After the Front Door profile is created, go to the origin group you created earlier. Select the new origin, and then select **Enable this origin** to start routing traffic to the new origin.
+
+ :::image type="content" source="./media/blue-green-deployment/enable-new-origin.png" alt-text="Screenshot of enabling the new origin to receive traffic.":::
+
+1. Monitor the new origin to ensure that it's working as expected. Once you're confident that the new origin is working as expected, you can gradually increase the weight of the new origin and decrease the weight of the old origin. Keep increasing the weight of the new origin until all traffic is routed to the new origin.
+
+1. If you experience any issues with the new origin, you can disable the new origin to route all traffic back to the old origin. This process allows you to investigate and fix the issues without impacting your users.
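+
+If you scripted the setup, you can also shift weights and roll back from the command line. The following is a minimal sketch under the same assumed names (*myAFDProfile*, *myOriginGroup*):
+
+```azurecli
+# Shift to an even split as confidence in the new version grows.
+az afd origin update --resource-group myAFDResourceGroup --profile-name myAFDProfile \
+    --origin-group-name myOriginGroup --origin-name NewWebApp --weight 50
+az afd origin update --resource-group myAFDResourceGroup --profile-name myAFDProfile \
+    --origin-group-name myOriginGroup --origin-name CurrentWebApp --weight 50
+
+# Roll back: stop routing traffic to the new origin.
+az afd origin update --resource-group myAFDResourceGroup --profile-name myAFDProfile \
+    --origin-group-name myOriginGroup --origin-name NewWebApp --enabled-state Disabled
+```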
+
+## Next steps
+
+[Secure traffic to your Azure Front Door origins](origin-security.md)
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
aligns with how the add-on was installed:
- Installations of Gatekeeper outside of the Azure Policy Add-on aren't supported. Uninstall any components installed by a previous Gatekeeper installation before enabling the Azure Policy Add-on. - [Reasons for non-compliance](../how-to/determine-non-compliance.md#compliance-reasons) aren't available for the Microsoft.Kubernetes.Data [Resource Provider mode](./definition-structure.md#resource-provider-modes). Use [Component details](../how-to/determine-non-compliance.md#component-details-for-resource-provider-modes). - Component-level [exemptions](./exemption-structure.md) aren't supported for [Resource Provider modes](./definition-structure.md#resource-provider-modes). Parameters support is available in Azure Policy definitions to exclude and include particular namespaces.
+ - Using the `metadata.gatekeeper.sh/requires-sync-data` annotation in a constraint template to configure the [replication of data](https://open-policy-agent.github.io/gatekeeper/website/docs/sync) from your cluster into the OPA cache is currently only allowed for built-in policies. This is because it can dramatically increase the Gatekeeper pods' resource usage if not used carefully.
The following limitations apply only to the Azure Policy Add-on for AKS: - [AKS Pod security policy](../../../aks/use-pod-security-policies.md) and the Azure Policy Add-on for AKS can't both be enabled. For more information, see [AKS pod security limitation](../../../aks/use-azure-policy.md).
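
The namespace parameters mentioned above are passed when you assign a policy definition. The following is a hypothetical Azure CLI sketch; the assignment name, scope, and definition ID are placeholders, and it assumes the chosen built-in definition exposes an `excludedNamespaces` parameter:

```azurecli
# Assign a built-in Kubernetes policy definition to an AKS cluster and
# exclude system namespaces from evaluation (placeholders throughout).
az policy assignment create --name k8s-policy-assignment \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>" \
    --policy "<built-in-policy-definition-id>" \
    --params '{"excludedNamespaces": {"value": ["kube-system", "gatekeeper-system", "azure-arc"]}}'
```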
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/export-data.md
[!INCLUDE [retirement banner](../includes/healthcare-apis-azure-api-fhir-retirement.md)]
-The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
+The Bulk Export feature allows data to be exported from the FHIR Server per the [FHIR specification](https://www.hl7.org/fhir/uv/bulkdata/).
Before using $export, you want to make sure that the Azure API for FHIR is configured to use it. For configuring export settings and creating Azure storage account, refer to [the configure export data page](configure-export-data.md). ## Using $export command
-After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
+After configuring the Azure API for FHIR for export, you can use the $export command to export the data out of the service. The data will be stored into the storage account you specified while configuring export. To learn how to invoke $export command in FHIR server, read documentation on the [HL7 FHIR $export specification](https://www.hl7.org/fhir/uv/bulkdata/).
**Jobs stuck in a bad state**
After configuring the Azure API for FHIR for export, you can use the $export com
In some situations, there's a potential for a job to be stuck in a bad state. This can occur especially if the storage account permissions haven't been set up properly. One way to validate export is to check your storage account to see if the corresponding container (that is, `ndjson`) files are present. If they aren't present, and there are no other export jobs running, then there's a possibility the current job is stuck in a bad state. You should cancel the export job by sending a cancellation request and try requeuing the job again. Our default run time for an export in a bad state is 10 minutes before it stops and moves to a new job or retries the export. The Azure API for FHIR supports $export at the following levels:
-* [System](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointsystem-level-export): `GET https://<<FHIR service base URL>>/$export>>`
-* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointall-patients): `GET https://<<FHIR service base URL>>/Patient/$export>>`
-* [Group of patients*](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
+* [System](https://www.hl7.org/fhir/uv/bulkdata/): `GET https://<<FHIR service base URL>>/$export>>`
+* [Patient](https://www.hl7.org/fhir/uv/bulkdata/): `GET https://<<FHIR service base URL>>/Patient/$export>>`
+* [Group of patients*](https://www.hl7.org/fhir/uv/bulkdata/) - Azure API for FHIR exports all related resources but doesn't export the characteristics of the group: `GET https://<<FHIR service base URL>>/Group/[ID]/$export>>`
With export, data is exported in multiple files each containing resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It is currently set to 5,000, but can change. The result is that you may get multiple files for a resource type, which will be enumerated (for example, `Patient-1.ndjson`, `Patient-2.ndjson`).
Currently we support $export for ADLS Gen2 enabled storage accounts, with the fo
## Settings and parameters ### Headers
-There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#headers).
+There are two required header parameters that must be set for $export jobs. The values are defined by the current [$export specification](https://www.hl7.org/fhir/uv/bulkdata/).
* **Accept** - application/fhir+json * **Prefer** - respond-async
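
For illustration, a system-level `$export` request with these headers might look like the following sketch. The base URL matches the earlier examples, and the token acquisition assumes your account has export rights on the service:

```bash
# Acquire a token for the FHIR service (the audience is the service URL).
token=$(az account get-access-token --resource https://myAzureAPIforFHIR.azurehealthcareapis.com --query accessToken --output tsv)

# Kick off a system-level export; a 202 response includes a Content-Location
# header with the URL for polling the job status.
curl -i "https://myAzureAPIforFHIR.azurehealthcareapis.com/\$export" \
    -H "Accept: application/fhir+json" \
    -H "Prefer: respond-async" \
    -H "Authorization: Bearer $token"
```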
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
If you'd like to specify a profile as a parameter, you can specify the canonical
## Validating a new resource If you'd like to validate a new resource that you're uploading to Azure API for FHIR, you can do a `POST` request:
`POST http://<your Azure API for FHIR base URL>/{Resource}/$validate`
For example:
`POST https://myAzureAPIforFHIR.azurehealthcareapis.com/Patient/$validate`
-This request will first validate the resource. New resource you're specifying in the request will be created after validation. The server will always return an OperationOutcome as the result.
+This request validates the resource but doesn't create it in the FHIR service. The server always returns an OperationOutcome as the result. To create the resource, send a `POST` request without `$validate`.
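+
+For illustration, a `$validate` request with a minimal, hypothetical Patient resource might look like the following sketch (token acquisition follows the same pattern as for `$export`):
+
+```bash
+curl -X POST "https://myAzureAPIforFHIR.azurehealthcareapis.com/Patient/\$validate" \
+    -H "Authorization: Bearer $token" \
+    -H "Content-Type: application/fhir+json" \
+    -d '{"resourceType": "Patient", "name": [{"family": "Contoso", "given": ["Test"]}]}'
+```
+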
## Validate on resource CREATE/UPDATE using header
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
# Configure export settings and set up a storage account
-The FHIR service supports the `$export` operation [specified by HL7](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account.
+The FHIR service supports the `$export` operation [specified by HL7](https://www.hl7.org/fhir/uv/bulkdata/) for exporting FHIR data from a FHIR server. In the FHIR service implementation, calling the `$export` endpoint causes the FHIR service to export data into a pre-configured Azure storage account.
Ensure you are granted with application role - 'FHIR Data exporter role' prior to configuring export. To understand more on application roles, see [Authentication and Authorization for FHIR service](../../healthcare-apis/authentication-authorization.md).
healthcare-apis Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/export-data.md
# Export your FHIR data
-By using the bulk `$export` operation in the FHIR service, you can export data as described in the [HL7 FHIR Bulk Data Access specification](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html).
+By using the bulk `$export` operation in the FHIR service, you can export data as described in the [HL7 FHIR Bulk Data Access specification](https://www.hl7.org/fhir/uv/bulkdata/).
Before you attempt to use `$export`, make sure that your FHIR service is configured to connect with an Azure Data Lake Storage Gen2 account. To configure export settings and create a Data Lake Storage Gen2 account, refer to [Configure settings for export](./configure-export-data.md).
GET {{fhirurl}}/$export?_container={{containerName}}
If you don't specify a container name in the request (for example, by calling `GET {{fhirurl}}/$export`), a new container with an autogenerated name will be created for the exported data.
-For general information about the FHIR `$export` API spec, see the [HL7 FHIR Export Request Flow](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#request-flow) documentation.
+For general information about the FHIR `$export` API spec, see the [HL7 FHIR Export Request Flow](https://www.hl7.org/fhir/uv/bulkdata/) documentation.
The FHIR service supports `$export` at the following levels:
-* [System](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointsystem-level-export): `GET {{fhirurl}}/$export`
-* [Patient](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointall-patients): `GET {{fhirurl}}/Patient/$export`
-* [Group of patients](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#endpointgroup-of-patients)\*: `GET {{fhirurl}}/Group/[ID]/$export`
+* [System](https://www.hl7.org/fhir/uv/bulkdata/): `GET {{fhirurl}}/$export`
+* [Patient](https://www.hl7.org/fhir/uv/bulkdata/): `GET {{fhirurl}}/Patient/$export`
+* [Group of patients](https://www.hl7.org/fhir/uv/bulkdata/)\*: `GET {{fhirurl}}/Group/[ID]/$export`
\*The FHIR service exports all referenced resources but doesn't export the characteristics of the group resource itself. Data is exported in multiple files. Each file contains resources of only one type. The number of resources in an individual file will be limited. The maximum number of resources is based on system performance. It is currently set to 5,000, but can change. The result is that you might get multiple files for a resource type. The file names will follow the format `<resourceName>-<number>-<number>.ndjson`. The order of the files is not guaranteed to correspond to any ordering of the resources in the database.
Data is exported in multiple files. Each file contains resources of only one typ
> [!NOTE] > `Patient/$export` and `Group/[ID]/$export` can export duplicate resources if a resource is in multiple groups or in a compartment of more than one resource.
-In addition to checking the presence of exported files in your storage account, you can check your `$export` operation status through the URL in the `Content-Location` header that's returned in the FHIR service response. For more information, see the [Bulk Data Status Request](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#bulk-data-status-request) documentation from HL7.
+In addition to checking the presence of exported files in your storage account, you can check your `$export` operation status through the URL in the `Content-Location` header that's returned in the FHIR service response. For more information, see the [Bulk Data Status Request](https://www.hl7.org/fhir/uv/bulkdata/) documentation from HL7.
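+
+For illustration, polling that status URL might look like the following sketch. The path shown is only a hypothetical example of its shape; use the URL from the `Content-Location` header verbatim. A `202 Accepted` response means the job is still running, and a `200 OK` response carries the list of exported files:
+
+```bash
+# Acquire a token for the FHIR service as usual, then poll the job status URL.
+curl -i "{{fhirurl}}/_operations/export/<job-id>" \
+    -H "Authorization: Bearer $token"
+```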
### Export your FHIR data to Data Lake Storage Gen2
To export data to a storage account behind a firewall, see [Configure settings f
## Settings and parameters ### Headers
-Two required header parameters must be set for `$export` jobs. The values are set according to the current HL7 [$export specification](https://hl7.org/Fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#headers).
+Two required header parameters must be set for `$export` jobs. The values are set according to the current HL7 [$export specification](https://www.hl7.org/fhir/uv/bulkdata/).
* **Accept**: `application/fhir+json` * **Prefer**: `respond-async`
In some situations, there's a potential for a job to be stuck in a bad state whi
One way to check the status of your `$export` operation is to go to your storage account's *storage browser* and see whether any `.ndjson` files are present in the export container. If the files aren't present and no other `$export` jobs are running, it's possible that the current job is stuck in a bad state. In this case, you can cancel the `$export` job by calling the FHIR service API with a `DELETE` request. Later, you can requeue the `$export` job and try again.
-For more information about canceling an `$export` operation, see the [Bulk Data Delete Request](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html#bulk-data-delete-request) documentation from HL7.
+For more information about canceling an `$export` operation, see the [Bulk Data Delete Request](https://www.hl7.org/fhir/uv/bulkdata/) documentation from HL7.
> [!NOTE] > In the FHIR service, the default time for an `$export` operation to idle in a bad state is 10 minutes before the service stops the operation and moves to a new job.
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
-
+ Title: FAQ about FHIR service in Azure Health Data Services description: Get answers to frequently asked questions about FHIR service, such as the storage location of data behind FHIR APIs and version support.
We support Microsoft Entra ID as the identity provider.
### Can I use Azure AD B2C with the FHIR service?
-No, we don't support B2C in the FHIR service. If you need more granular access controls, we recommend looking at the [open-source FHIR proxy](https://github.com/microsoft/fhir-proxy).
+Yes. You can use [Azure Active Directory B2C](../../active-directory-b2c/overview.md) (Azure AD B2C) with the FHIR service to grant access to your applications and users. For more information, see [Use Azure Active Directory B2C to grant access to the FHIR service](../fhir/azure-ad-b2c-setup.md).
### What FHIR version do you support?
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Learn more:
- [SMART on FHIR](fhir/smart-on-fhir.md) - [Sample: Azure ONC (g)(10) SMART on FHIR](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/patientandpopulationservices-smartonfhir-oncg10)
-**Storage size support in FHIR service beyond 4TB**
+**Request up to 100 TB of storage for the FHIR service**
+
+The FHIR service can store and exchange large amounts of health data, and each FHIR service instance has a 4 TB storage limit by default. If you have more data, you can ask Microsoft to increase storage up to 100 TB for your FHIR service.
+
+By adding more storage, organizations can handle large data sets to enable analytics scenarios. For example, you can use more storage to manage population health, conduct research, and gain new insights from health data. Plus, more storage enables Azure API for FHIR customers with high-volume data (greater than 4 TB) to migrate to the FHIR service in Azure Health Data Services.
+
+To request storage greater than 4 TB, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) on the Azure portal and use the issue type **Service and Subscription limit (quotas)**.
-By default each FHIR instance is limited to storage capacity of 4TB. To provision a FHIR instance with storage capacity beyond 4TB, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) with the Issue type 'Service and Subscription limit (quotas)'.
> [!NOTE]
-> Due to an issue with billing metrics for storage, customers opting for more than 4TB storage capacity won't be billed for storage until the issue is addressed.
+> Due to an issue with billing metrics for storage, customers who opt for more than 4 TB of storage capacity won't be billed for storage until the issue is resolved.
## December 2023
Learn more:
**Store and manage medical imaging data with Azure Data Lake Storage (Preview)**
-With the integration of Azure Data Lake Storage available for preview, organizations can now enjoy full control over their imaging data and increased flexibility for accessing and working with that data through the Azure storage ecosystem and APIs. By using Azure Data Lake Storage with the DICOM service, organizations are able to:
+With the integration of Azure Data Lake Storage available for preview, organizations have full control over their imaging data and increased flexibility for accessing and working with that data through the Azure storage ecosystem and APIs. By using Azure Data Lake Storage with the DICOM service, organizations are able to:
- Enable direct access to medical imaging data stored by the DICOM service using Azure storage APIs and DICOMweb APIs, providing more flexibility to access and work with the data. - Open medical imaging data up to the entire ecosystem of tools for working with Azure storage, including AzCopy, Azure Storage Explorer, and the Data Movement library.
General availability (GA) of Azure Health Data services in Japan East region.
**Introduction of the _till parameter and a 50x throughput improvement** The `_till` parameter is introduced as an optional parameter that allows you to export resources modified up until the specified time.
-This feature improvement is applicable to System export, for more information on export, see [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html)
-Also see [Export your FHIR data by invoking the $export command on the FHIR service](./../healthcare-apis/fhir/export-data.md)
+This feature improvement applies to system export. For more information on export, see the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/).
+Also see [Export your FHIR data by invoking the $export command on the FHIR service](./../healthcare-apis/fhir/export-data.md).
**Fixed issue where chained search with the :contains modifier returned no resources**
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-extend.md
- Title: How to extend IoT Central
-description: How to use data exports, rules, or the REST API to extend IoT Central if it's missing something you need.
-- Previously updated : 06/12/2023------
-# How do I extend IoT Central if it's missing something I need?
-
-Use the following extension points to expand the built-in functionality of IoT Central:
--- Process your IoT data in other services or applications by using the IoT Central data export capabilities.-- Trigger business flows and activities by using IoT Central rules.-- Interact with IoT Central programmatically by using the IoT Central REST APIs.-
-## Export data
-
-To extend IoT Central's built-in rules and analytics capabilities, use the data export capability to continuously stream data from your devices to other services for processing. The data export capability enables extension scenarios such as:
--- Enrich, and transform your IoT data to generate advanced visualizations that provide insights.-- Extract business metrics and use artificial intelligence and machine learning to derive business insights from your IoT data.-- Monitoring and diagnostics for hundreds of thousands of connected IoT devices.-- Combine your IoT data with other business data to build dashboards and reports.-
-To learn more, see [IoT Central data integration guide](overview-iot-central-solution-builder.md).
-
-## Rules
-
-You can create rules in IoT Central that trigger actions when specified conditions are met. Conditions are evaluated based on data from your connected IoT devices. Actions include sending messages to other cloud services or calling a webhook endpoint. Rules enable extension scenarios such as:
--- Notifying operators in other systems.-- Starting business processes or flows.-- Monitoring alerts on a custom dashboard.-
-To learn more, see [Configure rules](howto-configure-rules.md).
-
-## REST API
-
-The *data plane* REST API lets you manage entities in your IoT Central application programmatically. Entities include devices, users, and roles. The preview data plane REST API lets you query the data from your connected devices and manage a wider selection of entities such as jobs and data exports.
-
-The *control plane* REST API lets you create and manage IoT Central applications.
-
-The REST APIs enable extension scenarios such as:
--- Programmatic management of your IoT Central applications.-- Tight integration with other applications.-
-To learn more, see [Tutorial: Use the REST API to manage an Azure IoT Central application](tutorial-use-rest-api.md).
-
-## Next steps
-
-Now that you've learned about the IoT Central extensibility points, the suggested next step is to review the [IoT Central data integration guide](overview-iot-central-solution-builder.md).
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules-advanced.md
Title: Use workflows to integrate Azure IoT Central
description: How to configure rules and actions that integrate your IoT Central application with other cloud services by using Power Automate or Azure Logic Apps. Previously updated : 06/14/2023 Last updated : 02/02/2024
To add the **Azure IoT Central V3** connector as a trigger in Power Automate:
To add the **Azure IoT Central V3** connector as a trigger in Azure Logic Apps:
+> [!IMPORTANT]
+> Triggers in the IoT Central connector won't work unless the Logic App has a public endpoint. To learn more, see [Considerations for inbound traffic to Logic Apps through private endpoints](../../logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint.md#considerations-for-inbound-traffic-through-private-endpoints).
+ 1. In **Logic Apps Designer**, select the **Blank Logic App** template. 1. In the designer, search for *IoT Central*, and select the **Azure IoT Central V3** connector. 1. In the list of triggers, select **When a rule is fired (preview)**.
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Title: Azure IoT Central data integration guide
+ Title: Azure IoT Central integration guide
description: This guide describes how to integrate your IoT Central application with other services to extend its capabilities. Previously updated : 06/12/2023 Last updated : 02/02/2024
# This article applies to solution builders.
-# IoT Central data integration guide
+# IoT Central integration and extension guide
Azure IoT Central is an application platform that:
When you use IoT Central to create an IoT solution, tasks include:
- Configure dashboards and views in the IoT Central web UI. - Use the built-in rules and analytics tools to derive business insights from the connected devices. - Use the data export feature, rules capabilities, and APIs to integrate IoT Central with other services and applications.
+- Trigger business flows and activities by using IoT Central rules.
+- Interact with IoT Central programmatically by using the IoT Central REST APIs.
-## Data export
+## Export data
Many integration scenarios build on the IoT Central data export feature. An IoT Central application can continuously export filtered and enriched IoT data. Data export pushes changes in near real time to other parts of your cloud solution for warm-path insights, analytics, and storage.
Currently, IoT Central can export data to:
## Transform data at ingress
-Devices may send complex telemetry that needs to be simplified before it's used in IoT Central or exported. In some scenarios, you need to normalize the telemetry from different devices so that you can display and process the telemetry consistently. To learn more, see [Map telemetry on ingress to IoT Central](howto-map-data.md).
+Devices can send complex telemetry that needs to be simplified before IoT Central can use it or export it. In some scenarios, you need to normalize the telemetry from different devices so that you can display and process the telemetry consistently. To learn more, see [Map telemetry on ingress to IoT Central](howto-map-data.md).
## Extract business value
Built-in features of IoT Central you can use to extract business value include:
- Built-in rules and analytics:
- You can add rules to an IoT Central application that run customizable actions. Rules evaluate conditions, based on data coming from a device, to determine when to run an action. To learn more about rules, see:
+ You can add rules to an IoT Central application that run customizable actions. Rules evaluate conditions, based on data coming from a device, to determine when to run an action. Actions include sending messages to other cloud services or calling a webhook endpoint. Rules enable extension scenarios such as:
+
+ - Notifying operators in other systems.
+ - Starting business processes or flows.
+ - Monitoring alerts on a custom dashboard.
+
+ To learn more about rules, see:
- [Tutorial: Create a rule and set up notifications in your Azure IoT Central application](tutorial-create-telemetry-rules.md) - [Configure rules](howto-configure-rules.md)
You can use the data export and rules capabilities in IoT Central to integrate w
IoT Central provides rich operator dashboards and visualizations. However, some IoT solutions must integrate with existing applications, or require new companion applications to expand their capabilities. To integrate with other applications, use IoT Central extensibility points such as the REST API and the continuous data export feature.
-You use data plane REST APIs to access the entities in and the capabilities of your IoT Central application. For example, managing devices, device templates, users, and roles. The IoT Central REST API operations are *data plane* operations. To learn more, see [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md).
+You use *data plane* REST APIs to access the entities in and the capabilities of your IoT Central application. For example, managing devices, device templates, users, and roles. The IoT Central REST API operations are data plane operations. The REST APIs enable extension scenarios such as:
+
+- Programmatic management of your IoT Central applications.
+- Tight integration with other applications.
+
+To learn more, see [Tutorial: Use the REST API to manage an Azure IoT Central application](tutorial-use-rest-api.md).
-You use the *control plane* to manage IoT Central-related resources in your Azure subscription. You can use the Azure CLI and Resource Manager templates for control plane operations. For example, you can use the Azure CLI to create an IoT Central application. To learn more, see [Manage IoT Central from Azure CLI](howto-manage-iot-central-from-cli.md).
+You use the *control plane* to manage IoT Central-related resources in your Azure subscription. You can use the REST API, the Azure CLI, or Resource Manager templates for control plane operations. For example, you can use the Azure CLI to create an IoT Central application. To learn more, see [Manage IoT Central from Azure CLI](howto-manage-iot-central-from-cli.md).
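+
+For example, the following sketch creates a Standard-tier IoT Central application; the resource names are hypothetical:
+
+```azurecli
+az iot central app create --resource-group myResourceGroup \
+    --name myiotcentralapp --subdomain myiotcentralapp \
+    --sku ST2 --location eastus --display-name "My IoT Central app"
+```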
## Next steps
iot-hub Iot Hub Ip Filter Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ip-filter-classic.md
Last updated 10/16/2020+ # IoT Hub classic IP filter and how to upgrade
iot-hub Iot Hub Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ip-filtering.md
By default, the **IP Filter** grid in the portal for an IoT hub is empty. This d
## Add or edit an IP filter rule
-To add an IP filter rule, select **+ Add IP Filter Rule**. To quickly add your computer's IP address, click the **Add your client IP address**.
+To add an IP filter rule, select **Add IP Filter Rule**. To quickly add your computer's IP address, select **Add your client IP address**.
:::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-add-rule.png" alt-text="Screenshot showing how to add an IP filter rule to an IoT hub.":::
To delete an IP filter rule, select the trash can icon on that row and then sele
:::image type="content" source="./media/iot-hub-ip-filtering/ip-filter-delete-rule.png" alt-text="Screenshot showing how to delete an IoT Hub IP filter rule.":::
-## Apply IP filter rules to the built-in Event Hub compatible endpoint
+## Apply IP filter rules to the built-in Event Hubs compatible endpoint
-To apply the IP filter rules to the built-in Event Hub compatible endpoint, check the box next to **Apply IP filters to the built-in endpoint?**, then select **Save**.
+To apply the IP filter rules to the built-in Event Hubs compatible endpoint, check the box next to **Apply IP filters to the built-in endpoint?**, then select **Save**.
:::image type="content" source="media/iot-hub-ip-filtering/ip-filter-built-in-endpoint.png" alt-text="Screenshot showing the toggle for the built-in endpoint.":::
If you disable this option, the built-in endpoint is accessible to all IP addres
## How filter rules are applied
-The IP filter rules are applied at the IoT Hub service level. Therefore, the IP filter rules apply to all connections from devices and back-end apps using any supported protocol. Also, you can choose if the [built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md) (not via the IoT Hub connection string) are bound to these rules.
+The IP filter rules are applied at the IoT Hub service level. Therefore, the IP filter rules apply to all connections from devices and back-end apps using any supported protocol. Also, you can choose whether the [built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md) (not via the IoT Hub connection string) is bound to these rules.
Any connection attempt from an IP address that isn't explicitly allowed receives an unauthorized 401 status code and description. The response message does not mention the IP rule. Rejecting IP addresses can prevent other Azure services such as Azure Stream Analytics, Azure Virtual Machines, or the Device Explorer in Azure portal from interacting with the IoT hub.
$iothubResource | Set-AzResource -Force
## Update IP filter rules using REST - You may also retrieve and modify your IoT Hub's IP filter using Azure resource Provider's REST endpoint. See `properties.networkRuleSets` in [createorupdate method](/rest/api/iothub/iothubresource/createorupdate).
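
For illustration, the same property can be read and modified with the generic Azure CLI resource commands. The following is a hypothetical sketch; the hub name, resource group, and rule values are placeholders, and you should verify the property names against the API version you target:

```azurecli
# Inspect the current IP filter (network rule set) configuration.
az resource show --resource-type "Microsoft.Devices/IotHubs" \
    --name myhub --resource-group myresourcegroup --query properties.networkRuleSets

# Deny by default and allow a single address range.
az resource update --resource-type "Microsoft.Devices/IotHubs" \
    --name myhub --resource-group myresourcegroup \
    --set properties.networkRuleSets.defaultAction=Deny \
    --set "properties.networkRuleSets.ipRules=[{\"filterName\":\"AllowSubnet\",\"action\":\"Allow\",\"ipMask\":\"192.168.100.0/22\"}]"
```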
-## IP filter (classic) retirement
-
-Classic IP filter has been retired. To learn more, see [IoT Hub classic IP filter and how to upgrade](iot-hub-ip-filter-classic.md).
- ## Next steps To further explore the capabilities of IoT Hub, see:
iot-operations Howto Manage Assets Remotely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-manage-assets-remotely.md
Title: Manage asset configurations remotely
-description: Use the Azure IoT Operations portal to manage your asset configurations remotely and enable data to flow from your assets to an MQTT broker.
+description: Use the Azure IoT Operations portal or the Azure CLI to manage your asset configurations remotely and enable data to flow from your assets to an MQTT broker.
_OPC UA servers_ are software applications that communicate with assets. OPC UA
An _asset endpoint_ is a custom resource in your Kubernetes cluster that connects OPC UA servers to OPC UA connector modules. This connection enables an OPC UA connector to access an asset's data points. Without an asset endpoint, data can't flow from an OPC UA server to the Azure IoT OPC UA Broker (preview) instance and Azure IoT MQ (preview) instance. After you configure the custom resources in your cluster, a connection is established to the downstream OPC UA server and the server forwards telemetry to the OPC UA Broker instance.
-This article describes how to use the Azure IoT Operations (preview) portal to:
+This article describes how to use the Azure IoT Operations (preview) portal and the Azure CLI to:
- Define asset endpoints - Add assets, and define tags and events. These assets, tags, and events map inbound data from OPC UA servers to friendly names that you can use in the MQ broker and Azure IoT Data Processor (preview) pipelines.
-You can also use the Azure CLI to manage assets and asset endpoints. To learn more, see:
--- [az iot ops asset](/cli/azure/iot/ops/asset)-- [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint).- ## Prerequisites To configure an assets endpoint, you need a running instance of Azure IoT Operations.
-## Sign in to the Azure IoT Operations portal
+## Sign in
-Navigate to the [Azure IoT Operations portal](https://iotoperations.azure.com) in your browser and sign in by using your Microsoft Entra ID credentials.
+# [Azure IoT Operations portal](#tab/portal)
+
+To sign in to the Azure IoT Operations portal, navigate to the [Azure IoT Operations portal](https://iotoperations.azure.com) in your browser and sign in by using your Microsoft Entra ID credentials.
## Select your cluster
When you sign in, the portal displays a list of the Azure Arc-enabled Kubernetes
> [!TIP] > You can use the filter box to search for clusters.
+# [Azure CLI](#tab/cli)
+
+Before you use the `az iot ops asset` commands, sign in to the subscription that contains your Azure IoT Operations deployment:
+
+```azurecli
+az login
+```
+++ ## Create an asset endpoint By default, an Azure IoT Operations deployment includes a built-in OPC PLC simulator. To create an asset endpoint that uses the built-in OPC PLC simulator:
+# [Azure IoT Operations portal](#tab/portal)
+ 1. Select **Asset endpoints** and then **Create asset endpoint**: :::image type="content" source="media/howto-manage-assets-remotely/asset-endpoints.png" alt-text="Screenshot that shows the asset endpoints page in the Azure IoT Operations portal.":::
By default, an Azure IoT Operations deployment includes a built-in OPC PLC simul
1. To save the definition, select **Create**.
-This configuration deploys a new module called `opc-ua-connector-0` to the cluster. After you define an asset, an OPC UA connector pod discovers it. The pod uses the asset endpoint that you specify in the asset definition to connect to an OPC UA server.
+# [Azure CLI](#tab/cli)
+
+Run the following command:
+
+```azurecli
+az iot ops asset endpoint create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --cluster {your cluster name}
+```
+
+> [!TIP]
+> Use `az connectedk8s list` to list the clusters you have access to.
+
+To learn more, see [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint).
+++
+This configuration deploys a new `assetendpointprofile` resource called `opc-ua-connector-0` to the cluster. After you define an asset, an OPC UA connector pod discovers it. The pod uses the asset endpoint that you specify in the asset definition to connect to an OPC UA server.
When the OPC PLC simulator is running, data flows from the simulator, to the connector, to the OPC UA broker, and finally to the MQ broker.
When the OPC PLC simulator is running, data flows from the simulator, to the con
The previous example uses the `Anonymous` authentication mode. This mode doesn't require a username or password.
+# [Azure IoT Operations portal](#tab/portal)
+ To use the `UsernamePassword` authentication mode, complete the following steps: 1. Follow the steps in [Configure OPC UA user authentication with username and password](howto-configure-opcua-authentication-options.md#configure-opc-ua-user-authentication-with-username-and-password) to add secrets for username and password in Azure Key Vault, and project them into Kubernetes cluster.
To use the `UsernamePassword` authentication mode, complete the following steps:
| Username reference | `aio-opc-ua-broker-user-authentication/username` | | Password reference | `aio-opc-ua-broker-user-authentication/password` |
+# [Azure CLI](#tab/cli)
+
+To use the `UsernamePassword` authentication mode, complete the following steps:
+
+1. Follow the steps in [Configure OPC UA user authentication with username and password](howto-configure-opcua-authentication-options.md#configure-opc-ua-user-authentication-with-username-and-password) to add secrets for username and password in Azure Key Vault, and project them into Kubernetes cluster.
+
+1. Use a command like the following example to create your asset endpoint:
+
+ ```azurecli
+ az iot ops asset endpoint create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --cluster {your cluster name} --username-ref "aio-opc-ua-broker-user-authentication/username" --password-ref "aio-opc-ua-broker-user-authentication/password"
+ ```
+++ ### Configure an asset endpoint to use a transport authentication certificate To configure the asset endpoint to use a transport authentication certificate, complete the following steps:
+# [Azure IoT Operations portal](#tab/portal)
+ 1. Follow the steps in [Configure OPC UA transport authentication](howto-configure-opcua-authentication-options.md#configure-opc-ua-transport-authentication) to add a transport certificate and private key to Azure Key Vault, and project them into Kubernetes cluster. 2. In Azure IoT Operations portal, select **Use transport authentication certificate** for the **Transport authentication** field and enter the certificate thumbprint.
+# [Azure CLI](#tab/cli)
+
+1. Follow the steps in [Configure OPC UA transport authentication](howto-configure-opcua-authentication-options.md#configure-opc-ua-transport-authentication) to add a transport certificate and private key to Azure Key Vault, and project them into Kubernetes cluster.
+
+1. Use a command like the following example to create your asset endpoint:
+
+ ```azurecli
+ az iot ops asset endpoint create --name opc-ua-connector-0 --target-address opc.tcp://opcplc-000000:50000 -g {your resource group name} --cluster {your cluster name} --username-ref "aio-opc-ua-broker-user-authentication/username" --password-ref "aio-opc-ua-broker-user-authentication/password" --cert secret=aio-opc-ua-broker-client-certificate thumbprint=000000000000000000 password=aio-opc-ua-broker-client-certificate-password
+ ```
+
+To learn more, see [az iot ops asset](/cli/azure/iot/ops/asset).
+++ ## Add an asset, tags, and events
+# [Azure IoT Operations portal](#tab/portal)
+ To add an asset in the Azure IoT Operations portal:
-1. Select the **Assets** tab. If you haven't created any assets yet, you see the following screen:
+1. Select the **Assets** tab. Before you create any assets, you see the following screen:
:::image type="content" source="media/howto-manage-assets-remotely/create-asset-empty.png" alt-text="Screenshot that shows an empty Assets tab in the Azure IoT Operations portal.":::
Now you can define the tags associated with the asset. To add OPC UA tags:
- **Sampling interval (milliseconds)**: The sampling interval indicates the fastest rate at which the OPC UA Server should sample its underlying source for data changes. - **Publishing interval (milliseconds)**: The rate at which OPC UA Server should publish data.
- - **Queue size**: The depth of the queue to hold the sampling data before it's published.
+ - **Queue size**: The depth of the queue to hold the sampling data before publishing it.
### Add tags in bulk to an asset
You can import up to 1000 OPC UA tags at a time from a CSV file:
> [!TIP] > You can use the filter box to search for tags.
+# [Azure CLI](#tab/cli)
+
+Use the following command to add a "thermostat" asset by using the Azure CLI. The command adds two tags to the asset by using the `--data` parameter:
+
+```azurecli
+az iot ops asset create --name thermostat -g {your resource group name} --cluster {your cluster name} --endpoint opc-ua-connector-0 --description 'A simulated thermostat asset' --data data_source='ns=3;s=FastUInt10' name=temperature --data data_source='ns=3;s=FastUInt100' name='Tag 10'
+```
+
+When you create an asset by using the Azure CLI, you can define:
+
+- Multiple tags by using the `--data` parameter multiple times.
+- Multiple events by using the `--event` parameter multiple times.
+- Optional information for the asset such as:
+ - Manufacturer
+ - Manufacturer URI
+ - Model
+ - Product code
+ - Hardware version
+ - Software version
+ - Serial number
+ - Documentation URI
+- Default values for sampling interval, publishing interval, and queue size.
+- Tag-specific values for sampling interval, publishing interval, and queue size.
+- Event-specific values for publishing interval and queue size.
+- The observability mode for each tag and event.
+++ ### Add individual events to an asset
+# [Azure IoT Operations portal](#tab/portal)
+ Now you can define the events associated with the asset. To add OPC UA events: 1. Select **Add > Add event**.
Now you can define the events associated with the asset. To add OPC UA events:
1. Select **Manage default settings** to configure default event settings for the asset. These settings apply to all the OPC UA events that belong to the asset. You can override these settings for each event that you add. Default event settings include: - **Publishing interval (milliseconds)**: The rate at which OPC UA Server should publish data.
- - **Queue size**: The depth of the queue to hold the sampling data before it's published.
+ - **Queue size**: The depth of the queue to hold the sampling data before publishing it.
### Add events in bulk to an asset
Review your asset and OPC UA tag and event details and make any adjustments you
:::image type="content" source="media/howto-manage-assets-remotely/review-asset.png" alt-text="A screenshot that shows how to review your asset, tags, and events in the Azure IoT Operations portal.":::
+# [Azure CLI](#tab/cli)
+
+When you create an asset by using the Azure CLI, you can define multiple events by using the `--event` parameter multiple times. The syntax for the `--event` parameter is similar to the `--data` parameter:
+
+```azurecli
+az iot ops asset create --name thermostat -g {your resource group name} --cluster {your cluster name} --endpoint opc-ua-connector-0 --description 'A simulated thermostat asset' --event event_notifier='ns=3;s=FastUInt12' name=warning
+```
+
+For each event that you define, you can specify the:
+
+- Event notifier. This value is the event notifier from the OPC UA server.
+- Event name. This value is the friendly name that you want to use for the event. If you don't specify an event name, the event notifier is used as the event name.
+- Observability mode.
+- Queue size.
+++ ## Update an asset
+# [Azure IoT Operations portal](#tab/portal)
+ Find and select the asset you created previously. Use the **Properties**, **Tags**, and **Events** tabs to make any changes: :::image type="content" source="media/howto-manage-assets-remotely/asset-update-property-save.png" alt-text="A screenshot that shows how to update an existing asset in the Azure IoT Operations portal.":::
You can also add, update, and delete events and properties in the same way.
When you're finished making changes, select **Save** to save your changes.
+# [Azure CLI](#tab/cli)
+
+To list your assets, use the following command:
+
+```azurecli
+az iot ops asset query -g {your resource group name}
+```
+
+> [!TIP]
+> You can refine the query command to search for assets that match specific criteria. For example, you can search for assets by manufacturer.
+
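+For example, assuming the query command accepts asset properties such as `--manufacturer` as filters, a search might look like this sketch:
+
+```azurecli
+az iot ops asset query -g {your resource group name} --manufacturer Contoso
+```
+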
+To view the details of the thermostat asset, use the following command:
+
+```azurecli
+az iot ops asset show --name thermostat -g {your resource group}
+```
+
+To update an asset, use the `az iot ops asset update` command. For example, to update the asset's description, use a command like the following example:
+
+```azurecli
+az iot ops asset update --name thermostat --description 'A simulated thermostat asset' -g {your resource group}
+```
+
+To list the thermostat asset's tags, use the following command:
+
+```azurecli
+az iot ops asset data-point list --asset thermostat -g {your resource group}
+```
+
+To list the thermostat asset's events, use the following command:
+
+```azurecli
+az iot ops asset event list --asset thermostat -g {your resource group}
+```
+
+To add a new tag to the thermostat asset, use a command like the following example:
+
+```azurecli
+az iot ops asset data-point add --asset thermostat -g {your resource group} --data-source 'ns=3;s=FastUInt1002' --name 'humidity'
+```
+
+To delete a tag, use the `az iot ops asset data-point remove` command.
+
+You can manage an asset's events by using the `az iot ops asset event` commands.
+++ ## Delete an asset
+# [Azure IoT Operations portal](#tab/portal)
+ To delete an asset, select the asset you want to delete. On the **Asset** details page, select **Delete**. Confirm your changes to delete the asset: :::image type="content" source="media/howto-manage-assets-remotely/asset-delete.png" alt-text="A screenshot that shows how to delete an asset from the Azure IoT Operations portal.":::
+# [Azure CLI](#tab/cli)
+
+To delete an asset, use a command that looks like the following example:
+
+```azurecli
+az iot ops asset delete --name thermostat -g {your resource group name}
+```
+++ ## Notifications
-Whenever you make a change to asset, you see a notification in the Azure IoT Operations portal that reports the status of the operation:
+Whenever you make a change to an asset in the Azure IoT Operations portal, you see a notification that reports the status of the operation:
:::image type="content" source="media/howto-manage-assets-remotely/portal-notifications.png" alt-text="A screenshot that shows the notifications in the Azure IoT Operations portal.":::
Whenever you make a change to asset, you see a notification in the Azure IoT Ope
- [Azure OPC UA Broker overview](overview-opcua-broker.md) - [Azure IoT Akri overview](overview-akri.md)
+- [az iot ops asset](/cli/azure/iot/ops/asset)
+- [az iot ops asset endpoint](/cli/azure/iot/ops/asset/endpoint)
key-vault How To Export Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-export-certificate.md
To download the certificate, select **Download in CER format** or **Download in
Azure App Service certificates are a convenient way to purchase SSL certificates. You can assign them to Azure Apps from within the portal. After you import them, the App Service certificates are located under **secrets**.
-For more information, see the steps to [export Azure App Service certificates](https://social.technet.microsoft.com/wiki/contents/articles/37431.exporting-azure-app-service-certificates.aspx).
+For more information, see the steps to [export Azure App Service certificates](/azure/app-service/configure-ssl-app-service-certificate?tabs=portal#export-an-app-service-certificate).
key-vault Move Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-subscription.md
Connect-AzAccount #Log
```azurecli az account set -s <your-subscriptionId> # Select your Azure Subscription
-$tenantId=$(az account show --query tenantId) # Get your tenantId
tenantId=$(az account show --query tenantId --output tsv) # Get your tenantId
az keyvault update -n myvault --remove Properties.accessPolicies # Remove the access policies az keyvault update -n myvault --set Properties.tenantId=$tenantId # Update the key vault tenantId ```
load-balancer Gateway Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/gateway-partners.md
Azure has a growing ecosystem of partners offering their network appliances for
**Brendan OΓÇÖFlaherty - CEO**
-[Learn more](https://www.cpacket.com/azure-gateway-load-balancer/)
+[Learn more](https://www.cpacket.com/)
### F5
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
To prevent others from changing or deleting your logic app workflow, you can use
| Role | Description | ||-|
-| [**Logic Apps Standard Reader** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-reader) | You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. |
-| [**Logic Apps Standard Operator** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-operator) | You have access to enable, resubmit, and disable workflows and to create connections to services, systems, and networks for a Standard logic app. The Operator role can perform administration and support tasks on the Azure Logic Apps platform, but doesn't have permissions to edit workflows or settings. |
-| [**Logic Apps Standard Developer** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-developer) | You have access to create and edit workflows, connections, and settings for a Standard logic app. The Developer role doesn't have permissions to make changes outside the scope of workflows, for example, application-wide changes such as configure virtual network integration. App Service Plans aren't supported. |
-| [**Logic Apps Standard Contributor** (Preview)](../role-based-access-control/built-in-roles.md#logic-apps-standard-contributor) | You have access to manage all aspects of a Standard logic app, but you can't change access or ownership. |
+| [**Logic Apps Standard Reader (Preview)**](../role-based-access-control/built-in-roles.md#logic-apps-standard-reader-preview) | You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. |
+| [**Logic Apps Standard Operator (Preview)**](../role-based-access-control/built-in-roles.md#logic-apps-standard-operator-preview) | You have access to enable, resubmit, and disable workflows and to create connections to services, systems, and networks for a Standard logic app. The Operator role can perform administration and support tasks on the Azure Logic Apps platform, but doesn't have permissions to edit workflows or settings. |
+| [**Logic Apps Standard Developer (Preview)**](../role-based-access-control/built-in-roles.md#logic-apps-standard-developer-preview) | You have access to create and edit workflows, connections, and settings for a Standard logic app. The Developer role doesn't have permissions to make changes outside the scope of workflows, for example, application-wide changes such as configuring virtual network integration. App Service Plans aren't supported. |
+| [**Logic Apps Standard Contributor (Preview)**](../role-based-access-control/built-in-roles.md#logic-apps-standard-contributor-preview) | You have access to manage all aspects of a Standard logic app, but you can't change access or ownership. |
<a name="secure-run-history"></a>
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
To install the Azure Machine Learning extension on Kubernetes compute, all Azure
The hosts in this section are used to install Visual Studio Code packages to establish a remote connection between Visual Studio Code and compute instances in your Azure Machine Learning workspace.

> [!NOTE]
-> This is not a complete list of the hosts required for all Visual Studio Code resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
+> This is not a complete list of the hosts required for all Visual Studio Code resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario. For a complete list of host names, see [Network Connections in Visual Studio Code](https://code.visualstudio.com/docs/setup/network).
| __Host name__ | __Purpose__ |
| - | - |
The hosts in this section are used to install Visual Studio Code packages to est
| `update.code.visualstudio.com`<br>`*.vo.msecnd.net` | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
| `marketplace.visualstudio.com`<br>`vscode.blob.core.windows.net`<br>`*.gallerycdn.vsassets.io` | Required to download and install VS Code extensions. These hosts enable the remote connection to compute instances using the Azure Machine Learning extension for VS Code. For more information, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) |
| `raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/*` | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |
+| `vscode.download.prss.microsoft.com` | Used for Visual Studio Code download CDN |
## Scenario: Third party firewall or Azure Firewall without service tags
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
The workspace is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create.
    import requests
    import pandas as pd
    from mlflow.deployments import get_deploy_client
+ from mlflow.tracking import MlflowClient
+ ```
+
+1. Initialize the MLflow client
+
+ ```python
+ mlflow_client = MlflowClient()
    ```

1. Configure the deployment client
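    A minimal sketch of this step (an assumption, not shown in the digest): with the `azureml-mlflow` plugin installed, the MLflow tracking URI already points at the workspace, so the deployments client can be built from it.

    ```python
    import mlflow
    from mlflow.deployments import get_deploy_client

    # Build the deployments client from the current MLflow tracking URI.
    deployment_client = get_deploy_client(mlflow.get_tracking_uri())
    ```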
version = registered_model.version
You can configure the properties of this endpoint using a configuration file. In this case, you're configuring the authentication mode of the endpoint to be "key".

    ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+
+ endpoint_name = "sklearn-diabetes-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+ endpoint_config = {
+     "auth_mode": "key",
+     "identity": {
version = registered_model.version
}
```
- Let's write this configuration into a `JSON` file:
+ Write this configuration into a `JSON` file:
```python
endpoint_config_path = "endpoint_config.json"
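# The rest of this block is a sketch (assumption): persist the endpoint
# configuration dictionary built above into the JSON file.
import json

with open(endpoint_config_path, "w") as outfile:
    json.dump(endpoint_config, outfile)
```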
deployment_client.delete_endpoint(endpoint_name)
- [Deploy models with REST](how-to-deploy-with-rest.md)
- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)
-- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
+- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
# Manage model registries in Azure Machine Learning with MLflow
-Azure Machine Learning supports MLflow for model management. Such approach represents a convenient way to support the entire model lifecycle for users familiar with the MLFlow client. The following article describes the different capabilities and how it compares with other options.
+Azure Machine Learning supports MLflow for model management when connected to a workspace. This approach is a convenient way to support the entire model lifecycle for users familiar with the MLflow client. The following article describes the different capabilities and how they compare with other options.
### Prerequisites

[!INCLUDE [mlflow-prereqs](includes/machine-learning-mlflow-prereqs.md)]
+* Organizational registries aren't supported for model management with MLflow.
+* Some operations can be executed directly using the MLflow fluent API (`mlflow.<method>`). However, others might require you to create an MLflow client, which allows you to communicate with Azure Machine Learning using the MLflow protocol. You can create an `MlflowClient` object as follows. This article uses the object `client` to refer to this MLflow client.

    ```python
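    # Sketch (assumption): create the MLflow client that the rest of this
    # article refers to as `client`; the tracking URI is assumed to already
    # point at the Azure Machine Learning workspace.
    from mlflow.tracking import MlflowClient

    client = MlflowClient()
    ```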
The MLflow client exposes several methods to retrieve and manage models. The following table shows which of those methods are supported.
| Search registered models by name | **&check;** | **&check;** | **&check;** | **&check;**<sup>4</sup> |
| Search registered models using string comparators `LIKE` and `ILIKE` | **&check;** | | | **&check;**<sup>4</sup> |
| Search registered models by tag | | | | **&check;**<sup>4</sup> |
+| [Organizational registries](how-to-manage-registries.md) support | | | **&check;** | **&check;** |
> [!NOTE]
> - <sup>1</sup> Use URIs with format `runs:/<run-id>/<path>`.
> - <sup>2</sup> Use URIs with format `azureml://jobs/<job-id>/outputs/artifacts/<path>`.
> - <sup>3</sup> Registered models are immutable objects in Azure ML.
> - <sup>4</sup> Use search box in Azure ML Studio. Partial match supported.
-> - <sup>5</sup> Use [registries](how-to-manage-registries.md).
+> - <sup>5</sup> Use [registries](how-to-manage-registries.md) to move models across different workspaces while keeping lineage.
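As an illustration of one supported operation, here's a minimal sketch that uses the `client` object created earlier; the name pattern is a placeholder:

```python
# Search registered models with the LIKE comparator, which the table above
# marks as supported when connected to an Azure Machine Learning workspace.
for model in client.search_registered_models(filter_string="name LIKE 'sklearn%'"):
    print(model.name)
```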
## Next steps
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
Previously updated : 09/14/2022 Last updated : 02/02/2024
curl -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ...more args...
To retrieve the list of resource groups associated with your subscription, run:

```bash
-curl https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups?api-version=2021-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>"
+curl https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups?api-version=2022-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>"
```

Across Azure, many REST APIs are published. Each service provider updates their API on their own cadence, but does so without breaking existing programs. The service provider uses the `api-version` argument to ensure compatibility.

> [!IMPORTANT]
-> The `api-version` argument varies from service to service. For the Machine Learning Service, for instance, the current API version is `2022-05-01`. To find the latest API version for other Azure services, see the [Azure REST API reference](/rest/api/azure/) for the specific service.
+> The `api-version` argument varies from service to service. For the Machine Learning Service, for instance, the current API version is `2023-10-01`. To find the latest API version for other Azure services, see the [Azure REST API reference](/rest/api/azure/) for the specific service.
All REST calls should set the `api-version` argument to the expected value. You can rely on the syntax and semantics of the specified version even as the API continues to evolve. If you send a request to a provider without the `api-version` argument, the response will contain a human-readable list of supported values.
The above call will result in a compacted JSON response of the form:
To retrieve the set of workspaces in a resource group, run the following, replacing `<YOUR-SUBSCRIPTION-ID>`, `<YOUR-RESOURCE-GROUP>`, and `<YOUR-ACCESS-TOKEN>`:

```
-curl https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/providers/Microsoft.MachineLearningServices/workspaces/?api-version=2022-05-01 \
+curl https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/providers/Microsoft.MachineLearningServices/workspaces/?api-version=2023-10-01 \
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ```
Again you'll receive a JSON response, this time containing a list, each item of which details a workspace:
"properties": { "friendlyName": "", "description": "",
- "creationTime": "2020-01-03T19:56:09.7588299+00:00",
+ "creationTime": "2023-01-03T19:56:09.7588299+00:00",
"storageAccount": "/subscriptions/12345abc-abbc-1b2b-1234-57ab575a5a5a/resourcegroups/DeepLearningResourceGroup/providers/microsoft.storage/storageaccounts/myworkspace0275623111", "containerRegistry": null, "keyVault": "/subscriptions/12345abc-abbc-1b2b-1234-57ab575a5a5a/resourcegroups/DeepLearningResourceGroup/providers/microsoft.keyvault/vaults/myworkspace2525649324",
The value of the `api` response is the URL of the server that you'll use for more requests.
```bash
curl https://<REGIONAL-API-SERVER>/history/v1.0/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/\
-providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/experiments?api-version=2022-05-01 \
+providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/experiments?api-version=2023-10-01 \
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ```
Similarly, to retrieve registered models in your workspace, send:
```bash
curl https://<REGIONAL-API-SERVER>/modelmanagement/v1.0/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/\
-providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/models?api-version=2022-05-01 \
+providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/models?api-version=2023-10-01 \
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ```
Training and running ML models require compute resources. You can list the compute resources in your workspace:
```bash
curl https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/\
-providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/computes?api-version=2022-05-01 \
+providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/computes?api-version=2023-10-01 \
-H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ```
To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements, replace `<YOUR-COMPUTE-NAME>` and the values for `location`, `vmSize`, and the scale settings.
```bash
curl -X PUT \
- 'https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/computes/<YOUR-COMPUTE-NAME>?api-version=2022-05-01' \
+ 'https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/computes/<YOUR-COMPUTE-NAME>?api-version=2023-10-01' \
-H 'Authorization:Bearer <YOUR-ACCESS-TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
curl -X PUT \
}'
```
-> [!Note]
+> [!NOTE]
> In Windows terminals you may have to escape the double-quote symbols when sending JSON data. That is, text such as `"location"` becomes `\"location\"`.

A successful request will get a `201 Created` response, but note that this response simply means that the provisioning process has begun. You'll need to poll (or use the portal) to confirm its successful completion.
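One way to poll is a small script like the following; a minimal sketch that assumes the resource reports the standard ARM `properties.provisioningState` field, with the URL and token as placeholders:

```python
import time
import requests

def wait_for_provisioning(resource_url, token, api_version="2023-10-01", timeout=600):
    """Poll an ARM resource until provisioningState reaches a terminal value."""
    headers = {"Authorization": f"Bearer {token}"}
    deadline = time.time() + timeout
    while time.time() < deadline:
        body = requests.get(
            resource_url, params={"api-version": api_version}, headers=headers
        ).json()
        state = body.get("properties", {}).get("provisioningState")
        if state in ("Succeeded", "Failed", "Canceled"):
            return state
        time.sleep(15)  # back off between polls
    raise TimeoutError("Provisioning didn't reach a terminal state in time.")
```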
To create a workspace, PUT a call similar to the following to `management.azure.com`:
```bash
curl -X PUT \
'https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>\
-/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-NEW-WORKSPACE-NAME>?api-version=2022-05-01' \
+/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-NEW-WORKSPACE-NAME>?api-version=2023-10-01' \
-H 'Authorization: Bearer <YOUR-ACCESS-TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
When creating a workspace, you can specify a user-assigned managed identity that will be used to access the associated resources.
```bash
curl -X PUT \
'https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>\
-/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-NEW-WORKSPACE-NAME>?api-version=2022-05-01' \
+/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-NEW-WORKSPACE-NAME>?api-version=2023-10-01' \
-H 'Authorization: Bearer <YOUR-ACCESS-TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
To create a workspace that uses a user-assigned managed identity and customer-managed keys for encryption, use a request body like the following:
```bash
curl -X PUT \
'https://management.azure.com/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>\
-/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-NEW-WORKSPACE-NAME>?api-version=2022-05-01' \
+/providers/Microsoft.MachineLearningServices/workspaces/<YOUR-NEW-WORKSPACE-NAME>?api-version=2023-10-01' \
-H 'Authorization: Bearer <YOUR-ACCESS-TOKEN>' \
-H 'Content-Type: application/json' \
-d '{
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
To troubleshoot a Spark job, you can access the logs generated for that job in Azure Machine Learning studio.
1. Access the Spark job logs inside the **driver** and **library manager** folders

> [!NOTE]
-> To troubleshoot Spark jobs created during interactive data wrangling in a notebook session, select **Job details** near the top right corner of the notebook UI. A Spark jobs from an interactive notebook session is created under the experiment name **notebook-runs**.
-
-## Improving serverless Spark session start-up time while using session-level Conda packages
-A serverless Spark session [*cold start* with session-level Conda packages](./apache-spark-azure-ml-concepts.md#inactivity-periods-and-tear-down-mechanism) may take 10 to 15 minutes. You can improve the Spark session *cold start* time by setting configuration variable `spark.hadoop.aml.enable_cache` to true. Declaring this configuration variable is optional. To ensure that the configuration variable was set successfully, check status of the latest job in the experiment `cachejobmamangement`. A successful job indicates that the cache was created successfully. A session *cold start* with session level Conda packages typically takes 10 to 15 minutes when the session starts for the first time. However, subsequent session *cold starts* typically take three to five minutes.
-
-# [CLI](#tab/cli)
-
-Use the `conf` property in the standalone Spark job, or the Spark component YAML specification file, to define the configuration variable `spark.hadoop.aml.enable_cache`.
-
-```yaml
-conf:
- spark.hadoop.aml.enable_cache: True
-```
-
-# [Python SDK](#tab/sdk)
-
-Use the `conf` parameter of the `azure.ai.ml.spark` function to define the configuration variable `spark.hadoop.aml.enable_cache`.
-
-```python
-conf={"spark.hadoop.aml.enable_cache": "true"},
-```
-
-# [Studio UI](#tab/ui)
-
-Define configuration variable `spark.hadoop.aml.enable_cache` in the **Configure session** user interface, under **Configuration settings**. Set the value of this variable to `true`.
---
+> To troubleshoot Spark jobs created during interactive data wrangling in a notebook session, select **Job details** near the top right corner of the notebook UI. A Spark job from an interactive notebook session is created under the experiment name **notebook-runs**.
## Next steps
machine-learning How To Train Distributed Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-distributed-gpu.md
Title: Distributed GPU training guide (SDK v2)
-description: Learn the best practices for performing distributed training with Azure Machine Learning SDK (v2) supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, TensorFlow, and InfiniBand.
+description: Learn best practices for distributed training with supported frameworks, such as MPI, Horovod, DeepSpeed, PyTorch, TensorFlow, and InfiniBand.
Previously updated : 11/30/2022 Last updated : 01/29/2024
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
-Learn more about how to use distributed GPU training code in Azure Machine Learning (ML). This article will not teach you about distributed training. It will help you run your existing distributed training code on Azure Machine Learning. It offers tips and examples for you to follow for each framework:
+Learn more about using distributed GPU training code in Azure Machine Learning. This article helps you run your existing distributed training code, and offers tips and examples for you to follow for each framework:
* Message Passing Interface (MPI)
* Horovod
* Environment variables from Open MPI
* PyTorch
-* TensorFlow
+* TensorFlow
* Accelerate GPU training with InfiniBand

## Prerequisites
-Review these [basic concepts of distributed GPU training](concept-distributed-training.md) such as _data parallelism_, _distributed data parallelism_, and _model parallelism_.
+Review the basic concepts of [distributed GPU training](concept-distributed-training.md), such as *data parallelism*, *distributed data parallelism*, and *model parallelism*.
> [!TIP]
-> If you don't know which type of parallelism to use, more than 90% of the time you should use __Distributed Data Parallelism__.
+> If you don't know which type of parallelism to use, more than 90% of the time you should use **distributed data parallelism**.
## MPI
-Azure Machine Learning offers an [MPI job](https://www.mcs.anl.gov/research/projects/mpi/) to launch a given number of processes in each node. Azure Machine Learning constructs the full MPI launch command (`mpirun`) behind the scenes. You can't provide your own full head-node-launcher commands like `mpirun` or `DeepSpeed launcher`.
+Azure Machine Learning offers an [MPI job](https://www.mcs.anl.gov/research/projects/mpi/) to launch a given number of processes in each node. Azure Machine Learning constructs the full MPI launch command (`mpirun`) behind the scenes. You can't provide your own full head-node-launcher commands like `mpirun` or `DeepSpeed launcher`.
> [!TIP]
-> The base Docker image used by an Azure Machine Learning MPI job needs to have an MPI library installed. [Open MPI](https://www.open-mpi.org/) is included in all the [Azure Machine Learning GPU base images](https://github.com/Azure/AzureML-Containers). When you use a custom Docker image, you are responsible for making sure the image includes an MPI library. Open MPI is recommended, but you can also use a different MPI implementation such as Intel MPI. Azure Machine Learning also provides [curated environments](resource-curated-environments.md) for popular frameworks.
+> The base Docker image used by an Azure Machine Learning MPI job needs to have an MPI library installed. [Open MPI](https://www.open-mpi.org) is included in all the [Azure Machine Learning GPU base images](https://github.com/Azure/AzureML-Containers). When you use a custom Docker image, you are responsible for making sure the image includes an MPI library. Open MPI is recommended, but you can also use a different MPI implementation such as Intel MPI. Azure Machine Learning also provides [curated environments](resource-curated-environments.md) for popular frameworks.
To run distributed training using MPI, follow these steps:
-1. Use an Azure Machine Learning environment with the preferred deep learning framework and MPI. Azure Machine Learning provides [curated environment](resource-curated-environments.md) for popular frameworks. Or [create a custom environment](how-to-manage-environments-v2.md#create-an-environment) with the preferred deep learning framework and MPI.
-1. Define a `command` with `instance_count`. `instance_count` should be equal to the number of GPUs per node for per-process-launch, or set to 1 (the default) for per-node-launch if the user script will be responsible for launching the processes per node.
+1. Use an Azure Machine Learning environment with the preferred deep learning framework and MPI. Azure Machine Learning provides [curated environments](resource-curated-environments.md) for popular frameworks. Or [create a custom environment](how-to-manage-environments-v2.md#create-an-environment) with the preferred deep learning framework and MPI.
+1. Define a `command` with `instance_count`. `instance_count` should be equal to the number of GPUs per node for per-process-launch, or set to 1 (the default) for per-node-launch if the user script is responsible for launching the processes per node.
1. Use the `distribution` parameter of the `command` to specify settings for `MpiDistribution`.

[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb?name=job)]

### Horovod

Use the MPI job configuration when you use [Horovod](https://horovod.readthedocs.io/en/stable/index.html) for distributed training with the deep learning framework. Make sure your code follows these tips:
-* The training code is instrumented correctly with Horovod before adding the Azure Machine Learning parts
-* Your Azure Machine Learning environment contains Horovod and MPI. The PyTorch and TensorFlow curated GPU environments come pre-configured with Horovod and its dependencies.
+* The training code is instrumented correctly with Horovod before adding the Azure Machine Learning parts.
+* Your Azure Machine Learning environment contains Horovod and MPI. The PyTorch and TensorFlow curated GPU environments come preconfigured with Horovod and its dependencies.
* Create a `command` with your desired distribution.

### Horovod example
-* For the full notebook to run the above example, see [azureml-examples: Train a basic neural network with distributed MPI on the MNIST dataset using Horovod](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb)
+* For the full notebook to run the Horovod example, see [azureml-examples: Train a basic neural network with distributed MPI on the MNIST dataset using Horovod](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb).
### Environment variables from Open MPI
-When running MPI jobs with Open MPI images, the following environment variables for each process launched:
+When running MPI jobs with Open MPI images, you can use the following environment variables for each process launched:
-1. `OMPI_COMM_WORLD_RANK` - the rank of the process
-2. `OMPI_COMM_WORLD_SIZE` - the world size
-3. `AZ_BATCH_MASTER_NODE` - primary address with port, `MASTER_ADDR:MASTER_PORT`
-4. `OMPI_COMM_WORLD_LOCAL_RANK` - the local rank of the process on the node
-5. `OMPI_COMM_WORLD_LOCAL_SIZE` - number of processes on the node
+1. `OMPI_COMM_WORLD_RANK`: The rank of the process
+2. `OMPI_COMM_WORLD_SIZE`: The world size
+3. `AZ_BATCH_MASTER_NODE`: The primary address with port, `MASTER_ADDR:MASTER_PORT`
+4. `OMPI_COMM_WORLD_LOCAL_RANK`: The local rank of the process on the node
+5. `OMPI_COMM_WORLD_LOCAL_SIZE`: The number of processes on the node
> [!TIP]
-> Despite the name, environment variable `OMPI_COMM_WORLD_NODE_RANK` does not corresponds to the `NODE_RANK`. To use per-node-launcher, set `process_count_per_node=1` and use `OMPI_COMM_WORLD_RANK` as the `NODE_RANK`.
+> Despite the name, the environment variable `OMPI_COMM_WORLD_NODE_RANK` doesn't correspond to the `NODE_RANK`. To use per-node-launcher, set `process_count_per_node=1` and use `OMPI_COMM_WORLD_RANK` as the `NODE_RANK`.
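As an illustration (an assumption, not part of the article), a minimal sketch that maps these Open MPI variables onto the generic names a distributed training script often expects:

```python
import os

# AZ_BATCH_MASTER_NODE has the form MASTER_ADDR:MASTER_PORT.
master_addr, master_port = os.environ["AZ_BATCH_MASTER_NODE"].split(":")

os.environ.setdefault("RANK", os.environ["OMPI_COMM_WORLD_RANK"])
os.environ.setdefault("WORLD_SIZE", os.environ["OMPI_COMM_WORLD_SIZE"])
os.environ.setdefault("LOCAL_RANK", os.environ["OMPI_COMM_WORLD_LOCAL_RANK"])
os.environ.setdefault("MASTER_ADDR", master_addr)
os.environ.setdefault("MASTER_PORT", master_port)
```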
## PyTorch

Azure Machine Learning supports running distributed jobs using PyTorch's native distributed training capabilities (`torch.distributed`).

> [!TIP]
-> For data parallelism, the [official PyTorch guidance](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel) is to use DistributedDataParallel (DDP) over DataParallel for both single-node and multi-node distributed training. PyTorch also [recommends using DistributedDataParallel over the multiprocessing package](https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel). Azure Machine Learning documentation and examples will therefore focus on DistributedDataParallel training.
+> For data parallelism, the [official PyTorch guidance](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html#comparison-between-dataparallel-and-distributeddataparallel) is to use DistributedDataParallel (DDP) over DataParallel for both single-node and multi-node distributed training. PyTorch also recommends using [DistributedDataParallel over the multiprocessing package](https://pytorch.org/docs/stable/notes/cuda.html#use-nn-parallel-distributeddataparallel-instead-of-multiprocessing-or-nn-dataparallel). Azure Machine Learning documentation and examples therefore focus on DistributedDataParallel training.
### Process group initialization
-The backbone of any distributed training is based on a group of processes that know each other and can communicate with each other using a backend. For PyTorch, the process group is created by calling [torch.distributed.init_process_group](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) in __all distributed processes__ to collectively form a process group.
+The backbone of any distributed training is based on a group of processes that know each other and can communicate with each other using a backend. For PyTorch, the process group is created by calling [torch.distributed.init_process_group](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) in **all distributed processes** to collectively form a process group.
```
torch.distributed.init_process_group(backend='nccl', init_method='env://', ...)
```
-The most common communication backends used are `mpi`, `nccl`, and `gloo`. For GPU-based training `nccl` is recommended for best performance and should be used whenever possible.
+The most common communication backends used are `mpi`, `nccl`, and `gloo`. For GPU-based training, `nccl` is recommended for best performance and should be used whenever possible.
-`init_method` tells how each process can discover each other, how they initialize and verify the process group using the communication backend. By default if `init_method` is not specified PyTorch will use the environment variable initialization method (`env://`). `init_method` is the recommended initialization method to use in your training code to run distributed PyTorch on Azure Machine Learning. PyTorch will look for the following environment variables for initialization:
+`init_method` tells how each process can discover each other, how they initialize and verify the process group using the communication backend. By default, if `init_method` isn't specified, PyTorch uses the environment variable initialization method (`env://`). `init_method` is the recommended initialization method to use in your training code to run distributed PyTorch on Azure Machine Learning. PyTorch looks for the following environment variables for initialization:
-- **`MASTER_ADDR`** - IP address of the machine that will host the process with rank 0.-- **`MASTER_PORT`** - A free port on the machine that will host the process with rank 0.-- **`WORLD_SIZE`** - The total number of processes. Should be equal to the total number of devices (GPU) used for distributed training.-- **`RANK`** - The (global) rank of the current process. The possible values are 0 to (world size - 1).
+- **`MASTER_ADDR`**: IP address of the machine that hosts the process with rank 0
+- **`MASTER_PORT`**: A free port on the machine that hosts the process with rank 0
+- **`WORLD_SIZE`**: The total number of processes. Should be equal to the total number of devices (GPU) used for distributed training
+- **`RANK`**: The (global) rank of the current process. The possible values are 0 to (world size - 1)
For more information on process group initialization, see the [PyTorch documentation](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group).
-Beyond these, many applications will also need the following environment variables:
-- **`LOCAL_RANK`** - The local (relative) rank of the process within the node. The possible values are 0 to (# of processes on the node - 1). This information is useful because many operations such as data preparation only should be performed once per node usually on local_rank = 0.-- **`NODE_RANK`** - The rank of the node for multi-node training. The possible values are 0 to (total # of nodes - 1).-
+Many applications also need the following environment variables:
+- **`LOCAL_RANK`**: The local (relative) rank of the process within the node. The possible values are 0 to (# of processes on the node - 1). This information is useful because many operations such as data preparation only should be performed once per node, usually on local_rank = 0.
+- **`NODE_RANK`**: The rank of the node for multi-node training. The possible values are 0 to (total # of nodes - 1).
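A minimal sketch of how a training script typically consumes these variables, assuming PyTorch with the NCCL backend and one GPU per process:

```python
import os
import torch
import torch.distributed as dist

# env:// reads MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK implicitly.
dist.init_process_group(backend="nccl", init_method="env://")

# Pin this process to its GPU using the node-local rank.
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
print(f"rank {dist.get_rank()} of {dist.get_world_size()} on GPU {local_rank}")
```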
You don't need to use a launcher utility like `torch.distributed.launch`. To run a distributed PyTorch job:
-1. Specify the training script and arguments
+1. Specify the training script and arguments.
1. Create a `command` and specify the type as `PyTorch` and the `process_count_per_instance` in the `distribution` parameter. The `process_count_per_instance` corresponds to the total number of processes you want to run for your job. `process_count_per_instance` should typically equal `# of GPUs per node`. If `process_count_per_instance` isn't specified, Azure Machine Learning will by default launch one process per node.
-Azure Machine Learning will set the `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, and `NODE_RANK` environment variables on each node, and set the process-level `RANK` and `LOCAL_RANK` environment variables.
+Azure Machine Learning sets the `MASTER_ADDR`, `MASTER_PORT`, `WORLD_SIZE`, and `NODE_RANK` environment variables on each node, and sets the process-level `RANK` and `LOCAL_RANK` environment variables.
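A minimal sketch of such a job definition, assuming the v2 Python SDK (`azure-ai-ml`); the source folder, script, environment, and compute names are placeholders:

```python
from azure.ai.ml import command

# Two nodes with four processes each: eight processes in total.
job = command(
    code="./src",  # placeholder: folder containing train.py
    command="python train.py",
    environment="<YOUR-ENVIRONMENT>",  # placeholder: curated or custom environment
    compute="<YOUR-GPU-CLUSTER>",      # placeholder: AmlCompute cluster name
    instance_count=2,
    distribution={"type": "PyTorch", "process_count_per_instance": 4},
)
```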
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb?name=job)]

### Pytorch example

-- For the full notebook to run the above example, see [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb)
+* For the full notebook to run the Pytorch example, see [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/pytorch/distributed-training/distributed-cifar10.ipynb).
## DeepSpeed
-[DeepSpeed](https://www.deepspeed.ai/tutorials/azure/) is supported as a first-class citizen within Azure Machine Learning to run distributed jobs with near linear scalability in terms of 
+Azure Machine Learning supports [DeepSpeed](https://www.deepspeed.ai/tutorials/azure/) as a first-class citizen to run distributed jobs with near linear scalability in terms of:
* Increase in model size
* Increase in number of GPUs
-`DeepSpeed` can be enabled using either Pytorch distribution or MPI for running distributed training. Azure Machine Learning supports the `DeepSpeed` launcher to launch distributed training as well as autotuning to get optimal `ds` configuration.
+DeepSpeed can be enabled using either Pytorch distribution or MPI for running distributed training. Azure Machine Learning supports the DeepSpeed launcher to launch distributed training as well as autotuning to get optimal `ds` configuration.
-You can use a [curated environment](resource-curated-environments.md) for an out of the box environment with the latest state of art technologies including `DeepSpeed`, `ORT`, `MSSCCL`, and `Pytorch` for your DeepSpeed training jobs.
+You can use a [curated environment](resource-curated-environments.md) for an out of the box environment with the latest state of art technologies including DeepSpeed, ORT, MSSCCL, and Pytorch for your DeepSpeed training jobs.
### DeepSpeed example

-- For DeepSpeed training and autotuning examples, see [these folders](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/deepspeed).
+* For DeepSpeed training and autotuning examples, see [these folders](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/deepspeed).
## TensorFlow
-If you're using [native distributed TensorFlow](https://www.tensorflow.org/guide/distributed_training) in your training code, such as TensorFlow 2.x's `tf.distribute.Strategy` API, you can launch the distributed job via Azure Machine Learning using `distribution` parameters or the `TensorFlowDistribution` object.
-
+If you use [native distributed TensorFlow](https://www.tensorflow.org/guide/distributed_training) in your training code, such as TensorFlow 2.x's `tf.distribute.Strategy` API, you can launch the distributed job via Azure Machine Learning using `distribution` parameters or the `TensorFlowDistribution` object.
[!notebook-python[](~/azureml-examples-main/sdk/python/jobs/single-step/tensorflow/mnist-distributed/tensorflow-mnist-distributed.ipynb?name=job)]
-If your training script uses the parameter server strategy for distributed training, such as for legacy TensorFlow 1.x, you'll also need to specify the number of parameter servers to use in the job, inside the `distribution` parameter of the `command`. In the above, for example, `"parameter_server_count" : 1` and `"worker_count": 2,
+If your training script uses the parameter server strategy for distributed training, such as for legacy TensorFlow 1.x, you also need to specify the number of parameter servers to use in the job, inside the `distribution` parameter of the `command`. In the above, for example, `"parameter_server_count" : 1` and `"worker_count": 2`.
### TF_CONFIG
-In TensorFlow, the **TF_CONFIG** environment variable is required for training on multiple machines. For TensorFlow jobs, Azure Machine Learning will configure and set the TF_CONFIG variable appropriately for each worker before executing your training script.
+In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines. For TensorFlow jobs, Azure Machine Learning configures and sets the `TF_CONFIG` variable appropriately for each worker before executing your training script.
-You can access TF_CONFIG from your training script if you need to: `os.environ['TF_CONFIG']`.
+You can access `TF_CONFIG` from your training script if you need to: `os.environ['TF_CONFIG']`.
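For instance, a minimal sketch (an assumption, not from the article) that reads and parses the variable:

```python
import json
import os

# TF_CONFIG is a JSON string describing the cluster and this worker's task.
tf_config = json.loads(os.environ.get("TF_CONFIG", "{}"))
task = tf_config.get("task", {})
print(f"task type: {task.get('type')}, task index: {task.get('index')}")
```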
+
+Example `TF_CONFIG` set on a chief worker node:
-Example TF_CONFIG set on a chief worker node:
```json
TF_CONFIG='{
    "cluster": {
TF_CONFIG='{
### TensorFlow example

-- For the full notebook to run the above example, see [azureml-examples: Train a basic neural network with distributed MPI on the MNIST dataset using Tensorflow with Horovod](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb)
+* For the full notebook to run the TensorFlow example, see [azureml-examples: Train a basic neural network with distributed MPI on the MNIST dataset using Tensorflow with Horovod](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/single-step/tensorflow/mnist-distributed-horovod/tensorflow-mnist-distributed-horovod.ipynb).
## Accelerating distributed GPU training with InfiniBand

As the number of VMs training a model increases, the time required to train that model should decrease. The decrease in time, ideally, should be linearly proportional to the number of training VMs. For instance, if training a model on one VM takes 100 seconds, then training the same model on two VMs should ideally take 50 seconds. Training the model on four VMs should take 25 seconds, and so on.
-InfiniBand can be an important factor in attaining this linear scaling. InfiniBand enables low-latency, GPU-to-GPU communication across nodes in a cluster. InfiniBand requires specialized hardware to operate. Certain Azure VM series, specifically the NC, ND, and H-series, now have RDMA-capable VMs with SR-IOV and InfiniBand support. These VMs communicate over the low latency and high-bandwidth InfiniBand network, which is much more performant than Ethernet-based connectivity. SR-IOV for InfiniBand enables near bare-metal performance for any MPI library (MPI is used by many distributed training frameworks and tooling, including NVIDIA's NCCL software.) These SKUs are intended to meet the needs of computationally intensive, GPU-acclerated machine learning workloads. For more information, see [Accelerating Distributed Training in Azure Machine Learning with SR-IOV](https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050).
+InfiniBand can be an important factor in attaining this linear scaling. InfiniBand enables low-latency, GPU-to-GPU communication across nodes in a cluster. InfiniBand requires specialized hardware to operate. Certain Azure VM series, specifically the NC, ND, and H-series, now have RDMA-capable VMs with SR-IOV and InfiniBand support. These VMs communicate over the low latency and high-bandwidth InfiniBand network, which is much more performant than Ethernet-based connectivity. SR-IOV for InfiniBand enables near bare-metal performance for any MPI library (MPI is used by many distributed training frameworks and tooling, including NVIDIA's NCCL software.) These SKUs are intended to meet the needs of computationally intensive, GPU-accelerated machine learning workloads. For more information, see [Accelerating Distributed Training in Azure Machine Learning with SR-IOV](https://techcommunity.microsoft.com/t5/azure-ai/accelerating-distributed-training-in-azure-machine-learning/ba-p/1059050).
-Typically, VM SKUs with an 'r' in their name contain the required InfiniBand hardware, and those without an 'r' typically do not. ('r' is a reference to RDMA, which stands for "remote direct memory access.") For instance, the VM SKU `Standard_NC24rs_v3` is InfiniBand-enabled, but the SKU `Standard_NC24s_v3` is not. Aside from the InfiniBand capabilities, the specs between these two SKUs are largely the same ΓÇô both have 24 cores, 448 GB RAM, 4 GPUs of the same SKU, etc. [Learn more about RDMA- and InfiniBand-enabled machine SKUs](../virtual-machines/sizes-hpc.md#rdma-capable-instances).
+Typically, VM SKUs with an "r" in their name contain the required InfiniBand hardware, and those without an "r" typically do not. ("r" is a reference to RDMA, which stands for *remote direct memory access*.) For instance, the VM SKU `Standard_NC24rs_v3` is InfiniBand-enabled, but the SKU `Standard_NC24s_v3` is not. Aside from the InfiniBand capabilities, the specs between these two SKUs are largely the same. Both have 24 cores, 448-GB RAM, 4 GPUs of the same SKU, etc. [Learn more about RDMA- and InfiniBand-enabled machine SKUs](../virtual-machines/sizes-hpc.md#rdma-capable-instances).
>[!WARNING]
->The older-generation machine SKU `Standard_NC24r` is RDMA-enabled, but it does not contain SR-IOV hardware required for InfiniBand.
+>The older-generation machine SKU `Standard_NC24r` is RDMA-enabled, but it doesn't contain SR-IOV hardware required for InfiniBand.
-If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-enabled sizes, the OS image will come with the Mellanox OFED driver required to enable InfiniBand preinstalled and preconfigured.
+If you create an `AmlCompute` cluster of one of these RDMA-capable, InfiniBand-enabled sizes, the OS image comes with the Mellanox OFED driver required to enable InfiniBand preinstalled and preconfigured.
## Next steps

* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
* [Reference architecture for distributed deep learning training in Azure](/azure/architecture/reference-architectures/ai/training-deep-learning)
* [Troubleshooting environment issues](how-to-troubleshoot-environments.md)
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
The session configuration changes persist and become available to another notebook session that uses the same serverless Spark compute.
> [!TIP]
>
-> If you use session-level Conda packages, you can [improve](./how-to-submit-spark-jobs.md#improving-serverless-spark-session-start-up-time-while-using-session-level-conda-packages) the Spark session *cold start* time if you set the configuration variable `spark.hadoop.aml.enable_cache` to true. A session cold start with session level Conda packages typically takes 10 to 15 minutes when the session starts for the first time. However, subsequent session cold starts with the configuration variable set to true typically take three to five minutes.
+> If you use session-level Conda packages, you can [improve](./apache-spark-azure-ml-concepts.md#improving-session-cold-start-time-while-using-session-level-conda-packages) the Spark session *cold start* time if you set the configuration variable `spark.hadoop.aml.enable_cache` to true. A session cold start with session level Conda packages typically takes 10 to 15 minutes when the session starts for the first time. However, subsequent session cold starts with the configuration variable set to true typically take three to five minutes.
### Import and wrangle data from Azure Data Lake Storage (ADLS) Gen 2
machine-learning Troubleshoot Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/troubleshoot-guidance.md
In prompt flow, on the flow page with a successful run and on the run detail page, you can find the raw inputs and outputs of the LLM tool.
You might encounter a 429 error from Azure OpenAI, which means that you've reached the Azure OpenAI rate limit. You can check the error message in the output section of the LLM node. Learn more about the [Azure OpenAI rate limit](../../../ai-services/openai/quotas-limits.md).
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
ms. Previously updated : 01/19/2024 Last updated : 01/25/2024
Requirement | Details
**Internet connectivity** | If servers aren't connected to the internet, install the Log Analytics gateway on the servers.
**Azure Government** | Agent-based dependency analysis isn't supported.
+## Import servers using RVTools XLSX (preview)
+
+As part of your migration journey to Azure using the Azure Migrate appliance, you first discover servers, inventory, and workloads. However, for a quick assessment before you deploy the appliance, you can [import the servers using the RVTools XLSX file (preview)](tutorial-import-vmware-using-rvtools-xlsx.md).
+
+### Key benefits
+
+- Helps to create a business case or assess the servers before you deploy the appliance.
+- Serves as an alternative when there's an organizational restriction on deploying the Azure Migrate appliance.
+- Helpful when you can't share credentials that allow access to on-premises servers.
+- Useful when security constraints prevent you from gathering and sending data collected by the appliance to Azure.
+
+### Limitations
+
+#### [Business case considerations](#tab/businesscase)
+
+If you're importing servers by using an RVTools XLSX file and building a business case, there are a few limitations:
+
+- Performance history duration in Azure settings isn't applicable.
+- Servers are classified as unknown in the business case utilization insights chart and are sized as-is without right sizing for Azure or AVS cost.
+
+#### [Assessment considerations](#tab/assessmentcase)
+
+If you're importing servers by using an RVTools XLSX file to create an assessment with the following criteria:
+- Sizing criteria set to **performance-based** on the configured CPU and memory (based on the CPUs and Memory columns from the RVTools XLSX).
+- Storage criteria (In use MiB; In use MB for versions prior to 4.1.2).
+
+You won't be able to provide performance history or percentile information.
+
+To get an accurate OS suitability/readiness in Azure VM and Azure VMware Solution assessment, enter the **Operating system** version and **architecture** in the respective columns.
## Next steps

- Review [assessment best practices](best-practices-assessment.md).
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Microsoft Azure operated by 21Vianet | China North 2
There are two versions of the Azure Migrate service:
- **Current version**: Using this version you can create new projects, discover on-premises assets, and orchestrate assessments and migrations. [Learn more](whats-new.md).
-- **Previous version**: For customer using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. In the previous version, you can't create new projects or perform new discoveries.
+- **Previous version**: For customers using the previous version of Azure Migrate (only assessment of on-premises VMware VMs was supported), you should now use the current version. In the previous version, you can't create new projects or perform new discoveries.
## Next steps
migrate Tutorial Import Vmware Using Rvtools Xlsx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-import-vmware-using-rvtools-xlsx.md
+
+ Title: Import on-premises servers in a VMware environment using RVTools XLSX (preview)
+description: Learn how to import on-premises servers in a VMware environment by using the RVTools XLSX (preview).
+++ Last updated : 01/25/2024+
+#Customer intent: As a VMware admin, I want to import my on-premises servers running in a VMware environment.
++
+# Tutorial: Import servers running in a VMware environment with RVTools XLSX (preview)
+
+As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
+
+This tutorial shows you how to discover the servers that are running in your VMware environment by using RVTools XLSX (preview). When you use this tool, you can control the data shared in the file and there's no need to set up the Azure Migrate appliance to discover servers. [Learn more](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Prepare an Azure account.
+> * Set up an Azure Migrate project.
+> * Import the RVTools XLSX file.
+
+> [!NOTE]
+> Tutorials show you the quickest path for trying out a scenario. They use default options where possible.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
+
+## Prerequisites
+
+Before you begin this tutorial, ensure that you have the following prerequisites in place:
+
+- A maximum of 20,000 servers in a single XLSX file and 35,000 servers in an Azure Migrate project.
+- The file format should be XLSX.
+- File sensitivity is set to **General** or file protection is set to **Any user**.
+- The [operating system names](migrate-support-matrix.md) specified in the RVTools XLSX (preview) file must match the supported names.
+- The XLSX file must contain the vInfo and vDisk sheets. The vInfo sheet must include the VM, Powerstate, Disks, CPUs, Memory, Provisioned MiB, In use MiB, OS according to the configuration file, and VM UUID columns. The vDisk sheet must include the VM and Capacity MiB columns.
+
+
+## Prepare an Azure user account
+
+To create a project and register the Azure Migrate appliance, you must have an Azure user account that has the following permissions:
+
+- Contributor or Owner permissions in the Azure subscription. Complete the procedure to [set Contributor or Owner permissions in the Azure subscription](tutorial-discover-vmware.md#prepare-an-azure-user-account).
+- Permissions to register Microsoft Entra apps.
+- Owner or Contributor and User Access Administrator permission at subscription level to create an instance of Azure Key Vault, which is used during the agentless server migration.
++
+## Set up an Azure Migrate project
+
+Follow the instructions on [how to set up an Azure Migrate project](tutorial-discover-import.md#set-up-a-project).
++
+## Import the servers using the RVTools XLSX file (preview)
+
+To import the servers using RVTools XLSX (preview) file, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Migrate**.
+1. Under **Migration goals**, select **Servers, databases and web apps**.
+1. On the **Azure Migrate | Servers, databases and web apps** page, under **Assessment tools**, select **Discover** and then select **Using import**.
+
+ :::image type="content" source="./media/tutorial-import-vmware-using-rvtools-xlsx/navigation-using-import.png" alt-text="Screenshot that shows how to navigate to the RVTools import option." lightbox="./media/tutorial-import-vmware-using-rvtools-xlsx/navigation-using-import.png":::
+
+1. On the **Discover** page, in **File type**, select **VMware inventory (RVTools XLSX)**.
+1. In the **Step 1: Import the file** section, select the RVTools XLSX file and then select **Import**.
+
+ :::image type="content" source="./media/tutorial-import-vmware-using-rvtools-xlsx/select-import.png" alt-text="Screenshot that shows to upload, check status and selecting import." lightbox="./media/tutorial-import-vmware-using-rvtools-xlsx/select-import.png":::
+
+ We recommend that you don't close the browser tab or attempt to import again while the current import is in progress. The import status provides the following information:
+ - If there are warnings in the status, you can either fix them or continue without addressing them.
+ - To improve assessment accuracy, improve the server information as suggested in warnings.
+ - If the import status appears as **Failed**, you must fix the errors to continue with the import.
+
+ :::image type="content" source="./media/tutorial-import-vmware-using-rvtools-xlsx/failed-status.png" alt-text="Screenshot that shows to status as failed." lightbox="./media/tutorial-import-vmware-using-rvtools-xlsx/failed-status.png":::
+
+ - To view and fix errors, follow these steps:
+ - Select the *Download error details.XLSX* file. This operation downloads the XLSX file with the warnings included.
+ - Review and address the errors as necessary.
+ - Upload the modified file again.
+
+When the **Import status** is marked as **Complete**, the server information was imported successfully.
+
+## Update server information
+To update the information for a server, follow these steps:
+
+1. In the *Download error details.XLSX* file, update the rows.
+1. To reimport the data, follow the steps from 1-5 in the [Import using the RVTools XLSX file (preview)](#import-the-servers-using-the-rvtools-xlsx-file-preview).
+
+> [!NOTE]
+> Currently, we don't support deleting servers after you import them into a project.
+
+## Verify servers in Azure portal
+
+To verify that the servers appear in the Azure portal after importing, follow these steps:
+
+1. Go to the Azure Migrate dashboard.
+1. On the **Azure Migrate | Servers, databases and web apps** page, in the **Azure Migrate: Discovery and assessment** section, select the icon that displays the count of discovered servers.
+1. Select the **Import based** tab.
++
+## Next steps
+
+- Learn about the [key benefits and limitations of using RVTools XLSX](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 12/07/2023 Last updated : 01/24/2024 # What's new in Azure Migrate
-[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+[Azure Migrate](migrate-services-overview.md) helps you to discover, assess and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+
+## Update (January 2024)
+
+- Public preview: Using the RVTools XLSX file, you can import the configuration of on-premises servers in a VMware environment and quickly create a business case by assessing the cost of Azure and Azure VMware Solution (AVS) environments. [Learn more](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
## Update (December 2023)
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
The Backup and Restore blade in the Azure portal provides a complete list of the
In Azure Database for MySQL flexible server, performing a restore creates a new server from the original server's backups. There are two types of restore available:
- Point-in-time restore: is available with either backup redundancy option and creates a new server in the same region as your original server.
-- Geo-restore: is available only if you configured your server for geo-redundant storage and it allows you to restore your server to either a geo-paired region or any other Azure supported region where flexible server is available. Currently, Geo-restore isn't supported for regions like `Brazil South`, `USGov Virginia`, and `West US 3`.
+- Geo-restore: is available only if you configured your server for geo-redundant storage and it allows you to restore your server to either a geo-paired region or any other Azure supported region where flexible server is available.
The estimated time for the recovery of the server depends on several factors:
You can choose between latest restore point, custom restore point and fastest re
The estimated time of recovery depends on several factors including the database sizes, the transaction log backup size, the compute size of the SKU, and the time of the restore as well. The transaction log recovery is the most time consuming part of the restore process. If the restore time is chosen closer to the snapshot backup schedule, the restore operations are faster since transaction log application is minimal. To estimate the accurate recovery time for your server, we highly recommend testing it in your environment as it has too many environment-specific variables. > [!IMPORTANT]
-> If you are restoring a Azure Database for MySQL flexible server instance configured with zone redundant high availability, the restored server is configured in the same region and zone as your primary server, and deployed as a single server in a non-HA mode. Refer to [zone redundant high availability](concepts-high-availability.md) for flexible server.
+> If you are restoring an Azure Database for MySQL flexible server instance configured with zone redundant high availability, the restored server is configured in the same region and zone as your primary server, and deployed as a single server in a non-HA mode. Refer to [zone redundant high availability](concepts-high-availability.md) for flexible server.
> [!IMPORTANT] > You can recover a deleted Azure Database for MySQL flexible server resource within 5 days from the time of server deletion. For a detailed guide on how to restore a deleted server, [refer documented steps](../flexible-server/how-to-restore-dropped-server.md). To protect server resources post deployment from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md).
mysql Concepts Connection Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-connection-libraries.md
MySQL offers standard database driver connectivity for using MySQL with applicat
| ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) | | ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) | | JDBC | Platform independent | MySQL Connector/J 8.1 Developer Guide | [Download](https://dev.mysql.com/downloads/connector/j/) |
-| Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) |
+| Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/blob/master/website/docs/documentation/00-index.mdx) | [Download](https://github.com/sidorares/node-mysql2) |
| Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) | | C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/refman/8.1/en/connector-cpp-info.html) | [Download](https://dev.mysql.com/downloads/connector/python/) | | C | Windows, Linux, macOS X | [MySQL Connector/C Developer Guide](https://dev.mysql.com/doc/c-api/8.0/en/) | [Download](https://dev.mysql.com/downloads/connector/c/)
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. Azure Datab
| Australia Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | Australia East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Australia Southeast | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
One advantage of running your workload in Azure is its global reach. Azure Datab
| UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | UK West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| USGov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| USGov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| USGov Arizona | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | USGov Texas | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | West US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | West US 2 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
## Contacts
openshift Howto Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-monitor-alerts.md
[Azure Resource Health](/azure/service-health/resource-health-overview?WT.mc_id=Portal-Microsoft_Azure_Health) is a component of Azure Monitor that can be configured to generate alerts based on signals from Azure Red Hat OpenShift clusters. These alerts help you prepare for events such as planned and unplanned maintenance.
-Resource Health alert signals for ARO clusters include the following:
+Resource Health signals can generate one or more of the following alerts:
- **Cluster maintenance operation pending:** This signal indicates that your Azure Red Hat OpenShift cluster will undergo a maintenance operation within the next two weeks. This may cause rolling reboots of nodes resulting in workload pod restarts. - **Cluster maintenance operation in progress:** This signal indicates one of the following operation types:
Resource Health alert signals for ARO clusters include the following:
- **Cluster API server is unreachable:** This signal indicates that the Azure Red Hat OpenShift service Resource Provider is unable to reach your cluster's API server. Your cluster therefore can't be monitored or managed.
-Once the underlying condition causing an alert signal is remediated, the signal is cleared and the alert condition ends.
+Once the underlying condition causing an alert is remediated, the alert is cleared and the Resource Health status is reported as *Available*.
## Creating alert rules
openshift Quickstart Openshift Arm Bicep Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md
Previously updated : 02/15/2023 Last updated : 02/02/2024 keywords: azure, openshift, aro, red hat, arm, bicep zone_pivot_groups: azure-red-hat-openshift #Customer intent: I need to use ARM templates or Bicep files to deploy my Azure Red Hat OpenShift cluster.
SP_OBJECT_ID=$(az ad sp show --id $SP_CLIENT_ID | jq -r '.id')
az role assignment create \ --role 'User Access Administrator' \ --assignee-object-id $SP_OBJECT_ID \
- --resource-group $RESOURCEGROUP \
+ --scope $SCOPE \
--assignee-principal-type 'ServicePrincipal' az role assignment create \ --role 'Contributor' \ --assignee-object-id $SP_OBJECT_ID \
- --resource-group $RESOURCEGROUP \
+ --scope $SCOPE \
--assignee-principal-type 'ServicePrincipal' ```
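+
+If `$SCOPE` isn't already defined from an earlier step, a minimal sketch (assuming the role assignments are scoped to the cluster's resource group, with `$RESOURCEGROUP` set as in the previous commands) is:
+
+```bash
+# Hypothetical helper: use the resource group's resource ID as the assignment scope
+SCOPE=$(az group show --name $RESOURCEGROUP --query id --output tsv)
+```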
az deployment group create \
--parameters location=$LOCATION \ --parameters domain=$DOMAIN \ --parameters pullSecret=$PULL_SECRET \
- --parameters clusterName=$CLUSTER \
+ --parameters clusterName=$ARO_CLUSTER_NAME \
--parameters aadClientId=$SP_CLIENT_ID \ --parameters aadObjectId=$SP_OBJECT_ID \ --parameters aadClientSecret=$SP_CLIENT_SECRET \
operator-insights Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/managed-identity.md
Previously updated : 10/18/2023 Last updated : 01/23/2024 # Managed identity for Azure Operator Insights This article helps you understand managed identity (formerly known as Managed Service Identity/MSI) and how it works in Azure Operator Insights.
-## Overview
+## Overview of managed identities
-Managed identities eliminate the need to manage credentials. Managed identities provide an identity for the service instance when connecting to resources that support Microsoft Entra ID (formerly Azure Active Directory) authentication. For example, the service can use a managed identity to access resources like [Azure Key Vault](../key-vault/general/overview.md), where data admins can securely store credentials or access storage accounts. The service uses the managed identity to obtain Microsoft Entra ID (formerly Azure Active Directory) tokens.
+Managed identities eliminate the need to manage credentials. Managed identities provide an identity for service instances to use when connecting to resources that support Microsoft Entra ID (formerly Azure Active Directory) authentication. For example, the service can use a managed identity to access resources like [Azure Key Vault](../key-vault/general/overview.md), where data admins can securely store credentials or access storage accounts. The service uses the managed identity to obtain Microsoft Entra ID tokens.
-There are two types of supported managed identities:
+Microsoft Entra ID offers two types of managed identities:
-- **System-assigned:** You can enable a managed identity directly on a service instance. When you allow a system-assigned managed identity during the creation of the service, an identity is created in Microsoft Entra ID (formerly Azure Active Directory) tied to that service instance's lifecycle. By design, only that Azure resource can use this identity to request tokens from Azure AD. So when the resource is deleted, Azure automatically deletes the identity for you.
+- **System-assigned:** You can enable a managed identity directly on a resource. When you enable a system-assigned managed identity during the creation of the resource, an identity is created in Microsoft Entra ID tied to that resource's lifecycle. By design, only that Azure resource can use this identity to request tokens from Microsoft Entra ID. When the resource is deleted, Azure automatically deletes the identity for you.
-- **User-assigned:** You can also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). In user-assigned managed identities, the identity is managed separately from the resources that use it.
+- **User-assigned:** You can also create a managed identity as a standalone resource and associate it with other resources. The identity is managed separately from the resources that use it.
-Managed identity provides the below benefits:
+For more general information about managed identities, see [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview).
-- [Store credential in Azure Key Vault](../data-factory/store-credentials-in-key-vault.md), in which case-managed identity is used for Azure Key Vault authentication.
+## User-assigned managed identities in Azure Operator Insights
-- Access data stores or computes using managed identity authentication, including Azure Blob storage, Azure Data Explorer, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, REST, Databricks activity, Web activity, and more.
+Azure Operator Insights Data Products use a user-assigned managed identity for:
-- Managed identity is also used to encrypt/decrypt data and metadata using the customer-managed key stored in Azure Key Vault, providing double encryption.
+- Encryption with customer-managed keys, also called CMK-based encryption.
+- Integration with Microsoft Purview. The managed identity allows the Data Product to manage the collection and the data catalog within the collection.
-## System-assigned managed identity
+When you [create a Data Product](data-product-create.md), you set up the managed identity and associate it with the Data Product. To use the managed identity with Microsoft Purview, you must also [grant the managed identity the appropriate permissions in Microsoft Purview](purview-setup.md#access-and-set-up-your-microsoft-purview-account).
->[!NOTE]
-> System-assigned managed identity is not currently supported with Azure Operator Insights Data Product Resource.
+You use Microsoft Entra ID to manage user-assigned managed identities. For more information, see [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities).
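+
+You can also create one with the Azure CLI; a minimal sketch, with placeholder names:
+
+```bash
+# Create a user-assigned managed identity to associate with a Data Product
+az identity create --name myDataProductIdentity --resource-group myResourceGroup
+```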
-## User-assigned managed identity
+## System-assigned managed identities in Azure Operator Insights
-You can create, delete, manage user-assigned managed identities in Microsoft Entra ID (formerly Azure Active Directory). For more details refer to [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
-
-Once you have created a user-assigned managed identity, you must supply the credentials during or after [Azure Operator Insights Data Product Resource creation](../data-factory/credentials.md).
+Azure Operator Insights doesn't support system-assigned managed identities.
## Related content See [Store credentials in Azure Key Vault](../data-factory/store-credentials-in-key-vault.md) for information about when and how to use managed identity.
-See [Managed Identities for Azure Resources Overview](../active-directory/managed-identities-azure-resources/overview.md) for more background on managed identities for Azure resources, on which managed identity in Azure Operator Insights is based.
+See [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview) for more background on managed identities for Azure resources, on which managed identity in Azure Operator Insights is based.
operator-insights Purview Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/purview-setup.md
You can access your Purview account through the Azure portal by going to `https:
To begin to catalog a data product in this account, [create a collection](../purview/how-to-create-and-manage-collections.md) to hold the Data Product.
-Provide your User-Assigned-Managed-Identity (UAMI) with necessary roles in the Microsoft Purview compliance portal. The UAMI you enter is the one that was set up when creating an AOI Data Product. For information on how to set up this UAMI, refer to [Set up user-assigned managed identity](data-product-create.md#set-up-user-assigned-managed-identity). At the desired collection, assign this UAMI to the **Collection admin**, **Data source admin**, and **Data curator** roles. Alternately, you can apply the UAMI at the root collection/account level. All collections would inherit these role assignments by default.
+Grant the user-assigned managed identity (UAMI) for your Azure Operator Insights Data Product the necessary roles in the Microsoft Purview compliance portal. This UAMI was set up when the Data Product was created. For information on how to set up this UAMI, see [Set up user-assigned managed identity](data-product-create.md#set-up-user-assigned-managed-identity). At the desired collection, assign this UAMI to the **Collection admin**, **Data source admin**, and **Data curator** roles. Alternatively, you can apply the UAMI at the root collection/account level; all collections inherit these role assignments by default.
:::image type="content" source="media/purview-setup/data-product-role-assignments.png" alt-text="Screenshot of collections with Role assignment tab open and icon to add the UAMI to the collection admins role highlighted.":::
operator-nexus Howto Kubernetes Cluster Customize Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-customize-workers.md
+
+ Title: "Azure Operator Nexus: Customize Kubernetes worker nodes with a DaemonSet"
+description: How-to guide for customizing Kubernetes Worker Nodes with a DaemonSet.
++++ Last updated : 01/29/2024+++
+# Customize worker nodes with a DaemonSet
+
+To meet application requirements, you may need to modify operating system settings, enable a Linux kernel module, or install a host-level application package. Use a `DaemonSet` with host privileges to customize worker nodes.
+
+The example `DaemonSet` configures `registry.contoso.com` to bypass the Cloud Services Network proxy for image pulls, installs the SCTP kernel module, and sets `fs.inotify.max_user_instances` to `4096`. Finally, the script labels the Kubernetes node so that the customization runs only once per node.
++
+```yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: customized
+  namespace: kube-system
+spec:
+  selector:
+    matchLabels:
+      name: customized
+  template:
+    metadata:
+      labels:
+        name: customized
+    spec:
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+              - matchExpressions:
+                  - key: customized
+                    operator: NotIn
+                    values:
+                      - "1"
+      tolerations:
+        - operator: Exists
+          effect: NoSchedule
+      containers:
+        - name: customized
+          image: mcr.microsoft.com/cbl-mariner/base/core:1.0
+          command:
+            - nsenter
+            - --target
+            - "1"
+            - --mount
+            - --uts
+            - --ipc
+            - --net
+            - --pid
+            - --
+            - bash
+            - -exc
+            - |
+              # Add registry.contoso.com to containerd's NO_PROXY list (skipped if already present)
+              sed -i '/registry.contoso.com/!s/NO_PROXY=/&registry.contoso.com,/' /etc/systemd/system/containerd.service.d/http-proxy.conf
+              systemctl daemon-reload
+              systemctl restart containerd
+              # Load the SCTP kernel module
+              modprobe sctp
+              # Raise the inotify instance limit
+              sed -i 's/^fs.inotify.max_user_instances.*/fs.inotify.max_user_instances = 4096/' /etc/sysctl.d/90-system-max-limits.conf
+              # Label the node so this DaemonSet doesn't reschedule onto it
+              kubectl --kubeconfig=/etc/kubernetes/kubelet.conf label node ${HOSTNAME,,} customized=1
+              sleep infinity
+          resources:
+            limits:
+              memory: 200Mi
+            requests:
+              cpu: 100m
+              memory: 16Mi
+          securityContext:
+            privileged: true
+      hostNetwork: true
+      hostPID: true
+      hostIPC: true
+      terminationGracePeriodSeconds: 0
+```
+
+Apply the `DaemonSet`:
+
+```bash
+kubectl apply -f /path/to/daemonset.yaml
+```
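+
+To confirm the customization ran, you can check that the `DaemonSet` rolled out and that each node received the label; a minimal sketch:
+
+```bash
+kubectl -n kube-system rollout status daemonset/customized
+kubectl get nodes -l customized=1
+```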
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
To connect by using a Microsoft Entra token with PgAdmin, follow these steps:
Here are some essential considerations when you're connecting: -- `user@tenant.onmicrosoft.com` is the name of the Microsoft Entra user.
+- `user@tenant.onmicrosoft.com` is the display name of the Microsoft Entra user.
- Be sure to use the exact way the Azure user is spelled. Microsoft Entra user and group names are case-sensitive. - If the name contains spaces, use a backslash (`\`) before each space to escape it.
+ You can use the Azure CLI to get the signed-in user and set the value of the `PGUSER` environment variable:
+ ```bash
+ export PGUSER=$(az ad signed-in-user show --query "[displayName]" -o tsv | sed 's/ /\\ /g')
+ ```
- The access token's validity is 5 minutes to 60 minutes. You should get the access token before initiating the sign-in to Azure Database for PostgreSQL. You're now authenticated to your Azure Database for PostgreSQL server through Microsoft Entra authentication.
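+
+To fetch a fresh access token and supply it as the password, you can use the Azure CLI; a minimal sketch:
+
+```bash
+# Acquire a Microsoft Entra access token for Azure Database for PostgreSQL
+export PGPASSWORD=$(az account get-access-token --resource https://ossrdbms-aad.database.windows.net --query accessToken --output tsv)
+```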
postgresql Troubleshoot Password Authentication Failed For User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/troubleshoot-password-authentication-failed-for-user.md
When attempting to connect to Azure Database for PostgreSQL - Flexible Server, y
This error indicates that the password provided for the user `<user-name>` is incorrect.
-Following the initial password authentication error, you might see another error message indicating that the client is trying to reconnect to the server, this time without SSL encryption. The failure here's due to the server's `pg_hba.conf` configuration not permitting unencrypted connections.
+Following the initial password authentication error, you might see another error message indicating that the client is trying to reconnect to the server, this time without SSL encryption. The failure here is due to the server's `pg_hba.conf` configuration not permitting unencrypted connections.
> connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: no pg_hba.conf entry for host "y.y.y.y", user "\<user-name\>", database "postgres", no encryption
The combined error message you receive in this scenario looks like this:
connection to server at "\<server-name\>.postgres.database.azure.com" (x.x.x.x), port 5432 failed: FATAL: no pg_hba.conf entry for host "y.y.y.y", user "\<user-name\>", database "postgres", no encryption
-To avoid this dual attempt and specify the desired SSL mode, you can use the `sslmode` connection option in your client configuration. For instance, if you're using `libpq` variables in the bash shell, you can set the SSL mode by using the following command:
+To avoid this dual attempt and specify the desired SSL mode, use the `sslmode` connection option in your client configuration. For instance, if you're using `libpq` variables in the bash shell, you can set the SSL mode by using the following command:
```bash export PGSSLMODE=require
The error encountered when connecting to Azure Database for PostgreSQL - Flexibl
The password authentication failed for user `<user-name>` error occurs when the password for the user is incorrect. This could happen due to a mistyped password, a recent password change that hasn't been updated in the connection settings, or other similar issues. * **User or role created without a password**
-Another possible cause of this error is creating a user or role in PostgreSQL without specifying a password. Executing commands like `CREATE USER <user-name>` or `CREATE ROLE <role-name>` without an accompanying password statement results in a user or role with no password set. Attempting to connect with such a user or role without setting a password will lead to authentication failure with password authentication failed error.
+Another possible cause of this error is creating a user or role in PostgreSQL without specifying a password. Executing commands like `CREATE USER <user-name>` or `CREATE ROLE <role-name>` without an accompanying password statement results in a user or role with no password set. Attempting to connect as such a user or role before a password is set fails with the password authentication failed error.
* **Potential security breach** If the authentication failure is unexpected, particularly if there are multiple failed attempts recorded, it could indicate a potential security breach. Unauthorized access attempts might trigger such errors.
If you're encountering the "password authentication failed for user `<user-name>
ALTER ROLE <role-name> PASSWORD '<new-password>'; ```
-* **Identify the Attacker's IP Address and Secure Your Database**
+* **If you suspect a potential security breach**
If you suspect a potential security breach is causing unauthorized access to your Azure Database for PostgreSQL - Flexible Server, follow these steps to address the issue: 1. **Enable log capturing**
- If log capturing isn't already on, get it set up now. It's key for keeping an eye on database activities and catching any odd access patterns. There are several ways to do this, including Azure Monitor Log Analytics and server logs, which help store and analyze database event logs.
+ If log capturing isn't already on, get it set up now. Log capturing is key for monitoring database activity and catching unusual access patterns. There are several ways to do this, including Azure Monitor Log Analytics and server logs, which help store and analyze database event logs.
* **Log Analytics**, Check out the setup instructions for Azure Monitor Log Analytics here: [Configure and access logs in Azure Database for PostgreSQL - Flexible Server](how-to-configure-and-access-logs.md). * **Server logs**, For hands-on log management, head over to the Azure portal's server logs section here: [Enable, list and download server logs for Azure Database for PostgreSQL - Flexible Server](how-to-server-logs-portal.md).
If you're encountering the "password authentication failed for user `<user-name>
-By following these steps, you should be able to resolve the authentication issues and successfully connect to your Azure Database for PostgreSQL - Flexible Server.
+By following these steps, you should be able to resolve the authentication issues and successfully connect to your Azure Database for PostgreSQL - Flexible Server. If you're still facing issues after following this guidance, [file a support ticket](../../azure-portal/supportability/how-to-create-azure-support-request.md).
postgresql Best Practices Seamless Migration Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/best-practices-seamless-migration-single-to-flexible.md
To get an idea of the downtime required for migrating your server, we strongly r
## Set up Online migration parameters > [!NOTE]
-> For Online migrations using Single servers running PostgreSQL 9.5 and 9.6, we explicitly have to allow replication connection. To enable that, add a firewall entry to allowlist connection from target. Make sure the firewall rule name has `_replrule` suffix. The suffic isn't required for Single servers running PostgreSQL 10 and 11. Support for **Online** migrations is currently available in UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> For Online migrations using Single servers running PostgreSQL 9.5 and 9.6, you must explicitly allow replication connections. To enable this, add a firewall entry to allowlist connections from the target. Make sure the firewall rule name has the `_replrule` suffix. The suffix isn't required for Single servers running PostgreSQL 10 and 11. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, and the UK, and all public US regions. In other regions, users can enable Online migration at the subscription level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
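+
+A minimal sketch of adding such a `_replrule` firewall rule with the Azure CLI, with placeholder names and a placeholder target IP address:
+
+```bash
+az postgres server firewall-rule create \
+  --resource-group myResourceGroup \
+  --server-name mySingleServer \
+  --name AllowTarget_replrule \
+  --start-ip-address 203.0.113.10 \
+  --end-ip-address 203.0.113.10
+```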
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
The following table lists the different tools available for performing the migra
| pg_dump and pg_restore | Offline | - Tried and tested tool that is in use for a long time<br />- Suited for databases of size less than 10 GB<br />| - Need prior knowledge of setting up and using this tool<br />- Slow when compared to other tools<br />Significant downtime to your application. | > [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, and the UK, and all public US regions. In other regions, users can enable Online migration at the subscription level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="media\concepts-single-to-flexible\online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="media\concepts-single-to-flexible\online-migration-feature-switch.png":::
Along with data migration, the tool automatically provides the following built-i
- Migration of permissions of database objects on your source server such as GRANTS/REVOKES to the target server. > [!NOTE]
-> This functionality is enabled by default for flexible servers in all Azure public regions. It will be enabled for flexible servers in gov clouds and China regions soon.
+> This functionality is enabled by default for flexible servers in all Azure public regions. It will be enabled for flexible servers in gov clouds and China regions soon. This feature is currently disabled for PostgreSQL version 16 servers; support will be introduced in the near future.
## Limitations
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Note these important points for the command response:
- The migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state. > [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, and the UK, and all public US regions. In other regions, users can enable Online migration at the subscription level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
The first tab is **Setup**. Just in case you missed it, allowlist necessary exte
It's always a good practice to choose **Validate** or **Validate and Migrate** option to perform pre-migration validations before running the migration. To learn more about the pre-migration validation refer to this [documentation](./concepts-single-to-flexible.md#pre-migration-validations).
-**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. Support for **Online** migrations is currently available in Central US, France Central, Germany West Central, North Central US, South Central US, North Europe, all West US regions, UK South, South Africa North, UAE North, and all regions across Asia and Australia. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. Support for **Online** migrations is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, and the UK, and all public US regions. In other regions, users can enable Online migration at the subscription level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
postgresql Partners Migration Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/partners-migration-postgresql.md
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
<!--Datasheet Links--> <!--Marketplace Links --> [credativ_marketplace]:https://azuremarketplace.microsoft.com/de-de/marketplace/apps?search=credativ&page=1
-[newt_marketplace]:https://azuremarketplace.microsoft.com/en-in/marketplace/apps/newtglobalconsultingllc1581492268566.dmap_db_container_offer?tab=Overview
[improving_marketplace]:https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/prosourcesolutionsllc1594761633057.azure_database_for_postgresql_migration?tab=Overview&filters=country-unitedstates [quadrant_marketplace_implementation]:https://azuremarketplace.microsoft.com/en-us/marketplace/apps/quadrantresourcellc.quadrant_database_migration_to_oss_implementation?tab=Overview [quadrant_marketplace_assessment]:https://azuremarketplace.microsoft.com/en-us/marketplace/apps/quadrantresourcellc.qmigrator_db_migration_tool?tab=Overview
To learn more about some of Microsoft's other partners, see the [Microsoft Partn
[databene_contact]:https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.data-bene.io%2Fen%23contact&data=05%7C01%7Carianap%40microsoft.com%7C9619e9fb8f20426c479d08db4bcedd2c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C638187124891347095%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=LAv2lRHmJH0kk2tft7LpRwtefQEdTkzwbB2ptoQpt3w%3D&reserved=0 [solliance_contact]:https://solliance.net/Contact [improving_contact]:mailto:toren.huntley@improving.com
-[quadrant_contact]:mailto:migrations@quadrantresource.com
+[quadrant_contact]:mailto:migrations@quadrantresource.com
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 01/29/2024 Last updated : 01/30/2024
The following table provides a brief description of each built-in role. Click th
> | [Virtual Machine Administrator Login](#virtual-machine-administrator-login) | View Virtual Machines in the portal and login as administrator | 1c0163c0-47e6-4577-8991-ea5c82e286e4 | > | [Virtual Machine Contributor](#virtual-machine-contributor) | Create and manage virtual machines, manage disks, install and run software, reset password of the root user of the virtual machine using VM extensions, and manage local user accounts using VM extensions. This role does not grant you management access to the virtual network or storage account the virtual machines are connected to. This role does not allow you to assign roles in Azure RBAC. | 9980e02c-c2be-4d73-94e8-173b1dc7cf3c | > | [Virtual Machine Data Access Administrator (preview)](#virtual-machine-data-access-administrator-preview) | Manage access to Virtual Machines by adding or removing role assignments for the Virtual Machine Administrator Login and Virtual Machine User Login roles. Includes an ABAC condition to constrain role assignments. | 66f75aeb-eabe-4b70-9f1e-c350c4c9ad04 |
+> | [Virtual Machine Local User Login](#virtual-machine-local-user-login) | View Virtual Machines in the portal and login as a local user configured on the arc server | 602da2ba-a5c2-41da-b01d-5360126ab525 |
> | [Virtual Machine User Login](#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 | > | [Windows Admin Center Administrator Login](#windows-admin-center-administrator-login) | Let's you manage the OS of your resource via Windows Admin Center as an administrator. | a6333a3e-0164-44c3-b281-7a577aff287f | > | **Networking** | | |
The following table provides a brief description of each built-in role. Click th
> | [Azure Service Bus Data Owner](#azure-service-bus-data-owner) | Allows for full access to Azure Service Bus resources. | 090c5cfd-751d-490a-894a-3ce6f1109419 | > | [Azure Service Bus Data Receiver](#azure-service-bus-data-receiver) | Allows for receive access to Azure Service Bus resources. | 4f6d3b9b-027b-4f4c-9142-0e5a2a2247e0 | > | [Azure Service Bus Data Sender](#azure-service-bus-data-sender) | Allows for send access to Azure Service Bus resources. | 69a216fc-b8fb-44d8-bc22-1f3c2cd27a39 |
+> | [Azure Stack HCI Administrator](#azure-stack-hci-administrator) | Grants full access to the cluster and its resources, including the ability to register Azure Stack HCI and assign others as Azure Arc HCI VM Contributor and/or Azure Arc HCI VM Reader | bda0d508-adf1-4af0-9c28-88919fc3ae06 |
+> | [Azure Stack HCI Device Management Role](#azure-stack-hci-device-management-role) | Microsoft.AzureStackHCI Device Management Role | 865ae368-6a45-4bd1-8fbf-0d5151f56fc1 |
+> | [Azure Stack HCI VM Contributor](#azure-stack-hci-vm-contributor) | Grants permissions to perform all VM actions | 874d1c73-6003-4e60-a13a-cb31ea190a85 |
+> | [Azure Stack HCI VM Reader](#azure-stack-hci-vm-reader) | Grants permissions to view VMs | 4b3fe76c-f777-4d24-a2d7-b027b0f7b273 |
> | [Azure Stack Registration Owner](#azure-stack-registration-owner) | Lets you manage Azure Stack registrations. | 6f12a6df-dd06-4f3e-bcb1-ce8be600526a | > | [EventGrid Contributor](#eventgrid-contributor) | Lets you manage EventGrid operations. | 1e241071-0855-49ea-94dc-649edcd759de | > | [EventGrid Data Sender](#eventgrid-data-sender) | Allows send access to event grid events. | d5a91429-5739-47e2-a06b-3470a27159e7 |
The following table provides a brief description of each built-in role. Click th
> | [Integration Service Environment Contributor](#integration-service-environment-contributor) | Lets you manage integration service environments, but not access to them. | a41e2c5b-bd99-4a07-88f4-9bf657a760b8 | > | [Integration Service Environment Developer](#integration-service-environment-developer) | Allows developers to create and update workflows, integration accounts and API connections in integration service environments. | c7aa55d3-1abb-444a-a5ca-5e51e485d6ec | > | [Intelligent Systems Account Contributor](#intelligent-systems-account-contributor) | Lets you manage Intelligent Systems accounts, but not access to them. | 03a6d094-3444-4b3d-88af-7477090a9e5e |
-> | [Logic App Contributor](#logic-app-contributor) | Lets you manage Consumption logic apps, but not change access to them. | 87a39d53-fc1b-424a-814c-f7e04687dc9e |
-> | [Logic App Operator](#logic-app-operator) | Lets you read, enable, and disable Consumption logic apps, but not edit or update them. | 515c2055-d9d4-4321-b1b9-bd0c9a0f79fe |
-> | [Logic Apps Standard Contributor (Preview)](#logic-apps-standard-contributor) | You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership. | ad710c24-b039-4e85-a019-deb4a06e8570 |
-> | [Logic Apps Standard Developer (Preview)](#logic-apps-standard-developer) | You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope. | 523776ba-4eb24-600a-3c8f-2dc93da4bdb |
-> | [Logic Apps Standard Operator (Preview)](#logic-apps-standard-operator) | You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings. | b70c96e9-66fe-4c09-b6e7-c98e69c98555 |
-> | [Logic Apps Standard Reader (Preview)](#logic-apps-standard-reader) | You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. | 4accf36b-2c05-432f-91c8-5c532dff4c73 |
+> | [Logic App Contributor](#logic-app-contributor) | Lets you manage logic apps, but not change access to them. | 87a39d53-fc1b-424a-814c-f7e04687dc9e |
+> | [Logic App Operator](#logic-app-operator) | Lets you read, enable, and disable logic apps, but not edit or update them. | 515c2055-d9d4-4321-b1b9-bd0c9a0f79fe |
+> | [Logic Apps Standard Contributor (Preview)](#logic-apps-standard-contributor-preview) | You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership. | ad710c24-b039-4e85-a019-deb4a06e8570 |
+> | [Logic Apps Standard Developer (Preview)](#logic-apps-standard-developer-preview) | You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope. | 523776ba-4eb2-4600-a3c8-f2dc93da4bdb |
+> | [Logic Apps Standard Operator (Preview)](#logic-apps-standard-operator-preview) | You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings. | b70c96e9-66fe-4c09-b6e7-c98e69c98555 |
+> | [Logic Apps Standard Reader (Preview)](#logic-apps-standard-reader-preview) | You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. | 4accf36b-2c05-432f-91c8-5c532dff4c73 |
> | **Identity** | | | > | [Domain Services Contributor](#domain-services-contributor) | Can manage Azure AD Domain Services and related network configurations | eeaeda52-9324-47f6-8069-5d5bade478b2 | > | [Domain Services Reader](#domain-services-reader) | Can view Azure AD Domain Services and related network configurations | 361898ef-9ed1-48c2-849c-a832951106bb |
The following table provides a brief description of each built-in role. Click th
> | [Azure Arc Kubernetes Writer](#azure-arc-kubernetes-writer) | Lets you update everything in cluster/namespace, except (cluster)roles and (cluster)role bindings. | 5b999177-9696-4545-85c7-50de3797e5a1 | > | [Azure Connected Machine Onboarding](#azure-connected-machine-onboarding) | Can onboard Azure Connected Machines. | b64e21ea-ac4e-4cdf-9dc9-5b892992bee7 | > | [Azure Connected Machine Resource Administrator](#azure-connected-machine-resource-administrator) | Can read, write, delete and re-onboard Azure Connected Machines. | cd570a14-e51a-42ad-bac8-bafd67325302 |
+> | [Azure Connected Machine Resource Manager](#azure-connected-machine-resource-manager) | Custom Role for AzureStackHCI RP to manage hybrid compute machines and hybrid connectivity endpoints in a resource group | f5819b54-e033-4d82-ac66-4fec3cbf3f4c |
+> | [Azure Resource Bridge Deployment Role](#azure-resource-bridge-deployment-role) | Azure Resource Bridge Deployment Role | 7b1f81f9-4196-4058-8aae-762e593270df |
> | [Billing Reader](#billing-reader) | Allows read access to billing data | fa23ad8b-c56e-40d8-ac0c-ce449e1d2c64 | > | [Blueprint Contributor](#blueprint-contributor) | Can manage blueprint definitions, but not assign them. | 41077137-e803-4205-871c-5a86e6a753b4 | > | [Blueprint Operator](#blueprint-operator) | Can assign existing published blueprints, but cannot create new blueprints. Note that this only works if the assignment is done with a user-assigned managed identity. | 437d2ced-4a38-4302-8479-ed2bcb43d090 |
Manage access to Virtual Machines by adding or removing role assignments for the
} ```
+### Virtual Machine Local User Login
+
+View Virtual Machines in the portal and login as a local user configured on the arc server
+
+[Learn more](/azure/azure-arc/servers/ssh-arc-troubleshoot)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/*/read | |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listCredentials/action | Gets the endpoint access credentials to the resource. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "View Virtual Machines in the portal and login as a local user configured on the arc server",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/602da2ba-a5c2-41da-b01d-5360126ab525",
+ "name": "602da2ba-a5c2-41da-b01d-5360126ab525",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.HybridCompute/machines/*/read",
+ "Microsoft.HybridConnectivity/endpoints/listCredentials/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Virtual Machine Local User Login",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
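+
+A minimal sketch of assigning this role for an Arc-enabled server with the Azure CLI (the user, subscription, resource group, and machine names are placeholders):
+
+```bash
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Virtual Machine Local User Login" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.HybridCompute/machines/<machine-name>"
+```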
+ ### Virtual Machine User Login View Virtual Machines in the portal and login as a regular user.
Let's you manage the OS of your resource via Windows Admin Center as an administ
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule | > | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Update the endpoint to the target resource. | > | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Gets the endpoint to the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/serviceConfigurations/write | Update the service details in the service configurations of the target resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/serviceConfigurations/read | Gets the details about the service to the resource. |
> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | Fetches the managed proxy details | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/read | Get the properties of a virtual machine | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/read | Retrieves the summary of the latest patch assessment operation |
Let's you manage the OS of your resource via Windows Admin Center as an administ
"Microsoft.Network/networkSecurityGroups/securityRules/write", "Microsoft.HybridConnectivity/endpoints/write", "Microsoft.HybridConnectivity/endpoints/read",
+ "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/write",
+ "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/read",
"Microsoft.HybridConnectivity/endpoints/listManagedProxyDetails/action", "Microsoft.Compute/virtualMachines/read", "Microsoft.Compute/virtualMachines/patchAssessmentResults/latest/read",
Can view AFD standard and premium profiles and their endpoints, but can't make c
> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/querywafloganalyticsmetrics/action | | > | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/querywafloganalyticsrankings/action | | > | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/afdendpoints/CheckCustomDomainDNSMappingStatus/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/Usages/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/afdendpoints/Usages/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/origingroups/Usages/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/rulesets/Usages/action | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can view AFD standard and premium profiles and their endpoints, but can't make c
"Microsoft.Cdn/profiles/queryloganalyticsrankings/action", "Microsoft.Cdn/profiles/querywafloganalyticsmetrics/action", "Microsoft.Cdn/profiles/querywafloganalyticsrankings/action",
- "Microsoft.Cdn/profiles/afdendpoints/CheckCustomDomainDNSMappingStatus/action"
+ "Microsoft.Cdn/profiles/afdendpoints/CheckCustomDomainDNSMappingStatus/action",
+ "Microsoft.Cdn/profiles/Usages/action",
+ "Microsoft.Cdn/profiles/afdendpoints/Usages/action",
+ "Microsoft.Cdn/profiles/origingroups/Usages/action",
+ "Microsoft.Cdn/profiles/rulesets/Usages/action"
], "notActions": [], "dataActions": [],
Can view CDN profiles and their endpoints, but can't make changes.
> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/afdendpoints/validateCustomDomain/action | | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/CheckResourceUsage/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/endpoints/CheckResourceUsage/action | |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can view CDN profiles and their endpoints, but can't make changes.
"Microsoft.Resources/deployments/*", "Microsoft.Cdn/profiles/afdendpoints/validateCustomDomain/action", "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Support/*"
+ "Microsoft.Support/*",
+ "Microsoft.Cdn/profiles/CheckResourceUsage/action",
+ "Microsoft.Cdn/profiles/endpoints/CheckResourceUsage/action"
], "notActions": [], "dataActions": [],
Grants access to read map related data from an Azure maps account.
Allow read, write and delete access to Azure Spring Cloud Config Server
-[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
+[Learn more](/azure/spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allow read, write and delete access to Azure Spring Cloud Config Server
Allow read access to Azure Spring Cloud Config Server
-[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
+[Learn more](/azure/spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allow read access to Azure Spring Cloud Data
Allow read, write and delete access to Azure Spring Cloud Service Registry
-[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
+[Learn more](/azure/spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allow read, write and delete access to Azure Spring Cloud Service Registry
Allow read access to Azure Spring Cloud Service Registry
-[Learn more](../spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac.md)
+[Learn more](/azure/spring-apps/basic-standard/how-to-access-data-plane-azure-ad-rbac)
> [!div class="mx-tableFixed"] > | Actions | Description |
Allows for send access to Azure Service Bus resources.
} ```
-### Azure Stack Registration Owner
+### Azure Stack HCI Administrator
-Lets you manage Azure Stack registrations.
+Grants full access to the cluster and its resources, including the ability to register Azure Stack HCI and assign others as Azure Arc HCI VM Contributor and/or Azure Arc HCI VM Reader
+
+[Learn more](/azure-stack/hci/manage/assign-vm-rbac-roles)
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/edgeSubscriptions/read | |
-> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/registrations/products/*/action | |
-> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/registrations/products/read | Gets the properties of an Azure Stack Marketplace product |
-> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/registrations/read | Gets the properties of an Azure Stack registration |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/register/action | Registers the subscription for the Azure Stack HCI resource provider and enables the creation of Azure Stack HCI resources. |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Unregister/Action | Unregisters the subscription for the Azure Stack HCI resource provider. |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/clusters/* | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/register/action | Registers the subscription for the Microsoft.HybridCompute Resource Provider |
+> | [Microsoft.GuestConfiguration](resource-provider-operations.md#microsoftguestconfiguration)/register/action | Registers the subscription for the Microsoft.GuestConfiguration resource provider. |
+> | [Microsoft.GuestConfiguration](resource-provider-operations.md#microsoftguestconfiguration)/guestConfigurationAssignments/read | Get guest configuration assignment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/write | Creates or updates a resource group. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/delete | Deletes a resource group and all its resources. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/register/action | Register the subscription for Microsoft.HybridConnectivity |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | Create a role assignment at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | Delete a role assignment at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/* | |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Write | Create or update a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Delete | Delete a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Activated/Action | Classic metric alert activated |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Resolved/Action | Classic metric alert resolved |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Throttled/Action | Classic metric alert rule throttled |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/write | Creates or updates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/read | Read any Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/write | Writes an Azure Arc machine |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/delete | Deletes an Azure Arc machine |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/UpgradeExtensions/action | Upgrades Extensions on Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/assessPatches/action | Assesses any Azure Arc machines to get missing software patches |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/installPatches/action | Installs patches on any Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/read | Reads any Azure Arc extensions |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/write | Installs or Updates an Azure Arc extension |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/delete | Deletes an Azure Arc extension |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/operations/read | Read all Operations for Azure Arc for Servers |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/locations/operationresults/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/locations/operationstatus/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchAssessmentResults/read | Reads any Azure Arc patchAssessmentResults |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchAssessmentResults/softwarePatches/read | Reads any Azure Arc patchAssessmentResults/softwarePatches |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchInstallationResults/read | Reads any Azure Arc patchInstallationResults |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchInstallationResults/softwarePatches/read | Reads any Azure Arc patchInstallationResults/softwarePatches |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/locations/updateCenterOperationResults/read | Reads the status of an update center operation on machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/hybridIdentityMetadata/read | Read any Azure Arc machine's Hybrid Identity Metadata |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/osType/agentVersions/read | Read all Azure Connected Machine Agent versions available |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/osType/agentVersions/latest/read | Read the latest Azure Connected Machine Agent version |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/runcommands/read | Reads any Azure Arc runcommands |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/runcommands/write | Installs or Updates an Azure Arc runcommand |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/runcommands/delete | Deletes an Azure Arc runcommand |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/read | Reads any Azure Arc licenseProfiles |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/write | Installs or Updates an Azure Arc licenseProfile |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/delete | Deletes an Azure Arc licenseProfile |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/read | Reads any Azure Arc licenses |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/write | Installs or Updates an Azure Arc license |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/delete | Deletes an Azure Arc license |
+> | Microsoft.ResourceConnector/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
+> | Microsoft.ResourceConnector/appliances/read | Gets an Appliance resource |
+> | Microsoft.ResourceConnector/appliances/write | Creates or Updates Appliance resource |
+> | Microsoft.ResourceConnector/appliances/delete | Deletes Appliance resource |
+> | Microsoft.ResourceConnector/locations/operationresults/read | Get result of Appliance operation |
+> | Microsoft.ResourceConnector/locations/operationsstatus/read | Get result of Appliance operation |
+> | Microsoft.ResourceConnector/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
+> | Microsoft.ResourceConnector/appliances/listKeys/action | Get an appliance cluster customer user keys |
+> | Microsoft.ResourceConnector/operations/read | Gets list of Available Operations for Appliances |
+> | Microsoft.ExtendedLocation/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
+> | Microsoft.ExtendedLocation/customLocations/read | Gets a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/write | Creates or Updates Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/delete | Deletes Custom Location resource |
+> | Microsoft.EdgeMarketplace/offers/read | Get an Offer |
+> | Microsoft.EdgeMarketplace/publishers/read | Get a Publisher |
+> | [Microsoft.Kubernetes](resource-provider-operations.md#microsoftkubernetes)/register/action | Registers Subscription with Microsoft.Kubernetes resource provider |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/register/action | Registers subscription to Microsoft.KubernetesConfiguration resource provider. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/write | Creates or updates extension resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/delete | Deletes extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/operations/read | Gets Async Operation status. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/namespaces/read | Get Namespace Resource |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/operations/read | Gets available operations of the Microsoft.KubernetesConfiguration resource provider. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/Write | Creates/Updates storage containers resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
+> | Microsoft.HybridContainerService/register/action | Register the subscription for Microsoft.HybridContainerService |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
> | *none* | |
> | **NotDataActions** | |
> | *none* | |
+> | **Condition** | |
+> | ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{f5819b54-e033-4d82-ac66-4fec3cbf3f4c, cd570a14-e51a-42ad-bac8-bafd67325302, b64e21ea-ac4e-4cdf-9dc9-5b892992bee7, 4b3fe76c-f777-4d24-a2d7-b027b0f7b273, 874d1c73-6003-4e60-a13a-cb31ea190a85,865ae368-6a45-4bd1-8fbf-0d5151f56fc1,7b1f81f9-4196-4058-8aae-762e593270df,4633458b-17de-408a-b874-0445c86b69e6})) AND ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/delete'})) OR (@Resource[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{f5819b54-e033-4d82-ac66-4fec3cbf3f4c, cd570a14-e51a-42ad-bac8-bafd67325302, b64e21ea-ac4e-4cdf-9dc9-5b892992bee7, 4b3fe76c-f777-4d24-a2d7-b027b0f7b273, 874d1c73-6003-4e60-a13a-cb31ea190a85,865ae368-6a45-4bd1-8fbf-0d5151f56fc1,7b1f81f9-4196-4058-8aae-762e593270df,4633458b-17de-408a-b874-0445c86b69e6})) | Add or remove role assignments for the following roles:<br/>Azure Connected Machine Resource Manager<br/>Azure Connected Machine Resource Administrator<br/>Azure Connected Machine Onboarding<br/>Azure Stack HCI VM Reader<br/>Azure Stack HCI VM Contributor<br/>Azure Stack HCI Device Management Role<br/>Azure Resource Bridge Deployment Role<br/>Key Vault Secrets User |
```json
{
  "assignableScopes": [
    "/"
  ],
- "description": "Lets you manage Azure Stack registrations.",
- "id": "/providers/Microsoft.Authorization/roleDefinitions/6f12a6df-dd06-4f3e-bcb1-ce8be600526a",
- "name": "6f12a6df-dd06-4f3e-bcb1-ce8be600526a",
+ "description": "Grants full access to the cluster and its resources, including the ability to register Azure Stack HCI and assign others as Azure Arc HCI VM Contributor and/or Azure Arc HCI VM Reader",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/bda0d508-adf1-4af0-9c28-88919fc3ae06",
+ "name": "bda0d508-adf1-4af0-9c28-88919fc3ae06",
"permissions": [ { "actions": [
- "Microsoft.AzureStack/edgeSubscriptions/read",
- "Microsoft.AzureStack/registrations/products/*/action",
- "Microsoft.AzureStack/registrations/products/read",
- "Microsoft.AzureStack/registrations/read"
+ "Microsoft.AzureStackHCI/register/action",
+ "Microsoft.AzureStackHCI/Unregister/Action",
+ "Microsoft.AzureStackHCI/clusters/*",
+ "Microsoft.HybridCompute/register/action",
+ "Microsoft.GuestConfiguration/register/action",
+ "Microsoft.GuestConfiguration/guestConfigurationAssignments/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/write",
+ "Microsoft.Resources/subscriptions/resourceGroups/delete",
+ "Microsoft.HybridConnectivity/register/action",
+ "Microsoft.Authorization/roleAssignments/write",
+ "Microsoft.Authorization/roleAssignments/delete",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Management/managementGroups/read",
+ "Microsoft.Support/*",
+ "Microsoft.AzureStackHCI/*",
+ "Microsoft.Insights/AlertRules/Write",
+ "Microsoft.Insights/AlertRules/Delete",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Activated/Action",
+ "Microsoft.Insights/AlertRules/Resolved/Action",
+ "Microsoft.Insights/AlertRules/Throttled/Action",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/write",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/operationstatuses/read",
+ "Microsoft.ResourceHealth/availabilityStatuses/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.HybridCompute/machines/read",
+ "Microsoft.HybridCompute/machines/write",
+ "Microsoft.HybridCompute/machines/delete",
+ "Microsoft.HybridCompute/machines/UpgradeExtensions/action",
+ "Microsoft.HybridCompute/machines/assessPatches/action",
+ "Microsoft.HybridCompute/machines/installPatches/action",
+ "Microsoft.HybridCompute/machines/extensions/read",
+ "Microsoft.HybridCompute/machines/extensions/write",
+ "Microsoft.HybridCompute/machines/extensions/delete",
+ "Microsoft.HybridCompute/operations/read",
+ "Microsoft.HybridCompute/locations/operationresults/read",
+ "Microsoft.HybridCompute/locations/operationstatus/read",
+ "Microsoft.HybridCompute/machines/patchAssessmentResults/read",
+ "Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches/read",
+ "Microsoft.HybridCompute/machines/patchInstallationResults/read",
+ "Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches/read",
+ "Microsoft.HybridCompute/locations/updateCenterOperationResults/read",
+ "Microsoft.HybridCompute/machines/hybridIdentityMetadata/read",
+ "Microsoft.HybridCompute/osType/agentVersions/read",
+ "Microsoft.HybridCompute/osType/agentVersions/latest/read",
+ "Microsoft.HybridCompute/machines/runcommands/read",
+ "Microsoft.HybridCompute/machines/runcommands/write",
+ "Microsoft.HybridCompute/machines/runcommands/delete",
+ "Microsoft.HybridCompute/machines/licenseProfiles/read",
+ "Microsoft.HybridCompute/machines/licenseProfiles/write",
+ "Microsoft.HybridCompute/machines/licenseProfiles/delete",
+ "Microsoft.HybridCompute/licenses/read",
+ "Microsoft.HybridCompute/licenses/write",
+ "Microsoft.HybridCompute/licenses/delete",
+ "Microsoft.ResourceConnector/register/action",
+ "Microsoft.ResourceConnector/appliances/read",
+ "Microsoft.ResourceConnector/appliances/write",
+ "Microsoft.ResourceConnector/appliances/delete",
+ "Microsoft.ResourceConnector/locations/operationresults/read",
+ "Microsoft.ResourceConnector/locations/operationsstatus/read",
+ "Microsoft.ResourceConnector/appliances/listClusterUserCredential/action",
+ "Microsoft.ResourceConnector/appliances/listKeys/action",
+ "Microsoft.ResourceConnector/operations/read",
+ "Microsoft.ExtendedLocation/register/action",
+ "Microsoft.ExtendedLocation/customLocations/read",
+ "Microsoft.ExtendedLocation/customLocations/deploy/action",
+ "Microsoft.ExtendedLocation/customLocations/write",
+ "Microsoft.ExtendedLocation/customLocations/delete",
+ "Microsoft.EdgeMarketplace/offers/read",
+ "Microsoft.EdgeMarketplace/publishers/read",
+ "Microsoft.Kubernetes/register/action",
+ "Microsoft.KubernetesConfiguration/register/action",
+ "Microsoft.KubernetesConfiguration/extensions/write",
+ "Microsoft.KubernetesConfiguration/extensions/read",
+ "Microsoft.KubernetesConfiguration/extensions/delete",
+ "Microsoft.KubernetesConfiguration/extensions/operations/read",
+ "Microsoft.KubernetesConfiguration/namespaces/read",
+ "Microsoft.KubernetesConfiguration/operations/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.AzureStackHCI/StorageContainers/Write",
+ "Microsoft.AzureStackHCI/StorageContainers/Read",
+ "Microsoft.HybridContainerService/register/action"
], "notActions": [], "dataActions": [],
- "notDataActions": []
+ "notDataActions": [],
+ "conditionVersion": "2.0",
+ "condition": "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{f5819b54-e033-4d82-ac66-4fec3cbf3f4c, cd570a14-e51a-42ad-bac8-bafd67325302, b64e21ea-ac4e-4cdf-9dc9-5b892992bee7, 4b3fe76c-f777-4d24-a2d7-b027b0f7b273, 874d1c73-6003-4e60-a13a-cb31ea190a85,865ae368-6a45-4bd1-8fbf-0d5151f56fc1,7b1f81f9-4196-4058-8aae-762e593270df,4633458b-17de-408a-b874-0445c86b69e6})) AND ((!(ActionMatches{'Microsoft.Authorization/roleAssignments/delete'})) OR (@Resource[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals{f5819b54-e033-4d82-ac66-4fec3cbf3f4c, cd570a14-e51a-42ad-bac8-bafd67325302, b64e21ea-ac4e-4cdf-9dc9-5b892992bee7, 4b3fe76c-f777-4d24-a2d7-b027b0f7b273, 874d1c73-6003-4e60-a13a-cb31ea190a85,865ae368-6a45-4bd1-8fbf-0d5151f56fc1,7b1f81f9-4196-4058-8aae-762e593270df,4633458b-17de-408a-b874-0445c86b69e6}))"
    }
  ],
- "roleName": "Azure Stack Registration Owner",
+ "roleName": "Azure Stack HCI Administrator",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### EventGrid Contributor
+### Azure Stack HCI Device Management Role
-Lets you manage EventGrid operations.
+Microsoft.AzureStackHCI Device Management Role
+
+[Learn more](/azure-stack/hci/deploy/deployment-azure-resource-manager-template)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/* | Create and manage Event Grid resources |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/* | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/EdgeDevices/* | |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
Lets you manage EventGrid operations.
"assignableScopes": [ "/" ],
- "description": "Lets you manage EventGrid operations.",
- "id": "/providers/Microsoft.Authorization/roleDefinitions/1e241071-0855-49ea-94dc-649edcd759de",
- "name": "1e241071-0855-49ea-94dc-649edcd759de",
+ "description": "Microsoft.AzureStackHCI Device Management Role",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/865ae368-6a45-4bd1-8fbf-0d5151f56fc1",
+ "name": "865ae368-6a45-4bd1-8fbf-0d5151f56fc1",
"permissions": [ { "actions": [
- "Microsoft.Authorization/*/read",
- "Microsoft.EventGrid/*",
- "Microsoft.Insights/alertRules/*",
- "Microsoft.Resources/deployments/*",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Support/*"
+ "Microsoft.AzureStackHCI/Clusters/*",
+ "Microsoft.AzureStackHCI/EdgeDevices/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
], "notActions": [], "dataActions": [], "notDataActions": [] } ],
- "roleName": "EventGrid Contributor",
+ "roleName": "Azure Stack HCI Device Management Role",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### EventGrid Data Sender
+### Azure Stack HCI VM Contributor
-Allows send access to event grid events.
+Grants permissions to perform all VM actions
+
+[Learn more](/azure-stack/hci/manage/assign-vm-rbac-roles)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualMachines/* | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/virtualMachineInstances/* | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/NetworkInterfaces/* | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualHardDisks/* | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualNetworks/Read | Gets/Lists virtual networks resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualNetworks/join/action | Joins virtual networks resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/LogicalNetworks/Read | Gets/Lists logical networks resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/LogicalNetworks/join/action | Joins logical networks resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/GalleryImages/Read | Gets/Lists gallery images resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/GalleryImages/deploy/action | Deploys gallery images resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/deploy/action | Deploys storage containers resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/MarketplaceGalleryImages/Read | Gets/Lists market place gallery images resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/MarketPlaceGalleryImages/deploy/action | Deploys market place gallery images resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/Read | Gets clusters |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Clusters/ArcSettings/Read | Gets arc resource of HCI cluster |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Write | Create or update a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Delete | Delete a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Activated/Action | Classic metric alert activated |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Resolved/Action | Classic metric alert resolved |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Throttled/Action | Classic metric alert rule throttled |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/write | Creates or updates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/delete | Deletes a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/cancel/action | Cancels a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/validate/action | Validates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/whatIf/action | Predicts template deployment changes. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/exportTemplate/action | Export template for a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/write | Creates or updates a deployment. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/topics/read | Read a topic |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/domains/read | Read a domain |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/partnerNamespaces/read | Read a partner namespace |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/namespaces/read | Read a namespace |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/read | Read any Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/write | Writes an Azure Arc machine |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/delete | Deletes an Azure Arc machine |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/UpgradeExtensions/action | Upgrades Extensions on Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/assessPatches/action | Assesses any Azure Arc machines to get missing software patches |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/installPatches/action | Installs patches on any Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/read | Reads any Azure Arc extensions |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/write | Installs or Updates an Azure Arc extension |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/delete | Deletes an Azure Arc extension |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/operations/read | Read all Operations for Azure Arc for Servers |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/locations/operationresults/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/locations/operationstatus/read | Reads the status of an operation on Microsoft.HybridCompute Resource Provider |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchAssessmentResults/read | Reads any Azure Arc patchAssessmentResults |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchAssessmentResults/softwarePatches/read | Reads any Azure Arc patchAssessmentResults/softwarePatches |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchInstallationResults/read | Reads any Azure Arc patchInstallationResults |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/patchInstallationResults/softwarePatches/read | Reads any Azure Arc patchInstallationResults/softwarePatches |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/locations/updateCenterOperationResults/read | Reads the status of an update center operation on machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/hybridIdentityMetadata/read | Read any Azure Arc machine's Hybrid Identity Metadata |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/osType/agentVersions/read | Read all Azure Connected Machine Agent versions available |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/osType/agentVersions/latest/read | Read the latest Azure Connected Machine Agent version |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/runcommands/read | Reads any Azure Arc runcommands |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/runcommands/write | Installs or Updates an Azure Arc runcommand |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/runcommands/delete | Deletes an Azure Arc runcommand |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/read | Reads any Azure Arc licenseProfiles |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/write | Installs or Updates an Azure Arc licenseProfile |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/delete | Deletes an Azure Arc licenseProfile |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/read | Reads any Azure Arc licenses |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/write | Installs or Updates an Azure Arc license |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/delete | Deletes an Azure Arc license |
+> | Microsoft.ExtendedLocation/customLocations/Read | Gets a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/events/send/action | Send events to topics |
+> | *none* | |
> | **NotDataActions** | |
> | *none* | |
Allows send access to event grid events.
"assignableScopes": [ "/" ],
- "description": "Allows send access to event grid events.",
- "id": "/providers/Microsoft.Authorization/roleDefinitions/d5a91429-5739-47e2-a06b-3470a27159e7",
- "name": "d5a91429-5739-47e2-a06b-3470a27159e7",
+ "description": "Grants permissions to perform all VM actions",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/874d1c73-6003-4e60-a13a-cb31ea190a85",
+ "name": "874d1c73-6003-4e60-a13a-cb31ea190a85",
"permissions": [ { "actions": [
+ "Microsoft.AzureStackHCI/VirtualMachines/*",
+ "Microsoft.AzureStackHCI/virtualMachineInstances/*",
+ "Microsoft.AzureStackHCI/NetworkInterfaces/*",
+ "Microsoft.AzureStackHCI/VirtualHardDisks/*",
+ "Microsoft.AzureStackHCI/VirtualNetworks/Read",
+ "Microsoft.AzureStackHCI/VirtualNetworks/join/action",
+ "Microsoft.AzureStackHCI/LogicalNetworks/Read",
+ "Microsoft.AzureStackHCI/LogicalNetworks/join/action",
+ "Microsoft.AzureStackHCI/GalleryImages/Read",
+ "Microsoft.AzureStackHCI/GalleryImages/deploy/action",
+ "Microsoft.AzureStackHCI/StorageContainers/Read",
+ "Microsoft.AzureStackHCI/StorageContainers/deploy/action",
+ "Microsoft.AzureStackHCI/MarketplaceGalleryImages/Read",
+ "Microsoft.AzureStackHCI/MarketPlaceGalleryImages/deploy/action",
+ "Microsoft.AzureStackHCI/Clusters/Read",
+ "Microsoft.AzureStackHCI/Clusters/ArcSettings/Read",
+ "Microsoft.Insights/AlertRules/Write",
+ "Microsoft.Insights/AlertRules/Delete",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Activated/Action",
+ "Microsoft.Insights/AlertRules/Resolved/Action",
+ "Microsoft.Insights/AlertRules/Throttled/Action",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.Resources/deployments/read",
+ "Microsoft.Resources/deployments/write",
+ "Microsoft.Resources/deployments/delete",
+ "Microsoft.Resources/deployments/cancel/action",
+ "Microsoft.Resources/deployments/validate/action",
+ "Microsoft.Resources/deployments/whatIf/action",
+ "Microsoft.Resources/deployments/exportTemplate/action",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/deployments/operationstatuses/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/write",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/operationstatuses/read",
+ "Microsoft.ResourceHealth/availabilityStatuses/read",
"Microsoft.Authorization/*/read",
- "Microsoft.EventGrid/topics/read",
- "Microsoft.EventGrid/domains/read",
- "Microsoft.EventGrid/partnerNamespaces/read",
+ "Microsoft.Resources/subscriptions/read",
"Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.EventGrid/namespaces/read"
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.HybridCompute/machines/read",
+ "Microsoft.HybridCompute/machines/write",
+ "Microsoft.HybridCompute/machines/delete",
+ "Microsoft.HybridCompute/machines/UpgradeExtensions/action",
+ "Microsoft.HybridCompute/machines/assessPatches/action",
+ "Microsoft.HybridCompute/machines/installPatches/action",
+ "Microsoft.HybridCompute/machines/extensions/read",
+ "Microsoft.HybridCompute/machines/extensions/write",
+ "Microsoft.HybridCompute/machines/extensions/delete",
+ "Microsoft.HybridCompute/operations/read",
+ "Microsoft.HybridCompute/locations/operationresults/read",
+ "Microsoft.HybridCompute/locations/operationstatus/read",
+ "Microsoft.HybridCompute/machines/patchAssessmentResults/read",
+ "Microsoft.HybridCompute/machines/patchAssessmentResults/softwarePatches/read",
+ "Microsoft.HybridCompute/machines/patchInstallationResults/read",
+ "Microsoft.HybridCompute/machines/patchInstallationResults/softwarePatches/read",
+ "Microsoft.HybridCompute/locations/updateCenterOperationResults/read",
+ "Microsoft.HybridCompute/machines/hybridIdentityMetadata/read",
+ "Microsoft.HybridCompute/osType/agentVersions/read",
+ "Microsoft.HybridCompute/osType/agentVersions/latest/read",
+ "Microsoft.HybridCompute/machines/runcommands/read",
+ "Microsoft.HybridCompute/machines/runcommands/write",
+ "Microsoft.HybridCompute/machines/runcommands/delete",
+ "Microsoft.HybridCompute/machines/licenseProfiles/read",
+ "Microsoft.HybridCompute/machines/licenseProfiles/write",
+ "Microsoft.HybridCompute/machines/licenseProfiles/delete",
+ "Microsoft.HybridCompute/licenses/read",
+ "Microsoft.HybridCompute/licenses/write",
+ "Microsoft.HybridCompute/licenses/delete",
+ "Microsoft.ExtendedLocation/customLocations/Read",
+ "Microsoft.ExtendedLocation/customLocations/deploy/action",
+ "Microsoft.KubernetesConfiguration/extensions/read"
], "notActions": [],
- "dataActions": [
- "Microsoft.EventGrid/events/send/action"
- ],
+ "dataActions": [],
"notDataActions": [] } ],
- "roleName": "EventGrid Data Sender",
+ "roleName": "Azure Stack HCI VM Contributor",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" } ```
-### EventGrid EventSubscription Contributor
+### Azure Stack HCI VM Reader
-Lets you manage EventGrid event subscription operations.
+Grants permissions to view VMs
-[Learn more](/azure/event-grid/security-authorization)
+[Learn more](/azure-stack/hci/manage/assign-vm-rbac-roles)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualMachines/Read | Gets/Lists virtual machine resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/virtualMachineInstances/Read | Gets/Lists virtual machine instance resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualMachines/Extensions/Read | Gets/Lists virtual machine extensions resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualNetworks/Read | Gets/Lists virtual networks resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/LogicalNetworks/Read | Gets/Lists logical networks resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/NetworkInterfaces/Read | Gets/Lists network interfaces resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/VirtualHardDisks/Read | Gets/Lists virtual hard disk resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/GalleryImages/Read | Gets/Lists gallery images resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/MarketplaceGalleryImages/Read | Gets/Lists market place gallery images resource |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Write | Create or update a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Delete | Delete a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Activated/Action | Classic metric alert activated |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Resolved/Action | Classic metric alert resolved |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Throttled/Action | Classic metric alert rule throttled |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/exportTemplate/action | Export template for a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/read | Gets or lists deployments. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/operations/read | Gets or lists deployment operations. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourcegroups/deployments/operationstatuses/read | Gets or lists deployment operation statuses. |
+> | [Microsoft.ResourceHealth](resource-provider-operations.md#microsoftresourcehealth)/availabilityStatuses/read | Gets the availability statuses for all resources in the specified scope |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/eventSubscriptions/* | Create and manage regional event subscriptions |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Grants permissions to view VMs",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/4b3fe76c-f777-4d24-a2d7-b027b0f7b273",
+ "name": "4b3fe76c-f777-4d24-a2d7-b027b0f7b273",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AzureStackHCI/VirtualMachines/Read",
+ "Microsoft.AzureStackHCI/virtualMachineInstances/Read",
+ "Microsoft.AzureStackHCI/VirtualMachines/Extensions/Read",
+ "Microsoft.AzureStackHCI/VirtualNetworks/Read",
+ "Microsoft.AzureStackHCI/LogicalNetworks/Read",
+ "Microsoft.AzureStackHCI/NetworkInterfaces/Read",
+ "Microsoft.AzureStackHCI/VirtualHardDisks/Read",
+ "Microsoft.AzureStackHCI/StorageContainers/Read",
+ "Microsoft.AzureStackHCI/GalleryImages/Read",
+ "Microsoft.AzureStackHCI/MarketplaceGalleryImages/Read",
+ "Microsoft.Insights/AlertRules/Write",
+ "Microsoft.Insights/AlertRules/Delete",
+ "Microsoft.Insights/AlertRules/Read",
+ "Microsoft.Insights/AlertRules/Activated/Action",
+ "Microsoft.Insights/AlertRules/Resolved/Action",
+ "Microsoft.Insights/AlertRules/Throttled/Action",
+ "Microsoft.Insights/AlertRules/Incidents/Read",
+ "Microsoft.Resources/deployments/read",
+ "Microsoft.Resources/deployments/exportTemplate/action",
+ "Microsoft.Resources/deployments/operations/read",
+ "Microsoft.Resources/deployments/operationstatuses/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/operations/read",
+ "Microsoft.Resources/subscriptions/resourcegroups/deployments/operationstatuses/read",
+ "Microsoft.ResourceHealth/availabilityStatuses/read",
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Resources/subscriptions/operationresults/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Stack HCI VM Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
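The `name` GUID in the definition above is what a role assignment references. Below is a minimal sketch of the ARM request body for assigning Azure Stack HCI VM Reader at a resource group scope; the subscription ID, resource group, and principal ID are placeholders, and the assignment is created with a `PUT` to `{scope}/providers/Microsoft.Authorization/roleAssignments/{new-guid}`.

```python
import json
import uuid

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{subscription_id}/resourceGroups/my-hci-rg"  # hypothetical scope
role_definition_id = (f"/subscriptions/{subscription_id}"
                      "/providers/Microsoft.Authorization/roleDefinitions"
                      "/4b3fe76c-f777-4d24-a2d7-b027b0f7b273")  # Azure Stack HCI VM Reader

# Body of the PUT request to
#   https://management.azure.com{scope}/providers/Microsoft.Authorization
#     /roleAssignments/{assignment_name}?api-version=2022-04-01  (assumed api-version)
assignment_name = str(uuid.uuid4())  # a new GUID names the assignment
payload = {
    "properties": {
        "roleDefinitionId": role_definition_id,
        "principalId": "11111111-1111-1111-1111-111111111111",  # placeholder object ID
    }
}
print(assignment_name, json.dumps(payload, indent=2))
```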
+
+### Azure Stack Registration Owner
+
+Lets you manage Azure Stack registrations.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/edgeSubscriptions/read | |
+> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/registrations/products/*/action | |
+> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/registrations/products/read | Gets the properties of an Azure Stack Marketplace product |
+> | [Microsoft.AzureStack](resource-provider-operations.md#microsoftazurestack)/registrations/read | Gets the properties of an Azure Stack registration |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Lets you manage Azure Stack registrations.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/6f12a6df-dd06-4f3e-bcb1-ce8be600526a",
+ "name": "6f12a6df-dd06-4f3e-bcb1-ce8be600526a",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AzureStack/edgeSubscriptions/read",
+ "Microsoft.AzureStack/registrations/products/*/action",
+ "Microsoft.AzureStack/registrations/products/read",
+ "Microsoft.AzureStack/registrations/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Stack Registration Owner",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### EventGrid Contributor
+
+Lets you manage EventGrid operations.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/* | Create and manage Event Grid resources |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Lets you manage EventGrid operations.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/1e241071-0855-49ea-94dc-649edcd759de",
+ "name": "1e241071-0855-49ea-94dc-649edcd759de",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.EventGrid/*",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Support/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "EventGrid Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### EventGrid Data Sender
+
+Allows send access to event grid events.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/topics/read | Read a topic |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/domains/read | Read a domain |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/partnerNamespaces/read | Read a partner namespace |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/namespaces/read | Read a namespace |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/events/send/action | Send events to topics |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows send access to event grid events.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/d5a91429-5739-47e2-a06b-3470a27159e7",
+ "name": "d5a91429-5739-47e2-a06b-3470a27159e7",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.EventGrid/topics/read",
+ "Microsoft.EventGrid/domains/read",
+ "Microsoft.EventGrid/partnerNamespaces/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.EventGrid/namespaces/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.EventGrid/events/send/action"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "EventGrid Data Sender",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
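Note the split in this role: the control-plane `actions` only allow reads, while the single data action `Microsoft.EventGrid/events/send/action` is what authorizes publishing when the client authenticates with Microsoft Entra ID. A sketch using the `azure-eventgrid` and `azure-identity` packages follows; the topic endpoint and event fields are placeholders, and the client API shape can vary by package version.

```python
from azure.identity import DefaultAzureCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Publishing is authorized by the data action
# Microsoft.EventGrid/events/send/action when using Microsoft Entra auth.
client = EventGridPublisherClient(
    "https://my-topic.westus2-1.eventgrid.azure.net/api/events",  # placeholder endpoint
    DefaultAzureCredential(),
)

client.send(EventGridEvent(
    subject="orders/1234",                 # hypothetical subject
    event_type="Contoso.Orders.Created",   # hypothetical event type
    data={"orderId": 1234},
    data_version="1.0",
))
```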
+
+### EventGrid EventSubscription Contributor
+
+Lets you manage EventGrid event subscription operations.
+
+[Learn more](/azure/event-grid/security-authorization)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/eventSubscriptions/* | Create and manage regional event subscriptions |
> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/topicTypes/eventSubscriptions/read | List global event subscriptions by topic type |
> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/locations/eventSubscriptions/read | List regional event subscriptions |
> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/locations/topicTypes/eventSubscriptions/read | List regional event subscriptions by topic type |
Lets you manage logic apps, but not change access to them.
### Logic App Operator
-Lets you read, enable, and disable Consumption logic apps, but not edit or update them. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+Lets you read, enable, and disable logic apps, but not edit or update them.
+
+[Learn more](/azure/logic-apps/logic-apps-securing-a-logic-app#access-to-logic-app-operations)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/*/read | Read Insights alert rules. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/*/read | Read Insights alert rules |
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricAlerts/*/read | |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/diagnosticSettings/*/read | Get diagnostic settings for Consumption logic apps. |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricDefinitions/*/read | Get the available metrics for Consumption logic apps. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/*/read | Read Consumption logic app resources. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/disable/action | Disable the workflow. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/enable/action | Enable the workflow. |
-> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/validate/action | Validate the workflow. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Get or list deployment operations. |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/diagnosticSettings/*/read | Gets diagnostic settings for Logic Apps |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/metricDefinitions/*/read | Gets the available metrics for Logic Apps. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/*/read | Reads Logic Apps resources. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/disable/action | Disables the workflow. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/enable/action | Enables the workflow. |
+> | [Microsoft.Logic](resource-provider-operations.md#microsoftlogic)/workflows/validate/action | Validates the workflow. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Get or list resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read connection gateways. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | Read connections. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | Read custom APIs. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read Connection Gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | Read Connections. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | Read Custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties on an App Service Plan |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
Lets you read, enable, and disable Consumption logic apps, but not edit or update them.
}
```
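The `workflows/enable/action` and `workflows/disable/action` operations in the table map to plain ARM POST calls. A rough sketch of the disable call with `requests` and `azure-identity` follows; the resource names are placeholders and the api-version is an assumption, so check the current Logic Apps REST reference.

```python
import requests
from azure.identity import DefaultAzureCredential

sub, rg, wf = "00000000-0000-0000-0000-000000000000", "my-rg", "my-workflow"  # placeholders
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Maps to the Microsoft.Logic/workflows/disable/action operation in the table above.
resp = requests.post(
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.Logic/workflows/{wf}/disable",
    params={"api-version": "2016-06-01"},  # assumed api-version
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
```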
-<a name="logic-apps-standard-contributor"></a>
- ### Logic Apps Standard Contributor (Preview)
-You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership.
+
+[Learn more](/azure/logic-apps/logic-apps-securing-a-logic-app#access-to-logic-app-operations)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/certificates/* | Create and manage a certificate. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/* | Create and manage a connection gateway. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/* | Create and manage a connection. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/* | Create and manage a custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/* | Creates and manages a Connection Gateway. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/* | Creates and manages a Connection. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/* | Creates and manages a Custom API. |
> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/listSitesAssignedToHostName/read | Get names of sites assigned to hostname. |
> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/* | Create and manage an App Service Plan. |
> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/* | Create and manage a web app. |
You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership.
```json
{
- "Description": "You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership.",
- "Metadata": {
- "CreatedBy": null,
- "CreatedOn": "2023-08-02T22:35:40.6977003Z",
- "UpdatedBy": null,
- "UpdatedOn": "2023-08-23T18:55:27.6632763Z"
- },
- "IsBuiltIn": true,
- "AdminSecurityClaim": "Microsoft.Web",
- "Id": "ad710c24b0394e85a019deb4a06e8570",
- "Name": "Logic Apps Standard Contributor (Preview)",
- "IsServiceRole": false,
- "Permissions": [
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "You can manage all aspects of a Standard logic app and workflows. You can't change access or ownership.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/ad710c24-b039-4e85-a019-deb4a06e8570",
+ "name": "ad710c24-b039-4e85-a019-deb4a06e8570",
+ "permissions": [
{
- "Actions": [
+ "actions": [
"Microsoft.Authorization/*/read", "Microsoft.Insights/alertRules/*", "Microsoft.Resources/deployments/operations/read",
You can manage all aspects of a Standard logic app and workflows. You can't chan
"Microsoft.Web/serverFarms/*", "Microsoft.Web/sites/*" ],
- "NotActions": [],
- "DataActions": [],
- "NotDataActions": [],
- "Condition": null,
- "ConditionVersion": null
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
}
],
- "Scopes": [
- "/"
- ]
+ "roleName": "Logic Apps Standard Contributor (Preview)",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
}
```
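To grant this role, an assignment at resource group scope is typical. A minimal sketch, assuming placeholder values for the assignee, subscription, and resource group:

```azurecli
# Assign the Contributor role for all Standard logic apps in one resource group.
# <subscription-id> and <resource-group> are placeholders.
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Logic Apps Standard Contributor (Preview)" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```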
-<a name="logic-apps-standard-developer"></a>
-
### Logic Apps Standard Developer (Preview)
-You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope.
+
+[Learn more](/azure/logic-apps/logic-apps-securing-a-logic-app#access-to-logic-app-operations)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Get a list of connection gateways. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/* | Create and manage a connection. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/* | Create and manage a custom API. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/appettings/read | Get the web app settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/list/Action | List the web app's security sensitive settings, such as publishing credentials, app settings, and connection strings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Read | Get the web app configuration settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Write | Update the web app's configuration settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/delete | Delete the web app's configuration. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/read | Get a single app setting for the web app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/write | Create or update a single app setting for the web app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/deployWorkflowArtifacts/action | Create the artifacts in a Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/* | Get or list hostruntime artifacts for the web app or function app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/listworkflowsconnections/actions | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/publish/Action | Publish the web app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/Read | Get the web app properties. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/appsettings/read | Get the web app slot's settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/appsettings/write | Create or update a single app setting for the web app slot. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/list/Action | List the web app slot's security sensitive settings, such as publishing credentials, app settings, and connection strings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/Read | Get the web app slot's configuration settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/web/appsettings/delete | Delete the web app slot's app setting. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/deployWorkflowArtifacts/action | Create the artifacts in a deployment slot for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/listworkflowsconnections/action | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/publish/Action | Publish a web app slot. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get the workflow's app configuration information based on its ID in a deployment slot for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflows/* | Manage the workflows in the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/* | Get the workflow's app configuration information based on its ID for the Standard logic app. |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | *none* | |
-> | **NotDataActions** | |
-> | *none* | |
-
-```json
-{
- "Description": "You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope.",
- "Metadata": {
- "CreatedBy": null,
- "CreatedOn": "2023-08-02T22:37:24.4551086Z",
- "UpdatedBy": null,
- "UpdatedOn": "2023-08-23T18:56:32.6015183Z"
- },
- "IsBuiltIn": true,
- "AdminSecurityClaim": "Microsoft.Web",
- "Id": "523776ba4eb24600a3c8f2dc93da4bdb",
- "Name": "Logic Apps Standard Developer (Preview)",
- "IsServiceRole": false,
- "Permissions": [
- {
- "Actions": [
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read Connection Gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/* | Creates and manages a Connection. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/* | Creates and manages a Custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties on an App Service Plan |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/config/appsettings/read | Get Web App settings. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/list/Action | List Web App's security sensitive settings, such as publishing credentials, app settings and connection strings |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Read | Get Web App configuration settings |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/config/Write | Update Web App's configuration settings |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/delete | Delete Web App's App Setting |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/read | Get Web App Single App setting. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/config/web/appsettings/write | Create or Update Web App Single App setting |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/deployWorkflowArtifacts/action | Create the artifacts in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/* | Get or list hostruntime artifacts for the web app or function app. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/listworkflowsconnections/action | List logic app's connections by its ID in a Logic App. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/publish/Action | Publish a Web App |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/Read | Get the properties of a Web App |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/config/appsettings/read | Get Web App Slot's single App setting. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/config/appsettings/write | Create or Update Web App Slot's Single App setting |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/list/Action | List Web App Slot's security sensitive settings, such as publishing credentials, app settings and connection strings |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/Read | Get Web App Slot's configuration settings |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/config/web/appsettings/delete | Delete Web App Slot's App Setting |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/deployWorkflowArtifacts/action | Create the artifacts in a deployment slot in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/listworkflowsconnections/action | List logic app's connections by its ID in a deployment slot in a Logic App. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/publish/Action | Publish a Web App Slot |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get logic app's configuration information by its ID in a deployment slot in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/workflows/* | |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/* | |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "You can create and edit workflows, connections, and settings for a Standard logic app. You can't make changes outside the workflow scope.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/523776ba-4eb2-4600-a3c8-f2dc93da4bdb",
+ "name": "523776ba-4eb2-4600-a3c8-f2dc93da4bdb",
+ "permissions": [
+ {
+ "actions": [
"Microsoft.Authorization/*/read", "Microsoft.Insights/alertRules/*", "Microsoft.Resources/deployments/operations/read",
You can create and edit workflows, connections, and settings for a Standard logi
"microsoft.web/sites/workflows/*", "microsoft.web/sites/workflowsconfiguration/*" ],
- "NotActions": [],
- "DataActions": [],
- "NotDataActions": [],
- "Condition": null,
- "ConditionVersion": null
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
}
],
- "Scopes": [
- "/"
- ]
+ "roleName": "Logic Apps Standard Developer (Preview)",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
}
```
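Because a Standard logic app is a `Microsoft.Web/sites` resource, the Developer role can be scoped to a single app. A sketch using the role GUID from the definition above; the subscription, resource group, and app names are placeholders:

```azurecli
# Grant developer access to one Standard logic app only.
az role assignment create \
    --assignee "developer@contoso.com" \
    --role "523776ba-4eb2-4600-a3c8-f2dc93da4bdb" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<logic-app-name>"
```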
-<a name="logic-apps-standard-operator"></a>
-
### Logic Apps Standard Operator (Preview)
-You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings.
+
+[Learn more](/azure/logic-apps/logic-apps-securing-a-logic-app#access-to-logic-app-operations)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Get a list of connection gateways. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/applySlotConfig/Action | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Read | Get the web app configuration settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/* | Get or list hostruntime artifacts for the web app or function app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/Read | Get the web app properties. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/restart/Action | Restart the web app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/Read | Get the web app slot's configuration settings. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/restart/Action | Restart the web app slot. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/slotsswap/Action | Swap the web app deployment slots. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/start/Action | Start the web app slot. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/stop/Action | Stop the web app slot. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get the workflow's app configuration information based on its ID in a deployment slot for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/slotsdiffs/Action | Get the differences in the configuration between the web app and slots. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slotsswap/Action | Swap the web app deployment slots. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/start/Action | Start the web app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/stop/Action | Stop the web app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflows/read | List the workflows in the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/read | Get the workflow's app configuration based on its ID for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/write | Create or update a web app. |
-> | **NotActions** | |
-> | *none* | |
-> | **DataActions** | |
-> | *none* | |
-> | **NotDataActions** | |
-> | *none* | |
-
-```json
-{
- "Description": "You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings.",
- "Metadata": {
- "CreatedBy": null,
- "CreatedOn": "2023-08-02T22:38:47.4360166Z",
- "UpdatedBy": null,
- "UpdatedOn": "2023-08-23T19:03:50.1098085Z"
- },
- "IsBuiltIn": true,
- "AdminSecurityClaim": "Microsoft.Web",
- "Id": "b70c96e966fe4c09b6e7c98e69c98555",
- "Name": "Logic Apps Standard Operator (Preview)",
- "IsServiceRole": false,
- "Permissions": [
- {
- "Actions": [
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read Connection Gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | Read Connections. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | Read Custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties on an App Service Plan |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/applySlotConfig/Action | Apply web app slot configuration from target slot to the current web app |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/config/Read | Get Web App configuration settings |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/* | Get or list hostruntime artifacts for the web app or function app. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/Read | Get the properties of a Web App |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/restart/Action | Restart a Web App |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/config/Read | Get Web App Slot's configuration settings |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/restart/Action | Restart a Web App Slot |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/slotsswap/Action | Swap Web App deployment slots |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/start/Action | Start a Web App Slot |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/stop/Action | Stop a Web App Slot |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get logic app's configuration information by its ID in a deployment slot in a Logic App. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slotsdiffs/Action | Get differences in configuration between web app and slots |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slotsswap/Action | Swap Web App deployment slots |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/start/Action | Start a Web App |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/stop/Action | Stop a Web App |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/workflows/read | List the workflows in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/read | Get logic app's configuration information by its ID in a Logic App. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/write | Create a new Web App or update an existing one |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "You can enable, resubmit, and disable workflows as well as create connections. You can't edit workflows or settings.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/b70c96e9-66fe-4c09-b6e7-c98e69c98555",
+ "name": "b70c96e9-66fe-4c09-b6e7-c98e69c98555",
+ "permissions": [
+ {
+ "actions": [
"Microsoft.Authorization/*/read", "Microsoft.Insights/alertRules/*", "Microsoft.Resources/deployments/operations/read",
You can enable, resubmit, and disable workflows as well as create connections. Y
"microsoft.web/sites/workflowsconfiguration/read", "Microsoft.Web/sites/write" ],
- "NotActions": [],
- "DataActions": [],
- "NotDataActions": [],
- "Condition": null,
- "ConditionVersion": null
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
}
],
- "Scopes": [
- "/"
- ]
+ "roleName": "Logic Apps Standard Operator (Preview)",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
}
```
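To audit who already holds operator access, the existing assignments can be listed at the relevant scope. A hedged example with a placeholder resource group:

```azurecli
# Show existing assignments of the Operator role in a resource group.
az role assignment list \
    --role "Logic Apps Standard Operator (Preview)" \
    --resource-group <resource-group> \
    --output table
```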
-<a name="logic-apps-standard-reader"></a>
-
### Logic Apps Standard Reader (Preview)
-You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history. [Learn more](../logic-apps/logic-apps-securing-a-logic-app.md#access-to-logic-app-operations)
+You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history.
+
+[Learn more](/azure/logic-apps/logic-apps-securing-a-logic-app#access-to-logic-app-operations)
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments. |
-> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/operations/read | Gets or lists deployment operations. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Get a list of connection gateways. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | No information available. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties for an App Service Plan. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/webhooks/api/workflows/triggers/read | List the web app's hostruntime workflow triggers. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/webhooks/api/workflows/runs/read | List the web app's hostruntime workflow runs. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflows/read | List the workflows in the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/read | Get the workflow's app configuration based on its ID for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot for the Standard logic app. |
-> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get the workflow's app configuration information based on its ID in a deployment slot for the Standard logic app. |
+> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connectionGateways/*/read | Read Connection Gateways. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/connections/*/read | Read Connections. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/customApis/*/read | Read Custom API. |
+> | [Microsoft.Web](resource-provider-operations.md#microsoftweb)/serverFarms/read | Get the properties on an App Service Plan |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/webhooks/api/workflows/triggers/read | List Web Apps Hostruntime Workflow Triggers. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/hostruntime/webhooks/api/workflows/runs/read | List Web Apps Hostruntime Workflow Runs. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/workflows/read | List the workflows in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/workflowsconfiguration/read | Get logic app's configuration information by its ID in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/workflows/read | List the workflows in a deployment slot in a Logic App. |
+> | [microsoft.web](resource-provider-operations.md#microsoftweb)/sites/slots/workflowsconfiguration/read | Get logic app's configuration information by its ID in a deployment slot in a Logic App. |
> | **NotActions** | |
> | *none* | |
> | **DataActions** | |
You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history.
```json
{
- "Description": "You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history.",
- "Metadata": {
- "CreatedBy": null,
- "CreatedOn": "2023-08-02T22:33:56.2374571Z",
- "UpdatedBy": null,
- "UpdatedOn": "2023-08-23T19:05:11.7148533Z"
- },
- "IsBuiltIn": true,
- "AdminSecurityClaim": "Microsoft.Web",
- "Id": "4accf36b2c05432f91c85c532dff4c73",
- "Name": "Logic Apps Standard Reader (Preview)",
- "IsServiceRole": false,
- "Permissions": [
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "You have read-only access to all resources in a Standard logic app and workflows, including the workflow runs and their history.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/4accf36b-2c05-432f-91c8-5c532dff4c73",
+ "name": "4accf36b-2c05-432f-91c8-5c532dff4c73",
+ "permissions": [
{
- "Actions": [
+ "actions": [
"Microsoft.Authorization/*/read", "Microsoft.Insights/alertRules/*", "Microsoft.Resources/deployments/operations/read",
You have read-only access to all resources in a Standard logic app and workflows
"microsoft.web/sites/slots/workflows/read", "microsoft.web/sites/slots/workflowsconfiguration/read" ],
- "NotActions": [],
- "DataActions": [],
- "NotDataActions": [],
- "Condition": null,
- "ConditionVersion": null
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
}
],
- "Scopes": [
- "/"
- ]
+ "roleName": "Logic Apps Standard Reader (Preview)",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
}
```
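The actions in the JSON above can be verified against what Azure actually serves; a quick sketch:

```azurecli
# Print the Actions array of the Reader role as returned by Azure,
# which should match the definition shown above.
az role definition list \
    --name "Logic Apps Standard Reader (Preview)" \
    --query "[0].permissions[0].actions"
```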
Can onboard Azure Connected Machines.
Can read, write, delete and re-onboard Azure Connected Machines.
+[Learn more](/azure/azure-arc/servers/security-overview)
+
> [!div class="mx-tableFixed"]
> | Actions | Description |
> | | |
Can read, write, delete and re-onboard Azure Connected Machines.
}
```
+### Azure Connected Machine Resource Manager
+
+Custom Role for AzureStackHCI RP to manage hybrid compute machines and hybrid connectivity endpoints in a resource group
+
+[Learn more](/azure-stack/hci/deploy/deployment-azure-resource-manager-template)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Gets the endpoint to the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Update the endpoint to the target resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/serviceConfigurations/read | Gets the details about the service to the resource. |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/serviceConfigurations/write | Update the service details in the service configurations of the target resource. |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/read | Read any Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/write | Writes an Azure Arc machine |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/delete | Deletes an Azure Arc machine |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/read | Reads any Azure Arc extensions |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/write | Installs or Updates an Azure Arc extension |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/extensions/delete | Deletes an Azure Arc extension |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/*/read | |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/UpgradeExtensions/action | Upgrades Extensions on Azure Arc machines |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/read | Reads any Azure Arc licenseProfiles |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/write | Installs or Updates an Azure Arc licenseProfile |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/delete | Deletes an Azure Arc licenseProfile |
+> | [Microsoft.GuestConfiguration](resource-provider-operations.md#microsoftguestconfiguration)/guestConfigurationAssignments/read | Get guest configuration assignment. |
+> | [Microsoft.GuestConfiguration](resource-provider-operations.md#microsoftguestconfiguration)/guestConfigurationAssignments/*/read | |
+> | [Microsoft.GuestConfiguration](resource-provider-operations.md#microsoftguestconfiguration)/guestConfigurationAssignments/write | Create new guest configuration assignment. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Custom Role for AzureStackHCI RP to manage hybrid compute machines and hybrid connectivity endpoints in a resource group",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/f5819b54-e033-4d82-ac66-4fec3cbf3f4c",
+ "name": "f5819b54-e033-4d82-ac66-4fec3cbf3f4c",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.HybridConnectivity/endpoints/read",
+ "Microsoft.HybridConnectivity/endpoints/write",
+ "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/read",
+ "Microsoft.HybridConnectivity/endpoints/serviceConfigurations/write",
+ "Microsoft.HybridCompute/machines/read",
+ "Microsoft.HybridCompute/machines/write",
+ "Microsoft.HybridCompute/machines/delete",
+ "Microsoft.HybridCompute/machines/extensions/read",
+ "Microsoft.HybridCompute/machines/extensions/write",
+ "Microsoft.HybridCompute/machines/extensions/delete",
+ "Microsoft.HybridCompute/*/read",
+ "Microsoft.HybridCompute/machines/UpgradeExtensions/action",
+ "Microsoft.HybridCompute/machines/licenseProfiles/read",
+ "Microsoft.HybridCompute/machines/licenseProfiles/write",
+ "Microsoft.HybridCompute/machines/licenseProfiles/delete",
+ "Microsoft.GuestConfiguration/guestConfigurationAssignments/read",
+ "Microsoft.GuestConfiguration/guestConfigurationAssignments/*/read",
+ "Microsoft.GuestConfiguration/guestConfigurationAssignments/write"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Connected Machine Resource Manager",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Azure Resource Bridge Deployment Role
+
+Azure Resource Bridge Deployment Role
+
+[Learn more](/azure/azure-arc/resource-bridge/overview)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/Register/Action | Registers the subscription for the Azure Stack HCI resource provider and enables the creation of Azure Stack HCI resources. |
+> | Microsoft.ResourceConnector/register/action | Registers the subscription for Appliances resource provider and enables the creation of Appliance. |
+> | Microsoft.ResourceConnector/appliances/read | Gets an Appliance resource |
+> | Microsoft.ResourceConnector/appliances/write | Creates or Updates Appliance resource |
+> | Microsoft.ResourceConnector/appliances/delete | Deletes Appliance resource |
+> | Microsoft.ResourceConnector/locations/operationresults/read | Get result of Appliance operation |
+> | Microsoft.ResourceConnector/locations/operationsstatus/read | Get result of Appliance operation |
+> | Microsoft.ResourceConnector/appliances/listClusterUserCredential/action | Get an appliance cluster user credential |
+> | Microsoft.ResourceConnector/appliances/listKeys/action | Get an appliance cluster customer user keys |
+> | Microsoft.ResourceConnector/appliances/upgradeGraphs/read | Gets the upgrade graph of Appliance cluster |
+> | Microsoft.ResourceConnector/telemetryconfig/read | Get Appliances telemetry config utilized by Appliances CLI |
+> | Microsoft.ResourceConnector/operations/read | Gets list of Available Operations for Appliances |
+> | Microsoft.ExtendedLocation/register/action | Registers the subscription for Custom Location resource provider and enables the creation of Custom Location. |
+> | Microsoft.ExtendedLocation/customLocations/deploy/action | Deploy permissions to a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/read | Gets a Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/write | Creates or Updates Custom Location resource |
+> | Microsoft.ExtendedLocation/customLocations/delete | Deletes Custom Location resource |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/register/action | Register the subscription for Microsoft.HybridConnectivity |
+> | [Microsoft.Kubernetes](resource-provider-operations.md#microsoftkubernetes)/register/action | Registers Subscription with Microsoft.Kubernetes resource provider |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/register/action | Registers subscription to Microsoft.KubernetesConfiguration resource provider. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/write | Creates or updates extension resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/read | Gets extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/delete | Deletes extension instance resource. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/extensions/operations/read | Gets Async Operation status. |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/namespaces/read | Get Namespace Resource |
+> | [Microsoft.KubernetesConfiguration](resource-provider-operations.md#microsoftkubernetesconfiguration)/operations/read | Gets available operations of the Microsoft.KubernetesConfiguration resource provider. |
+> | [Microsoft.GuestConfiguration](resource-provider-operations.md#microsoftguestconfiguration)/guestConfigurationAssignments/read | Get guest configuration assignment. |
+> | Microsoft.HybridContainerService/register/action | Register the subscription for Microsoft.HybridContainerService |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/Write | Creates/Updates storage containers resource |
+> | [Microsoft.AzureStackHCI](resource-provider-operations.md#microsoftazurestackhci)/StorageContainers/Read | Gets/Lists storage containers resource |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Azure Resource Bridge Deployment Role",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/7b1f81f9-4196-4058-8aae-762e593270df",
+ "name": "7b1f81f9-4196-4058-8aae-762e593270df",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.AzureStackHCI/Register/Action",
+ "Microsoft.ResourceConnector/register/action",
+ "Microsoft.ResourceConnector/appliances/read",
+ "Microsoft.ResourceConnector/appliances/write",
+ "Microsoft.ResourceConnector/appliances/delete",
+ "Microsoft.ResourceConnector/locations/operationresults/read",
+ "Microsoft.ResourceConnector/locations/operationsstatus/read",
+ "Microsoft.ResourceConnector/appliances/listClusterUserCredential/action",
+ "Microsoft.ResourceConnector/appliances/listKeys/action",
+ "Microsoft.ResourceConnector/appliances/upgradeGraphs/read",
+ "Microsoft.ResourceConnector/telemetryconfig/read",
+ "Microsoft.ResourceConnector/operations/read",
+ "Microsoft.ExtendedLocation/register/action",
+ "Microsoft.ExtendedLocation/customLocations/deploy/action",
+ "Microsoft.ExtendedLocation/customLocations/read",
+ "Microsoft.ExtendedLocation/customLocations/write",
+ "Microsoft.ExtendedLocation/customLocations/delete",
+ "Microsoft.HybridConnectivity/register/action",
+ "Microsoft.Kubernetes/register/action",
+ "Microsoft.KubernetesConfiguration/register/action",
+ "Microsoft.KubernetesConfiguration/extensions/write",
+ "Microsoft.KubernetesConfiguration/extensions/read",
+ "Microsoft.KubernetesConfiguration/extensions/delete",
+ "Microsoft.KubernetesConfiguration/extensions/operations/read",
+ "Microsoft.KubernetesConfiguration/namespaces/read",
+ "Microsoft.KubernetesConfiguration/operations/read",
+ "Microsoft.GuestConfiguration/guestConfigurationAssignments/read",
+ "Microsoft.HybridContainerService/register/action",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.AzureStackHCI/StorageContainers/Write",
+ "Microsoft.AzureStackHCI/StorageContainers/Read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Resource Bridge Deployment Role",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
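Because this role carries several `*/register/action` operations, which register resource providers on the subscription, an assignment at subscription scope makes sense. A sketch using the GUID from the definition above; the assignee and subscription ID are placeholders:

```azurecli
# Assign the deployment role at subscription scope.
az role assignment create \
    --assignee "deployer@contoso.com" \
    --role "7b1f81f9-4196-4058-8aae-762e593270df" \
    --scope "/subscriptions/<subscription-id>"
```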
+
### Billing Reader

Allows read access to billing data
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
We have over thirty years of partnership between SAP and Microsoft, which is a f
- [SAP Partners with Microsoft for First-in-Market Cloud Migration Offerings](https://news.microsoft.com/2019/10/20/sap-partners-with-microsoft-for-first-in-market-cloud-migration-offerings/)
- [SAP and Microsoft Expand Partnership and Integrate Microsoft Teams Across Solutions](https://news.sap.com/2021/01/sap-and-microsoft-expand-partnership-integrate-teams/)
- [Come Explore the Future](https://news.sap.com/2021/01/sap-and-microsoft-expand-partnership-integrate-teams/), showing how Microsoft and SAP are partnering to meet the needs of every business.
-- [Collaborating for Success: How SAP and Microsoft are working together to accelerate customer innovation and transformation](https://www.sap.com/documents/2021/01/ca0f4c25-c87d-0010-87a3-c30de2ffd8ff.html)
+- [Collaborating for Success: How SAP and Microsoft are working together to accelerate customer innovation and transformation](https://discover.sap.com/microsoft/en-us/index.html)
## Integration resources
security Customer Lockbox Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/customer-lockbox-overview.md
We've introduced a new baseline control ([PA-8: Determine access process for clo
## Exclusions
-Customer Lockbox requests are not triggered in the following engineering support scenarios:
+Customer Lockbox requests are not triggered in the following scenarios:
- Emergency scenarios that fall outside of standard operating procedures. For example, a major service outage requires immediate attention to recover or restore services in an unexpected or unpredictable scenario. These “break glass” events are rare and, in most instances, do not require any access to customer data to resolve.
- A Microsoft engineer accesses the Azure platform as part of troubleshooting and is inadvertently exposed to customer data. For example, the Azure Network Team performs troubleshooting that results in a packet capture on a network device. It is rare that such scenarios would result in access to meaningful quantities of customer data. Customers can further protect their data through use of in transit and at rest encryption.
sentinel Crowdstrike Falcon Data Replicator V2 Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-data-replicator-v2-using-azure-functions.md
Title: "Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) to connect your data source to Microsoft Sentinel."
+ Title: "Crowdstrike Falcon Data Replicator V2 (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Crowdstrike Falcon Data Replicator V2 (using Azure Functions) to connect your data source to Microsoft Sentinel."
Last updated 01/06/2024
-# Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) connector for Microsoft Sentinel
+# Crowdstrike Falcon Data Replicator V2 (using Azure Functions) connector for Microsoft Sentinel
The [Crowdstrike](https://www.crowdstrike.com/) Falcon Data Replicator connector provides the capability to ingest raw event data from the [Falcon Platform](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/) events into Microsoft Sentinel. The connector provides the ability to get events from Falcon Agents, which helps you examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, and more.
The [Crowdstrike](https://www.crowdstrike.com/) Falcon Data Replicator connector
| | |
| **Azure function app code** | https://aka.ms/sentinel-CrowdstrikeReplicatorV2-functionapp |
| **Kusto function alias** | CrowdstrikeReplicator |
-| **Kusto function url** | https://aka.ms/sentinel-crowdstrikereplicator-parser |
| **Log Analytics table(s)** | CrowdStrike_Additional_Events_CL<br/> ASimNetworkSessionLogs<br/> ASimDnsActivityLogs<br/> ASimAuditEventLogs<br/> ASimFileEventLogs<br/> ASimAuthenticationEventLogs<br/> ASimProcessEventLogs<br/> ASimRegistryEventLogs<br/> ASimUserManagementActivityLogs<br/> CrowdStrike_Secondary_Data_CL<br/> |
| **Data collection rules support** | Not currently supported |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
CrowdStrikeReplicatorV2
## Prerequisites
-To integrate with Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) make sure you have:
+To integrate with Crowdstrike Falcon Data Replicator V2 (using Azure Functions) make sure you have:
- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
- **SQS and AWS S3 account credentials/permissions**: **AWS_SECRET**, **AWS_REGION_NAME**, **AWS_KEY**, and **QUEUE_URL** are required. [See the documentation to learn more about data pulling](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/). To start, contact CrowdStrike support. At your request they will create a CrowdStrike managed Amazon Web Services (AWS) S3 bucket for short term storage purposes as well as a SQS (simple queue service) account for monitoring changes to the S3 bucket.
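Once CrowdStrike support supplies the S3 and SQS details, those values typically end up as application settings on the Function App. A minimal sketch, assuming placeholder app and resource group names; the setting names come from the prerequisites above:

```azurecli
# Store the AWS credentials and queue URL as Function App settings.
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings AWS_KEY="<aws-key>" AWS_SECRET="<aws-secret>" \
        AWS_REGION_NAME="<aws-region>" QUEUE_URL="<sqs-queue-url>"
```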
sentinel Deprecated Claroty Via Legacy Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-claroty-via-legacy-agent.md
# [Deprecated] Claroty via Legacy Agent connector for Microsoft Sentinel
-The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel.
+The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/industrial-cybersecurity/sra) events into Microsoft Sentinel.
## Connector attributes
sentinel Google Workspace G Suite Using Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-functions.md
To integrate with Google Workspace (G Suite) (using Azure Functions) make sure you have:
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
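For that optional step, an app setting can resolve a secret through a Key Vault reference instead of holding the raw value. A sketch with hypothetical secret and setting names; the vault, app, and resource group names are placeholders:

```azurecli
# Put the authorization token in Key Vault.
az keyvault secret set \
    --vault-name <key-vault-name> \
    --name GoogleApiToken \
    --value "<token-value>"

# Point a Function App setting at the secret via a Key Vault reference.
az functionapp config appsettings set \
    --name <function-app-name> \
    --resource-group <resource-group> \
    --settings "GOOGLE_TOKEN=@Microsoft.KeyVault(SecretUri=https://<key-vault-name>.vault.azure.net/secrets/GoogleApiToken/)"
```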
-**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias GWorkspaceReports and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports), on the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
+**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias GWorkspaceReports and load the function code or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports.yaml), on the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update.
**STEP 1 - Ensure the prerequisites to obtain the Google Pickle String**
sentinel Recommended Claroty Via Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-claroty-via-ama.md
# [Recommended] Claroty via AMA connector for Microsoft Sentinel
-The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/secure-remote-access/) events into Microsoft Sentinel.
+The [Claroty](https://claroty.com/) data connector provides the capability to ingest [Continuous Threat Detection](https://claroty.com/resources/datasheets/continuous-threat-detection) and [Secure Remote Access](https://claroty.com/industrial-cybersecurity/sra) events into Microsoft Sentinel.
## Connector attributes
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
There are three common scenarios for side-by-side deployment:
* Over time, as Microsoft Sentinel covers more workloads, you would ordinarily reverse direction and send alerts from your on-premises SIEM to Microsoft Sentinel. To do so:
  * For Splunk, see [Send data and notable events from Splunk to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/how-to-export-data-from-splunk-to-azure-sentinel/ba-p/1891237).
  * For QRadar, see [Send QRadar offenses to Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/migrating-qradar-offenses-to-azure-sentinel/ba-p/2102043).
- * For ArcSight, see [Common Event Format (CEF) forwarding](https://community.microfocus.com/t5/Logger-Forwarding-Connectors/ArcSight-Forwarding-Connector-Configuration-Guide/ta-p/1583918).
+ * For ArcSight, see [Common Event Format (CEF) forwarding](https://community.microfocus.com/cyberres/arcsight/f/discussions/241072/using-forwarding-connector-to-send-logs-to-esm-and-logger-destination).
You can also send the alerts from Microsoft Sentinel to your third-party SIEM or ticketing system by using the [Graph Security API](/graph/security-integration). This approach is simpler, but it doesn't enable sending other data.
site-recovery Hyper V Deployment Planner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-deployment-planner-overview.md
If you have a previous version of the deployment planner, do either of the following:
## Version history

The latest Azure Site Recovery Deployment Planner tool version is 2.5.
-Refer to [Azure Site Recovery Deployment Planner Version History](https://social.technet.microsoft.com/wiki/contents/articles/51049.asr-deployment-planner-version-history.aspx) page for the fixes that are added in each update.
+Refer to the [Azure Site Recovery Deployment Planner Version History](/azure/site-recovery/site-recovery-deployment-planner-history) page for the fixes that are added in each update.
## Next steps
spring-apps Tutorial Managed Identities Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/tutorial-managed-identities-key-vault.md
Previously updated : 05/07/2023
Last updated : 02/01/2024
+zone_pivot_groups: spring-apps-tier-selection
# Connect Azure Spring Apps to Key Vault using managed identities
**This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-
-This article shows you how to create a managed identity for an app deployed to Azure Spring Apps and use it to access Azure Key Vault.
+This article shows you how to create a system-assigned or user-assigned managed identity for an app deployed to Azure Spring Apps and use it to access Azure Key Vault.
Azure Key Vault can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets for your app. You can create a managed identity in Microsoft Entra ID, and authenticate to any service that supports Microsoft Entra authentication, including Key Vault, without having to display credentials in your code.
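As a preview of the flow this article covers, the identity can be enabled and granted secret access in two CLI calls. A minimal sketch, assuming the `spring` CLI extension is installed and using placeholder resource names:

```azurecli
# Enable a system-assigned managed identity on the app and capture its principal ID.
export PRINCIPAL_ID=$(az spring app identity assign \
    --resource-group <resource-group> \
    --service <spring-apps-instance> \
    --name <app-name> \
    --system-assigned \
    --query identity.principalId --output tsv)

# Let that identity read secrets from the Key Vault.
az keyvault set-policy \
    --name <key-vault-name> \
    --object-id ${PRINCIPAL_ID} \
    --secret-permissions get list
```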
The following video describes how to manage secrets using Azure Key Vault.
## Prerequisites
-* [Sign up for an Azure subscription](https://azure.microsoft.com/free/)
-* [Install the Azure CLI version 2.45.0 or higher](/cli/azure/install-azure-cli)
-* [Install Maven 3.0 or higher](https://maven.apache.org/download.cgi)
+
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md).
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.55.0 or higher.
+++
+- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.55.0 or higher.
++
+## Provide names for each resource
+
+Create variables to hold the resource names by using the following commands. Be sure to replace the placeholders with your own values.
+
+### [System-assigned managed identity](#tab/system-assigned-managed-identity)
+
+```azurecli
+export LOCATION=<location>
+export RESOURCE_GROUP=myresourcegroup
+export SPRING_APPS=myasa
+export APP=springapp-system
+export KEY_VAULT=<your-keyvault-name>
+```
+
+### [User-assigned managed identity](#tab/user-assigned-managed-identity)
+
+```azurecli
+export LOCATION=<location>
+export RESOURCE_GROUP=myresourcegroup
+export SPRING_APPS=myasa
+export APP=springapp-user
+export KEY_VAULT=<your-keyvault-name>
+export USER_ASSIGNED_IDENTITY=<user-assigned-identity-name>
+```
++

## Create a resource group
-A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and Spring Cloud using the command [az group create](/cli/azure/group#az-group-create):
+A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group to contain both the Key Vault and the Azure Spring Apps instance by using the [az group create](/cli/azure/group#az-group-create) command, as shown in the following example:
```azurecli
-az group create --name "myResourceGroup" --location "EastUS"
+az group create --name ${RESOURCE_GROUP} --location ${LOCATION}
```

## Set up your Key Vault
-To create a Key Vault, use the command [az keyvault create](/cli/azure/keyvault#az-keyvault-create):
+To create a Key Vault, use the [az keyvault create](/cli/azure/keyvault#az-keyvault-create) command, as shown in the following example:
-> [!Important]
-> Each Key Vault must have a unique name. Replace *\<your-keyvault-name>* with the name of your Key Vault in the following examples.
+> [!IMPORTANT]
+> Each Key Vault must have a unique name.
```azurecli
az keyvault create \
- --resource-group <your-resource-group-name> \
- --name "<your-keyvault-name>"
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${KEY_VAULT}
```
-Make a note of the returned `vaultUri`, which is in the format `https://<your-keyvault-name>.vault.azure.net`. You use this value in the following step.
+Use the following command to show the Key Vault URL. Make a note of the returned URL, which is in the format `https://${KEY_VAULT}.vault.azure.net`. You use this value in the following step.
-You can now place a secret in your Key Vault with the command [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set):
+```azurecli
+az keyvault show \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${KEY_VAULT} \
+ --query properties.vaultUri --output tsv
+```
+
+You can now place a secret in your Key Vault by using the [az keyvault secret set](/cli/azure/keyvault/secret#az-keyvault-secret-set) command, as shown in the following example:
```azurecli
az keyvault secret set \
- --vault-name "<your-keyvault-name>" \
+ --vault-name ${KEY_VAULT} \
--name "connectionString" \ --value "jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;" ``` ## Create Azure Spring Apps service and app
-After installing corresponding extension, create an Azure Spring Apps instance with the Azure CLI command `az spring create`.
+After you install all corresponding extensions, use the following command to create an Azure Spring Apps instance:
+```azurecli
+az extension add --upgrade --name spring
+az spring create \
- --resource-group <your-resource-group-name> \
- --name <your-Azure-Spring-Apps-instance-name>
+ --resource-group ${RESOURCE_GROUP} \
+ --sku Enterprise \
+ --name ${SPRING_APPS}
+```
+
+### [System-assigned managed identity](#tab/system-assigned-managed-identity)
+
+The following example creates the app with a system-assigned managed identity, as requested by the `--system-assigned` parameter:
+
+```azurecli
+az spring app create \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
+ --assign-endpoint true \
+ --system-assigned
+export MANAGED_IDENTITY_PRINCIPAL_ID=$(az spring app show \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
+ --query identity.principalId --output tsv)
+```
+
+### [User-assigned managed identity](#tab/user-assigned-managed-identity)
+
+Use the following command to create a user-assigned managed identity:
+
+```azurecli
+az identity create --resource-group ${RESOURCE_GROUP} --name ${USER_ASSIGNED_IDENTITY}
+export MANAGED_IDENTITY_PRINCIPAL_ID=$(az identity show \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${USER_ASSIGNED_IDENTITY} \
+ --query principalId --output tsv)
+export USER_IDENTITY_RESOURCE_ID=$(az identity show \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${USER_ASSIGNED_IDENTITY} \
+ --query id --output tsv)
+```
+
+The following command creates the app with a user-assigned managed identity, as requested by the `--user-assigned` parameter:
+
+```azurecli
+az spring app create \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
+ --user-assigned $USER_IDENTITY_RESOURCE_ID \
+ --assign-endpoint true
+```
+++++
+```azurecli
+az extension add --upgrade --name spring
+az spring create \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${SPRING_APPS}
```

### [System-assigned managed identity](#tab/system-assigned-managed-identity)
The following example creates an app named `springapp` with a system-assigned managed identity:
```azurecli
az spring app create \
- --resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Apps-instance-name> \
- --name "springapp" \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
    --assign-endpoint true \
    --runtime-version Java_17 \
    --system-assigned
export MANAGED_IDENTITY_PRINCIPAL_ID=$(az spring app show \
- --resource-group "<your-resource-group-name>" \
- --service "<your-Azure-Spring-Apps-instance-name>" \
- --name "springapp" \
- | jq -r '.identity.principalId')
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
+ --query identity.principalId --output tsv)
```

### [User-assigned managed identity](#tab/user-assigned-managed-identity)
-First, create a user-assigned managed identity in advance with its resource ID set to `$USER_IDENTITY_RESOURCE_ID`. Save the client ID for the property configuration.
+Use the following command to create a user-assigned managed identity:
-
-```bash
-export MANAGED_IDENTITY_PRINCIPAL_ID=<principal-ID-of-user-assigned-managed-identity>
-export USER_IDENTITY_RESOURCE_ID=<resource-ID-of-user-assigned-managed-identity>
+```azurecli
+az identity create --resource-group ${RESOURCE_GROUP} --name ${USER_ASSIGNED_IDENTITY}
+export MANAGED_IDENTITY_PRINCIPAL_ID=$(az identity show \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${USER_ASSIGNED_IDENTITY} \
+ --query principalId --output tsv)
+export USER_IDENTITY_RESOURCE_ID=$(az identity show \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${USER_ASSIGNED_IDENTITY} \
+ --query id --output tsv)
```
-The following example creates an app named `springapp` with a user-assigned managed identity, as requested by the `--user-assigned` parameter.
+The following command creates the app with a user-assigned managed identity, as requested by the `--user-assigned` parameter.
```azurecli
az spring app create \
- --resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Apps-instance-name> \
- --name "springapp" \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
    --user-assigned $USER_IDENTITY_RESOURCE_ID \
    --runtime-version Java_17 \
    --assign-endpoint true
-az spring app show \
- --resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Apps-instance-name> \
- --name "springapp"
```
-Make a note of the returned URL, which is in the format `https://<your-app-name>.azuremicroservices.io`. You use this value in the following step.
## Grant your app access to Key Vault
Use the following command to grant proper access in Key Vault for your app:
```azurecli
az keyvault set-policy \
- --name "<your-keyvault-name>" \
+ --name ${KEY_VAULT} \
    --object-id ${MANAGED_IDENTITY_PRINCIPAL_ID} \
    --secret-permissions set get list
```

> [!NOTE]
-> For system-assigned managed identity case, use `az keyvault delete-policy --name "<your-keyvault-name>" --object-id ${MANAGED_IDENTITY_PRINCIPAL_ID}` to remove the access for your app after system-assigned managed identity is disabled.
+> For system-assigned managed identity, use `az keyvault delete-policy --name ${KEY_VAULT} --object-id ${MANAGED_IDENTITY_PRINCIPAL_ID}` to remove the access for your app after system-assigned managed identity is disabled.
## Build a sample Spring Boot app with Spring Boot starter
This app has access to get secrets from Azure Key Vault. Use the Azure Key Vault Secrets Spring Boot starter.
1. To use managed identity for an app deployed to Azure Spring Apps, add properties with the following content to the *src/main/resources/application.properties* file.
-### [System-assigned managed identity](#tab/system-assigned-managed-identity)
+ ### [System-assigned managed identity](#tab/system-assigned-managed-identity)
-```properties
-spring.cloud.azure.keyvault.secret.property-sources[0].endpoint=https://<your-keyvault-name>.vault.azure.net
-spring.cloud.azure.keyvault.secret.property-sources[0].credential.managed-identity-enabled=true
-```
+ ```properties
+ spring.cloud.azure.keyvault.secret.property-sources[0].endpoint=<your-keyvault-url>
+ spring.cloud.azure.keyvault.secret.property-sources[0].credential.managed-identity-enabled=true
+ ```
-### [User-assigned managed identity](#tab/user-assigned-managed-identity)
+ ### [User-assigned managed identity](#tab/user-assigned-managed-identity)
-```properties
-spring.cloud.azure.keyvault.secret.property-sources[0].endpoint=https://<your-keyvault-name>.vault.azure.net
-spring.cloud.azure.keyvault.secret.property-sources[0].credential.managed-identity-enabled=true
-spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Client ID of user-assigned managed identity}
-```
+ Use the following command to query the client ID of the user-assigned managed identity:
-
+ ```azurecli
+ az identity show \
+ --resource-group ${RESOURCE_GROUP} \
+ --name ${USER_ASSIGNED_IDENTITY} \
+ --query clientId --output tsv
+ ```
+
+ ```properties
+ spring.cloud.azure.keyvault.secret.property-sources[0].endpoint=<your-keyvault-url>
+ spring.cloud.azure.keyvault.secret.property-sources[0].credential.managed-identity-enabled=true
+ spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id=<client-ID-of-user-assigned-managed-identity>
+ ```
+
+
> [!NOTE]
> You must add the key vault URL to the *application.properties* file as shown previously. Otherwise, the key vault URL might not be captured during runtime.
-1. Add the following code example to *src/main/java/com/example/demo/DemoApplication.java*. This code retrieves the connection string from the key vault.
+1. Update *src/main/java/com/example/demo/DemoApplication.java* with the following code example. This code retrieves the connection string from the Key Vault.
```Java
package com.example.demo;
spring.cloud.azure.keyvault.secret.property-sources[0].credential.client-id={Client ID of user-assigned managed identity}
</dependency>
```
-1. Use the following command to package your sample app.
- ```bash
- ./mvnw clean package -DskipTests
+5. Use the following command to deploy your app to Azure Spring Apps:
+
+ ```azurecli
+ az spring app deploy \
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
+ --source-path
```
-1. Now you can deploy your app to Azure with the following command:
++
+5. Use the following command to deploy your app to Azure Spring Apps:
```azurecli az spring app deploy \
- --resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Apps-instance-name> \
- --name "springapp" \
- --artifact-path target/demo-0.0.1-SNAPSHOT.jar
+ --resource-group ${RESOURCE_GROUP} \
+ --service ${SPRING_APPS} \
+ --name ${APP} \
+ --source-path \
+ --build-env BP_JVM_VERSION=17
```
-1. To test your app, access the public endpoint or test endpoint by using the following command:
+
+6. To test your app, access the public endpoint or test endpoint by using the following command:
```bash
- curl https://myspringcloud-springapp.azuremicroservices.io/get
+ curl https://${SPRING_APPS}-${APP}.azuremicroservices.io/get
```
- You're shown the message `jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+ The following message is returned in the response body: `jdbc:sqlserver://SERVER.database.windows.net:1433;database=DATABASE;`.
+
+## Clean up resources
+
+Use the following command to delete the entire resource group, including the newly created service instance:
+
+```azurecli
+az group delete --name ${RESOURCE_GROUP} --yes
+```
## Next steps
-* [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/managed-identity-storage-blob)
-* [How to enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
-* [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
-* [Authenticate Azure Spring Apps with Key Vault in GitHub Actions](./github-actions-key-vault.md)
+- [How to access Storage blob with managed identity in Azure Spring Apps](https://github.com/Azure-Samples/azure-spring-apps-samples/tree/main/managed-identity-storage-blob)
+- [Enable system-assigned managed identity for applications in Azure Spring Apps](./how-to-enable-system-assigned-managed-identity.md)
+- [What are managed identities for Azure resources?](/entra/identity/managed-identities-azure-resources/overview)
+- [Authenticate Azure Spring Apps with Key Vault in GitHub Actions](./github-actions-key-vault.md)
storage-actions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/overview.md
+
+ Title: About Azure Storage Actions Preview
+
+description: Learn how to perform operations on blobs in Azure Storage accounts based on a set of conditions that you define.
++++++ Last updated : 01/17/2024++++
+# What is Azure Storage Actions Preview?
+
+Azure Storage Actions is a serverless framework that you can use to perform common data operations on millions of objects across multiple storage accounts.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The preview release of Azure Storage Actions enables you to create _storage tasks_ that can perform operations on blobs in Azure Storage accounts based on a set of conditions that you define. Storage tasks can process millions of objects in a storage account without provisioning additional compute capacity and without requiring you to write code.
+
+## Anatomy of a storage task
+
+A storage task contains a set of _conditions_, _operations_, and _assignments_. The following table describes each component of a storage task.
+
+| Component | Description |
+|||
+| Conditions | A _condition_ is a collection of one or more _clauses_. Each clause contains a property, a value, and an operator. When the storage task runs, it uses the operator to compare a property with a value to determine whether a clause is met by the target object. For example, a clause might evaluate whether the `creation-time` property of a blob is more than five days in the past. |
+| Operations | An operation is the action a storage task performs on each object that meets the defined set of conditions. Deleting a blob is an example of an operation. |
+| Assignments | An assignment identifies a storage account and a subset of objects to target in that account. It also specifies when the task runs and where execution reports are stored. |
+
+## How to use a storage task
+
+First, define the conditions and operations of a storage task. Then, assign that task to one or more storage accounts. Monitor task runs by using metrics, charts, and reports.
+
+### Define a storage task
+
+Start by creating a storage task. To provision a storage task, you must define at least one condition and one operation. After the task is created, you can edit those conditions and operations or add more of them by using a visual designer.
+
+See these articles to learn how to define a storage task:
+
+- [Create a storage task](storage-tasks/storage-task-create.md)
+- [Define storage task conditions and operations](storage-tasks/storage-task-conditions-operations-edit.md)
+- [Properties, operators, and operations in storage task conditions](storage-tasks/storage-task-properties-operators-operations.md)
+
+### Assign a storage task
+
+You can assign a storage task to any storage account in which your identity is assigned the [Storage Blob Data Owner](../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role. As you create an assignment, you select a storage account and assign a role to the system-assigned managed identity of the task. That identity is created for you automatically when the task is provisioned. The role that you assign to that identity must enable it to perform the operations that are defined in the task.
+
+A storage task can be assigned to a storage account only by an owner of that account. Therefore, if the task that you define is useful to an owner of another storage account, you must grant that user access to the storage task. Then, that user can assign your task to their storage account. You can grant a user access to your storage task by assigning an Azure role to their user identity.
+
+See these articles to learn how to assign a storage task:
+
+- [Create and manage a storage task assignment](storage-tasks/storage-task-assignment-create.md)
+- [Azure roles for storage task assignments](storage-tasks/storage-task-authorization-roles.md)
+
+### Monitor storage task runs
+
+Tasks run asynchronously according to the schedule that you specify in the assignment. An execution report is created when the run completes. That report itemizes the results of the task run on each object that was targeted by the task.
+
+The overview page of the task presents metrics and visualizations that summarize how many objects met the task condition, and the result of the operations attempted by the storage task on each object. The charts enable you to quickly drill into a specific execution instance.
+
+See these articles to learn how to monitor task runs:
+
+- [Analyze storage task runs](storage-tasks/storage-task-runs.md)
+- [Monitor Azure Storage Actions](storage-tasks/monitor-storage-tasks.md)
+
+### Handle storage task events
+
+Azure Storage Actions events allow applications to react to events, such as the completion of a storage task run, without the need for complicated code or expensive and inefficient polling services.
+
+Azure Storage Actions events are pushed using [Azure Event Grid](https://azure.microsoft.com/services/event-grid/) to subscribers such as Azure Functions, Azure Logic Apps, or even to your own HTTP listener. Event Grid provides reliable event delivery to your applications through rich retry policies and dead-lettering. Event Grid uses [event subscriptions](../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. First, subscribe an endpoint to an event. Then, when an event is triggered, the Event Grid service sends data about that event to the endpoint.
+
+See the [Azure Storage Actions events schema](../event-grid/event-schema-storage-actions.md?toc=/azure/storage-actions/toc.json) article to view the full list of the events that Azure Storage Actions supports.
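As a hedged sketch (treating the storage task resource as the event source, which is an assumption here), an Event Grid subscription that pushes these events to your own HTTPS endpoint might look like the following:

```azurecli
# Sketch: route events from a storage task to a webhook endpoint.
# The resource ID and endpoint URL are placeholders.
az eventgrid event-subscription create \
    --name storage-task-events \
    --source-resource-id <storage-task-resource-id> \
    --endpoint https://contoso.example.com/api/storage-task-events
```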
+
+## Supported regions
+
+Azure Storage tasks are supported in the following public regions:
+
+- France Central
+- Canada Central
+
+
+## Next steps
+
+- [Quickstart: Create, assign, and run a storage task by using the Azure portal](storage-tasks/storage-task-quickstart-portal.md)
+- [Known issues with storage tasks](storage-tasks/storage-task-known-issues.md)
storage-actions Monitor Storage Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/monitor-storage-tasks.md
+
+ Title: Monitor Azure Storage Actions Preview
+
+description: Start here to learn how to monitor Azure Storage Actions Preview.
++++++ Last updated : 01/17/2024++++
+# Monitor Azure Storage Actions Preview
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by Azure Storage Actions. Azure Storage Actions uses [Azure Monitor](/azure/azure-monitor/overview). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+
+## Monitoring overview page in Azure portal
+
+The following metrics appear in the **Overview** page of each storage task.
+
+- Total number of runs
+- Total number of runs that resulted in an error
+- Total number of objects targeted by task runs
+- Total number of objects operated on
+- Total number of objects where an operation succeeded
+- Total number of objects where an operation failed
+
+The following image shows how these metrics appear in the **Overview** page.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the metrics that appear in the Overview page of a storage task.](../media/storage-tasks/monitor-storage-tasks/overview-page-storage-task.png)
+
+These metrics include runs from multiple storage task assignments, but only assignments that target storage accounts to which you have read permission. These metrics appear as tiles that you can select to view a list of the task runs that make up the metric. Each listed task provides a link to a detailed execution report. For more information about how to drill into metrics and reports for each task run, see [Analyze storage task runs](storage-task-runs.md).
+
+## Monitoring data
+
+Azure Storage Actions collects the same kinds of monitoring data as other Azure resources, as described in [Monitoring data from Azure resources](/azure/azure-monitor/essentials/monitor-azure-resource#monitoring-data-from-azure-resources). For information about the metrics and logs that are created by Azure Storage tasks, see [Monitoring Azure Storage Actions data reference](storage-tasks-monitor-data-reference.md).
+
+## Collection and routing
+
+Platform metrics and the activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting. To learn how to create a diagnostic setting, see [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings).
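For example, a minimal sketch of a diagnostic setting that routes platform metrics to a Log Analytics workspace (both resource IDs are placeholders):

```azurecli
# Sketch: send all platform metrics for a storage task to Log Analytics.
az monitor diagnostic-settings create \
    --name storage-task-diagnostics \
    --resource <storage-task-resource-id> \
    --workspace <log-analytics-workspace-id> \
    --metrics '[{"category": "AllMetrics", "enabled": true}]'
```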
+
+## Analyzing metrics
+
+You can analyze metrics for Azure Storage Actions, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. For details on using this tool, see [Getting started with Azure Metrics Explorer](/azure/azure-monitor/essentials/metrics-getting-started).
+
+For a list of the platform metrics collected for Azure Storage Actions, see [Monitoring Azure Storage Actions metrics](storage-tasks-monitor-data-reference.md#metrics). You can also see a list of [all resource metrics supported in Azure Monitor](/azure/azure-monitor/essentials/metrics-supported).
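You can also query metric values from the command line; a hedged sketch, where the metric name is a placeholder for one of the names listed in the data reference:

```azurecli
# Sketch: list hourly values of a storage task metric in a table.
az monitor metrics list \
    --resource <storage-task-resource-id> \
    --metric "<metric-name>" \
    --interval PT1H \
    --output table
```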
+
+## Analyzing logs
+
+Azure Storage Actions doesn't support resource logs; it supports only the [activity log](/azure/azure-monitor/essentials/activity-log). The activity log is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can run much more complex queries by using Log Analytics.
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview) and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts).
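As a minimal sketch, assuming a hypothetical metric name and an existing action group (neither is a documented value), a metric alert might be created like this:

```azurecli
# Sketch: alert when a placeholder storage task metric exceeds zero in an hour.
# <metric-name> and the action group ID are assumptions, not documented values.
az monitor metrics alert create \
    --name storage-task-alert \
    --resource-group <resource-group> \
    --scopes <storage-task-resource-id> \
    --condition "total <metric-name> > 0" \
    --window-size 1h \
    --evaluation-frequency 15m \
    --action <action-group-resource-id>
```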
+
+## Next steps
+
+- See [Monitoring Azure Storage Actions data reference](storage-tasks-monitor-data-reference.md) for a reference of the metrics, logs, and other important values created by Azure Storage Actions.
+
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
storage-actions Storage Task Assignment Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-assignment-create.md
+
+ Title: Create and manage a storage task assignment
+
+description: Learn how to create an assignment, and then enable that assignment to run.
+++++ Last updated : 01/17/2024++
+# Create and manage a storage task assignment
+
+An _assignment_ identifies a storage account and a subset of objects in that account that the task will target. An assignment also defines when the task runs and where execution reports are stored.
+
+This article helps you create an assignment, and then enable that assignment to run.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Create an assignment
+
+Create an assignment for each storage account you want to target. A storage task can contain up to 50 assignments.
+
+> [!NOTE]
+> During the public preview, you can target only storage accounts that are in the same region as the storage tasks.
+
+### Create an assignment from the storage task menu
+
+You can create an assignment in the context of a storage task. This option can be convenient if you're the task author and you want to target multiple storage accounts. For each assignment you'll identify the storage account that you want to target.
+
+Navigate to the storage task in the Azure portal and then under **Storage task management**, select **Assignments**, and then select **Configure assignments**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Assignments page that appears in the context of the storage task.](../media/storage-tasks/storage-task-assignment-create/assignment-create.png)
+
+In the **Assignments** page, select **+ Add assignment**. The **Add assignment** pane appears.
+
+### Create an assignment from the storage account menu
+
+You can also create an assignment in the context of a storage account. This option can be convenient if you want to use an existing task to process objects in your storage account. For each assignment you'll identify the storage task that you want to assign to your account.
+
+Navigate to the storage account in the Azure portal and then under **Data management**, select **Storage tasks**.
+
+In the **Storage tasks** page, select the **Task assignment** tab, select **+ Create assignment**, and then select **+ Add assignment**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Assignments page that appears in the context of a storage account.](../media/storage-tasks/storage-task-assignment-create/assignment-create-2.png)
+
+The **Add assignment** pane appears.
+
+## Select a scope
+
+In the **Select scope** section, select a subscription and name the assignment. Then, select the storage account that you want to target.
+
+If you opened the **Add assignment** pane in the context of the storage account, you'll select a storage task instead of the storage account.
+
+The following table describes each field in the **Select Scope** section:
+
+| Section | Field | Required or optional | Description |
+|--|--|--|--|
+| Select scope | Subscription | Required | The subscription of the storage account that you want to add to this assignment. |
+| Select scope | Select a storage account | Required | The storage account that you want to add to this assignment. You must be an owner of the storage account. This field appears only if you create the assignment in the context of a storage task.|
+| Select scope | Select a storage task | Required | The storage task to which you would like to assign your storage account. This field appears only if you create the assignment in the context of a storage account.|
+| Select scope | Assignment name | Required | The name of the assignment. Assignment names must be between 2 and 62 characters in length and may contain only letters and numbers. |
+
+## Add a role assignment
+
+In the **Role assignment** section, in the **Role** drop-down list, select the role that you want to assign to the system-assigned managed identity of the storage task. Only roles that are assigned to your user identity appear in this list. To learn more, see [Azure roles for storage tasks](storage-task-authorization-roles.md).
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Role assignment section of the assignment pane.](../media/storage-tasks/storage-task-assignment-create/assignment-role.png)
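If you prefer the command line, the equivalent grant can be made with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create); a minimal sketch, assuming you've already looked up the principal ID of the task's managed identity and the resource ID of the target storage account:

```azurecli
# Sketch: grant the task's managed identity a data role on the storage account.
# Both IDs are placeholders that you look up beforehand.
az role assignment create \
    --assignee <principal-id-of-task-managed-identity> \
    --role "Storage Blob Data Owner" \
    --scope <storage-account-resource-id>
```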
+
+## Add a filter
+
+In the **Filter objects** section, choose whether you want to target a subset of blobs based on a filter. Filters help you narrow the scope of execution. If you want the task to evaluate all of the containers and blobs in an account, then you can select the **Do not filter** option. The following example uses a filter to target only blobs that exist in a container that is named `mycontainer`.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Filter objects section of the Add assignment pane.](../media/storage-tasks/storage-task-assignment-create/assignment-pane-filter-prefix.png)
+
+The following table describes each field:
+
+| Section | Field | Required or optional | Description |
+|--|--|--|--|
+| Filter objects | Filter by | Required | Option to either filter objects by using a prefix or to run the task against the entire storage account. |
+| Filter objects | Blob prefixes | Optional | The string prefix that is used to narrow the scope of blobs that are evaluated by the task. This field is required only if you choose to filter by using a blob prefix. |
+
+## Define the trigger
+
+In the **Trigger details** section, select how often you'd like this task to run. You can choose to run the task only once, or to run it on a recurring schedule. If you choose a recurring schedule, choose a start and end time and specify the number of days between each run. You can also specify where you'd like to store the execution reports.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Triggers section of the Add assignment pane.](../media/storage-tasks/storage-task-assignment-create/assignment-trigger.png)
+
+The following table describes each field:
+
+| Section | Field | Required or optional | Description |
+|--|--|--|--|
+| Trigger details | Run frequency | Required | Option to either run the task one time or multiple times. |
+| Trigger details | Start from | Required | The date and time to begin running the task. |
+| Trigger details | End by | Required | The date and time to stop running the task. |
+| Trigger details | Repeat every (in days) | Required | The interval in days between each run. |
+| Trigger details | Report export container | Required | The container where task execution reports are stored. |
+
+## Save the assignment
+
+Select the **Add** button to create the assignment.
+
+The **Add assignment** pane closes. When deployment is complete, the assignment appears in the **Assignments** page. If you don't see the assignment in that page, select the **Refresh** button.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the assignment appearing in the Assignments page.](../media/storage-tasks/storage-task-assignment-create/assignment-refresh.png)
+
+## Enable an assignment
+
+The assignment is disabled by default. To enable the assignment so that it will be scheduled to run, select the checkbox that appears beside the assignment, and then select **Enable**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Enable button in the Assignments page.](../media/storage-tasks/storage-task-assignment-create/assignment-enable.png)
+
+After the task runs, an execution report is generated and then stored in the container that you specified when you created the assignment. For more information about that report as well as how to view metrics that capture the number of objects targeted, the number of operations attempted, and the number of operations that succeeded, see [Analyze storage task runs](storage-task-runs.md).
+
+## Edit an assignment
+
+An assignment becomes a subresource of the targeted storage account. Therefore, after you create the assignment, you can edit only its run frequency. The other fields of an assignment become read-only. The **Single run (only once)** option becomes read-only as well.
+
+- To edit the run frequency of an assignment in the context of a storage task, navigate to the storage task in the Azure portal and then under **Storage task management**, select **Assignments**.
+
+- To edit the run frequency of an assignment in the context of a storage account, navigate to the storage account in the Azure portal and then under **Data management**, select **Storage tasks**.
+
+## See also
+
+- [Azure Storage Actions overview](../overview.md)
+- [Analyze storage task runs](storage-task-runs.md)
storage-actions Storage Task Authorization Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-authorization-roles.md
+
+ Title: Azure roles for storage tasks
+
+description: Learn about the least privileged built-in Azure roles or RBAC actions required to read, update, delete and assign a storage task.
++++++ Last updated : 01/17/2024++++
+# Azure roles for storage tasks
+
+This article describes the least privileged built-in Azure roles or RBAC actions required to read, update, delete and assign a storage task.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Permission to read, edit, or delete a task
+
+You must assign a role to any security principal in your organization that needs access to the storage task. To learn how to assign an Azure role, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+To give users or applications access to the storage task, choose an Azure built-in or custom role that has the permissions necessary to read or edit the task. If you prefer to use a custom role, make sure that your role contains the RBAC actions necessary to read or edit the task. Use the following table as a guide.
+
+| Permission level | Azure built-in role | RBAC actions for custom roles |
+|--|--|--|
+| List and read storage tasks | `Contributor` | `Microsoft.StorageActions/storageTasks/read` |
+| Create and update storage tasks | `Contributor` | `Microsoft.StorageActions/storageTasks/write` |
+| Delete storage tasks | `Contributor` | `Microsoft.StorageActions/storageTasks/delete` |
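For example, a hedged sketch of assigning the built-in role to a user at the scope of a single storage task (the subscription, resource group, and task names are placeholders):

```azurecli
# Sketch: let a user read and edit one storage task by granting Contributor
# scoped to that task resource.
az role assignment create \
    --assignee user@contoso.com \
    --role "Contributor" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.StorageActions/storageTasks/<task-name>"
```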
+
+## Permission to assign a task
+
+A task _assignment_ identifies a storage account and a subset of objects in that account that the storage task will target. An assignment also defines when the task runs and where execution reports are stored. For step-by-step guidance, see [Create and manage a Storage task assignment](storage-task-assignment-create.md).
+
+To create an assignment, your identity must be assigned a custom role that contains the following RBAC actions:
+
+- The `Microsoft.Authorization/roleAssignments/write` action.
+
+- All of the RBAC actions that are available in the `Microsoft.Storage/storageAccounts` set of RBAC actions.
+
+To learn how to create a custom role, see [Azure custom roles](../../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role).
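The following is a minimal sketch of such a custom role and the command that creates it. The role name, file name, and assignable scope are placeholders; narrow the actions to fit your policy:

```azurecli
# Sketch: define and create a custom role that can create task assignments.
cat > storage-task-assigner.json <<'EOF'
{
  "Name": "Storage Task Assigner",
  "IsCustom": true,
  "Description": "Can create storage task assignments.",
  "Actions": [
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Storage/storageAccounts/*"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}
EOF
az role definition create --role-definition @storage-task-assigner.json
```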
+
+## Permission for a task to perform operations
+
+As you create an assignment, you must choose an Azure built-in or custom role that has the permissions necessary to perform the specified operations on the target storage account or storage account container. You can choose only roles that are assigned to your user identity. If you prefer to use a custom role, you must make sure that your role contains the RBAC actions necessary to perform the operations.
+
+The following table shows the least privileged built-in Azure role as well as the RBAC actions required by each operation.
+
+| Permission | Built-in role | RBAC actions for a custom role |
+||||
+| SetBlobTier | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)| Microsoft.Storage/storageAccounts/blobServices/read<br>Microsoft.Storage/storageAccounts/blobServices/write<br>Microsoft.Storage/storageAccounts/blobServices/containers/read<br>Microsoft.Storage/storageAccounts/blobServices/containers/write<br>Microsoft.Storage/storageAccounts/blobServices/containers/delete |
+| SetBlobExpiry | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write |
+| SetBlobTags | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write |
+| SetBlobImmutabilityPolicy | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/write<br>Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/read<br>Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/delete<br>Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/extend/action<br>Microsoft.Storage/storageAccounts/blobServices/containers/immutabilityPolicies/lock/action |
+| SetBlobLegalHold | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | Microsoft.Storage/storageAccounts/blobServices/containers/setLegalHold/action<br>Microsoft.Storage/storageAccounts/blobServices/containers/clearLegalHold/action |
+| DeleteBlob | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | Microsoft.Storage/storageAccounts/blobServices/containers/delete |
+| UndeleteBlob | [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | Microsoft.Storage/storageAccounts/blobServices/containers/write<br>Microsoft.Storage/storageAccounts/blobServices/containers/delete |
+
+## See also
+
+- [Create and manage an assignment](storage-task-assignment-create.md)
storage-actions Storage Task Conditions Operations Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-conditions-operations-edit.md
+
+ Title: Define storage task conditions & operations
+
+description: Learn how to define the conditions and operations of a storage task by using a visual editor.
+++++ Last updated : 01/17/2024++
+# Define storage task conditions and operations
+
+You can use a visual editor to define the conditions and operations of a storage task.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+An _operation_ is an action taken on each object that meets the conditions defined in the task. A _condition_ contains one or more conditional _clauses_. Each clause defines the relationship between a property and a value. To execute an operation defined in the storage task, the terms of that relationship must be met by each object.
+
+To learn more, see [Properties, operators, and operations in storage task conditions](storage-task-properties-operators-operations.md).
+
+## Open the conditions editor
+
+Navigate to the storage task in the Azure portal and then under **Storage task management**, select **Conditions**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Conditions button and the Conditions editor.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-condition-editor.png)
+
+The **Visual builder** tab of the **Conditions** pane appears.
+
+## Define conditions
+
+Define a condition by adding clauses. A clause defines the relationship between a property and a value. To execute an operation defined in the storage task, the terms of that relationship must be met by each object.
+
+### Add and remove clauses
+
+To add a clause, select **Add new clause**. To remove a clause, select the delete icon (:::image type="icon" source="../media/storage-tasks/storage-task-conditions-operations-edit/conditions-delete-icon.png":::) that appears next to it.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Add new clause button and three clauses that were added to the list of clauses.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-add-conditions.png)
+
+### Specify the terms of a clause
+
+To define a clause, choose a property, specify a value for that property, and then choose an operator that relates them together.
+
+#### Choose a property
+
+In the **Blob property** drop-down list, choose a property. See [Supported blob properties](storage-task-properties-operators-operations.md#supported-properties-in-a-clause).
+
+The following example selects the **Blob name** property.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the property drop-down list of a clause.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-condition-choose-property.png)
+
+#### Choose a value and operator
+
+In the **Property value** box, enter a value and in the **Operator** drop-down list, choose an operator. See [Supported Operators](storage-task-properties-operators-operations.md#supported-operators-in-a-clause).
+
+The following example specifies a value of `.log` along with the **Ends with** operator. This condition allows the operation defined in this storage task to execute only on blobs that have a `.log` file extension.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of an example condition in the visual editor.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-blob-name-condition.png)
+
+#### Use a wildcard in string values
+
+You can use the `*` and `?` wildcard characters in the value of a string property. The `*` character represents zero or more characters while a `?` character represents exactly one character.
+
+For example, if you want your clause to evaluate to true only for blobs that are Word documents, you would use the string `*.docx`. However, if you want only documents that are named with a single character, such as `1.docx` or `2.docx`, you would use the string `?.docx`.
+
+You can use the `*` or `?` anywhere in a string. You can escape these characters by adding a `\` just before the character.
+
+#### Reference a value from object metadata
+
+Clauses that include a date and time property can reference a value from the metadata of a container or an index tag of a blob. These values are obtained dynamically at runtime when the task executes.
+
+In your storage account, you can add a key to the metadata of a container or to the index tags of a blob. The value of that key must be an [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601#Durations)-formatted time interval. For example, you might add a key named `retainfor` along with a string value of `PT5M`, which represents an interval of five minutes.
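For example, a minimal sketch of setting such a key on a container's metadata with the Azure CLI (the account and container names are placeholders):

```azurecli
# Sketch: store an ISO 8601 duration in container metadata for clauses to read.
az storage container metadata update \
    --account-name <storage-account> \
    --name mycontainer \
    --metadata retainfor=PT5M \
    --auth-mode login
```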
+
+To reference a key, select the **Edit** link that appears in the **Property value** column. Then, in the **Select a value** dialog box, select **Container metadata** or **Blob Index tags**.
+
+The following example adds the **Creation time** property, the **Earlier than** operator, and references a key named `retainFor` in the index tags of each blob that is evaluated.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of specifying a key in container metadata as part of the condition definition.](../media/storage-tasks/storage-task-conditions-operations-edit/parameterized-query.png)
+
+This condition tests whether a blob was created earlier than a certain time duration relative to now (the current date and time). For example, if the value retrieved from the `retainFor` tag is five minutes, then this condition checks if the blob was created more than 5 minutes ago.
+
+> [!NOTE]
+> In the **Code** tab, you'll see the logic of this example represented in a slightly different way. The generated code tests whether the current time is greater than creation time plus the time interval. Both representations yield the same result.
+
+If the key is not present for an evaluated object, then the condition evaluates to false. If the key value is a string that does not conform to the [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601#Durations) standard, then an error is reported in the execution report.
+
+#### Apply And / Or to a clause
+
+You can add **And** or **Or** to a clause. Specify **And** if you want to target objects that meet the criteria in both the current clause and the previous clause of the condition. Specify **Or** to target objects that meet the criterion in either the current clause or the previous clause.
+
+The following example shows clauses that use **And**. In this example, the storage task targets objects that have a `.log` extension and which have a tag named `Archive-Status` set to `Ready`.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of clauses that use the AND operators.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-condition-and-operator.png)
+
+### Change the order of clauses
+
+You can arrange clauses in an order that you believe will improve the performance of a task run. For example, instead of first testing all blobs in an account against a name filter, you might elevate a clause that targets a specific container. That small adjustment can prevent the task from performing unnecessary evaluations.
+
+First, select the clause. Then, select **Move clause up** or **Move clause down** to change its position in the list.
+
+The following example shows the result of selecting a clause and then selecting **Move clause up**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of clause appearing in a new position in the list.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-move-clause-up.png)
+
+### Group and ungroup clauses
+
+Grouped clauses operate as a single unit separate from the rest of the clauses. Grouping clauses is similar to putting parentheses around a mathematical equation or logic expression. The **And** or **Or** operator for the first clause in the group applies to the whole group.
+
+Select the checkbox that appears next to each clause you want to group together. Then, select **Group**.
+
+The following example shows two conditions grouped together. In this example, the operation executes if a blob has the `.log` extension and either a tag named `Archive-Status` is set to the value of `Ready` or the file has not been accessed in 120 days.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of clauses grouped together.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-grouped-clauses.png)
+
+To ungroup clauses, select the ungroup icon (:::image type="icon" source="../media/storage-tasks/storage-task-conditions-operations-edit/ungroup-icon.png":::) or select each clause in the group, and select **Ungroup**.
+
+## Preview the effect of conditions
+
+You can view a list of blobs that would be impacted by the conditions that you've defined. In the conditions editor, select **Preview conditions**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Preview Conditions button.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-preview-conditions-button.png)
+
+In the **Preview conditions** pane, you can specify a target subscription, storage account, and container. Because a preview can show only up to 5,000 blobs, you can also specify a prefix to narrow the list.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Preview Conditions page.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-preview-conditions.png)
+
+## Define operations
+
+An operation is an action taken on each object that meets the conditions defined in the task.
+
+### Add and remove operations
+
+To add an operation, select **Add new operation**. To remove an operation, select the delete icon (:::image type="icon" source="../media/storage-tasks/storage-task-conditions-operations-edit/conditions-delete-icon.png":::) that appears next to it.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Add new operation button and three operations that were added to the list of operations.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-add-operations.png)
+
+#### Choose an operation
+
+In the **Operation** drop-down list, choose an operation. See [Supported operations](storage-task-properties-operators-operations.md#supported-operations).
+
+The following example selects the **Set blob tags** operation.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the Operation drop-down list of an operation.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-condition-choose-operation.png)
+
+#### Choose a parameter
+
+Enter or select the parameters that are appropriate for the operation.
+
+The following example sets the `Archive-Status` tag to the value `Archived`.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of an example condition.](../media/storage-tasks/storage-task-conditions-operations-edit/storage-task-blob-set-tag-operation.png)
+
+### Change the order of operations
+
+You can arrange operations in any order.
+
+In this example, the existing order makes sense: blobs are first archived, and then the tags are set. It wouldn't make sense to set the tag before changing the tier, in case the attempt to change the tier of a blob didn't succeed. If the set blob tags operation appeared first in the list, you might consider moving that operation beneath the set blob tier operation.
+
+To move an operation, select the checkbox that appears beside it. Then, select **Move operation up** or **Move operation down** to change its position in the list.
+
+## See also
+
+- [Azure Storage Actions Overview](../overview.md)
storage-actions Storage Task Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-create.md
+
+ Title: Create a storage task
+
+description: Learn how to create a storage task. A storage task can perform operations on blobs in one or more Azure Storage accounts.
+++++ Last updated : 01/22/2024++
+# Create a storage task
+
+A storage task can perform operations on blobs in an Azure Storage account. As you create a task, you can define the conditions that must be met by each object (container or blob), and the operations to perform on the object. You can also identify one or more Azure Storage account targets. See [What are Azure Storage Actions?](../overview.md).
+
+In this how-to article, you'll learn how to create a storage task.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Create a task
+
+In the Azure portal, search for _Storage Tasks_. Then, under **Services**, select **Storage tasks - Azure Storage Actions**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the search result of the string Storage Tasks.](../media/storage-tasks/storage-task-create/storage-task-search.png)
+
+On the **Azure Storage Actions | Storage Tasks** page, select **Create**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the storage task create button.](../media/storage-tasks/storage-task-create/storage-task-create-button.png)
+
+## Basics tab
+
+On the **Basics** tab, provide the essential information for your storage task. The following table describes the fields on the **Basics** tab.
+
+| Section | Field | Required or optional | Description |
+|--|--|--|--|
+| Project details | Subscription | Required | Select the subscription for the new storage task. |
+| Project details | Resource group | Required | Create a new resource group for this storage task, or select an existing one. For more information, see [Resource groups](../../azure-resource-manager/management/overview.md#resource-groups). |
+| Instance details | Storage task name | Required | Choose a unique name for your storage task. Storage task names must be between 3 and 18 characters in length and can contain only lowercase letters and numbers. |
+| Instance details | Region | Required | Select the appropriate region for your storage task. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md). |
+
+The following image shows an example of the **Basics** tab.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of basics tab of the storage task create experience.](../media/storage-tasks/storage-task-create/storage-task-basics-page.png)
+
+## Conditions tab
+
+On the **Conditions** tab, define the conditions that must be met by each object (container or blob), and the operations to perform on the object.
+
+You must define at least one condition and one operation. To add a clause to a condition, select **Add new clause**. To add operations, select **Add new operation**.
+
+The following table describes the fields on the **Conditions** tab.
+
+| Section | Field | Required or optional | Description |
+|--|--|--|--|
+| If | And/Or | Required | An operator that combines two or more predicates to form a logical-AND or logical-OR expression. |
+| If | Blob property | Required | The blob or container property that you'd like to use in the clause. See [Supported blob properties](storage-task-properties-operators-operations.md#supported-properties-in-a-clause). |
+| If | Operator | Required | The operator that defines how each property in the clause must relate to the corresponding value. See [Supported operators](storage-task-properties-operators-operations.md#supported-operators-in-a-clause). |
+| If | Property value | Required | The value that relates to the corresponding property. |
+| Then | Operations | Required | The action to perform when objects meet the conditions defined in this task. See [Supported operations](storage-task-properties-operators-operations.md#supported-operations). |
+| Then | Parameter | Required | A value used by the operation. |
+
+The following image shows an example of the **Conditions** tab.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of conditions tab of the storage task create experience.](../media/storage-tasks/storage-task-create/storage-task-conditions-tab.png)
+
+To learn more about supported properties and operators in conditions, see [Storage task conditions and operations](storage-task-properties-operators-operations.md).
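+
+If you prefer to define the same kind of condition programmatically, the *storage-actions* Azure CLI extension exposes a task create command. The sketch below is illustrative only: the extension name, and the `--action` shorthand and escaping shown here, are assumptions to verify against `az storage-actions task create --help` before use.
+
+```azurecli
+# Install the CLI extension (extension name assumed).
+az extension add --name storage-actions
+
+# Create a task whose condition matches cool-tier blobs and whose operation
+# moves them to the hot tier. The shorthand syntax is an assumption.
+az storage-actions task create \
+    --resource-group myresourcegroup \
+    --name mystoragetask \
+    --identity "{type:SystemAssigned}" \
+    --enabled true \
+    --description "Move cool blobs to hot" \
+    --action "{if:{condition:'[[equals(AccessTier,'/Cool'/)]]',operations:[{name:'SetBlobTier',parameters:{tier:'Hot'},onSuccess:'continue',onFailure:'break'}]}}"
+```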
+
+## Assignments tab
+
+An _assignment_ identifies a storage account and a subset of objects in that account that the task will target. An assignment also defines when the task runs and where execution reports are stored.
+
+To add an assignment, select **Add assignment**. This step is optional. You don't have to add an assignment to create the task.
+
+The following table describes the fields that appear in the **Add assignment** pane.
+
+| Section | Field | Required or optional | Description |
+|--|--|--|--|
+| Select scope | Subscription | Required | The subscription of the storage account that you want to add to this assignment. |
+| Select scope | Select a storage account | Required | The storage account that you want to add to this assignment. |
+| Select scope | Assignment name | Required | The name of the assignment. Assignment names must be between 2 and 62 characters in length and may contain only letters and numbers. |
+| Filter objects | Filter by | Required | Option to either filter objects by using a prefix or to run the task against the entire storage account. |
+| Filter objects | Blob prefixes | Optional | The string prefix that is used to narrow the scope of blobs that are evaluated by the task. This field is required only if you choose to filter by using a blob prefix. |
+| Trigger details | Run frequency | Required | Option to either run the task one time or multiple times. |
+| Trigger details | Start from | Required | The date and time to begin running the task. |
+| Trigger details | End by | Required | The date and time to stop running the task. |
+| Trigger details | Repeat every (in days) | Required | The interval in days between each run. |
+| Trigger details | Report export container | Required | The container where task execution reports are stored. |
+
+The following image shows an example of the **Add assignment** pane.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of assignment tab of the storage task create experience.](../media/storage-tasks/storage-task-create/storage-task-assignment-tab.png)
+
+## Tags tab
+
+On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../../azure-resource-manager/management/tag-resources.md).
+
+The following image shows the **Tags** tab of the storage task create experience.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of tags tab of the storage task create experience.](../media/storage-tasks/storage-task-create/storage-task-tags-page.png)
+
+## Review + create tab
+
+When you navigate to the **Review + create** tab, Azure runs validation on the storage task settings that you have chosen. If validation passes, you can proceed to create the storage task.
+
+If validation fails, then the portal indicates which settings need to be modified.
+
+The following image shows the **Review** tab data prior to the creation of a new storage task.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of review and create tab of the storage task create experience.](../media/storage-tasks/storage-task-create/storage-task-validation-tab.png)
+
+## See also
+
+- [Azure Storage Actions overview](../overview.md)
+- [Create, assign, and run a storage task](storage-task-quickstart-portal.md)
+- [Define conditions and operations](storage-task-conditions-operations-edit.md)
storage-actions Storage Task Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-known-issues.md
+
+ Title: Known issues and limitations with storage tasks
+
+description: Learn about limitations and known issues of storage tasks.
+ Last updated: 01/17/2024
+# Known issues and limitations with storage tasks
+
+This article describes limitations and known issues of storage tasks. The issues that appear in this article reflect the current state of the service. This list will change over time as support continues to expand.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Storage account regions
+
+During the public preview, you can target only storage accounts that are in the same region as the storage tasks.
+
+## Scale limits
+
+| Scale factor | Supported limit |
+|--|--|
+| Storage tasks per subscription | 100 |
+| Storage task assignments per storage task | 50 |
+| Storage task assignments per storage account | 50 |
+| Storage task definition versions | 50 |
+
+Azure Storage Actions autoscales its processing tasks based on the volume of data in a storage account, subject to internal limits. The duration of execution depends on the number of blobs in the storage account, as well as their hierarchy in Azure Data Lake Storage Gen2. The first execution of a task over a path prefix might take longer than subsequent executions. Azure Storage Actions is also designed to be self-regulating and to allow application workloads on the storage account to take precedence. As a result, the scale and duration of execution also depend on the transaction capacity that's available under the storage account's maximum request limit. The following are typical processing scales; they might be higher if more transaction capacity is available, or lower if there's less spare transaction capacity on the storage account.
+
+During the preview, Azure Storage Actions can invoke up to 200 million operations per day for a maximum of seven days on a flat-namespace storage account. Depending on the proportion of blobs targeted that meet the condition for operations, a task assignment might process between 200 million and four billion blobs in a day.
+
+For storage accounts with a hierarchical namespace, Azure Storage Actions can invoke up to 35 million operations per day for a maximum of seven days during the preview. Depending on the proportion of targeted blobs that meet the condition for operations, a task assignment might process between 35 million and 400 million blobs in a day. The fewer blobs that meet the condition, the more blobs the task can evaluate within the same daily operation budget.
+
+## Overlapping prefix for assignments
+
+Storage tasks don't prevent execution of task assignments on overlapping prefixes. If there are multiple task assignments with overlapping prefixes, blobs might be processed by these task assignments in any order. If the execution order of operations across these task assignments is important, then as you configure the task, ensure that the prefix for assignments doesn't overlap.
+
+## Concurrency limit for execution
+
+Storage tasks have a limit on the number of task assignments that can run concurrently on each storage account. For optimal performance, schedule task assignments on a single storage account with a reasonable time interval between them, based on the objects targeted, so that task runs complete in time. Task assignment executions that exceed the concurrency limit for a storage account are paused until other assignments complete.
+
+## Scale dependence on transaction capacity available for the storage account
+
+Storage task assignment execution is autoscaled depending on the transaction request capacity available on the storage account. Scale is higher when more transaction capacity is available and lower when less transaction capacity is available.
+
+When the targeted storage account has lower available transaction capacity, storage task execution might be throttled, resulting in a longer-than-expected duration for completing the task assignment execution.
+
+For more information about scale limits, see [Scalability and performance targets for Blob storage](../../storage/blobs/scalability-targets.md).
+
+> [!NOTE]
+> You can request higher capacity and ingress limits. To request an increase, contact [Azure Support](https://azure.microsoft.com/support/faq/).
+
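+One way to gauge whether a task run is competing with application traffic is to check the account's throttled transactions. The following is a minimal Azure CLI sketch; it assumes `$storageAccountId` holds the storage account's resource ID, and uses the documented `Transactions` metric with its `ResponseType` dimension:
+
+```azurecli
+# Count throttled requests (ServerBusyError) on the account over recent hours.
+az monitor metrics list \
+    --resource $storageAccountId \
+    --metric "Transactions" \
+    --filter "ResponseType eq 'ServerBusyError'" \
+    --interval PT1H \
+    --aggregation Total
+```
+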
+## Storage task runs can write to the report export container without permission to the container
+
+As you create a task assignment, you assign a role to the system-assigned managed identity of the storage task. When the storage task runs, it can operate only on containers where its managed identity is assigned the required role. The report export container that you choose during task assignment is an exception. While a storage task can't operate on existing blobs in that container, the task doesn't require that role to write reports to the container.
+
+## String operators on container metadata, blob metadata, and blob index tags don't work if the values are numbers
+
+You can't use string operators on container metadata, blob metadata, and blob index tags when the values are numbers. For example, `equals(Tags.Value[Year], '2022')`, where the value `2022` is a number, doesn't evaluate correctly with the string operator `equals`.
+
+## Assignments fail when they reference a storage account name that starts with a digit
+
+If you assign a storage task to a storage account whose name starts with a digit, the storage task assignment fails.
+
+## Storage task assignments fail on some storage accounts in supported regions
+
+Storage tasks are supported on new storage accounts created after the subscription is allow-listed. They might not work on some existing storage accounts even though those accounts are located in supported regions.
+
+## Monitoring data doesn't appear unless the storage task and the storage account are in the same resource group
+
+If the storage task and the storage account specified in the task assignment are in different resource groups, the aggregated monitoring data for the storage account doesn't show up correctly in the monitoring tab of the storage task pane.
+
+## Storage task assignment execution gets stuck in an in-progress state with no blobs processed
+
+In most cases, task execution progresses after 20 minutes. However, if the task gets stuck, make sure that the target storage account has the necessary compatible configuration. For example, if the storage task sets an immutability policy but the target account isn't configured with versioning support, the storage task won't progress and will eventually fail. Test each operation on the target storage account by using a mechanism other than a storage task to ensure that the operation succeeds. Then, add the operation to the storage task.
+
+## Storage task fails with an internal error
+
+If incompatible storage task operations are attempted on storage accounts, the task execution can fail with an error, or it can get stuck in an in-progress state. For example, an operation that attempts to set a blob index tag on an account that has a hierarchical namespace won't succeed. Make sure that the storage account configuration and the storage task operation are compatible.
+
+## Whitespace characters in blob index tags and metadata aren't yet supported
+
+Whitespace characters in the key and value of blob tags are acceptable inputs. However, storage task conditions are unable to process the whitespace characters. If a key or value contains a whitespace character, an error appears when the task runs.
+
+## Blob name property values that contain or match "." are unsupported
+
+The string input field of a blob name clause accepts values such as `.doc` or `.pdf`, but deployment of the task resource then fails. Resource provider validation catches the value and throws an error similar to: `The value of the property 'Name' is '.doc' and it doesn't follow the pattern '^[a-zA-Z0-9]+$'`.
+
+## Storage task assignments operate on an incomplete list of blobs when used with multiple directory filters in accounts that have a hierarchical namespace
+
+If multiple filters are used in storage task assignments, not all directory prefixes are scanned for blobs to be operated on.
+
+## Using whitespace characters in the path prefix during task assignment isn't supported
+
+Storage accounts that have a hierarchical namespace display location information as `container1 / subcontainer1` with a whitespace character between the string and the `/` character. An error appears if you copy and paste this information into the path prefix field during assignment.
+
+## See also
+
+- [Azure Storage Actions overview](../overview.md)
storage-actions Storage Task Properties Operators Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-properties-operators-operations.md
+
+ Title: Properties, operators, and operations in storage task conditions
+
+description: Learn about the elements of conditions and operations in a storage task.
+ Last updated: 01/17/2024
+# Properties, operators, and operations in storage task conditions
+
+This article describes the properties, operators, and operations that you can use to compose each storage task condition. To learn how to define conditions and operations, see [Define storage task conditions and operations](storage-task-conditions-operations-edit.md).
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Supported properties in a clause
+
+The following table shows the properties that you can use to compose each clause of a condition. A clause can contain string, Boolean, numeric, and date and time properties.
+
+| String | Date and time<sup>3</sup> | Numeric | Boolean |
+|--|--|--|--|
+| AccessTier<sup>1</sup> | AccessTierChangeTime | Content-Length | Deleted |
+| Metadata.Value | Creation-Time | TagCount | IsCurrentVersion |
+| Name | DeletedTime | | |
+| BlobType<sup>2</sup> | LastAccessTime | | |
+| Container.Metadata.Value[Name] | Last-Modified | | |
+| Container.Name | | | |
+| Tags.Value[Name] | | | |
+| VersionId | | | |
+
+<sup>1</sup> Allowed values are `Hot`, `Cool`, or `Archive`.
+
+<sup>2</sup> Allowed values are `BlockBlob`, `PageBlob`, or `AppendBlob`.
+
+<sup>3</sup> Can be set to a specific time or to a metadata value dynamically obtained from objects. See [Reference a value from object metadata](storage-task-conditions-operations-edit.md#reference-a-value-from-object-metadata).
+
+## Supported operators in a clause
+
+The following table shows the operators that you can use in a clause to evaluate the value of each type of property. For example, a clause such as `equals(Tags.Value[Project], 'Contoso')` pairs a string property with the `equals` operator.
+
+| String | Date and time | Numeric | Boolean |
+|--|--|--|--|
+| contains | equals | equals | equals |
+| empty | greater | greater | not |
+| equals | greaterOrEquals | greaterOrEquals | |
+| endsWith | less | less | |
+| length | lessOrEquals | lessOrEquals | |
+| startsWith | addToTime | | |
+| matches | | | |
+
+## Supported operations
+
+The following table shows the supported operations, parameters, and parameter values:
+
+| Operation | Parameters | Values |
+|--|--|--|
+| Set blob tier | Tier | Hot \| Cold \| Archive |
+| Set blob expiry | None | Absolute \| Never expire \| Relative to creation time \| Relative to current time |
+| Delete blob | None | None |
+| Undelete blob | None | None |
+| Set blob tags | TagSet | A fixed collection of up to 10 key-value pairs |
+| Set blob immutability policy | DateTime, string | DateTime of when policy ends, Locked \| Unlocked |
+| Set blob legal hold | Bool | True \| False |
+
+## See also
+
+- [Define conditions and operations](storage-task-conditions-operations-edit.md)
storage-actions Storage Task Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-quickstart-portal.md
+
+ Title: 'Quickstart: Create, assign, and run a storage task'
+
+description: Learn how to create your first storage task. You'll also assign that task to a storage account, queue the task to run, and then view the results of the run.
+ Last updated: 01/17/2024
+# Quickstart: Create, assign, and run a storage task
+
+In this quickstart, you learn how to use the [Azure portal](https://portal.azure.com/) to create a storage task and assign it to an Azure Storage account. Then, you'll review the results of the run. The storage task applies a time-based immutability policy to any Microsoft Word documents that exist in the storage account.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure subscription. See [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An Azure storage account. See [create a storage account](../../storage/common/storage-account-create.md). As you create the account, make sure to enable version-level immutability support and that you don't enable the hierarchical namespace feature.
+
+- The [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role is assigned to your user identity in the context of the storage account or resource group.
+
+- A custom role, assigned to your user identity in the context of the resource group, that contains the RBAC actions necessary to assign a task to a storage account. See [Permissions required to assign a task](storage-task-authorization-roles.md#permission-for-a-task-to-perform-operations).
+
+- A blob container with one or more Microsoft Word documents stored in that container (a staging sketch follows this list).
+
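+To stage that last prerequisite from the command line, a minimal Azure CLI sketch follows; the account, container, and file names are placeholders:
+
+```azurecli
+# Create a container and upload a sample Word document to it.
+az storage container create \
+    --account-name mystorageacct \
+    --name mycontainer \
+    --auth-mode login
+
+az storage blob upload \
+    --account-name mystorageacct \
+    --container-name mycontainer \
+    --name sample.docx \
+    --file ./sample.docx \
+    --auth-mode login
+```
+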
+## Create a task
+
+1. In the Azure portal, search for _Storage tasks_.
+
+2. Under **Services**, select **Storage tasks - Azure Storage Actions**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the search result of the string storage tasks.](../media/storage-tasks/storage-task-create/storage-task-search.png)
+
+3. On the **Azure Storage Actions | Storage Tasks** page, select **Create**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the storage task create button.](../media/storage-tasks/storage-task-create/storage-task-create-button.png)
+
+4. In the **Basics** page, under **Project details**, make sure that the correct subscription is selected. Then, select the same resource group that contains your new storage account.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Project details section of the Basics tab.](../media/storage-tasks/storage-task-quickstart-portal/project-details-section.png)
+
+5. Under **Instance details**, enter *mystoragetask* for the **Storage task name**, and select any region that is supported by the preview of this service.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Instance details section of the Basics tab.](../media/storage-tasks/storage-task-quickstart-portal/instance-details-section.png)
+
+6. Select **Next** to open the **Conditions** page.
+
+## Add clauses to a condition
+
+You can specify the conditions of a storage task by making selections in the **If** section of the **Visual Builder** tab. Every storage task has at least one condition with one clause in that condition.
+
+1. In the **Select a property** drop-down list of the **If** section, select **Blob name**.
+
+2. For the **Operator** of that condition, select **Ends with**, and in the **Enter a string** box, enter _.docx_.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the clause that filters for blob name.](../media/storage-tasks/storage-task-quickstart-portal/if-condition-blob-name.png)
+
+ This condition allows operations only on Word documents.
+
+## Add operations
+
+You can specify the operations that a storage task performs by making selections in the **Then** section of the **Visual Builder** tab. Every storage task has at least one operation to perform when a blob or container meets the specified condition.
+
+1. In the **Select an operation** drop-down list of the **Then** section, select **Set blob immutability policy**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Then operation which sets the immutability policy.](../media/storage-tasks/storage-task-quickstart-portal/then-operation-immutability-policy.png)
+
+ This operation applies a time-based immutability policy to Microsoft Word documents.
+
+2. Select **Add new operation**, and then in the **Select an operation** drop-down list, select **Set blob tags**.
+
+3. In the **Enter a tag name** box, enter _ImmutabilityUpdatedBy_, and in the **Enter a tag value** box, enter _StorageTaskQuickstart_.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Then operation which sets a blob index tag.](../media/storage-tasks/storage-task-quickstart-portal/then-operation-set-blob-tags.png)
+
+ This operation adds a blob index tag to each Word document in that container.
+
+4. Select **Next** to open the **Assignments** page.
+
+## Add an assignment
+
+A storage task _assignment_ specifies a storage account. After you enable the storage task, the conditions and operations of your task will be applied to that storage account. The assignment also contains configuration properties that help you target specific blobs, or specify when and how often the task runs. You can add an assignment for each account that you want to target.
+
+1. Select **Add assignment**.
+
+ The **Add assignment** pane appears.
+
+2. In the **Select scope** section, select your subscription and storage account and name the assignment _mystoragetaskassignment_.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Select scope section of the assignment pane.](../media/storage-tasks/storage-task-quickstart-portal/assignment-pane-select-scope.png)
+
+3. In the **Role assignment** section, in the **Role** drop-down list, select the **Storage Blob Data Owner** to assign that role to the system-assigned managed identity of the storage task.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Role assignment section of the assignment pane.](../media/storage-tasks/storage-task-assignment-create/assignment-role.png)
+
+4. In the **Filter objects** section, make sure that the **Blob prefix** option is selected. Then, in the **Blob prefixes** box, enter the prefix of the container that you are using to complete this quickstart followed by the `/` character. For example, if your test container is named `mycontainer`, then enter `mycontainer/`.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Filter objects section of the Add assignment pane.](../media/storage-tasks/storage-task-quickstart-portal/assignment-pane-filter-prefix.png)
+
+ Filters help you narrow the scope of execution. If you want the task to evaluate all of the containers and blobs in an account, then you can select the **Do not filter** option instead.
+
+5. In the **Trigger details** section, select **Single run (only once)** and then select the container where you'd like to store the execution reports.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Trigger details section of the Add assignment pane.](../media/storage-tasks/storage-task-quickstart-portal/assignment-pane-trigger-details.png)
+
+6. Select **Add**.
+
+7. In the **Tags** tab, select **Next**.
+
+8. In the **Review + Create** tab, select **Review + create**.
+
+ When the task is deployed, the **Your deployment is complete** page appears.
+
+9. Select **Go to resource** to open the **Overview** page of the storage task.
+
+## Enable the task assignment
+
+Storage task assignments are disabled by default. Enable assignments from the **Assignments** page.
+
+1. Select **Assignments**, select the **mystoragetaskassignment** assignment, and then select **Enable**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Assignments option and the storage task assignment link.](../media/storage-tasks/storage-task-quickstart-portal/open-task-assignment.png)
+
+ The task assignment is queued to run.
+
+2. Periodically select **Refresh** to view an updated status.
+
+ Until the task runs and then completes, the string **In progress** appears beneath the **Last run status** column. When the task completes, the string **Completed** appears in that column.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the completed status appearing next to the task assignment.](../media/storage-tasks/storage-task-quickstart-portal/completed-status.png)
+
+## View results of the task run
+
+After the task completes running, you can view the results of the run.
+
+1. With the **Assignments** page still open, select **View task runs**.
+
+ The **Execution tasks** pane appears, and in that pane, a line item which describes the report appears.
+
+2. Select the **View report** link to download a report.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the Execution tasks pane.](../media/storage-tasks/storage-task-quickstart-portal/run-report.png)
+
+ The report appears as a comma-separated list of the container, the blob, and the operation performed along with a status. You can also view these comma-separated reports in the container that you specified when you configured the assignment.
+
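+If you'd rather retrieve a report with the Azure CLI than through the portal, you can list and download it from the export container. A minimal sketch; the account and container names are placeholders:
+
+```azurecli
+# Find the report blobs in the export container.
+az storage blob list \
+    --account-name mystorageacct \
+    --container-name reports \
+    --query "[].name" --output tsv \
+    --auth-mode login
+
+# Download one report for inspection.
+az storage blob download \
+    --account-name mystorageacct \
+    --container-name reports \
+    --name <report-blob-name> \
+    --file ./report.csv \
+    --auth-mode login
+```
+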
+## Next steps
+
+[Create a storage task](storage-task-create.md)
storage-actions Storage Task Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-task-runs.md
+
+ Title: Analyze storage task runs
+
+description: Learn how to view lists of runs and review run reports.
+ Last updated: 01/17/2024
+# Analyze storage task runs
+
+You can view a list of task runs. For each completed run, you can open a report that lists each object that met the conditions of the task and the outcome of the operation against that object.
+
+> [!IMPORTANT]
+> Azure Storage Actions is currently in PREVIEW and is available in these [regions](../overview.md#supported-regions).
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## View a list of task runs
+
+After a task run completes, it appears in a list that you can open and review.
+
+### Storage task view
+
+You can see all of the runs attempted by a storage task by opening a list of runs from the storage task menu. You'll see only those runs against accounts to which you have read permission.
+
+Navigate to the storage task in the Azure portal and then under **Storage task management**, select **Task runs**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the storage task list when opened from a storage task.](../media/storage-tasks/storage-task-runs/storage-task-runs-storage-task-view.png)
+
+Metrics appear as tiles that you can select to view a list of the task runs that comprise the metric. For example, to view a list of task runs where at least one operation failed, select the **Objects on which the operation failed** tile. A filtered list of task runs then appears. You can filter this list by time frame. The following table describes each field that appears in this list.
+
+| Column | Description |
+|--|--|
+| Execution start time | The date and time when the task run started. |
+| Storage account name | The name of the storage account that was targeted by the task assignment. |
+| Assignment name | The name of the assignment. This name is specified by the creator of the assignment at configuration time. |
+| Status | Specifies whether the task completed or is pending execution. |
+| Completed / attempted count | The number of objects which met the conditions of the storage task and the total number of objects targeted by the storage task. |
+| Report download | A link to an execution report. This report appears only when the status is `Completed`.|
+
+### Storage account view
+
+You can see all runs against a specific storage account by opening a list of runs from the storage account menu.
+
+Navigate to the storage account in the Azure portal and then under **Data management**, select **Storage tasks**.
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of the storage task list when opened from a storage account.](../media/storage-tasks/storage-task-runs/storage-task-runs-storage-account-view.png)
+
+A list of task runs appears. You can filter this list by time frame. The following table describes each field that appears in this list.
+
+| Column | Description |
+|--|--|
+| Execution start time | The date and time when the task run started. |
+| Assignment name | The name of the assignment. This name is specified by the creator of the assignment at configuration time. |
+| Task name | The name of the storage task. This name is specified by the creator of the assignment at configuration time. |
+| Status | Specifies whether the task completed or is pending execution. When the status is `Completed`, a link to an execution report appears. |
+| Completed / attempted count | The number of objects which met the conditions of the storage task and the total number of objects targeted by the storage task. |
+
+## View execution reports
+
+Each task run generates an execution report. That report is stored in a container that is located in the targeted storage account. The name of that container is specified when the assignment is created. A link to the report appears next to each run in the task run list. Use that link to open a report which contains status information about each object that was targeted by the run.
+
+To open the report, select the **View report** link for any listed task run. Then, the report will begin downloading. The report is packaged as a compressed file so you'll have to extract the contents of that file to a directory on your client.
+
+The report is formatted as a CSV file. Each row of the report contains the details about the execution of the operation on each object that is targeted by the task. The following table describes the columns of information that appear in the execution report:
+
+| Name | Description |
+|--|--|
+| Container | The name of the container that contains the blob. |
+| Blob | The fully qualified name of the blob. |
+| Snapshot | The snapshot ID of the blob. This field is empty if the object is a base blob. |
+| VersionId | The version ID of the blob. This field is empty if the object is a base blob. |
+| Operation | The operation attempted. |
+| Result | This field contains a value of `Success` or `Failed`. This field is set to `N/A` if the application of the defined conditions on the object evaluates to false. |
+| Error description | This field is empty if the operation succeeds or the application of the defined conditions on the object evaluates to false. |
+| Error code | This field is empty if the operation succeeds or the application of the defined conditions on the object evaluates to false. |
+| Matched condition block | The condition block associated with the row. |
+
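+For quick triage of a downloaded report, standard text tools work on the CSV. A minimal sketch, assuming the column order described in the preceding table and no embedded commas in earlier fields:
+
+```bash
+# Print only rows whose sixth column (Result) is 'Failed'.
+awk -F',' '$6 == "Failed"' report.csv
+```
+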
+The following example shows an execution report:
+
+> [!div class="mx-imgBorder"]
+> ![Screenshot of a sample execution report.](../media/storage-tasks/storage-task-runs/execution-report-example.png)
+
+## See also
+
+- [Monitor Azure Storage Actions](monitor-storage-tasks.md)
+- [Storage Tasks Overview](../overview.md)
storage-actions Storage Tasks Monitor Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-actions/storage-tasks/storage-tasks-monitor-data-reference.md
+
+ Title: Storage Actions monitoring data reference
+
+description: Important reference material needed when you monitor Azure Storage Actions.
+ Last updated: 01/17/2024
+# Azure Storage Actions monitoring data reference
+
+See [Monitoring Azure Storage Actions](monitor-storage-tasks.md) for details on collecting and analyzing monitoring data for Azure Storage Actions.
+
+## Metrics
+
+This section lists all the automatically collected platform metrics for Azure Storage Actions.
+
+|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
+|-|--|
+| Storage tasks | [Microsoft.StorageActions/storageTasks](/azure/azure-monitor/reference/supported-metrics/microsoft-storage-storagetasks-metrics) |
+| Storage tasks | [Microsoft.StorageActions/storageAccounts/storageTasks](/azure/azure-monitor/reference/supported-metrics/microsoft-storage-storageaccounts-storagetasks-metrics) |
+
+## Metric dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+
+Azure Storage Actions supports the following dimensions for metrics in Azure Monitor.
+
+| Dimension Name | Description |
+| - | -- |
+| **AccountName** | The name of a storage account |
+| **TaskAssignmentId** | The ID of the task assignment |
+
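+You can also query these metrics outside the portal. A minimal Azure CLI sketch follows; it assumes `$taskId` holds a storage task's resource ID, and discovers metric names rather than assuming them:
+
+```azurecli
+# List the metric definitions available on the storage task resource.
+az monitor metrics list-definitions --resource $taskId
+
+# Query a chosen metric, split by task assignment using the dimension above.
+az monitor metrics list \
+    --resource $taskId \
+    --metric "<metric-name>" \
+    --filter "TaskAssignmentId eq '*'"
+```
+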
+## Activity log
+
+The following table lists the operations that Azure Storage Actions might record in the activity log. This list of entries is a subset of the possible entries that you might find in the activity log.
+
+| Namespace | Description |
+|:|:|
+| Microsoft.StorageActions/storageTasks/read | Reads an existing storage task. |
+| Microsoft.StorageActions/storageTasks/delete | Deletes a storage task. |
+| Microsoft.StorageActions/storageTasks/promote/action | Promotes a specific version of a storage task to the current version. |
+| Microsoft.StorageActions/storageTasks/write | Edits a storage task. |
+| Microsoft.StorageActions/storageAccounts/storageTasks/delete | Deletes a storage task. |
+| Microsoft.StorageActions/storageAccounts/storageTasks/read | Reads an existing storage task. |
+| Microsoft.StorageActions/storageAccounts/storageTasks/executionsummary/action | Opens task runs. |
+| Microsoft.StorageActions/storageAccounts/storageTasks/assignmentexecutionsummary/action | Opens task runs from the Assignments pane. |
+| Microsoft.StorageActions/storageAccounts/storageTasks/write | Edits a storage task. |
+
+See [all the possible resource provider operations in the activity log](/azure/role-based-access-control/resource-provider-operations).
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](/azure/azure-monitor/essentials/activity-log-schema).
+
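+To surface these entries from the command line, you can filter the activity log with a JMESPath query. A minimal Azure CLI sketch; the resource group name is a placeholder:
+
+```azurecli
+# Show storage task operations recorded during the last seven days.
+az monitor activity-log list \
+    --resource-group myresourcegroup \
+    --offset 7d \
+    --query "[?contains(operationName.value, 'storageTasks')].{operation:operationName.value, status:status.value}"
+```
+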
+## See also
+
+- See [Monitoring Azure Storage Actions](monitor-storage-tasks.md) for a description of monitoring Azure Storage Actions.
+- See [Monitoring Azure resources with Azure Monitor](../../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
storage Storage Blob Tags Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md
Previously updated : 08/02/2023 Last updated : 02/02/2024 ms.devlang: java
This article shows how to use blob index tags to manage and find data using the
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization)
-## About blob index tags
-
-Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. This article shows you how to set, get, and find data using blob index tags.
-
-To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
## Set tags
storage Storage Blob Tags Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md
description: Learn how to categorize, manage, and query for blob objects by usin
Previously updated : 11/30/2022 Last updated : 02/02/2024 ms.devlang: javascript
This article shows how to use blob index tags to manage and find data using the
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization)
-## About blob index tags
-
-Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. This article shows you how to set, get, and find data using blob index tags.
-
-To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
## Set tags
storage Storage Blob Tags Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md
Previously updated : 11/29/2023 Last updated : 02/02/2024 ms.devlang: python
To learn about setting blob index tags using asynchronous APIs, see [Set blob in
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization)
-## About blob index tags
-
-Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. This article shows you how to set, get, and find data using blob index tags.
-
-To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
## Set tags
storage Storage Blob Tags Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md
description: Learn how to categorize, manage, and query for blob objects with Ty
Previously updated : 03/21/2023 Last updated : 02/02/2024 ms.devlang: typescript
This article shows how to use blob index tags to manage and find data using the
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization)
-## About blob index tags
-
-Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. This article shows you how to set, get, and find data using blob index tags.
-
-To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
## Set tags
storage Storage Blob Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md
This article shows how to use blob index tags to manage and find data using the
- [Set Blob Tags](/rest/api/storageservices/set-blob-tags#authorization) - [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags#authorization)
-## About blob index tags
-
-Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. This article shows you how to set, get, and find data using blob index tags.
-
-To learn more about this feature along with known issues and limitations, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
## Set tags
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
The appliance scale targets vary depending on region and redundancy of the SAN i
|Resource |France Central |Southeast Asia |Australia East |North Europe | West Europe | UK South | East US | East US 2 | South Central US| West US 2 | West US 3 | Sweden Central | ||||| |Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 |5 |5 |5 |5 |5 |5 |5 | 5 | 5|5|
-|Maximum total capacity units (TiB) |100 |100 |600 |600|600|600| |600 |600 |600 | 100 | 100 |
+|Maximum capacity units (TiB) |100 |100 |600 |600|600|600| 600 |600 |600 |600 | 100 | 100 |
|Maximum base capacity units (TiB) |100 |100 |400 |400 | 400|400 |400 |400 |400 |400 | 100 |100 | |Minimum total SAN capacity (TiB) |1 |1 |1 |1 |1 |1 |1 |1 | 1 | 1 | 1 |1| |Maximum total IOPS |500,000 |500,000 |2,000,000 |2,000,000|2,000,000 |2,000,000 |2,000,000 |2,000,000 |2,000,000 |2,000,000 | 500,000 |500,000 |
ZRS is only available in France Central, North Europe, West Europe and West US 2
|Resource |France Central |North Europe | West Europe |West US 2 | ||||| |Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 |5 |5 |
-|Maximum total capacity units (TiB) |200 |200 |200 |200 |
+|Maximum capacity units (TiB) |200 |200 |200 |200 |
|Maximum base capacity units (TiB) |100 |100 |100 |100 | |Minimum total SAN capacity (TiB) |1 |1 |1 |1 | |Maximum total IOPS |500,000 |500,000 |500,000 |500,000 |
synapse-analytics Optimize Write For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/optimize-write-for-apache-spark.md
CREATE TABLE <table_name> TBLPROPERTIES (delta.autoOptimize.optimizeWrite = true
2. Scala
-Using the [DeltaTableBuilder API](https://docs.delta.io/latest/api/scala/io/delta/tables/DeltaTableBuilder.html):
+Using the [DeltaTableBuilder API](https://docs.delta.io/latest/delta-apidoc.html):
```scala val table = DeltaTable.create()
ALTER TABLE <table_name> SET TBLPROPERTIES (delta.autoOptimize.optimizeWrite = t
2. Scala
-Using the [DeltaTableBuilder API](https://docs.delta.io/latest/api/scala/io/delta/tables/DeltaTableBuilder.html)
+Using the [DeltaTableBuilder API](https://docs.delta.io/latest/delta-apidoc.html)
```scala val table = DeltaTable.replace()
virtual-machines Disks Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-metrics.md
The following metrics help diagnose bottleneck in your Virtual Machine and Disk
Metrics that help diagnose disk IO capping: -- **Data Disk IOPS Consumed Percentage**: The percentage calculated by the data disk IOPS completed over the provisioned data disk IOPS. If this amount is at 100%, your application running is IO capped from your data disk's IOPS limit.-- **Data Disk Bandwidth Consumed Percentage**: The percentage calculated by the data disk throughput completed over the provisioned data disk throughput. If this amount is at 100%, your application running is IO capped from your data disk's bandwidth limit.-- **OS Disk IOPS Consumed Percentage**: The percentage calculated by the OS disk IOPS completed over the provisioned OS disk IOPS. If this amount is at 100%, your application running is IO capped from your OS disk's IOPS limit.-- **OS Disk Bandwidth Consumed Percentage**: The percentage calculated by the OS disk throughput completed over the provisioned OS disk throughput. If this amount is at 100%, your application running is IO capped from your OS disk's bandwidth limit.
+- **Data Disk IOPS Consumed Percentage**: The percentage calculated by dividing the actual data disk IOPS completed by the provisioned data disk IOPS. If this amount is at 100%, your application running is IO capped from your data disk's IOPS limit.
+- **Data Disk Bandwidth Consumed Percentage**: The percentage calculated by dividing the actual data disk throughput completed by the provisioned data disk throughput. If this amount is at 100%, your application running is IO capped from your data disk's bandwidth limit.
+- **OS Disk IOPS Consumed Percentage**: The percentage calculated by dividing the actual OS disk IOPS completed by the provisioned OS disk IOPS. If this amount is at 100%, your application running is IO capped from your OS disk's IOPS limit.
+- **OS Disk Bandwidth Consumed Percentage**: The percentage calculated by dividing the actual OS disk throughput completed by the provisioned OS disk throughput. If this amount is at 100%, your application running is IO capped from your OS disk's bandwidth limit.
Metrics that help diagnose VM IO capping: -- **VM Cached IOPS Consumed Percentage**: The percentage calculated by the total IOPS completed over the max cached virtual machine IOPS limit. If this amount is at 100%, your application running is IO capped from your VM's cached IOPS limit.-- **VM Cached Bandwidth Consumed Percentage**: The percentage calculated by the total disk throughput completed over the max cached virtual machine throughput. If this amount is at 100%, your application running is IO capped from your VM's cached bandwidth limit.-- **VM uncached IOPS Consumed Percentage**: The percentage calculated by the total IOPS on a virtual machine completed over the max uncached virtual machine IOPS limit. If this amount is at 100%, your application running is IO capped from your VM's uncached IOPS limit.-- **VM Uncached Bandwidth Consumed Percentage**: The percentage calculated by the total disk throughput on a virtual machine completed over the max provisioned virtual machine throughput. If this amount is at 100%, your application running is IO capped from your VM's uncached bandwidth limit.
+- **VM Cached IOPS Consumed Percentage**: The percentage calculated by dividing the total actual cached IOPS completed by the max cached virtual machine IOPS limit. If this amount is at 100%, your application running is IO capped from your VM's cached IOPS limit.
+- **VM Cached Bandwidth Consumed Percentage**: The percentage calculated by dividing the total actual cached throughput completed by the max cached virtual machine throughput. If this amount is at 100%, your application running is IO capped from your VM's cached bandwidth limit.
+- **VM uncached IOPS Consumed Percentage**: The percentage calculated by dividing the total actual uncached IOPS on a virtual machine completed by the max uncached virtual machine IOPS limit. If this amount is at 100%, your application running is IO capped from your VM's uncached IOPS limit.
+- **VM Uncached Bandwidth Consumed Percentage**: The percentage calculated by dividing the total actual uncached throughput on a virtual machine completed by the max provisioned virtual machine throughput. If this amount is at 100%, your application running is IO capped from your VM's uncached bandwidth limit.
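+
+These consumed-percentage metrics can also be queried programmatically. A minimal Azure CLI sketch, assuming `$vmId` holds the virtual machine's resource ID:
+
+```azurecli
+# Sample data disk IOPS consumption at one-minute granularity.
+az monitor metrics list \
+    --resource $vmId \
+    --metric "Data Disk IOPS Consumed Percentage" \
+    --interval PT1M \
+    --aggregation Average
+```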
## Storage IO metrics example
virtual-machines Tenable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/tenable.md
Lines 5-6 in the PowerShell example are equivalent to Step 6 in the UI procedure
### Nessus Linking Key
-The most important field is the Nessus Linking Key (**nessusLinkingKey**, required). It is always required. This document explains where to find it: [Retrieve the Tenable Nessus Agent Linking Key (Tenable Nessus Agent 10.4)](https://docs.tenable.com/nessusagent/Content/RetrieveLinkingKey.htm). In the PowerShell interface, specify nessusLinkingKey under `-ProtectedSettings` so that it will be encrypted by Azure. All other fields are passed unencrypted through -Settings.
+The most important field is the Nessus Linking Key (**nessusLinkingKey**), which is always required. This document explains where to find it: [Retrieve the Tenable Nessus Agent Linking Key (Tenable Nessus Agent 10.4)](https://docs.tenable.com/nessus/Content/RetrieveLinkingKey.htm). In the PowerShell interface, specify nessusLinkingKey under `-ProtectedSettings` so that Azure encrypts it. All other fields are passed unencrypted through `-Settings`.
You can choose whether to link with Nessus Manager or Tenable.io. In the command-line interface, this is done by setting `nessusManagerApp` (**nessusManagerApp**, required) to `cloud`, or to `local`. Those are the only two choices.
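As an illustration of the same split between plain and protected settings in the Azure CLI — the publisher and extension names below are placeholders, not confirmed values; take the real ones from the Tenable listing:

```azurecli
# Pass the linking key through protected settings so Azure encrypts it.
az vm extension set \
    --resource-group myresourcegroup \
    --vm-name myvm \
    --publisher Tenable \
    --name NessusAgent \
    --settings '{"nessusManagerApp": "cloud"}' \
    --protected-settings '{"nessusLinkingKey": "<your-linking-key>"}'
```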
virtual-machines Create Ssh Secured Vm From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/create-ssh-secured-vm-from-template.md
Previously updated : 03/22/2019 Last updated : 02/01/2023
An alternative is to deploy the template from the Azure portal. To open the temp
Azure Resource Manager templates are JSON files that define the infrastructure and configuration of your Azure solution. By using a template, you can repeatedly deploy your solution throughout its lifecycle and have confidence your resources are deployed in a consistent state. To learn more about the format of the template and how you construct it, see [Quickstart: Create and deploy Azure Resource Manager templates by using the Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md). To view the JSON syntax for resources types, see [Define resources in Azure Resource Manager templates](/azure/templates/microsoft.compute/allversions).
-## Create a virtual machine
+## Quickstart template
-Creating an Azure virtual machine usually includes two steps:
+>[!NOTE]
+> The provided template creates an [Azure Generation 2 VM](../generation-2.md) by default.
-1. Create a resource group. An Azure resource group is a logical container into which Azure resources are deployed and managed. A resource group must be created before a virtual machine.
-1. Create a virtual machine.
+>[!NOTE]
+> Only SSH authentication is enabled by default when using the quickstart template. When prompted, provide the value of your own SSH public key, such as the contents of *~/.ssh/id_rsa.pub*.
+>
+> If you don't have an SSH key pair, [create and use an SSH key pair for Linux VMs in Azure](mac-create-ssh-keys.md).
+
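+If you need to generate a key pair first, a minimal sketch (the key path and comment are illustrative):
+
+```bash
+ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -C "azureuser"
+```
+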
+Click **Copy** to add the quickstart template to your clipboard:
-The following example creates a VM from an [Azure Quickstart template](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.compute/vm-sshkey/azuredeploy.json). This template creates an Azure Generation 2 VM by default. See [Support for generation 2 VMs on Azure](../generation-2.md) to learn more about Azure Generation 2 VMs. Only SSH authentication is allowed for this deployment. When prompted, provide the value of your own SSH public key, such as the contents of *~/.ssh/id_rsa.pub*. If you need to create an SSH key pair, see [How to create and use an SSH key pair for Linux VMs in Azure](mac-create-ssh-keys.md). Here is a copy of the template:
[!code-json[create-linux-vm](~/quickstart-templates/quickstarts/microsoft.compute/vm-sshkey/azuredeploy.json)]
-To run the CLI script, Select **Try it** to open the Azure Cloud shell. To paste the script, right-click the shell, and then select **Paste**:
+You can also download or create a template and specify the local path with the `--template-file` parameter.
+
+## Create a quickstart template VM with Azure CLI
+
+After acquiring or creating a quickstart template, create a VM with it using the Azure CLI.
+
+The following command requests several pieces of input from the user. These include:
+- Name of the Resource Group (resourceGroupName)
+- Location of the Azure datacenter that hosts the VM (location)
+- A name for resources related to the VM (projectName)
+- Username for the administrator user (username)
+- A public SSH key for accessing the VM's terminal (key)
+
+Creating an Azure virtual machine requires a [resource group](./../../azure-resource-manager/management/manage-resource-groups-portal.md). Quickstart templates include resource group creation as part of the process.
+
+To run the CLI script, click **Open Cloud Shell**. Once you have access to Azure Cloud Shell, click **Copy** to copy the command, right-click the shell, then select **Paste**.
```azurecli-interactive echo "Enter the Resource Group name:" &&
az deployment group create --resource-group $resourceGroupName --template-uri ht
az vm show --resource-group $resourceGroupName --name "$projectName-vm" --show-details --query publicIps --output tsv ```
-The last Azure CLI command shows the public IP address of the newly created VM. You need the public IP address to connect to the virtual machine. See the next section of this article.
-
-In the previous example, you specified a template stored in GitHub. You can also download or create a template and specify the local path with the `--template-file` parameter.
-
-Here are some additional resources:
--- To learn how to develop Resource Manager templates, see [Azure Resource Manager documentation](../../azure-resource-manager/index.yml).-- To see the Azure virtual machine schemas, see [Azure template reference](/azure/templates/microsoft.compute/allversions).-- To see more virtual machine template samples, see [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Compute&pageNumber=1&sort=Popular).
+The last line in the command shows the public IP address of the newly created VM. You need the public IP address to connect to the virtual machine.
## Connect to virtual machine
-You can then SSH to your VM as normal. Provide you own public IP address from the preceding command:
+You can then SSH to your VM as normal. Provide your own public IP address from the preceding command:
```bash ssh <adminUsername>@<ipAddress> ```
-## Next steps
+## Other templates
-In this example, you created a basic Linux VM. For more Resource Manager templates that include application frameworks or create more complex environments, browse the [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Compute&pageNumber=1&sort=Popular).
+In this example, you created a basic Linux VM. For more Resource Manager templates that include application frameworks or create more complex environments, browse the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Compute&pageNumber=1&sort=Popular).
To learn more about creating templates, view the JSON syntax and properties for the resources types you deployed:
To learn more about creating templates, view the JSON syntax and properties for
- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks) - [Microsoft.Network/networkInterfaces](/azure/templates/microsoft.network/networkinterfaces) - [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines)+
+## Next steps
+
+- To learn how to develop Resource Manager templates, see [Azure Resource Manager documentation](../../azure-resource-manager/index.yml).
+- To see the Azure virtual machine schemas, see [Azure template reference](/azure/templates/microsoft.compute/allversions).
+- To see more virtual machine template samples, see [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Compute&pageNumber=1&sort=Popular).
virtual-machines Previous Gen Sizes List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/previous-gen-sizes-list.md
Title: Previous generation Azure VM sizes
+ Title: Previous generation Azure VM size series
description: A list containing all previous generation and capacity limited VM size series. Previously updated : 01/31/2024 Last updated : 02/02/2024
-# Previous generation Azure VM sizes
+# Previous generation Azure VM size series
-This article provides a list of all sizes that are considered *previous-gen* or *capacity limited*. For sizes that require it there are *migration guides* to help move to replacement sizes.
-
-To learn more about size series retirement, see the [size series retirement overview](./retirement-overview.md).
+This article provides a list of all size series that are considered *previous-gen*. Status is listed as *next-gen available* or *capacity limited* based on current capacity. For sizes that require it, there are *migration guides* to help you move to replacement sizes.
> [!NOTE]
-> *Previous generation* and *capacity limited* sizes **are not currently retired** and can still be used.
-
-## What are previous-gen sizes?
-Previous generations virtual machine sizes can still be used, but there are newer generations available. Capacity increases are not guaranteed for previous-gen sizes. It's recommended to migrate to the latest generation replacements.
-
-## What are capacity limited previous-gen sizes?
-Capacity limited virtual machine sizes are older sizes which are still fully supported, but they won't receive more capacity. Unlike other size series which will be deployed based on demand, capacity limited sizes are limited to what is currently deployed and decreases as hardware is phased out. There are newer or alternative sizes that are generally available.
+> Series listed as *previous-gen* are **not currently retired** and can still be used.
+## What are previous-gen size series?
+Previous-generation virtual machine size series run on older hardware. While they can still be used and are fully supported, there are newer generations available. It's recommended to migrate to the latest generation replacements.
+To learn more about previous-gen sizes, retirement, and the status states of previous-gen size series, see the [size series retirement overview](./retirement-overview.md).
## General purpose previous-gen sizes

|Series name | Status | Migration guide |
|-|-|-|
-| Basic A-series | Capacity limited |
-| Standard A-series | Capacity limited |
-| Compute-intensive A-series | Capacity limited |
-| Standard D-series | Capacity limited |
-| Preview DC-series | Capacity limited |
-| DS-series | Capacity limited |
+| Basic A-series | [Capacity limited](./retirement-overview.md#capacity-limited) |
+| Standard A-series | [Capacity limited](./retirement-overview.md#capacity-limited) |
+| Compute-intensive A-series | [Capacity limited](./retirement-overview.md#capacity-limited) |
+| Standard D-series | [Capacity limited](./retirement-overview.md#capacity-limited) |
+| Preview DC-series | [Capacity limited](./retirement-overview.md#capacity-limited) |
+| DS-series | [Capacity limited](./retirement-overview.md#capacity-limited) |
For a list of general purpose sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired general purpose sizes](./retired-sizes-list.md#general-purpose-retired-sizes).
## Compute optimized previous-gen sizes

|Series name | Status | Migration guide |
|-|-|-|
-| F-series | Previous-gen | |
-| Fs-series | Previous-gen | |
+| F-series | [Next-gen available](./retirement-overview.md#next-gen-available) | |
+| Fs-series | [Next-gen available](./retirement-overview.md#next-gen-available) | |
For a list of compute optimized sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired compute optimized sizes](./retired-sizes-list.md#compute-optimized-retired-sizes).
## Memory optimized previous-gen sizes

|Series name | Status | Migration guide |
|-|-|-|
-| GS-series | Capacity limited | |
-| G-series | Capacity limited | |
-| Memory-optimized D-series | Capacity limited | |
-| Memory-optimized DS-series| Capacity limited | |
+| GS-series | [Capacity limited](./retirement-overview.md#capacity-limited) | |
+| G-series | [Capacity limited](./retirement-overview.md#capacity-limited) | |
+| Memory-optimized D-series | [Capacity limited](./retirement-overview.md#capacity-limited) | |
+| Memory-optimized DS-series| [Capacity limited](./retirement-overview.md#capacity-limited) | |
For a list of memory optimized sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired memory optimized sizes](./retired-sizes-list.md#memory-optimized-retired-sizes).
## Storage optimized previous-gen sizes

|Series name | Status | Migration guide |
|-|-|-|
-| Ls-series | Capacity limited | |
+| Ls-series | [Capacity limited](./retirement-overview.md#capacity-limited) | |
For a list of storage optimized sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired storage optimized sizes](./retired-sizes-list.md#storage-optimized-retired-sizes).
## GPU accelerated previous-gen sizes

|Series name | Status | Migration guide |
|-|-|-|
-| NVv2-series | Previous-gen | |
+| NVv2-series | [Next-gen available](./retirement-overview.md#next-gen-available) | |
For a list of GPU accelerated sizes listed as "retired" and "announced for retirement" (sizes that are no longer available or soon to be unavailable for use), see [retired GPU accelerated sizes](./retired-sizes-list.md#gpu-accelerated-retired-sizes).
virtual-machines Retired Sizes List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/retired-sizes-list.md
Title: Retired Azure VM sizes
+ Title: Retired Azure VM size series
description: A list containing all retired and soon to be retired VM size series and their replacement series. Previously updated : 01/31/2024 Last updated : 02/02/2024
-# Retired Azure VM sizes
+# Retired Azure VM size series
This article provides a list of all size series that are retired or have been announced for retirement. For sizes that require it, there are migration guides to help you move to replacement sizes.
-To learn more about size series retirement, see the [size series retirement overview](./retirement-overview.md).
+> [!WARNING]
+> Series with *Retirement Status* listed as *Retired* are **no longer available** and can't be provisioned.
-> [!NOTE]
-> Series with *Retirement Status* listed as **Retired** are **no longer available** and can't be provisioned.
->
-> If you are currently using one of the size series listed as *Retired*, view the migration guide to switch to a replacement series as soon as possible.
+## What are retired size series?
+Retired virtual machine size series run on older hardware that is no longer supported. The hardware will be replaced with newer generations of hardware.
+
+Series with *Retirement Status* listed as *Announced* are still available, but will be retired on the *Planned Retirement Date*. It's recommended that you plan your migration to a replacement series well before the listed retirement date.
-Series with *Retirement Status* listed as **Announced** are still available, but will be retired on the *Planned Retirement Date*. It's recommended that you plan your migration to a replacement series well before the listed retirement date.
+To learn more about size series retirement, previous-gen sizes, and the retirement process, see the [size series retirement overview](./retirement-overview.md).
+
+> [!IMPORTANT]
+> If you are currently using one of the size series listed as *Retired*, view the migration guide to switch to a replacement series as soon as possible.
-*Capacity limited* series and *previous-gen* series are not retired and still fully supported, but they have limitations similar to series that are announced for retirement. For a list of previous-gen sizes, see [previous generation Azure VM sizes](./previous-gen-sizes-list.md).
+*Previous-gen* size series are not retired and still fully supported, but they have limitations similar to series that are announced for retirement. For a list of previous-gen sizes, see [previous generation Azure VM sizes](./previous-gen-sizes-list.md).
## General purpose retired sizes
Currently there are no storage optimized series retired or announced for retirement.
|-|-|--|-|--|
| NV-Series | **Retired** | - | 9/6/23 | [NV-series Retirement](./migration-guides/nv-series-retirement.md) |
| NC-Series | **Retired** | - | 9/6/23 | [NC-series Retirement](./migration-guides/nc-series-retirement.md) |
-| NCv2-Series | **Retired** | - | 9/6/23 | [NC-series Retirement](./migration-guides/ncv2-series-retirement.md) |
-| ND-Series | **Retired** | - | 9/6/23 | [NC-series Retirement](./migration-guides/nd-series-retirement.md) |
+| NCv2-Series | **Retired** | - | 9/6/23 | [NCv2-series Retirement](./migration-guides/ncv2-series-retirement.md) |
+| ND-Series | **Retired** | - | 9/6/23 | [ND-series Retirement](./migration-guides/nd-series-retirement.md) |
## FPGA accelerated retired sizes
virtual-machines Retirement Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/retirement-overview.md
Title: Previous-gen and retired VM sizes
-description: Overview of the retirement process for virtual machine sizes and information on previous-gen sizes.
+description: Overview of the retirement process for virtual machine size series and information on previous-gen sizes.
Previously updated : 01/31/2024 Last updated : 02/02/2024
-# Previous-gen and retired VM sizes
+# Previous-gen and retired VM size series
- This process requires that previously established VM sizes are moved to "previous-gen" status, then eventually retired and made unavailable. This article provides an overview of the retirement of virtual machine sizes and explains the reasoning behind this process.
+Azure virtual machine sizes define the amount of resources allocated to a virtual machine in the cloud. These resources are a portion of the physical server's hardware capabilities. A *size series* is a collection of all sizes that are available within a single physical server's hardware. As a size series' physical hardware ages and newer components are released, Microsoft stops deploying more of the previously established series' hardware. Once users migrate off that hardware or the hardware becomes sufficiently outdated, it's retired to make room for new infrastructure.
![A diagram showing a greyed out Azure VM icon with an arrow pointing to a new sparkling Azure VM icon.](./media/size-retirement-new-vm.png "Moving from old to new VM sizes")
-When hardware begins the retirement process, it's recommended to migrate workloads to newer generation hardware that provides better performance and reliability. This helps you to avoid any potential issues that may arise from using outdated hardware. By keeping your hardware up-to-date, you can ensure that your workloads are running smoothly and efficiently.
+When hardware becomes *previous-gen* or begins the retirement process, we recommend migrating workloads to newer generation hardware. Newer hardware provides better performance and future scalability. Migrating also helps you to avoid any potential issues that may arise from using outdated hardware. By keeping your hardware up-to-date, you can ensure that your workloads are running smoothly and efficiently.
+
+This article describes the various states of older hardware and explains the processes behind these states.
## Previous-gen sizes
-Previous generation sizes **are not currently retired** and can still be used. These sizes are still fully supported, but they won't receive more capacity. It's recommended to migrate to the latest generation replacements as soon as possible. For a list of sizes that are considered "previous-gen", see the [list of previous-gen sizes](./previous-gen-sizes-list.md).
+Previous generation size series **are not currently retired** and can still be used. These sizes are still fully supported, but they aren't guaranteed to receive more capacity. We recommend migrating to the latest generation replacements as soon as possible, especially when trying to scale up your installations.
+
+There are two types of previous-gen sizes: *next-gen available* and *capacity limited*.
+
+### Next-gen available
+
+A size series listed as *next-gen available* has no capacity limitation plans yet, but there are sufficient deployments of newer alternative sizes to justify considering the series "previous-gen". *Next-gen available* series have no formally announced retirement timelines, but they'll eventually be retired.
+
+For a list of previous-gen size series that are considered *next-gen available*, see the [list of previous-gen sizes](./previous-gen-sizes-list.md).
+
+### Capacity limited
+
+A size series listed as *capacity limited* has no retirement plans yet, but no extra capacity for the series will be deployed. As with *next-gen available* series, there are sufficient deployments of newer alternative sizes available to replace *capacity limited* deployments.
+
+If you have one VM running on a *capacity limited* size and require another VM of a similar size in the same series, there's no guarantee that capacity will be available to fulfill the request. Any new VMs should be created using sizes in [series that aren't considered previous-gen](../sizes.md). Migrating to sizes considered *next-gen available* isn't recommended, since those sizes are similarly outdated.
+
+For a list of previous-gen size series that are considered *capacity limited*, see the [list of previous-gen sizes](./previous-gen-sizes-list.md).
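Before counting on more capacity in a *capacity limited* series, you can check what a region currently offers. The following Azure CLI sketch assumes the `eastus` region and uses the G-series purely as a placeholder:

```bash
# List G-series SKUs in a region; SKUs that can't currently be deployed
# are flagged in the Restrictions column of the table output.
az vm list-skus --location eastus --size Standard_G --output table
```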
## Retired sizes
-Retiring hardware is necessary over time to ensure that the latest and greatest technology is available on Azure. This ensures that the hardware is reliable, secure, and efficient. It also allows for the latest features and capabilities that may not be present on previous generations of hardware.
+Retired size series are **no longer available** and can't be used. If you have VMs running on sizes that are considered *retired*, migrate to a newer next-gen size as soon as possible.
+
+Retiring hardware is necessary over time to ensure that the latest and greatest technology is available on Azure. Retirement ensures that the hardware is reliable, secure, and efficient. It also allows for the latest features and capabilities that may not be present on previous generations of hardware.
-Retired sizes are **no longer available** and can't be used. For a list of retired sizes, see the [list of retired sizes](./retired-sizes-list.md).
+Hardware that's currently listed as *previous-gen* is often preparing for retirement, which is formally announced before the retirement date.
+
+For information on size series that have been retired or are scheduled for retirement, see the [list of retired sizes](./retired-sizes-list.md).
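To check whether any of your existing VMs run on one of the series in these lists, you can query their sizes with the Azure CLI. A minimal sketch:

```bash
# List every VM in the subscription with its current size, for comparison
# against the previous-gen and retired size lists.
az vm list --query "[].{name:name, size:hardwareProfile.vmSize}" --output table
```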
## Migrate to newer sizes
-Migrating to newer sizes allows you to keep up with the latest hardware available on Azure. You can [resize your VM](./resize-vm.md) to a newer size using the Azure portal, Azure PowerShell, Azure CLI, or Terraform.
+Migrating to newer sizes allows you to keep up with the latest hardware and features available on Azure.
+
+Some size series have specific migration instructions due to unique hardware or software features. For a list of migration guides, see [retired size migration guides](./retired-sizes-list.md) and [previous-gen size migration guides](./previous-gen-sizes-list.md).
+
+For sizes without specific instructions, you can [resize your VM](./resize-vm.md) to a newer size using the Azure portal, Azure PowerShell, Azure CLI, or Terraform. Make sure that the new size has support for all features that your workload requires.
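As a rough Azure CLI sketch (the resource names and the `Standard_D2s_v5` target are placeholders; confirm the target size is actually offered to the VM first):

```bash
# List the sizes this VM can be resized to on its current hardware cluster.
az vm list-vm-resize-options --resource-group myResourceGroup --name myVM --output table

# Resize the VM; note that resizing restarts the VM.
az vm resize --resource-group myResourceGroup --name myVM --size Standard_D2s_v5
```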
## Next steps

- For more information on VM sizes, see [Sizes for virtual machines in Azure](../sizes.md).
- For a list of retired sizes, see [Retired Azure VM sizes](./retired-sizes-list.md).
-- For a list of previous-gen sizes, see [Previous generation Azure VM sizes](./previous-gen-sizes-list.md).
+- For a list of previous-gen sizes, see [Previous generation Azure VM sizes](./previous-gen-sizes-list.md).
virtual-machines Partner Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/partner-workloads.md
For more help with mainframe emulation and services, refer to the [Azure Mainfra
## Migrate mainframe closer to Azure
-- [ASNA Visual RPG for .NET](https://asna.com/us/products/visual-rpg) compiler for the .NET Framework via Visual Studio plug-in.
+- [ASNA Visual RPG for .NET](https://docs.asna.com/documentation/Help170/AVR/_HTML/AvrInstallationandsetup.htm) compiler for the .NET Framework via Visual Studio plug-in.
- Asysco AMT COBOL development environment (Unisys, IBM mainframes, and other COBOL dialects such as Micro Focus COBOL).
- Asysco AMT GO cloud-based deployment architecture for high-end workloads.
- Asysco AMT Transform for converting data, code, scripting, security, interfaces, and other mainframe artifacts.
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
The accounts you use to apply TAP configuration on network interfaces must be as
- [Ixia CloudLens](https://www.ixiacom.com/cloudlens/cloudlens-azure)
-- [cPacket Cloud Visbility](https://www.cpacket.com/solutions/cloud-visibility/)
+- [cPacket Cloud Visibility](https://www.cpacket.com/resources/ccloud/)
- [Big Switch Big Monitoring Fabric](https://www.arista.com/en/bigswitch)
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub
The current behavior is to prefer the ExpressRoute circuit path for standalone (non-Virtual WAN) VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the standalone VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN hub router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE).
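A hedged sketch of creating that connection with the Azure CLI (the `az network vhub` commands come from the virtual-wan CLI extension; all names and IDs below are placeholders):

```bash
# Connect the standalone VNet directly to the Virtual WAN hub so that
# VNet-to-VNet traffic transits the hub router instead of the MSEE path.
az network vhub connection create \
    --resource-group myResourceGroup \
    --vhub-name myVirtualHub \
    --name standalone-vnet-connection \
    --remote-vnet "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```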
+> [!NOTE]
+> As of February 1, 2024, the backend functionality for the following toggles hasn't rolled out to all regions. As a result, you may see the toggle option, but enabling or disabling the toggle won't have any effect. The backend functionality is expected to finish rolling out within the next several weeks.
+>
+ In the Azure portal, the **Allow traffic from remote Virtual WAN networks** and **Allow traffic from non Virtual WAN networks** toggles allow connectivity between the standalone virtual network (VNet 4) and the spoke virtual networks directly connected to the Virtual WAN hub (VNet 2 and VNet 3). To allow this connectivity, both toggles need to be enabled: the **Allow traffic from remote Virtual WAN networks** toggle for the ExpressRoute gateway in the standalone virtual network and the **Allow traffic from non Virtual WAN networks** for the ExpressRoute gateway in the Virtual WAN hub. In the diagram below, if both of these toggles are enabled, then connectivity would be allowed between the standalone VNet 4 and the VNets directly connected to hub 2 (VNet 2 and VNet 3). If an Azure Route Server is deployed in standalone VNet 4, and the Route Server has [branch-to-branch](../route-server/quickstart-configure-route-server-portal.md#configure-route-exchange) enabled, then connectivity will be blocked between VNet 1 and standalone VNet 4.
+Enabling or disabling the toggle only affects the following traffic flow: traffic flowing between the Virtual WAN hub and standalone VNet(s) via the ExpressRoute circuit. Enabling or disabling the toggle will **not** incur downtime for any other traffic flow (for example, traffic from an on-premises site to spoke VNet 2 and traffic from VNet 2 to VNet 3 aren't impacted).
:::image type="content" source="./media/virtual-wan-expressroute-portal/expressroute-bowtie-virtual-network-virtual-wan.png" alt-text="Diagram of a standalone virtual network connecting to a virtual hub via ExpressRoute circuit." lightbox="./media/virtual-wan-expressroute-portal/expressroute-bowtie-virtual-network-virtual-wan.png":::

### Can hubs be created in different resource groups in Virtual WAN?
web-application-firewall Waf New Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-new-threat-detection.md
The [Azure Network Security GitHub repository](https://github.com/Azure/Azure-Ne
The following prerequisites are required to set up analytic rules:
-- A working WAF and a Log Analytic Workspace that is configured to receive logs from the respective Azure Application Gateway or Azure Front Door. For more information, see [Resource logs for Azure Web Application Firewall](ag/web-application-firewall-logs.md).
-- Additionally, Microsoft Sentinel should be enabled for the Log Analytic Workspace that is being used here. For more information, see [Quickstart: Onboard Microsoft Sentinel](../sentinel/quickstart-onboard.md).
+- A working WAF and a Log Analytics Workspace that is configured to receive logs from the respective Azure Application Gateway or Azure Front Door (a CLI sketch for this step follows the list). For more information, see [Resource logs for Azure Web Application Firewall](ag/web-application-firewall-logs.md).
+- Additionally, Microsoft Sentinel should be enabled for the Log Analytics Workspace that is being used here. For more information, see [Quickstart: Onboard Microsoft Sentinel](../sentinel/quickstart-onboard.md).
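For the first prerequisite, the following Azure CLI sketch routes Application Gateway WAF logs to a Log Analytics workspace. The resource IDs and the setting name are placeholders, and the `ApplicationGatewayFirewallLog` category applies to Application Gateway; Azure Front Door uses a different log category:

```bash
# Send Application Gateway WAF logs to the Log Analytics workspace
# that Microsoft Sentinel is onboarded to.
az monitor diagnostic-settings create \
    --name waf-logs-to-workspace \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>" \
    --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
    --logs '[{"category": "ApplicationGatewayFirewallLog", "enabled": true}]'
```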
Use the following steps to configure an analytic rule in Sentinel.
```

:::image type="content" source="media/waf-new-threat-detection/rule-query.png" alt-text="Screenshot showing the rule query." lightbox="media/waf-new-threat-detection/rule-query.png":::

> [!NOTE]
- > It is important to ensure that the WAF logs are already in the Log Analytic Workspace before you create this Analytical rule. Otherwise, Sentinel will not recognize some of the columns in the query and you will have to add extra input like `| extend action_s = column_ifexists(ΓÇ£action_sΓÇ¥, ΓÇ£ΓÇ¥), transactionId_g = column_ifexists(ΓÇ£transactionId_gΓÇ¥, ΓÇ£ΓÇ¥)` for each column that gives an error. This input creates the column names manually and assigns them null values. To skip this step, send the WAF logs to the workspace first.
+ > It is important to ensure that the WAF logs are already in the Log Analytics Workspace before you create this Analytical rule. Otherwise, Sentinel will not recognize some of the columns in the query and you will have to add extra input like `| extend action_s = column_ifexists("action_s", ""), transactionId_g = column_ifexists("transactionId_g", "")` for each column that gives an error. This input creates the column names manually and assigns them null values. To skip this step, send the WAF logs to the workspace first.
1. On the **Incident Settings** page, enable **Create incidents from alerts triggered by this analytics rule**. Configure the alert grouping as required.

1. Optionally, add an automated response to the incident if needed. For more detailed information on automated response configuration, see [Automated detection and response for Azure WAF with Microsoft Sentinel](afds/automated-detection-response-with-sentinel.md).