Updates from: 02/29/2024 02:09:56
Service Microsoft Docs article Related commit history on GitHub Change details
advisor Advisor Assessments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-assessments.md
You can manage access to Advisor WAF assessments using built-in roles. The permi
| **Name** | **Description** |
|--|--|
-|Reader|View assessments for a workload and the corresponding recommendations|
-|Contributor|Create assessments for a workload and triage the corresponding recommendations|
+|Reader|View assessments for a subscription or workload and the corresponding recommendations|
+|Contributor|Create assessments for a subscription or workload and triage the corresponding recommendations|
## Access Azure Advisor WAF assessments
Once one or more recommendations are selected with **Mar
Here are some common questions and answers. **Q**. Can I edit previously taken assessments?\
-**A**. In the "Most Valuable Professionals" (MVP) program scope, assessments can't be edited once completed.
+**A**. In the current program, assessments can't be edited once completed.
**Q**. Why am I not getting any recommendations?\ **A**. If you didn't answer all of the assessment questions and skipped to **View guidance**, you might not get any recommendations generated. The other reason might be that the Learn platform hasn't generated any recommendations for the assessment.
ai-services Cognitive Services Limited Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-limited-access.md
Previously updated : 10/27/2022 Last updated : 02/27/2024
Our vision is to empower developers and organizations to use AI to transform soc
## What is Limited Access?
-Limited Access services require registration, and only customers managed by Microsoft, meaning those who are working directly with Microsoft account teams, are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they've reviewed and agree to the terms of service. Microsoft may require customers to reverify this information.
+Limited Access services require registration, and only customers managed by Microsoft—meaning those who are working directly with Microsoft account teams—are eligible for access. The use of these services is limited to the use case selected at the time of registration. Customers must acknowledge that they've reviewed and agree to the terms of service. Microsoft may require customers to reverify this information.
Limited Access services are made available to customers under the terms governing their subscription to Microsoft Azure Services (including the [Service Specific Terms](https://go.microsoft.com/fwlink/?linkid=2018760)). Review these terms carefully as they contain important conditions and obligations governing your use of Limited Access services.
Submit a registration form for each Limited Access service you would like to use
### How long will the registration process take?
-Review may take 5-10 business days. You will receive an email as soon as your application is reviewed.
+You'll receive communication from us about your application within 5-10 business days. In some cases, reviews can take longer. You'll receive an email as soon as your application is reviewed.
### Who is eligible to use Limited Access services?
Detailed information about supported regions for Custom Neural Voice and Speaker
If you're an existing customer and your application for access is denied, you will no longer be able to use Limited Access features after June 30, 2023. Your data is subject to Microsoft's data retention [policies](https://www.microsoft.com/trust-center/privacy/data-management#:~:text=If%20you%20terminate%20a%20cloud,data%20or%20renew%20your%20subscription.).
-### How long will the registration process take?
-
-You'll receive communication from us about your application within 10 business days. In some cases, reviews can take longer. You'll receive an email as soon as your application is reviewed.
## Help and support
ai-services Concept Describe Images 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-describe-images-40.md
The following JSON response illustrates what the Analysis 4.0 API returns when g
} ] },
- "modelVersion": "2023-10-01",
+ "modelVersion": "2024-02-01",
"metadata": { "width": 850, "height": 567
ai-services Concept Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-ocr.md
The following JSON response illustrates what the Image Analysis 4.0 API returns
```json {
- "modelVersion": "2023-10-01",
+ "modelVersion": "2024-02-01",
"metadata": { "width": 1000,
ai-services Concept People Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/concept-people-detection.md
The following JSON response illustrates what the Analysis 4.0 API returns when d
```json {
- "modelVersion": "2023-10-01",
+ "modelVersion": "2024-02-01",
"metadata": { "width": 300, "height": 231
ai-services Deploy Computer Vision On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md
Previously updated : 05/09/2022 Last updated : 02/27/2024
ai-services Analyze Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/analyze-video.md
Previously updated : 07/05/2022 Last updated : 02/27/2024 ms.devlang: csharp # Analyze videos in near real time
-This article demonstrates how to perform near real-time analysis on frames that are taken from a live video stream by using the Azure AI Vision API. The basic elements of such an analysis are:
+This article demonstrates how to use the Azure AI Vision API to perform near real-time analysis on frames that are taken from a live video stream. The basic elements of such an analysis are:
-- Acquiring frames from a video source.-- Selecting which frames to analyze.-- Submitting these frames to the API.-- Consuming each analysis result that's returned from the API call.
+- Acquiring frames from a video source
+- Selecting which frames to analyze
+- Submitting these frames to the API
+- Consuming each analysis result that's returned from the API call
-The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+> [!TIP]
+> The samples in this article are written in C#. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
## Approaches to running near real-time analysis
-You can solve the problem of running near real-time analysis on video streams by using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
+You can solve the problem of running near real-time analysis on video streams using a variety of approaches. This article outlines three of them, in increasing levels of sophistication.
-### Design an infinite loop
+### Method 1: Design an infinite loop
-The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, you grab a frame, analyze it, and then consume the result:
+The simplest design for near real-time analysis is an infinite loop. In each iteration of this loop, the application retrieves a frame, analyzes it, and then processes the result:
```csharp while (true)
while (true)
If your analysis were to consist of a lightweight, client-side algorithm, this approach would be suitable. However, when the analysis occurs in the cloud, the resulting latency means that an API call might take several seconds. During this time, you're not capturing images, and your thread is essentially doing nothing. Your maximum frame rate is limited by the latency of the API calls.
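For readers who want a self-contained picture of this first pattern, the following minimal sketch shows the single-threaded loop in isolation. The `grabFrameAsync`, `analyzeFrameAsync`, and `consumeResult` delegates are hypothetical placeholders, not the sample's actual API; the real implementation is in the GitHub sample linked above.

```csharp
// Minimal sketch of the single-threaded loop: grab a frame, analyze it, consume the result.
// The delegates below are placeholders for whatever camera and Vision API code you use.
using System;
using System.Threading.Tasks;

public static class SingleThreadedLoop
{
    public static async Task RunAsync(
        Func<Task<byte[]>> grabFrameAsync,            // acquire a frame from the video source
        Func<byte[], Task<string>> analyzeFrameAsync, // cloud API call (can take seconds)
        Action<string> consumeResult)                 // consume the analysis result
    {
        while (true)
        {
            byte[] frame = await grabFrameAsync();
            string result = await analyzeFrameAsync(frame); // the loop stalls here for the full API latency
            consumeResult(result);
        }
    }
}
```

Because the analysis call is awaited inline, no new frames are captured while it's in flight, which is exactly the frame-rate limitation described here.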
-### Allow the API calls to run in parallel
+### Method 2: Allow the API calls to run in parallel
Although a simple, single-threaded loop makes sense for a lightweight, client-side algorithm, it doesn't fit well with the latency of a cloud API call. The solution to this problem is to allow the long-running API call to run in parallel with the frame-grabbing. In C#, you could do this by using task-based parallelism. For example, you can run the following code:
With this approach, you launch each analysis in a separate task. The task can ru
* It could also cause multiple threads to enter the ConsumeResult() function simultaneously, which might be dangerous if the function isn't thread-safe. * Finally, this simple code doesn't keep track of the tasks that get created, so exceptions silently disappear. Thus, you need to add a "consumer" thread that tracks the analysis tasks, raises exceptions, kills long-running tasks, and ensures that the results get consumed in the correct order, one at a time.
-### Design a producer-consumer system
+### Method 3: Design a producer-consumer system
-For your final approach, designing a "producer-consumer" system, you build a producer thread that looks similar to your previously mentioned infinite loop. However, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
+To design a "producer-consumer" system, you build a producer thread that looks similar to the previous section's infinite loop. Then, instead of consuming the analysis results as soon as they're available, the producer simply places the tasks in a queue to keep track of them.
```csharp // Queue that will contain the API call tasks.
while (true)
} ```
-You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using the queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
+You also create a consumer thread, which takes tasks off the queue, waits for them to finish, and either displays the result or raises the exception that was thrown. By using this queue, you can guarantee that the results get consumed one at a time, in the correct order, without limiting the maximum frame rate of the system.
```csharp // Consumer thread.
while (true)
### Get sample code
-To help get your app up and running as quickly as possible, we've implemented the system that's described in the preceding section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) page on GitHub.
+To help get your app up and running as quickly as possible, we've implemented the system that's described in the previous section. It's intended to be flexible enough to accommodate many scenarios, while being easy to use. To access the code, go to the [Video frame analysis sample](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) repo on GitHub.
-The library contains the `FrameGrabber` class, which implements the previously discussed producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
+The library contains the `FrameGrabber` class, which implements the producer-consumer system to process video frames from a webcam. Users can specify the exact form of the API call, and the class uses events to let the calling code know when a new frame is acquired, or when a new analysis result is available.
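As a rough illustration of that producer-consumer shape (not the sample's actual `FrameGrabber` code), the sketch below uses a bounded `BlockingCollection` of in-flight analysis tasks; `grabFrame` and `analyzeAsync` are hypothetical delegates standing in for the camera and Vision API calls.

```csharp
// Hedged sketch of the producer-consumer pattern: the producer enqueues analysis tasks
// without waiting for them, and the consumer awaits them one at a time, in order.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ProducerConsumerSketch
{
    public static void Run(Func<byte[]> grabFrame, Func<byte[], Task<string>> analyzeAsync)
    {
        // Bounded queue of in-flight analysis tasks; the bound applies backpressure to the producer.
        var taskQueue = new BlockingCollection<Task<string>>(boundedCapacity: 10);

        // Producer: grab frames and start their analysis without blocking on the API call.
        Task producer = Task.Run(() =>
        {
            while (true)
            {
                byte[] frame = grabFrame();
                taskQueue.Add(analyzeAsync(frame)); // API calls run in parallel with frame grabbing
            }
        });

        // Consumer: dequeue tasks in order, await each one, and surface results or exceptions.
        Task consumer = Task.Run(async () =>
        {
            foreach (Task<string> analysisTask in taskQueue.GetConsumingEnumerable())
            {
                try
                {
                    Console.WriteLine(await analysisTask);
                }
                catch (Exception ex)
                {
                    Console.WriteLine($"Analysis failed: {ex.Message}");
                }
            }
        });

        Task.WaitAll(producer, consumer);
    }
}
```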
+
+### View sample implementations
To illustrate some of the possibilities, we've provided two sample apps that use the library.
-The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is reproduced in the following code:
+The first sample app is a simple console app that grabs frames from the default webcam and then submits them to the Face service for face detection. A simplified version of the app is represented in the following code:
```csharp using System;
namespace BasicConsoleSample
} ```
-The second sample app is a bit more interesting. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
+The second sample app offers more functionality. It allows you to choose which API to call on the video frames. On the left side, the app shows a preview of the live video. On the right, it overlays the most recent API result on the corresponding frame.
-In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the "EmotionsWithClientFaceDetect" mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure AI services.
+In most modes, there's a visible delay between the live video on the left and the visualized analysis on the right. This delay is the time that it takes to make the API call. An exception is in the `EmotionsWithClientFaceDetect` mode, which performs face detection locally on the client computer by using OpenCV before it submits any images to Azure AI services.
-By using this approach, you can visualize the detected face immediately. You can then update the emotions later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Azure AI services APIs can be used to augment this processing with more advanced analysis when necessary.
+By using this approach, you can visualize the detected face immediately. You can then update the attributes later, after the API call returns. This demonstrates the possibility of a "hybrid" approach. That is, some simple processing can be performed on the client, and then Azure AI services APIs can be used to augment this processing with more advanced analysis when necessary.
![The LiveCameraSample app displaying an image with tags](../images/frame-by-frame.jpg)
By using this approach, you can visualize the detected face immediately. You can
To get started with this sample, do the following: 1. Create an [Azure account](https://azure.microsoft.com/free/cognitive-services/). If you already have one, you can skip to the next step.
-2. Create resources for Azure AI Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
+1. Create resources for Azure AI Vision and Face in the Azure portal to get your key and endpoint. Make sure to select the free tier (F0) during setup.
- [Azure AI Vision](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) - [Face](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) After the resources are deployed, select **Go to resource** to collect your key and endpoint for each resource.
-3. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
-4. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
+1. Clone the [Cognitive-Samples-VideoFrameAnalysis](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/) GitHub repo.
+1. Open the sample in Visual Studio 2015 or later, and then build and run the sample applications:
- For BasicConsoleSample, the Face key is hard-coded directly in [BasicConsoleSample/Program.cs](https://github.com/Microsoft/Cognitive-Samples-VideoFrameAnalysis/blob/master/Windows/BasicConsoleSample/Program.cs). - For LiveCameraSample, enter the keys in the **Settings** pane of the app. The keys are persisted across sessions as user data.
-When you're ready to integrate the samples, reference the VideoFrameAnalyzer library from your own projects.
+When you're ready to integrate the samples, reference the **VideoFrameAnalyzer** library from your own projects.
-The image-, voice-, video-, and text-understanding capabilities of VideoFrameAnalyzer use Azure AI services. Microsoft receives the images, audio, video, and other data that you upload (via this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure AI services.
+The image-, voice-, video-, and text-understanding capabilities of **VideoFrameAnalyzer** use Azure AI services. Microsoft receives the images, audio, video, and other data that you upload (through this app) and might use them for service-improvement purposes. We ask for your help in protecting the people whose data your app sends to Azure AI services.
## Next steps
ai-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-analyze-image-40.md
Last updated 08/01/2023
-zone_pivot_groups: programming-languages-computer-vision-40
+zone_pivot_groups: programming-languages-computer-vision
# Call the Image Analysis 4.0 Analyze API
ai-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/call-read-api.md
Previously updated : 11/03/2022 Last updated : 02/27/2024 # Call the Azure AI Vision 3.2 GA Read API
-In this guide, you'll learn how to call the v3.2 GA Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs. This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Vision resource" target="_blank">create a Vision resource </a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
+This guide shows you how to call the v3.2 GA Read API to extract text from images. You'll learn the different ways you can configure the behavior of this API to meet your needs. This guide assumes you have already <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="created a Vision resource" target="_blank">created a Vision resource</a> and obtained a key and endpoint URL. If you haven't, follow a [quickstart](../quickstarts-sdk/client-library.md) to get started.
[!INCLUDE [read-editions](../includes/read-editions.md)] ## Input requirements
-The **Read** call takes images and documents as its input. They have the following requirements:
+The **Read** API call takes images and documents as its input. They have the following requirements:
* Supported file formats: JPEG, PNG, BMP, PDF, and TIFF
-* For PDF and TIFF files, up to 2000 pages (only the first two pages for the free tier) are processed.
-* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit.
+* For PDF and TIFF files, up to 2,000 pages (only the first two pages for the free tier) are processed.
+* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10,000 x 10,000 pixels. PDF files don't have a size limit.
* The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8-point font text at 150 DPI. >[!NOTE]
The **Read** call takes images and documents as its input. They have the followi
### Specify the OCR model
-By default, the service will use the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
+By default, the service uses the latest generally available (GA) model to extract text. Starting with Read 3.2, a `model-version` parameter allows choosing between the GA and preview models for a given API version. The model you specify will be used to extract text with the Read operation.
When using the Read operation, use the following values for the optional `model-version` parameter.
When using the Read operation, use the following values for the optional `model-
| Not provided | Latest GA model | | latest | Latest GA model| | [2022-04-30](../whats-new.md#may-2022) | Latest GA model. 164 languages for print text and 9 languages for handwritten text along with several enhancements on quality and performance |
-| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic and related languages. For handwritten text, adds support for Japanese and Korean. |
+| [2022-01-30-preview](../whats-new.md#february-2022) | Preview model adds print text support for Hindi, Arabic, and related languages. For handwritten text, adds support for Japanese and Korean. |
| [2021-09-30-preview](../whats-new.md#september-2021) | Preview model adds print text support for Russian and other Cyrillic languages. For handwritten text, adds support for Chinese Simplified, French, German, Italian, Portuguese, and Spanish. | | 2021-04-12 | 2021 GA model |
By default, the service outputs the text lines in the left to right order. Optio
:::image type="content" source="../Images/ocr-reading-order-example.png" alt-text="OCR Reading order example" border="true" :::
-### Select page(s) or page ranges for text extraction
+### Select page(s) or page range(s) for text extraction
By default, the service extracts text from all pages in the documents. Optionally, use the `pages` request parameter to specify page numbers or page ranges to extract text from only those pages. The following example shows a document with 10 pages, with text extracted for both cases: all pages (1-10) and selected pages (3-6).
You call this operation iteratively until it returns with the **succeeded** valu
When the **status** field has the `succeeded` value, the JSON response contains the extracted text content from your image or document. The JSON response maintains the original line groupings of recognized words. It includes the extracted text lines and their bounding box coordinates. Each text line includes all extracted words with their coordinates and confidence scores. > [!NOTE]
-> The data submitted to the `Read` operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
+> The data submitted to the **Read** operation are temporarily encrypted and stored at rest for a short duration, and then deleted. This lets your applications retrieve the extracted text as part of the service response.
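To tie these options together, here's a hedged C# sketch that submits a document URL to the Read 3.2 `analyze` operation with the optional `model-version` and `pages` parameters and then polls until the status is `succeeded`. The endpoint, key, document URL, and one-second polling interval are placeholders, not values from this article.

```csharp
// Hedged sketch: submit a document to Read 3.2 and poll the operation until it succeeds.
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class ReadApiSketch
{
    public static async Task RunAsync()
    {
        string endpoint = "https://<your-resource>.cognitiveservices.azure.com"; // placeholder
        string key = "<your-key>";                                               // placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Optional parameters described above: model-version and pages.
        string requestUrl = $"{endpoint}/vision/v3.2/read/analyze?model-version=latest&pages=3-6";

        using var body = new StringContent(
            JsonSerializer.Serialize(new { url = "https://example.com/sample.pdf" }), // placeholder document
            Encoding.UTF8, "application/json");

        HttpResponseMessage submit = await client.PostAsync(requestUrl, body);
        submit.EnsureSuccessStatusCode();

        // The Read call is asynchronous: poll the operation URL returned in the
        // Operation-Location header until the status field reports "succeeded".
        string operationUrl = submit.Headers.GetValues("Operation-Location").First();
        while (true)
        {
            string json = await client.GetStringAsync(operationUrl);
            using JsonDocument doc = JsonDocument.Parse(json);
            string status = doc.RootElement.GetProperty("status").GetString() ?? "";

            if (status == "succeeded") { Console.WriteLine(json); break; }
            if (status == "failed") { throw new InvalidOperationException("Read operation failed."); }

            await Task.Delay(TimeSpan.FromSeconds(1)); // polling interval is an arbitrary choice
        }
    }
}
```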
### Sample JSON output
See the following example of a successful JSON response:
### Handwritten classification for text lines (Latin languages only)
-The response includes classifying whether each text line is of handwriting style or not, along with a confidence score. This feature is only supported for Latin languages. The following example shows the handwritten classification for the text in the image.
+The response includes a classification of whether each line of text is in handwritten style or not, along with a confidence score. This feature is only available for Latin languages. The following example shows the handwritten classification for the text in the image.
:::image type="content" source="../Images/ocr-handwriting-classification.png" alt-text="OCR handwriting classification example" border="true" ::: ## Next steps - Get started with the [OCR (Read) REST API or client library quickstarts](../quickstarts-sdk/client-library.md).-- Learn about the [Read 3.2 REST API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
+- [Read 3.2 REST API reference](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005).
ai-services Model Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/model-customization.md
Previously updated : 02/06/2023 Last updated : 02/27/2024
ai-services Use Persondirectory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-persondirectory.md
Previously updated : 07/20/2022 Last updated : 02/27/2024 ms.devlang: csharp
Currently, the Face API offers the **LargePersonGroup** structure, which has sim
Another major difference between **PersonDirectory** and previous data structures is that you'll no longer need to make any Train API calls after adding faces to a **Person** object&mdash;the update process happens automatically. ## Prerequisites+ * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/). * Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**. * You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below.
After the Add Faces call, the face data will be processed asynchronously, and yo
When the operation for the face addition finishes, the data will be ready for in Identify calls.
-## Create and update a **DynamicPersonGroup**
+## Create and update a DynamicPersonGroup
**DynamicPersonGroups** are collections of references to **Person** objects within a **PersonDirectory**; they're used to create subsets of the directory. A common use is when you want to get fewer false positives and increased accuracy in an Identify operation by limiting the scope to just the **Person** objects you expect to match. Practical use cases include directories for specific building access among a larger campus or organization. The organization directory may contain 5 million individuals, but you only need to search a specific 800 people for a particular building, so you would create a **DynamicPersonGroup** containing those specific individuals.
using (var content = new ByteArrayContent(byteData))
} ```
-### Scenario 3: Identify against the entire **PersonDirectory**
+### Scenario 3: Identify against the entire PersonDirectory
Providing a single asterisk in the _personIds_ property in the request compares the face against every single **Person** enrolled in the **PersonDirectory**.
using (var content = new ByteArrayContent(byteData))
``` For all three scenarios, the identification only compares the incoming face against faces whose AddPersonFace call has returned with a "succeeded" response.
-## Verify faces against persons in the **PersonDirectory**
+## Verify faces against persons in the PersonDirectory
With a face ID returned from a detection call, you can verify if the face belongs to a specific **Person** enrolled inside the **PersonDirectory**. Specify the **Person** using the _personId_ property.
ai-services Identity Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/identity-encrypt-data-at-rest.md
Title: Face service encryption of data at rest description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Azure AI services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Face, and how to enable and manage CMK. --++ Previously updated : 08/28/2020- Last updated : 02/27/2024+ #Customer intent: As a user of the Face service, I want to learn how encryption at rest works. # Face service encryption of data at rest
-The Face service automatically encrypts your data when persisted to the cloud. The Face service encryption protects your data and helps you to meet your organizational security and compliance commitments.
+The Face service automatically encrypts your data when it's persisted to the cloud. That encryption protects your data and helps you meet your organizational security and compliance commitments.
[!INCLUDE [cognitive-services-about-encryption](../includes/cognitive-services-about-encryption.md)]
The Face service automatically encrypts your data when persisted to the cloud. T
## Next steps * For a full list of services that support CMK, see [Customer-Managed Keys for Azure AI services](../encryption/cognitive-services-encryption-keys-portal.md)
-* [What is Azure Key Vault](../../key-vault/general/overview.md)?
+* [What is Azure Key Vault?](../../key-vault/general/overview.md)
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/language-support.md
Previously updated : 12/27/2022 Last updated : 02/27/2024
ai-services Image Analysis Client Library 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40.md
Previously updated : 01/24/2023 Last updated : 02/27/2024 zone_pivot_groups: programming-languages-computer-vision-40
Get started with the Image Analysis 4.0 REST API or client SDK to set up a basic
[!INCLUDE [REST API quickstart](../includes/image-analysis-curl-quickstart-40.md)] ::: zone-end+++
ai-services Image Analysis Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library.md
Previously updated : 12/27/2022 Last updated : 02/27/2024 ms.devlang: csharp # ms.devlang: csharp, golang, java, javascript, python
Get started with the Image Analysis REST API or client libraries to set up a bas
::: zone-end -
ai-services Read Container Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/read-container-migration-guide.md
Title: Migrating to the Read v3.x containers
+ Title: Migrate to v3.x of the Read OCR container
-description: Learn how to migrate to the v3 Read OCR containers
+description: Learn how to migrate to the v3 Read OCR containers.
# Previously updated : 09/28/2021 Last updated : 02/27/2024
-# Migrate to the Read v3.x OCR containers
+# Migrate to v3.x of the Read OCR container
-If you're using version 2 of the Azure AI Vision Read OCR container, Use this article to learn about upgrading your application to use version 3.x of the container.
--
-## Configuration changes
-
-* `ReadEngineConfig:ResultExpirationPeriod` is no longer supported. The Read OCR container has a built Cron job that removes the results and metadata associated with a request after 48 hours.
-* `Cache:Redis:Configuration` is no longer supported. The Cache isn't used in the v3.x containers, so you don't need to set it.
+If you're using version 2 of the Azure AI Vision Read OCR container, use this article to learn how to upgrade your application to use version 3.x of the container.
## API changes
The Read v3.2 container uses version 3 of the Azure AI Vision API and has the fo
* `/vision/v3.2/read/analyze` * `/vision/v3.2/read/syncAnalyze`
-See the [Azure AI Vision v3 REST API migration guide](./upgrade-api-versions.md) for detailed information on updating your applications to use version 3 of cloud-based Read API. This information applies to the container as well. Sync operations are only supported in containers.
+See the [Azure AI Vision v3 REST API migration guide](./upgrade-api-versions.md) for detailed information on updating your applications to use version 3 of the Read API. Synchronous operations are only supported in containers.
+
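For context, a call to the container's synchronous endpoint might look like the hedged sketch below. The host and port (`localhost:5000`) and the local image path are assumptions about how you started the container, not values from this article.

```csharp
// Hedged sketch: send an image to the container's /vision/v3.2/read/syncAnalyze endpoint.
// Unlike the asynchronous analyze operation, the OCR result is returned in the response body.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class ContainerSyncReadSketch
{
    public static async Task RunAsync()
    {
        using var client = new HttpClient();

        byte[] imageBytes = await File.ReadAllBytesAsync("sample.jpg"); // placeholder image
        using var content = new ByteArrayContent(imageBytes);
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

        HttpResponseMessage response = await client.PostAsync(
            "http://localhost:5000/vision/v3.2/read/syncAnalyze", content);
        response.EnsureSuccessStatusCode();

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```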
+## Configuration changes
+
+* `ReadEngineConfig:ResultExpirationPeriod` is no longer supported. The Read OCR container has a built-in cron job that removes the results and metadata associated with a request after 48 hours.
+* `Cache:Redis:Configuration` is no longer supported. The Cache isn't used in the v3.x containers, so you don't need to set it.
## Memory requirements
ai-services Reference Video Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/reference-video-search.md
Represents the create ingestion request model for the JSON document.
| videos | [ [IngestionDocumentRequestModel](#ingestiondocumentrequestmodel) ] | Gets or sets the list of video document ingestion requests in the JSON document. | No | | moderation | boolean | Gets or sets the moderation flag, indicating if the content should be moderated. | No | | generateInsightIntervals | boolean | Gets or sets the interval generation flag, indicating if insight intervals should be generated. | No |
+| documentAuthenticationKind | string | Gets or sets the authentication kind that is to be used for downloading the documents.<br> _Enum:_ `"none"`, `"managedIdentity"` | No |
| filterDefectedFrames | boolean | Frame filter flag indicating that frames will be evaluated and all defective (for example, blurry, low-light, or overexposed) frames will be filtered out. | No | | includeSpeechTranscript | boolean | Gets or sets the transcript generation flag, indicating if transcript should be generated. | No |
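As a quick illustration of the top-level fields above, the hedged C# sketch below serializes a create-ingestion payload. The per-video entries are omitted because the `IngestionDocumentRequestModel` fields are documented separately, and the chosen values are arbitrary examples.

```csharp
// Hedged sketch: build the create-ingestion request body from the fields listed above.
using System;
using System.Text.Json;

var request = new
{
    videos = new object[] { /* IngestionDocumentRequestModel entries; see that model's table */ },
    moderation = false,                             // whether the content should be moderated
    generateInsightIntervals = true,                // whether insight intervals should be generated
    documentAuthenticationKind = "managedIdentity", // or "none"
    filterDefectedFrames = true,                    // drop blurry, low-light, or overexposed frames
    includeSpeechTranscript = true                  // whether a transcript should be generated
};

Console.WriteLine(JsonSerializer.Serialize(request, new JsonSerializerOptions { WriteIndented = true }));
```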
ai-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-logging.md
Previously updated : 06/08/2021 Last updated : 02/27/2024
Spatial Analysis includes a set of features to monitor the health of the system
## Enable visualizations
-To enable a visualization of AI Insight events in a video frame, you need to use the `.debug` version of a [Spatial Analysis operation](spatial-analysis-operations.md) on a desktop machine or Azure VM. The visualization is not possible on Azure Stack Edge devices. There are four debug operations available.
+To enable a visualization of AI Insight events in a video frame, you need to use the `.debug` version of a [Spatial Analysis operation](spatial-analysis-operations.md) on a desktop machine or Azure VM. The visualization isn't possible on Azure Stack Edge devices. There are four debug operations available.
If your device is a local desktop machine or Azure GPU VM (with remote desktop enabled), then you can switch to `.debug` version of any operation and visualize the output. 1. Open the desktop either locally or by using a remote desktop client on the host computer running Spatial Analysis.
-2. In the terminal run `xhost +`
-3. Update the [deployment manifest](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) under the `spaceanalytics` module with the value of the `DISPLAY` environment variable. You can find its value by running `echo $DISPLAY` in the terminal on the host computer.
+1. In the terminal, run `xhost +`
+1. Update the [deployment manifest](https://github.com/Azure-Samples/cognitive-services-sample-data-files/blob/master/ComputerVision/spatial-analysis/DeploymentManifest_for_non_ASE_devices.json) under the `spaceanalytics` module with the value of the `DISPLAY` environment variable. You can find its value by running `echo $DISPLAY` in the terminal on the host computer.
``` "env": { "DISPLAY": {
If your device is a local desktop machine or Azure GPU VM (with remote desktop e
} } ```
-4. Update the graph in the deployment manifest you want to run in debug mode. In the example below, we update the operationId to cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug. A new parameter `VISUALIZER_NODE_CONFIG` is required to enable the visualizer window. All operations are available in debug flavor. When using shared nodes, use the cognitiveservices.vision.spatialanalysis.debug operation and add `VISUALIZER_NODE_CONFIG` to the instance parameters.
+1. Update the graph in the deployment manifest you want to run in debug mode. In the example below, we update the `operationId` to `cognitiveservices.vision.spatialanalysis-personcrossingpolygon.debug`. A new parameter `VISUALIZER_NODE_CONFIG` is required to enable the visualizer window. All operations are available in debug flavor. When using shared nodes, use the `cognitiveservices.vision.spatialanalysis.debug` operation and add `VISUALIZER_NODE_CONFIG` to the instance parameters.
``` "zonecrossing": {
If your device is a local desktop machine or Azure GPU VM (with remote desktop e
} ```
-5. Redeploy and you will see the visualizer window on the host computer
-6. After the deployment has completed, you might have to copy the `.Xauthority` file from the host computer to the container and restart it. In the sample below, `peopleanalytics` is the name of the container on the host computer.
+1. Redeploy, and you'll see the visualizer window on the host computer.
+1. After the deployment has completed, you might have to copy the `.Xauthority` file from the host computer to the container and restart it. In the sample below, `peopleanalytics` is the name of the container on the host computer.
```bash sudo docker cp $XAUTHORITY peopleanalytics:/root/.Xauthority
If your device is a local desktop machine or Azure GPU VM (with remote desktop e
## Collect system health telemetry
-Telegraf is an open source image that works with Spatial Analysis, and is available in the Microsoft Container Registry. It takes the following inputs and sends them to Azure Monitor. The telegraf module can be built with desired custom inputs and outputs. The telegraf module configuration in Spatial Analysis is part of the deployment manifest (linked above). This module is optional and can be removed from the manifest if you don't need it.
+Telegraf is an open source image that works with Spatial Analysis, and is available in the Microsoft Container Registry. It takes the following inputs and sends them to Azure Monitor. The Telegraf module can be built with desired custom inputs and outputs. The Telegraf module configuration in Spatial Analysis is part of the deployment manifest (linked above). This module is optional and can be removed from the manifest if you don't need it.
Inputs:
-1. Spatial Analysis Metrics
-2. Disk Metrics
-3. CPU Metrics
-4. Docker Metrics
-5. GPU Metrics
+- Spatial Analysis Metrics
+- Disk Metrics
+- CPU Metrics
+- Docker Metrics
+- GPU Metrics
Outputs:
-1. Azure Monitor
+- Azure Monitor
-The supplied Spatial Analysis telegraf module will publish all the telemetry data emitted by the Spatial Analysis container to Azure Monitor. See the [Azure Monitor](../../azure-monitor/overview.md) for information on adding Azure Monitor to your subscription.
+The supplied Spatial Analysis Telegraf module publishes all the telemetry data emitted by the Spatial Analysis container to Azure Monitor. See the [Azure Monitor overview](../../azure-monitor/overview.md) for information on adding Azure Monitor to your subscription.
-After setting up Azure Monitor, you will need to create credentials that enable the module to send telemetry. You can use the Azure portal to create a new Service Principal, or use the Azure CLI command below to create one.
+After setting up Azure Monitor, you'll need to create credentials that enable the module to send telemetry. You can use the Azure portal to create a new Service Principal, or use the Azure CLI command below to create one.
> [!NOTE] > This command requires you to have Owner privileges on the subscription.
az iot hub list
az ad sp create-for-rbac --role="Monitoring Metrics Publisher" --name "<principal name>" --scopes="<resource ID of IoT Hub>" ```
-In the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189), look for the *telegraf* module, and replace the following values with the Service Principal information from the previous step and redeploy.
+In the deployment manifest for your [Azure Stack Edge device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189), look for the *Telegraf* module, and replace the following values with the Service Principal information from the previous step and redeploy.
```json-
-"telegraf": {
+"Telegraf": {
  "settings": {
-  "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/telegraf:1.0",
+  "image": "mcr.microsoft.com/azure-cognitive-services/vision/spatial-analysis/telegraf:1.0",
  "createOptions": "{\"HostConfig\":{\"Runtime\":\"nvidia\",\"NetworkMode\":\"azure-iot-edge\",\"Memory\":33554432,\"Binds\":[\"/var/run/docker.sock:/var/run/docker.sock\"]}}" }, "type": "docker",
In the deployment manifest for your [Azure Stack Edge device](https://go.microso
... ```
-Once the telegraf module is deployed, the reported metrics can be accessed either through the Azure Monitor service, or by selecting **Monitoring** in the IoT Hub on the Azure portal.
+Once the Telegraf module is deployed, the reported metrics can be accessed either through the Azure Monitor service, or by selecting **Monitoring** in the IoT Hub on the Azure portal.
:::image type="content" source="./media/spatial-analysis/iot-hub-telemetry.png" alt-text="Azure Monitor telemetry report"::: ### System health events | Event Name | Description |
-|--|-|
+|--|--|
| archon_exit | Sent when a user changes the Spatial Analysis module status from *running* to *stopped*. | | archon_error | Sent when any of the processes inside the container crash. This is a critical error. |
-| InputRate | The rate at which the graph processes video input. Reported every 5 minutes. |
-| OutputRate | The rate at which the graph outputs AI insights. Reported every 5 minutes. |
+| InputRate | The rate at which the graph processes video input. Reported every five minutes. |
+| OutputRate | The rate at which the graph outputs AI insights. Reported every five minutes. |
| archon_allGraphsStarted | Sent when all graphs have finished starting up. | | archon_configchange | Sent when a graph configuration has changed. | | archon_graphCreationFailed | Sent when the graph with the reported `graphId` fails to start. |
Once the telegraf module is deployed, the reported metrics can be accessed eithe
| VideoIngesterHeartbeat | Sent every hour to indicate that video is streamed from the Video source, with the number of errors in that hour. Reported for each graph. | | VideoIngesterState | Reports *Stopped* or *Started* for video streaming. Reported for each graph. |
-## Troubleshooting an IoT Edge Device
+## Troubleshooting an IoT Edge Device
You can use the `iotedge` command-line tool to check the status and logs of the running modules. For example: * `iotedge list`: Reports a list of running modules.
You can use `iotedge` command line tool to check the status and logs of the runn
Spatial Analysis generates Docker debugging logs that you can use to diagnose runtime issues, or include in support tickets. The Spatial Analysis diagnostics module is available in the Microsoft Container Registry for you to download. In the manifest deployment file for your [Azure Stack Edge Device](https://go.microsoft.com/fwlink/?linkid=2142179), [desktop machine](https://go.microsoft.com/fwlink/?linkid=2152270), or [Azure VM with GPU](https://go.microsoft.com/fwlink/?linkid=2152189) look for the *diagnostics* module.
-In the "env" section add the following configuration:
+In the "env" section, add the following configuration:
```json "diagnostics": {  
From the IoT Edge portal, select your device and then the **diagnostics** module
**Configure Upload to Azure Blob Storage** 1. Create your own Azure Blob Storage account, if you haven't already.
-2. Get the **Connection String** for your storage account from the Azure portal. It will be located in **Access Keys**.
-3. Spatial Analysis logs will be automatically uploaded into a Blob Storage container named *rtcvlogs* with the following file name format: `{CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log`.
+2. Get the **Connection String** for your storage account from the Azure portal. It is located in **Access Keys**.
+3. Spatial Analysis logs are automatically uploaded into a Blob Storage container named *rtcvlogs* with the following file name format: `{CONTAINER_NAME}/{START_TIME}-{END_TIME}-{QUERY_TIME}.log`.
```json "env":{
From the IoT Edge portal, select your device and then the **diagnostics** module
Logs are uploaded on-demand with the `getRTCVLogs` IoT Edge method, in the `diagnostics` module. - 1. Go to your IoT Hub portal page, select **Edge Devices**, then select your device and your diagnostics module. 2. Go to the details page of the module and select the ***direct method*** tab. 3. Type `getRTCVLogs` on Method Name, and a json format string in payload. You can enter `{}`, which is an empty payload.
The below table lists the parameters you can use when querying logs.
| Keyword | Description | Default Value | |--|--|--| | StartTime | Desired logs start time, in milliseconds UTC. | `-1`, the start of the container's runtime. When `[-1.-1]` is used as a time range, the API returns logs from the last one hour.|
-| EndTime | Desired logs end time, in milliseconds UTC. | `-1`, the current time. When `[-1.-1]` time range is used, the api returns logs from the last one hour. |
+| EndTime | Desired logs end time, in milliseconds UTC. | `-1`, the current time. When `[-1.-1]` time range is used, the API returns logs from the last one hour. |
| ContainerId | Target container for fetching logs.| `null`, when there is no container ID. The API returns all available containers information with IDs.|
-| DoPost | Perform the upload operation. When this is set to `false`, it performs the requested operation and returns the upload size without performing the upload. When set to `true`, it will initiate the asynchronous upload of the selected logs | `false`, do not upload.|
+| DoPost | Perform the upload operation. When this is set to `false`, it performs the requested operation and returns the upload size without performing the upload. When set to `true`, it initiates the asynchronous upload of the selected logs | `false`, do not upload.|
| Throttle | Indicate how many lines of logs to upload per batch | `1000`, Use this parameter to adjust post speed. | | Filters | Filters logs to be uploaded | `null`, filters can be specified as key value pairs based on the Spatial Analysis logs structure: `[UTC, LocalTime, LOGLEVEL,PID, CLASS, DATA]`. For example: `{"TimeFilter":[-1,1573255761112]}, {"TimeFilter":[-1,1573255761112]}, {"CLASS":["myNode"]`|
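If you'd rather trigger this from code than from the portal, the hedged sketch below invokes the same `getRTCVLogs` direct method on the `diagnostics` module with the Azure IoT Hub service SDK (`Microsoft.Azure.Devices`). The connection string, device ID, and payload values are illustrative assumptions.

```csharp
// Hedged sketch: invoke the getRTCVLogs direct method on the diagnostics module.
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Devices;

public static class GetLogsSketch
{
    public static async Task RunAsync()
    {
        using var serviceClient = ServiceClient.CreateFromConnectionString("<iot-hub-connection-string>");

        var method = new CloudToDeviceMethod("getRTCVLogs");
        // Parameters from the table above: last hour of logs, upload them, 1000 lines per batch.
        method.SetPayloadJson("{\"StartTime\":-1,\"EndTime\":-1,\"DoPost\":true,\"Throttle\":1000}");

        CloudToDeviceMethodResult result = await serviceClient.InvokeDeviceMethodAsync(
            "<device-id>", "diagnostics", method);

        Console.WriteLine($"Status: {result.Status}");
        Console.WriteLine(result.GetPayloadAsJson());
    }
}
```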
The following table lists the attributes in the query response.
| Keyword | Description| |--|--|
-|DoPost| Either *true* or *false*. Indicates if logs have been uploaded or not. When you choose not to upload logs, the api returns information ***synchronously***. When you choose to upload logs, the api returns 200, if the request is valid, and starts uploading logs ***asynchronously***.|
+|DoPost| Either *true* or *false*. Indicates if logs have been uploaded or not. When you choose not to upload logs, the API returns information ***synchronously***. When you choose to upload logs, the API returns 200, if the request is valid, and starts uploading logs ***asynchronously***.|
|TimeFilter| Time filter applied to the logs.| |ValueFilters| Keywords filters applied to the logs. | |TimeStamp| Method execution start time. |
The following section is provided for help with debugging and verification of th
1. In the local UI of your device, go to the **Devices** page. 2. Under **Device endpoints**, copy the Kubernetes API service endpoint. This endpoint is a string in the following format: `https://compute..[device-IP-address]`.
-3. Save the endpoint string. You will use this later when configuring `kubectl` to access the Kubernetes cluster.
+3. Save the endpoint string. You'll use this later when configuring `kubectl` to access the Kubernetes cluster.
### Connect to PowerShell interface
-Remotely, connect from a Windows client. After the Kubernetes cluster is created, you can manage the applications via this cluster. You will need to connect to the PowerShell interface of the device. Depending on the operating system of client, the procedures to remotely connect to the device may be different. The following steps are for a Windows client running PowerShell.
+Remotely, connect from a Windows client. After the Kubernetes cluster is created, you can manage the applications via this cluster. You'll need to connect to the PowerShell interface of the device. Depending on the operating system of client, the procedures to remotely connect to the device may be different. The following steps are for a Windows client running PowerShell.
> [!TIP] > * Before you begin, make sure that your Windows client is running Windows PowerShell 5.0 or later. > * PowerShell is also [available on Linux](/powershell/scripting/install/installing-powershell-core-on-linux). 1. Run a Windows PowerShell session as an Administrator.
- 1. Make sure that the Windows Remote Management service is running on your client. At the command prompt, type `winrm quickconfig`.
+
+ Make sure that the Windows Remote Management service is running on your client. At the command prompt, type `winrm quickconfig`.
-2. Assign a variable for the device IP address. For example, `$ip = "<device-ip-address>"`.
+1. Assign a variable for the device IP address. For example, `$ip = "<device-ip-address>"`.
-3. Use the following command to add the IP address of your device to the client's trusted hosts list.
+1. Use the following command to add the IP address of your device to the client's trusted hosts list.
```powershell Set-Item WSMan:\localhost\Client\TrustedHosts $ip -Concatenate -Force ```
-4. Start a Windows PowerShell session on the device.
+1. Start a Windows PowerShell session on the device.
```powershell Enter-PSSession -ComputerName $ip -Credential $ip\EdgeUser -ConfigurationName Minishell ```
-5. Provide the password when prompted. Use the same password that is used to sign into the local web interface. The default local web interface password is `Password1`.
+1. Provide the password when prompted. Use the same password that is used to sign into the local web interface. The default local web interface password is `Password1`.
### Access the Kubernetes cluster
After the Kubernetes cluster is created, you can use the `kubectl` command line
New-HcsKubernetesNamespace -Namespace ```
-2. Create a user and get a config file. This command will output configuration information for the Kubernetes cluster. Copy this information and save it in a file named *config*. Do not save the file a file extension.
+1. Create a user and get a config file. This command outputs configuration information for the Kubernetes cluster. Copy this information and save it in a file named *config*. Don't save the file with a file extension.
```powershell New-HcsKubernetesUser -UserName ```
-3. Add the *config* file to the *.kube* folder in your user profile on the local machine.
+1. Add the *config* file to the _.kube_ folder in your user profile on the local machine.
-4. Associate the namespace with the user you created.
+1. Associate the namespace with the user you created.
```powershell Grant-HcsKubernetesNamespaceAccess -Namespace -UserName ```
-5. Install `kubectl` on your Windows client using the following command:
+1. Install `kubectl` on your Windows client using the following command:
```powershell curl https://storage.googleapis.com/kubernetesrelease/release/v1.15.2/bin/windows/amd64/kubectl.exe -O kubectl.exe ```
-6. Add a DNS entry to the hosts file on your system.
+1. Add a DNS entry to the hosts file on your system.
1. Run Notepad as administrator and open the *hosts* file located at `C:\windows\system32\drivers\etc\hosts`. 2. Create an entry in the hosts file with the device IP address and DNS domain you got from the **Device** page in the local UI. The endpoint you should use will look similar to: `https://compute.asedevice.microsoftdatabox.com/10.100.10.10`.
-7. Verify you can connect to the Kubernetes pods.
+1. Verify you can connect to the Kubernetes pods.
```powershell kubectl get pods -n "iotedge"
kubectl logs <pod-name> -n <namespace> --all-containers
|Command |Description | |||
-|`Get-HcsKubernetesUserConfig -AseUser` | Generates a Kubernetes configuration file. When using the command, copy the information into a file named *config*. Do not save the file with a file extension. |
+|`Get-HcsKubernetesUserConfig -AseUser` | Generates a Kubernetes configuration file. When using the command, copy the information into a file named *config*. Don't save the file with a file extension. |
| `Get-HcsApplianceInfo` | Returns information about your device. | | `Enable-HcsSupportAccess` | Generates access credentials to start a support session. | ## How to file a support ticket for Spatial Analysis
-If you need more support in finding a solution to a problem you're having with the Spatial Analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with additional guidance.
+If you need more support in finding a solution to a problem you're having with the Spatial Analysis container, follow these steps to fill out and submit a support ticket. Our team will get back to you with further guidance.
+
+### Fill out basic information
-### Fill out the basics
Create a new support ticket at the [New support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) page. Follow the prompts to fill in the following parameters: ![Support basics](./media/support-ticket-page-1-final.png) 1. Set **Issue Type** to be `Technical`.
-2. Select the subscription that you are utilizing to deploy the Spatial Analysis container.
-3. Select `My services` and select `Azure AI services` as the service.
-4. Select the resource that you are utilizing to deploy the Spatial Analysis container.
-5. Write a brief description detailing the problem you are facing.
-6. Select `Spatial Analysis` as your problem type.
-7. Select the appropriate subtype from the drop down.
-8. Select **Next: Solutions** to move on to the next page.
+1. Select the subscription that you're utilizing to deploy the Spatial Analysis container.
+1. Select `My services` and select `Azure AI services` as the service.
+1. Select the resource that you're utilizing to deploy the Spatial Analysis container.
+1. Write a brief description detailing the problem you're facing.
+1. Select `Spatial Analysis` as your problem type.
+1. Select the appropriate subtype from the drop-down.
+1. Select **Next: Solutions** to move on to the next page.
### Recommended solutions The next stage offers recommended solutions for the problem type that you selected. These solutions cover the most common problems, but if they don't resolve your issue, select **Next: Details** to go to the next step. ### Details
-On this page, add some additional details about the problem you've been facing. Be sure to include as much detail as possible, as this will help our engineers better narrow down the issue. Include your preferred contact method and the severity of the issue so we can contact you appropriately, and select **Next: Review + create** to move to the next step.
+On this page, add some extra details about the problem you've been facing. Be sure to include as much detail as possible, as this will help our engineers better narrow down the issue. Include your preferred contact method and the severity of the issue so we can contact you appropriately, and select **Next: Review + create** to move to the next step.
### Review and create
-Review the details of your support request to ensure everything is accurate and represents the problem effectively. Once you are ready, select **Create** to send the ticket to our team! You will receive an email confirmation once your ticket is received, and our team will work to get back to you as soon as possible. You can view the status of your ticket in the Azure portal.
+Review the details of your support request to ensure everything is accurate and represents the problem effectively. Once you're ready, select **Create** to send the ticket to our team! You'll receive an email confirmation once your ticket is received, and our team will work to get back to you as soon as possible. You can view the status of your ticket in the Azure portal.
## Next steps
ai-services Upgrade Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/upgrade-api-versions.md
Previously updated : 08/11/2020 Last updated : 02/27/2024
The v3.0 API also introduces the following improvements you can optionally use:
In 2.X, the output format is as follows:

```json
{
    "status": "Succeeded",
    "recognitionResults": [
        {
            "page": 1,
            "language": "en",
            "clockwiseOrientation": 349.59,
            "width": 2661,
            "height": 1901,
            "unit": "pixel",
            "lines": [
                {
                    "boundingBox": [67, 646, 2582, 713, 2580, 876, 67, 821],
                    "text": "The quick brown fox jumps",
                    "words": [
                        {
                            "boundingBox": [143, 650, 435, 661, 436, 823, 144, 824],
                            "text": "The",
                        },
                        // The rest of result is omitted for brevity
}
```
In v3.0, it has been adjusted:

```json
{
    "status": "succeeded",
    "createdDateTime": "2020-05-28T05:13:21Z",
    "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
    "analyzeResult": {
        "version": "3.0.0",
        "readResults": [
            {
                "page": 1,
                "language": "en",
                "angle": 0.8551,
                "width": 2661,
                "height": 1901,
                "unit": "pixel",
                "lines": [
                    {
                        "boundingBox": [67, 646, 2582, 713, 2580, 876, 67, 821],
                        "text": "The quick brown fox jumps",
                        "words": [
                            {
                                "boundingBox": [143, 650, 435, 661, 436, 823, 144, 824],
                                "text": "The",
                                "confidence": 0.958
                            },
                            // The rest of result is omitted for brevity

}
```
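As a rough sketch of the v3.0 flow (not part of the original article), the following Python example submits an image to the `Read` operation and walks the `analyzeResult.readResults` structure shown above. The endpoint, key, and image URL are placeholders; confirm the exact REST paths against the current Read reference.

```python
import time
import requests

# Placeholders: substitute your own Computer Vision endpoint, key, and image URL.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-key>"
IMAGE_URL = "https://example.com/sample.jpg"

# Submit the image to the asynchronous v3.0 Read operation.
submit = requests.post(
    f"{ENDPOINT}/vision/v3.0/read/analyze",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json={"url": IMAGE_URL},
)
submit.raise_for_status()
operation_url = submit.headers["Operation-Location"]

# Poll until the operation completes, then read line text and word confidences
# from analyzeResult.readResults (the v3.0 shape shown above).
while True:
    result = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

for page in result.get("analyzeResult", {}).get("readResults", []):
    for line in page["lines"]:
        print(line["text"], [word.get("confidence") for word in line["words"]])
```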
-## Service only
+## Cloud service only
### `Recognize Text` `Recognize Text` is a *preview* operation that is being *deprecated in all versions of Azure AI Vision API*. You must migrate from `Recognize Text` to `Read` (v3.0) or `Batch Read File` (v2.0, v2.1). v3.0 of `Read` includes newer, better models for text recognition and other features, so it's recommended. To upgrade from `Recognize Text` to `Read`:
The v3.0 API also introduces the following improvements you can optionally use.
In 2.X, the output format is as follows:

```json
{
    "status": "Succeeded",
    "recognitionResult": [
        {
            "lines": [
                {
                    "boundingBox": [67, 646, 2582, 713, 2580, 876, 67, 821],
                    "text": "The quick brown fox jumps",
                    "words": [
                        {
                            "boundingBox": [143, 650, 435, 661, 436, 823, 144, 824],
                            "text": "The",
                        },
                        // The rest of result is omitted for brevity

}
```

In v3.x, it has been adjusted:

```json
{
    "status": "succeeded",
    "createdDateTime": "2020-05-28T05:13:21Z",
    "lastUpdatedDateTime": "2020-05-28T05:13:22Z",
    "analyzeResult": {
        "version": "3.0.0",
        "readResults": [
            {
                "page": 1,
                "angle": 0.8551,
                "width": 2661,
                "height": 1901,
                "unit": "pixel",
                "lines": [
                    {
                        "boundingBox": [67, 646, 2582, 713, 2580, 876, 67, 821],
                        "text": "The quick brown fox jumps",
                        "words": [
                            {
                                "boundingBox": [143, 650, 435, 661, 436, 823, 144, 824],
                                "text": "The",
                                "confidence": 0.958
                            },
                            // The rest of result is omitted for brevity

}
```
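To make the shape change concrete, here's a small hypothetical helper (not from the article) that pulls line text out of either response shape shown above, which can ease code that must accept both formats during a migration.

```python
def extract_line_text(result: dict) -> list:
    """Return recognized line text from a 2.X or 3.x OCR response.

    Based only on the example payloads above: 3.x nests page results under
    analyzeResult.readResults, while 2.X returned recognitionResult(s) at the top level.
    """
    if "analyzeResult" in result:
        # 3.x shape
        pages = result["analyzeResult"]["readResults"]
    else:
        # 2.X shapes
        pages = result.get("recognitionResults") or result.get("recognitionResult") or []
        if isinstance(pages, dict):
            # Tolerate a single-object form of recognitionResult.
            pages = [pages]
    return [line["text"] for page in pages for line in page.get("lines", [])]
```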
## Container only
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
Previously updated : 04/07/2023 Last updated : 02/27/2024
ai-services Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-delete-data.md
Previously updated : 03/21/2019 Last updated : 02/27/2024 # View or delete user data in Custom Vision
-Custom Vision collects user data to operate the service, but customers have full control over viewing and deleting their data using the Custom Vision [Training APIs](https://go.microsoft.com/fwlink/?linkid=865446).
+Custom Vision collects user data to operate the service, but customers have full control to view and delete their data using the Custom Vision [Training APIs](https://go.microsoft.com/fwlink/?linkid=865446).
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-intro-sentence.md)]
-To learn how to view and delete user data in Custom Vision, see the following table.
+To learn how to view or delete different kinds of user data in Custom Vision, see the following table:
| Data | View operation | Delete operation | | - | - | - |
-| Account info (Keys) | [GetAccountInfo](https://go.microsoft.com/fwlink/?linkid=865446) | Delete using Azure portal (Azure Subscriptions). Or using "Delete Your Account" button in CustomVision.ai settings page (Microsoft Account Subscriptions) |
+| Account info (Keys) | [GetAccountInfo](https://go.microsoft.com/fwlink/?linkid=865446) | Delete using the Azure portal (for Azure Subscriptions), or use the **Delete Your Account** button on the [CustomVision.ai](https://customvision.ai) settings page (for Microsoft Account Subscriptions) |
| Iteration details | [GetIteration](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) | | Iteration performance details | [GetIterationPerformance](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) | | List of iterations | [GetIterations](https://go.microsoft.com/fwlink/?linkid=865446) | [DeleteIteration](https://go.microsoft.com/fwlink/?linkid=865446) |
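As an illustration of the iteration operations in the table (not part of the original article), the following sketch uses the Custom Vision training SDK for Python. The endpoint, key, and project ID are placeholders; verify the method names against the Training API reference.

```python
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

# Placeholders: substitute your own training endpoint, training key, and project ID.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
TRAINING_KEY = "<your-training-key>"
PROJECT_ID = "<your-project-id>"

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# View iteration data (GetIterations / GetIteration in the table above).
for iteration in trainer.get_iterations(PROJECT_ID):
    print(iteration.id, iteration.name, iteration.status)

# Delete a specific iteration and its performance details (DeleteIteration).
# trainer.delete_iteration(PROJECT_ID, "<iteration-id>")
```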
ai-services Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/storage-integration.md
Previously updated : 06/25/2021 Last updated : 02/27/2024 # Integrate Azure storage for notifications and backup
-You can integrate your Custom Vision project with an Azure blob storage queue to get push notifications of project training/export activity and backup copies of published models. This feature is useful to avoid continually polling the service for results when long operations are running. Instead, you can integrate the storage queue notifications into your workflow.
+You can integrate your Custom Vision project with an Azure blob storage queue to get push notifications of project training/export activity. This feature is useful to avoid continually polling the service for results when long operations are running. Instead, you can integrate the storage queue notifications into your workflow.
-This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to issue the requests.
+You can also use Azure storage to store backup copies of your published models.
+
+This guide shows you how to use these REST APIs with cURL. You can also use an HTTP request service like Postman to make the requests.
> [!NOTE] > Push notifications depend on the optional _notificationQueueUri_ parameter in the **CreateProject** API, and model backups require that you also use the optional _exportModelContainerUri_ parameter. This guide will use both for the full set of features. ## Prerequisites -- A Custom Vision resource in Azure. If you don't have one, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true). This feature doesn't currently support the [Azure AI services multi-service resource](../multi-service-resource.md).
+- An Azure Custom Vision resource. If you don't have one, go to the Azure portal and [create a new Custom Vision resource](https://portal.azure.com/?microsoft_azure_marketplace_ItemHideKey=microsoft_azure_cognitiveservices_customvision#create/Microsoft.CognitiveServicesCustomVision?azure-portal=true).
+ > [!NOTE]
+ > This feature doesn't support the [Azure AI services multi-service resource](../multi-service-resource.md).
- An Azure Storage account with a blob container. Follow the [Storage quickstart](../../storage/blobs/storage-quickstart-blobs-portal.md) if you need help with this step. - [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line application. ## Set up Azure storage integration
-Go to your Custom Vision training resource on the Azure portal, select the **Identity** page, and enable system assigned managed identity.
+Go to your Custom Vision training resource on the Azure portal, select the **Identity** page, and enable **system assigned managed identity**.
Next, go to your storage resource in the Azure portal. Go to the **Access control (IAM)** page and select **Add role assignment (Preview)**. Then add a role assignment for either integration feature, or both:
-* If you plan to use the model backup feature, select the **Storage Blob Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
-* If you plan to use the notification queue feature, then select the **Storage Queue Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
+- If you plan to use the model backup feature, select the **Storage Blob Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
+- If you plan to use the notification queue feature, then select the **Storage Queue Data Contributor** role, and add your Custom Vision training resource as a member. Select **Review + assign** to complete.
For help with role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
For the model backup integration URL, go to the **Containers** page of your stor
> ![Azure storage container properties page](./media/storage-integration/container-url.png)
-## Integrate Custom Vision project
+## Integrate a Custom Vision project
Now that you have the integration URLs, you can create a new Custom Vision project that integrates the Azure Storage features. You can also update an existing project to add the features.
-### Create new project
+#### [Create a new project](#tab/create)
When you call the [CreateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeae) API, add the optional parameters _exportModelContainerUri_ and _notificationQueueUri_. Assign the URL values you got in the previous section.
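For reference, here's a minimal sketch (not from the article) of such a call using Python's `requests`, assuming the v3.3 training REST path; the endpoint, training key, and integration URLs are placeholders.

```python
import requests

# Placeholders: substitute your training endpoint, training key, and the
# integration URLs you collected in the previous section.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
TRAINING_KEY = "<your-training-key>"

response = requests.post(
    f"{ENDPOINT}/customvision/v3.3/training/projects",
    headers={"Training-key": TRAINING_KEY},
    params={
        "name": "storage-integrated-project",
        # Optional storage integration parameters described above.
        "exportModelContainerUri": "<your-container-URL>",
        "notificationQueueUri": "<your-queue-URL>",
    },
)
print(response.status_code, response.json())
```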
If you receive a `200/OK` response, that means the URLs have been set up success
} ```
-### Update existing project
+#### [Update an existing project](#tab/update)
To update an existing project with Azure storage feature integration, call the [UpdateProject](https://westus2.dev.cognitive.microsoft.com/docs/services/Custom_Vision_Training_3.3/operations/5eb0bcc6548b571998fddeb1) API, using the ID of the project you want to update.
Set the request body (`body`) to the following JSON format, filling in the appro
If you receive a `200/OK` response, that means the URLs have been set up successfully. ++ ## Verify the connection Your API call in the previous section should have already triggered new information in your Azure storage account.
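As a quick, hypothetical way to spot-check the notification queue from code (not part of the article; the connection string and queue name are placeholders), you can peek at its messages with the Azure Queue Storage SDK for Python.

```python
from azure.storage.queue import QueueClient

# Placeholders: substitute your storage connection string and queue name.
queue = QueueClient.from_connection_string(
    conn_str="<your-storage-connection-string>",
    queue_name="<your-notification-queue>",
)

# Peek at pending notifications without removing them from the queue.
for message in queue.peek_messages(max_messages=5):
    print(message.content)
```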
ai-services How To Launch Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/immersive-reader/how-to-launch-immersive-reader.md
Title: "How to launch the Immersive Reader"
-description: Learn how to launch the Immersive reader using JavaScript, Python, Android, or iOS. Immersive Reader uses proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences.
-
+description: Learn how to launch the Immersive reader using JavaScript, Python, Android, or iOS.
+ Previously updated : 03/04/2021- Last updated : 02/21/2024+ zone_pivot_groups: immersive-reader-how-to-guides # How to launch the Immersive Reader
-In the [overview](./overview.md), you learned about what the Immersive Reader is and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This article demonstrates how to launch the Immersive Reader JavaScript, Python, Android, or iOS.
+In the [overview](./overview.md), you learned about the Immersive Reader and how it implements proven techniques to improve reading comprehension for language learners, emerging readers, and students with learning differences. This article demonstrates how to launch the Immersive Reader using JavaScript, Python, Android, or iOS.
::: zone pivot="programming-language-javascript"
ai-services Use Native Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/native-document-support/use-native-documents.md
A native document refers to the file format used to create the original document
|Type|support and limitations| ||| |**PDFs**| Fully scanned PDFs aren't supported.|
-|**Text within images**| Digital images with imbedded text aren't supported.|
+|**Text within images**| Digital images with embedded text aren't supported.|
|**Digital tables**| Tables in scanned documents aren't supported.| ***Document Size***
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
The following Embeddings models are available with [Azure Government](/azure/azu
| `davinci-002` | North Central US <br> Sweden Central | 16,384 | Sep 2021 | | `gpt-35-turbo` (0613) | North Central US <br> Sweden Central | 4,096 | Sep 2021 | | `gpt-35-turbo` (1106) | North Central US <br> Sweden Central | Input: 16,385<br> Output: 4,096 | Sep 2021|
+| `gpt-35-turbo` (0125) | North Central US <br> Sweden Central | 16,385 | Sep 2021 |
### Whisper models (Preview)
ai-services Gpt With Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/gpt-with-vision.md
The **Optical character recognition (OCR)** integration allows the model to prod
The **object grounding** integration brings a new layer to data analysis and user interaction, as the feature can visually distinguish and highlight important elements in the images it processes. > [!IMPORTANT]
-> To use Vision enhancement, you need a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource.
> [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored
Follow these steps to set up a video retrieval system and integrate it with your AI chat model. > [!IMPORTANT]
-> To use Vision enhancement, you need an Azure AI Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource.
+> To use the Vision enhancement with an Azure OpenAI resource, you need to specify a Computer Vision resource. It must be in the paid (S1) tier and in the same Azure region as your GPT-4 Turbo with Vision resource. If you're using an Azure AI Services resource, you don't need an additional Computer Vision resource.
> [!CAUTION] > Azure AI enhancements for GPT-4 Turbo with Vision will be billed separately from the core functionalities. Each specific Azure AI enhancement for GPT-4 Turbo with Vision has its own distinct charges. For details, see the [special pricing information](../concepts/gpt-with-vision.md#special-pricing-information).
Follow these steps to set up a video retrieval system and integrate it with your
> [!TIP] > If you prefer, you can carry out the below steps using a Jupyter notebook instead: [Video chat completions notebook](https://github.com/Azure-Samples/azureai-samples/blob/main/scenarios/GPT-4V/video/video_chatcompletions_example_restapi.ipynb).
+### Upload videos to Azure Blob Storage
+
+You need to upload your videos to an Azure Blob Storage container. [Create a new storage account](https://ms.portal.azure.com/#create/Microsoft.StorageAccount) if you don't have one already.
+
+Once your videos are uploaded, you can get their SAS URLs, which you use to access them in later steps.
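For example, a SAS URL for an uploaded video can be generated with the Azure Blob Storage SDK for Python. This is a sketch rather than part of the article; the account, key, container, and blob names are placeholders.

```python
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Placeholders: substitute your storage account, key, container, and blob name.
ACCOUNT = "<your-storage-account>"
ACCOUNT_KEY = "<your-account-key>"
CONTAINER = "videos"
BLOB = "sample-video.mp4"

# Create a read-only SAS token that expires in 24 hours.
sas_token = generate_blob_sas(
    account_name=ACCOUNT,
    container_name=CONTAINER,
    blob_name=BLOB,
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=24),
)
video_sas_url = f"https://{ACCOUNT}.blob.core.windows.net/{CONTAINER}/{BLOB}?{sas_token}"
print(video_sas_url)
```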
+
+#### Ensure proper read access
+
+Depending on your authentication method, you may need to do some extra steps to grant access to the Azure Blob Storage container. If you're using an Azure AI Services resource instead of an Azure OpenAI resource, you need to use Managed Identities to grant it **read** access to Azure Blob Storage:
+
+#### [using System assigned identities](#tab/system-assigned)
+
+Enable System assigned identities on your Azure AI Services resource by following these steps:
+1. From your AI Services resource in the Azure portal, select **Resource Management** > **Identity** and toggle the status to **ON**.
+1. Assign **Storage Blob Data Reader** access to the AI Services resource: from the **Identity** page, select **Azure role assignments**, and then **Add role assignment** with the following settings:
+ - scope: storage
+ - subscription: {your subscription}
+ - Resource: {select the Azure Blob Storage resource}
+ - Role: Storage Blob Data Reader
+1. Save your settings.
+
+#### [using User assigned identities](#tab/user-assigned)
+
+To use a User assigned identity on your Azure AI Services resource, follow these steps:
+1. Create a new Managed Identity resource in the Azure portal.
+1. Navigate to the new resource, then to **Azure Role Assignments**.
+1. Add a **New Role Assignment** with the following settings:
+ - scope: storage
+ - subscription: {your subscription}
+ - Resource: {select the Azure Blob Storage resource}
+ - Role: Storage Blob Data Reader
+1. Save your new configuration.
+1. Navigate to your AI Services resource's **Identity** page.
+1. Select the **User assigned** tab, then select **+ Add** and choose the newly created managed identity.
+1. Save your configuration.
+++ ### Create a video retrieval index 1. Get an Azure AI Vision resource in the same region as the Azure OpenAI resource you're using.
print(response)
```
+> [!IMPORTANT]
+> The `"dataSources"` object's content varies depending on which Azure resource type and authentication method you're using. See the following reference:
+>
+> #### [Azure OpenAI resource](#tab/resource)
+>
+> ```json
+> "dataSources": [
+> {
+> "type": "AzureComputerVisionVideoIndex",
+> "parameters": {
+> "endpoint": "<your_computer_vision_endpoint>",
+> "computerVisionApiKey": "<your_computer_vision_key>",
+> "indexName": "<name_of_your_index>",
+> "videoUrls": ["<your_video_SAS_URL>"]
+> }
+> }],
+> ```
+>
+> #### [Azure AI Services resource + SAS authentication](#tab/resource-sas)
+>
+> ```json
+> "dataSources": [
+> {
+> "type": "AzureComputerVisionVideoIndex",
+> "parameters": {
+> "indexName": "<name_of_your_index>",
+> "videoUrls": ["<your_video_SAS_URL>"]
+> }
+> }],
+> ```
+>
+> #### [Azure AI Services resource + Managed Identities](#tab/resource-mi)
+>
+> ```json
+> "dataSources": [
+> {
+> "type": "AzureComputerVisionVideoIndex",
+> "parameters": {
+> "indexName": "<name_of_your_index>",
+> "documentAuthenticationKind": "managedidentity",
+> }
+> }],
+> ```
+>
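As a rough, unofficial sketch of how one of these `dataSources` blocks might be sent from Python (the extensions endpoint path, API version, and the `enhancements` field are assumptions to verify against the request examples earlier in this article; all credentials and names are placeholders):

```python
import requests

# Placeholders/assumptions: endpoint, deployment name, API version, and keys are examples only.
AOAI_ENDPOINT = "https://<your-openai-resource>.openai.azure.com"
DEPLOYMENT = "<your-gpt4v-deployment>"
API_VERSION = "2023-12-01-preview"  # assumption; use the version shown earlier in this article

body = {
    # Assumption: video enhancement flag, following the enhancements pattern used in this article.
    "enhancements": {"video": {"enabled": True}},
    # The dataSources content mirrors the Azure OpenAI resource variant shown above.
    "dataSources": [
        {
            "type": "AzureComputerVisionVideoIndex",
            "parameters": {
                "endpoint": "<your_computer_vision_endpoint>",
                "computerVisionApiKey": "<your_computer_vision_key>",
                "indexName": "<name_of_your_index>",
                "videoUrls": ["<your_video_SAS_URL>"],
            },
        }
    ],
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the key events in this video."},
    ],
    "max_tokens": 300,
}

response = requests.post(
    f"{AOAI_ENDPOINT}/openai/deployments/{DEPLOYMENT}/extensions/chat/completions",
    params={"api-version": API_VERSION},
    headers={"api-key": "<your_openai_key>"},
    json=body,
)
print(response.json())
```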
+ ### Output The chat responses you receive from the model should include information about the video. The API response should look like the following.
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
The default quota for models varies by model and region. Default quota limits are subject to change.
-| Region | Text-Embedding-Ada-002 | text-embedding-3-small | text-embedding-3-large | GPT-35-Turbo | GPT-35-Turbo-1106 | GPT-35-Turbo-16K | GPT-35-Turbo-Instruct | GPT-4 | GPT-4-32K | GPT-4-Turbo | GPT-4-Turbo-V | Babbage-002 | Babbage-002 - finetune | Davinci-002 | Davinci-002 - finetune | GPT-35-Turbo - finetune | GPT-35-Turbo-1106 - finetune |
-|:--|:-|:-|:-|:|:--|:-|:|:--|:|:--|:-|:--|:-|:--|:-|:--|:-|
-| australiaeast | 350 K | - | - | 300 K | 120 K | 300 K | - | 40 K | 80 K | 80 K | 30 K | - | - | - | - | - | - |
-| brazilsouth | 350 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
-| canadaeast | 350 K | 350 K | 350 K | 300 K | 120 K | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
-| eastus | 240 K | 350 K | 350 K | 240 K | - | 240 K | 240 K | - | - | 80 K | - | - | - | - | - | - | - |
-| eastus2 | 350 K | 350 K | 350 K | 300 K | - | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
-| francecentral | 240 K | - | - | 240 K | 120 K | 240 K | - | 20 K | 60 K | 80 K | - | - | - | - | - | - | - |
-| japaneast | 350 K | - | - | 300 K | - | 300 K | - | 40 K | 80 K | - | 30 K | - | - | - | - | - | - |
-| northcentralus | 350 K | - | - | 300 K | - | 300 K | - | - | - | 80 K | - | 240 K | 250 K | 240 K | 250 K | 250 K | 250 K |
-| norwayeast | 350 K | - | - | - | - | - | - | - | - | 150 K | - | - | - | - | - | - | - |
-| southafricanorth | 350 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
-| southcentralus | 240 K | - | - | 240 K | - | - | - | - | - | 80 K | - | - | - | - | - | - | - |
-| southindia | 350 K | - | - | - | 120 K | - | - | - | - | 150 K | - | - | - | - | - | - | - |
-| swedencentral | 350 K | - | - | 300 K | 120 K | 300 K | 240 K | 40 K | 80 K | 150 K | 30 K | 240 K | 250 K | 240 K | 250 K | 250 K | 250 K |
-| switzerlandnorth | 350 K | - | - | 300 K | - | 300 K | - | 40 K | 80 K | - | 30 K | - | - | - | - | - | - |
-| uksouth | 350 K | - | - | 240 K | 120 K | 240 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - |
-| westeurope | 240 K | - | - | 240 K | - | - | - | - | - | - | - | - | - | - | - | - | - |
-| westus | 350 K | - | - | - | 120 K | - | - | - | - | 80 K | 30 K | - | - | - | - | - | - |
+| Region | Text-Embedding-Ada-002 | text-embedding-3-small | text-embedding-3-large | GPT-35-Turbo | GPT-35-Turbo-1106 | GPT-35-Turbo-16K | GPT-35-Turbo-Instruct | GPT-4 | GPT-4-32K | GPT-4-Turbo | GPT-4-Turbo-V | Babbage-002 | Babbage-002 - finetune | Davinci-002 | Davinci-002 - finetune | GPT-35-Turbo - finetune | GPT-35-Turbo-1106 - finetune | GPT-35-Turbo-0125 - finetune |
+|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|:--|
+| australiaeast | 350 K | - | - | 300 K | 120 K | 300 K | - | 40 K | 80 K | 80 K | 30 K | - | - | - | - | - | - | - |
+| brazilsouth | 350 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| canadaeast | 350 K | 350 K | 350 K | 300 K | 120 K | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - | - |
+| eastus | 240 K | 350 K | 350 K | 240 K | - | 240 K | 240 K | - | - | 80 K | - | - | - | - | - | - | - | - |
+| eastus2 | 350 K | 350 K | 350 K | 300 K | - | 300 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - | - |
+| francecentral | 240 K | - | - | 240 K | 120 K | 240 K | - | 20 K | 60 K | 80 K | - | - | - | - | - | - | - | - |
+| japaneast | 350 K | - | - | 300 K | - | 300 K | - | 40 K | 80 K | - | 30 K | - | - | - | - | - | - | - |
+| northcentralus | 350 K | - | - | 300 K | - | 300 K | - | - | - | 80 K | - | 240 K | 250 K | 240 K | 250 K | 250 K | 250 K | 250 K |
+| norwayeast | 350 K | - | - | - | - | - | - | - | - | 150 K | - | - | - | - | - | - | - | - |
+| southafricanorth | 350 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| southcentralus | 240 K | - | - | 240 K | - | - | - | - | - | 80 K | - | - | - | - | - | - | - | - |
+| southindia | 350 K | - | - | - | 120 K | - | - | - | - | 150 K | - | - | - | - | - | - | - | - |
+| swedencentral | 350 K | - | - | 300 K | 120 K | 300 K | 240 K | 40 K | 80 K | 150 K | 30 K | 240 K | 250 K | 240 K | 250 K | 250 K | 250 K | 250 K |
+| switzerlandnorth | 350 K | - | - | 300 K | - | 300 K | - | 40 K | 80 K | - | 30 K | - | - | - | - | - | - | - |
+| uksouth | 350 K | - | - | 240 K | 120 K | 240 K | - | 40 K | 80 K | 80 K | - | - | - | - | - | - | - | - |
+| westeurope | 240 K | - | - | 240 K | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
+| westus | 350 K | - | - | - | 120 K | - | - | - | - | 80 K | 30 K | - | - | - | - | - | - | - |
### General best practices to remain within rate limits
ai-services What Is Text To Speech Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md
Sample code for text to speech avatar is available on [GitHub](https://github.co
* [Batch synthesis (REST)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-avatar) * [Real-time synthesis (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar)
-* [Live chat with Azure Open AI in behind (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample)
+* [Live chat with Azure OpenAI on the backend (SDK)](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample)
## Pricing
-When you use the text to speech avatar feature, you'll be billed by the minutes of video output, and the text to speech, speech to text, Azure Open AI, or other Azure services are charged separately.
+When you use the text to speech avatar feature, you're billed based on the minutes of video output; the text to speech, speech to text, Azure OpenAI, and other Azure services you use are charged separately.
For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
aks Network Observability Byo Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md
AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data plane are supported. In this article, learn how to enable the Network Observability add-on and use BYO Prometheus and Grafana to visualize the scraped metrics.
+> [!NOTE]
+> Starting with Kubernetes version 1.29, the network observability feature no longer supports Bring Your Own (BYO) Prometheus and Grafana. However, you can still enable it by using the Azure Managed Prometheus and Grafana offering.
+>
+ > [!IMPORTANT] > AKS Network Observability is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
az group create \
--name myResourceGroup \ --location eastus ```
+> [!NOTE]
+> For Kubernetes version 1.29 or higher, network observability is enabled with the [AMA metrics profile](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration) and the AFEC flag (NetworkObservabilityPreview) until it reaches general availability.
+>
+> Starting with Kubernetes version 1.29, the `--enable-network-observability` flag is no longer required when creating or updating an Azure Kubernetes Service (AKS) cluster.
+>
+> For AKS clusters running Kubernetes version 1.28 or earlier, enabling network observability requires the `--enable-network-observability` flag during cluster creation or update.
+>
## Create AKS cluster
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
> [!NOTE] > The following section requires deployments of Azure managed Prometheus and Grafana.
->[!WARNING]
-> File should only be named as **`prometheus-config`**. Do not add any extensions like .yaml or .txt.
-
-1. Use the following example to create a file named **`prometheus-config`**. Copy the code in the example into the file created.
-
- ```yaml
- global:
- scrape_interval: 30s
- scrape_configs:
- - job_name: "cilium-pods"
- kubernetes_sd_configs:
- - role: pod
- relabel_configs:
- - source_labels: [__meta_kubernetes_pod_container_name]
- action: keep
- regex: cilium-agent
- - source_labels:
- [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
- separator: ":"
- regex: ([^:]+)(?::\d+)?
- target_label: __address__
- replacement: ${1}:${2}
- action: replace
- - source_labels: [__meta_kubernetes_pod_node_name]
- action: replace
- target_label: instance
- - source_labels: [__meta_kubernetes_pod_label_k8s_app]
- action: replace
- target_label: k8s_app
- - source_labels: [__meta_kubernetes_pod_name]
- action: replace
- regex: (.*)
- target_label: pod
- metric_relabel_configs:
- - source_labels: [__name__]
- action: keep
- regex: (.*)
- ```
-
-1. To create the `configmap`, use the following example:
-
- ```azurecli-interactive
- kubectl create configmap ama-metrics-prometheus-config \
- --from-file=./prometheus-config \
- --namespace kube-system
- ```
- 1. Azure Monitor pods should restart themselves, if they don't, rollout restart with following command: ```azurecli-interactive
- kubectl rollout restart deploy -n kube-system ama-metrics
+ kubectl get po -owide -n kube-system | grep ama-
+ ```
+
+ ```output
+ ama-metrics-5bc6c6d948-zkgc9 2/2 Running 0 (21h ago) 26h
+ ama-metrics-ksm-556d86b5dc-2ndkv 1/1 Running 0 (26h ago) 26h
+ ama-metrics-node-lbwcj 2/2 Running 0 (21h ago) 26h
+ ama-metrics-node-rzkzn 2/2 Running 0 (21h ago) 26h
+ ama-metrics-win-node-gqnkw 2/2 Running 0 (26h ago) 26h
+ ama-metrics-win-node-tkrm8 2/2 Running 0 (26h ago) 26h
``` 1. Once the Azure Monitor pods have been deployed on the cluster, port forward to the `ama` pod to verify the pods are being scraped. Use the following example to port forward to the pod:
aks Use Azure Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-linux.md
Title: Use the Azure Linux container host on Azure Kubernetes Service (AKS)
description: Learn how to use the Azure Linux container host on Azure Kubernetes Service (AKS) Previously updated : 09/18/2023 Last updated : 02/27/2024 # Use the Azure Linux container host for Azure Kubernetes Service (AKS)
The Azure Linux container host is available for use in the same regions as AKS.
To learn more about Azure Linux, see the [Azure Linux documentation][azurelinuxdocumentation]. <!-- LINKS - Internal -->
-[azurelinux-doc]: https://microsoft.github.io/CBL-Mariner/docs/#cbl-mariner-linux
+[azurelinux-doc]: ../azure-linux/intro-azure-linux.md
[azurelinux-capabilities]: ../azure-linux/intro-azure-linux.md#azure-linux-container-host-key-benefits [azurelinux-cluster-config]: cluster-configuration.md#azure-linux-container-host-for-aks [azurelinux-node-pool]: create-node-pools.md#add-an-azure-linux-node-pool
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
Title: Learn about diagnostic logging for Azure Analysis Services | Microsoft Docs
-description: Describes how to setup up logging to monitoring your Azure Analysis Services server.
+ Title: Set up diagnostic logging for Azure Analysis Services | Microsoft Docs
+description: Describes how to set up logging to monitor your Azure Analysis Services server.
Previously updated : 01/27/2023 Last updated : 02/16/2024
-# Setup diagnostic logging
+# Set up diagnostic logging
-An important part of any Analysis Services solution is monitoring how your servers are performing. Azure Analysis services is integrated with Azure Monitor. With [Azure Monitor resource logs](../azure-monitor/essentials/platform-logs-overview.md), you can monitor and send logs to [Azure Storage](https://azure.microsoft.com/services/storage/), stream them to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/), and export them to [Azure Monitor logs](../azure-monitor/overview.md).
+An important part of any Analysis Services solution is monitoring how your servers are performing. For general information about monitoring Azure Analysis Services, see [Monitor Azure Analysis Services](monitor-analysis-services.md).
+
+This article describes how to set up, view, and manage [Azure Monitor resource logs](/azure/azure-monitor/essentials/platform-logs-overview) for your Analysis Services servers. You can send resource logs to [Azure Storage](https://azure.microsoft.com/services/storage/), stream them to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/), and export them to [Azure Monitor logs](/azure/azure-monitor/overview).
![Resource logging to Storage, Event Hubs, or Azure Monitor logs](./media/analysis-services-logging/aas-logging-overview.png)
An important part of any Analysis Services solution is monitoring how your serve
## What's logged?
-You can select **Engine**, **Service**, and **Metrics** categories.
-
-### Engine
-
-Selecting **Engine** logs all [xEvents](/analysis-services/instances/monitor-analysis-services-with-sql-server-extended-events). You cannot select individual events.
-
-|XEvent categories |Event name |
-|||
-|Security Audit | Audit Login |
-|Security Audit | Audit Logout |
-|Security Audit | Audit Server Starts And Stops |
-|Progress Reports | Progress Report Begin |
-|Progress Reports | Progress Report End |
-|Progress Reports | Progress Report Current |
-|Queries | Query Begin |
-|Queries | Query End |
-|Commands | Command Begin |
-|Commands | Command End |
-|Errors & Warnings | Error |
-|Discover | Discover End |
-|Notification | Notification |
-|Session | Session Initialize |
-|Locks | Deadlock |
-|Query Processing | VertiPaq SE Query Begin |
-|Query Processing | VertiPaq SE Query End |
-|Query Processing | VertiPaq SE Query Cache Match |
-|Query Processing | Direct Query Begin |
-|Query Processing | Direct Query End |
-
-### Service
-
-|Operation name |Occurs when |
-|||
-|ResumeServer | Resume a server |
-|SuspendServer | Pause a server |
-|DeleteServer | Delete a server |
-|RestartServer | User restarts a server through SSMS or PowerShell |
-|GetServerLogFiles | User exports server log through PowerShell |
-|ExportModel | User exports a model in the portal by using Open in Visual Studio |
-
-### All metrics
-
-The Metrics category logs the same [Server metrics](analysis-services-monitor.md#server-metrics) to the AzureMetrics table. If you're using query [scale-out](analysis-services-scale-out.md) and need to separate metrics for each read replica, use the AzureDiagnostics table instead, where **OperationName** is equal to **LogMetric**.
-
-## Setup diagnostics logging
+You can select **Engine**, **Service**, and **Metrics** log categories. For a listing of what's logged for each category, see [Supported resource logs for Microsoft.AnalysisServices/servers](monitor-analysis-services-reference.md#supported-resource-logs-for-microsoftanalysisservicesservers).
+
+## Set up diagnostics logging
### Azure portal
The Metrics category logs the same [Server metrics](analysis-services-monitor.md
* **Name**. Enter a name for the logs to create.
- * **Archive to a storage account**. To use this option, you need an existing storage account to connect to. See [Create a storage account](../storage/common/storage-account-create.md). Follow the instructions to create a Resource Manager, general-purpose account, then select your storage account by returning to this page in the portal. It may take a few minutes for newly created storage accounts to appear in the drop-down menu.
- * **Stream to an event hub**. To use this option, you need an existing Event Hub namespace and event hub to connect to. To learn more, see [Create an Event Hubs namespace and an event hub using the Azure portal](../event-hubs/event-hubs-create.md). Then return to this page in the portal to select the Event Hub namespace and policy name.
- * **Send to Azure Monitor (Log Analytics workspace)**. To use this option, either use an existing workspace or [create a new workspace](../azure-monitor/logs/quick-create-workspace.md) resource in the portal. For more information on viewing your logs, see [View logs in Log Analytics workspace](#view-logs-in-log-analytics-workspace) in this article.
+ * **Archive to a storage account**. To use this option, you need an existing storage account to connect to. See [Create a storage account](/azure/storage/common/storage-account-create). Follow the instructions to create a Resource Manager, general-purpose account, then select your storage account by returning to this page in the portal. It may take a few minutes for newly created storage accounts to appear in the drop-down menu.
+ * **Stream to an event hub**. To use this option, you need an existing Event Hub namespace and event hub to connect to. To learn more, see [Create an Event Hubs namespace and an event hub using the Azure portal](/azure/event-hubs/event-hubs-create). Then return to this page in the portal to select the Event Hub namespace and policy name.
+ * **Send to Azure Monitor (Log Analytics workspace)**. To use this option, either use an existing workspace or [create a new workspace](/azure/azure-monitor/logs/quick-create-workspace) resource in the portal. For more information on viewing your logs, see [View logs in Log Analytics workspace](#view-logs-in-log-analytics-workspace) in this article.
* **Engine**. Select this option to log xEvents. If you're archiving to a storage account, you can select the retention period for the resource logs. Logs are autodeleted after the retention period expires. * **Service**. Select this option to log service level events. If you are archiving to a storage account, you can select the retention period for the resource logs. Logs are autodeleted after the retention period expires.
Logs are typically available within a couple hours of setting up logging. It's u
## View logs in Log Analytics workspace
-Metrics and server events are integrated with xEvents in your Log Analytics workspace resource for side-by-side analysis. Log Analytics workspace can also be configured to receive events from other Azure services providing a holistic view of diagnostic logging data across your architecture.
- To view your diagnostic data, in Log Analytics workspace, open **Logs** from the left menu. ![Screenshot showing log Search options in the Azure portal.](./media/analysis-services-logging/aas-logging-open-log-search.png) In the query builder, expand **LogManagement** > **AzureDiagnostics**. AzureDiagnostics includes Engine and Service events. Notice a query is created on-the-fly. The EventClass\_s field contains xEvent names, which may look familiar if you've used xEvents for on-premises logging. Click **EventClass\_s** or one of the event names and Log Analytics workspace continues constructing a query. Be sure to save your queries to reuse later.
-### Example queries
-
-#### Example 1
-
-The following query returns durations for each query end/refresh end event for a model database and server. If scaled out, the results are broken out by replica because the replica number is included in ServerName_s. Grouping by RootActivityId_g reduces the row count retrieved from the Azure Diagnostics REST API and helps stay within the limits as described in Log Analytics Rate limits.
-
-```Kusto
-let window = AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and Resource =~ "MyServerName" and DatabaseName_s =~ "MyDatabaseName" ;
-window
-| where OperationName has "QueryEnd" or (OperationName has "CommandEnd" and EventSubclass_s == 38)
-| where extract(@"([^,]*)", 1,Duration_s, typeof(long)) > 0
-| extend DurationMs=extract(@"([^,]*)", 1,Duration_s, typeof(long))
-| project StartTime_t,EndTime_t,ServerName_s,OperationName,RootActivityId_g,TextData_s,DatabaseName_s,ApplicationName_s,Duration_s,EffectiveUsername_s,User_s,EventSubclass_s,DurationMs
-| order by StartTime_t asc
-```
-
-#### Example 2
-
-The following query returns memory and QPU consumption for a server. If scaled out, the results are broken out by replica because the replica number is included in ServerName_s.
-
-```Kusto
-let window = AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and Resource =~ "MyServerName";
-window
-| where OperationName == "LogMetric"
-| where name_s == "memory_metric" or name_s == "qpu_metric"
-| project ServerName_s, TimeGenerated, name_s, value_s
-| summarize avg(todecimal(value_s)) by ServerName_s, name_s, bin(TimeGenerated, 1m)
-| order by TimeGenerated asc
-```
-
-#### Example 3
-
-The following query returns the Rows read/sec Analysis Services engine performance counters for a server.
-
-```Kusto
-let window = AzureDiagnostics
- | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and Resource =~ "MyServerName";
-window
-| where OperationName == "LogMetric"
-| where parse_json(tostring(parse_json(perfobject_s).counters))[0].name == "Rows read/sec"
-| extend Value = tostring(parse_json(tostring(parse_json(perfobject_s).counters))[0].value)
-| project ServerName_s, TimeGenerated, Value
-| summarize avg(todecimal(Value)) by ServerName_s, bin(TimeGenerated, 1m)
-| order by TimeGenerated asc
-```
-
-There are hundreds of queries you can use. To learn more about queries, see [Get started with Azure Monitor log queries](../azure-monitor/logs/get-started-queries.md).
-
+For more queries you can use with Analysis Services, see [Sample Kusto queries](monitor-analysis-services.md#sample-kusto-queries).
## Turn on logging by using PowerShell
To complete this tutorial, you must have the following resources:
* An existing Azure Analysis Services server. For instructions on creating a server resource, see [Create a server in Azure portal](analysis-services-create-server.md), or [Create an Azure Analysis Services server by using PowerShell](analysis-services-create-powershell.md).
-### </a>Connect to your subscriptions
+### Connect to your subscriptions
Start an Azure PowerShell session and sign in to your Azure account with the following command:
Set-AzDiagnosticSetting -ResourceId $account.ResourceId`
## Next steps
-Learn more about [Azure Monitor resource logging](../azure-monitor/essentials/platform-logs-overview.md).
-
-See [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) in PowerShell help.
+- Learn more about [Azure Monitor resource logging](/azure/azure-monitor/essentials/platform-logs-overview).
+- See [Set-AzDiagnosticSetting](/powershell/module/az.monitor/set-azdiagnosticsetting) in PowerShell help.
analysis-services Analysis Services Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-monitor.md
- Title: Monitor Azure Analysis Services server metrics | Microsoft Docs
-description: Learn how Analysis Services use Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers.
--- Previously updated : 03/04/2020----
-# Monitor server metrics
-
-Analysis Services provides metrics in Azure Metrics Explorer, a free tool in the portal, to help you monitor the performance and health of your servers. For example, monitor memory and CPU usage, number of client connections, and query resource consumption. Analysis Services uses the same monitoring framework as most other Azure services. To learn more, see [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
-
-To perform more in-depth diagnostics, track performance, and identify trends across multiple service resources in a resource group or subscription, use [Azure Monitor](../azure-monitor/overview.md). Azure Monitor (service) may result in a billable service.
--
-## To monitor metrics for an Analysis Services server
-
-1. In Azure portal, select **Metrics**.
-
- ![Monitor in Azure portal](./media/analysis-services-monitor/aas-monitor-portal.png)
-
-2. In **Metric**, select the metrics to include in your chart.
-
- ![Monitor chart](./media/analysis-services-monitor/aas-monitor-chart.png)
-
-<a id="#server-metrics"></a>
-
-## Server metrics
-
-Use this table to determine which metrics are best for your monitoring scenario. Only metrics of the same unit can be shown on the same chart.
-
-|Metric|Metric Display Name|Unit|Aggregation Type|Description|
-||||||
-|CommandPoolJobQueueLength|Command Pool Job Queue Length|Count|Average|Number of jobs in the queue of the command thread pool.|
-|CurrentConnections|Connection: Current connections|Count|Average|Current number of client connections established.|
-|CurrentUserSessions|Current User Sessions|Count|Average|Current number of user sessions established.|
-|mashup_engine_memory_metric|M Engine Memory|Bytes|Average|Memory usage by mashup engine processes|
-|mashup_engine_qpu_metric|M Engine QPU|Count|Average|QPU usage by mashup engine processes|
-|memory_metric|Memory|Bytes|Average|Memory. Range 0-25 GB for S1, 0-50 GB for S2 and 0-100 GB for S4|
-|memory_thrashing_metric|Memory Thrashing|Percent|Average|Average memory thrashing.|
-|CleanerCurrentPrice|Memory: Cleaner Current Price|Count|Average|Current price of memory, $/byte/time, normalized to 1000.|
-|CleanerMemoryNonshrinkable|Memory: Cleaner Memory nonshrinkable|Bytes|Average|Amount of memory, in bytes, not subject to purging by the background cleaner.|
-|CleanerMemoryShrinkable|Memory: Cleaner Memory shrinkable|Bytes|Average|Amount of memory, in bytes, subject to purging by the background cleaner.|
-|MemoryLimitHard|Memory: Memory Limit Hard|Bytes|Average|Hard memory limit, from configuration file.|
-|MemoryLimitHigh|Memory: Memory Limit High|Bytes|Average|High memory limit, from configuration file.|
-|MemoryLimitLow|Memory: Memory Limit Low|Bytes|Average|Low memory limit, from configuration file.|
-|MemoryLimitVertiPaq|Memory: Memory Limit VertiPaq|Bytes|Average|In-memory limit, from configuration file.|
-|MemoryUsage|Memory: Memory Usage|Bytes|Average|Memory usage of the server process as used in calculating cleaner memory price. Equal to counter Process\PrivateBytes plus the size of memory-mapped data, ignoring any memory, which was mapped or allocated by the in-memory analytics engine (VertiPaq) in excess of the engine Memory Limit.|
-|private_bytes_metric|Private Bytes |Bytes|Average|The total amount of memory the Analysis Services engine process and Mashup container processes have allocated, not including memory shared with other processes.|
-|virtual_bytes_metric|Virtual Bytes |Bytes|Average|The current size of the virtual address space that Analysis Services engine process and Mashup container processes are using.|
-|mashup_engine_private_bytes_metric|M Engine Private Bytes |Bytes|Average|The total amount of memory Mashup container processes have allocated, not including memory shared with other processes.|
-|mashup_engine_virtual_bytes_metric|M Engine Virtual Bytes |Bytes|Average|The current size of the virtual address space Mashup container processes are using.|
-|Quota|Memory: Quota|Bytes|Average|Current memory quota, in bytes. Memory quota is also known as a memory grant or memory reservation.|
-|QuotaBlocked|Memory: Quota Blocked|Count|Average|Current number of quota requests that are blocked until other memory quotas are freed.|
-|VertiPaqNonpaged|Memory: VertiPaq Nonpaged|Bytes|Average|Bytes of memory locked in the working set for use by the in-memory engine.|
-|VertiPaqPaged|Memory: VertiPaq Paged|Bytes|Average|Bytes of paged memory in use for in-memory data.|
-|ProcessingPoolJobQueueLength|Processing Pool Job Queue Length|Count|Average|Number of non-I/O jobs in the queue of the processing thread pool.|
-|RowsConvertedPerSec|Processing: Rows converted per sec|CountPerSecond|Average|Rate of rows converted during processing.|
-|RowsReadPerSec|Processing: Rows read per sec|CountPerSecond|Average|Rate of rows read from all relational databases.|
-|RowsWrittenPerSec|Processing: Rows written per sec|CountPerSecond|Average|Rate of rows written during processing.|
-|qpu_metric|QPU|Count|Average|QPU. Range 0-100 for S1, 0-200 for S2 and 0-400 for S4|
-|QueryPoolBusyThreads|Query Pool Busy Threads|Count|Average|Number of busy threads in the query thread pool.|
-|SuccessfulConnectionsPerSec|Successful Connections Per Sec|CountPerSecond|Average|Rate of successful connection completions.|
-|CommandPoolBusyThreads|Threads: Command pool busy threads|Count|Average|Number of busy threads in the command thread pool.|
-|CommandPoolIdleThreads|Threads: Command pool idle threads|Count|Average|Number of idle threads in the command thread pool.|
-|LongParsingBusyThreads|Threads: Long parsing busy threads|Count|Average|Number of busy threads in the long parsing thread pool.|
-|LongParsingIdleThreads|Threads: Long parsing idle threads|Count|Average|Number of idle threads in the long parsing thread pool.|
-|LongParsingJobQueueLength|Threads: Long parsing job queue length|Count|Average|Number of jobs in the queue of the long parsing thread pool.|
-|ProcessingPoolIOJobQueueLength|Threads: Processing pool I/O job queue length|Count|Average|Number of I/O jobs in the queue of the processing thread pool.|
-|ProcessingPoolBusyIOJobThreads|Threads: Processing pool busy I/O job threads|Count|Average|Number of threads running I/O jobs in the processing thread pool.|
-|ProcessingPoolBusyNonIOThreads|Threads: Processing pool busy non-I/O threads|Count|Average|Number of threads running non-I/O jobs in the processing thread pool.|
-|ProcessingPoolIdleIOJobThreads|Threads: Processing pool idle I/O job threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|
-|ProcessingPoolIdleNonIOThreads|Threads: Processing pool idle non-I/O threads|Count|Average|Number of idle threads in the processing thread pool dedicated to non-I/O jobs.|
-|QueryPoolIdleThreads|Threads: Query pool idle threads|Count|Average|Number of idle threads for I/O jobs in the processing thread pool.|
-|QueryPoolJobQueueLength|Threads: Query pool job queue length|Count|Average|Number of jobs in the queue of the query thread pool.|
-|ShortParsingBusyThreads|Threads: Short parsing busy threads|Count|Average|Number of busy threads in the short parsing thread pool.|
-|ShortParsingIdleThreads|Threads: Short parsing idle threads|Count|Average|Number of idle threads in the short parsing thread pool.|
-|ShortParsingJobQueueLength|Threads: Short parsing job queue length|Count|Average|Number of jobs in the queue of the short parsing thread pool.|
-|TotalConnectionFailures|Total Connection Failures|Count|Average|Total failed connection attempts.|
-|TotalConnectionRequests|Total Connection Requests|Count|Average|Total connection requests. |
-
-## Next steps
-[Azure Monitor overview](../azure-monitor/overview.md)
-[Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md)
-[Metrics in Azure Monitor REST API](/rest/api/monitor/metrics)
analysis-services Analysis Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-overview.md
Go up, down, or pause your server. Use the Azure portal or have total control on
### Scale out resources for fast query response
-With scale-out, client queries are distributed among multiple *query replicas* in a query pool. Query replicas have synchronized copies of your tabular models. By spreading the query workload, response times during high query workloads can be reduced. Model processing operations can be separated from the query pool, ensuring client queries are not adversely affected by processing operations.
+With scale-out, client queries are distributed among multiple *query replicas* in a query pool. Query replicas have synchronized copies of your tabular models. By spreading the query workload, you can reduce response times during high query workloads. Model processing operations can be separated from the query pool, ensuring client queries are not adversely affected by processing operations.
You can create a query pool with up to seven additional query replicas (eight total, including your server). The number of query replicas you can have in your pool depend on your chosen plan and region. Query replicas cannot be spread outside your server's region. Query replicas are billed at the same rate as your server.
User authentication is handled by [Microsoft Entra ID](../active-directory/funda
### Data security
-Azure Analysis Services uses Azure Blob storage to persist storage and metadata for Analysis Services databases. Data files within Blob are encrypted using [Azure Blob Server Side Encryption (SSE)](../storage/common/storage-service-encryption.md). When using Direct Query mode, only metadata is stored. The actual data is accessed through encrypted protocol from the data source at query time.
+Azure Analysis Services uses Azure Blob storage to persist storage and metadata for Analysis Services databases. Data files within Blob are encrypted using [Azure Blob Server Side Encryption (SSE)](../storage/common/storage-service-encryption.md). When you use Direct Query mode, only metadata is stored. The actual data is accessed through encrypted protocol from the data source at query time.
Secure access to data sources on-premises in your organization is achieved by installing and configuring an [On-premises data gateway](analysis-services-gateway.md). Gateways provide access to data for both DirectQuery and in-memory modes.
Non-administrative users who query data are granted access through database role
### Row-level security
-Tabular models at all compatibility levels support row-level security. Row-level security is configured in the model by using DAX expressions that define the rows in a table, and any rows in the many direction of a related table that a user can query. Row filters using DAX expressions are defined for the Read and Read and Process permissions.
+Tabular models at all compatibility levels support row-level security. Row-level security is configured in the model by using DAX expressions that define the rows in a table, and any rows in the many direction of a related table, that a user can query. Row filters using DAX expressions are defined for the **Read** and **Read and Process** permissions.
### Object-level security
Manage your servers and model databases by using [SQL Server Management Studio (
### Open-source tools
-Analysis Services has a vibrant community of developers who create tools. [DAX Studio](https://daxstudio.org/), is a great open-source tool for DAX authoring, diagnosis, performance tuning, and analysis.
+Analysis Services has a vibrant community of developers who create tools. [DAX Studio](https://daxstudio.org/) is a great open-source tool for DAX authoring, diagnosis, performance tuning, and analysis.
### PowerShell
Modern data exploration and visualization tools like Power BI, Excel, Reporting
## Monitoring and diagnostics
-Azure Analysis Services is integrated with Azure Monitor metrics, providing an extensive number of resource-specific metrics to help you monitor the performance and health of your servers. To learn more, see [Monitor server metrics](analysis-services-monitor.md). Record metrics with [resource platform logs](../azure-monitor/essentials/platform-logs-overview.md). Monitor and send logs to [Azure Storage](https://azure.microsoft.com/services/storage/), stream them to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/), and export them to [Azure Monitor logs](https://azure.microsoft.com/services/log-analytics/), a service of [Azure](https://www.microsoft.com/cloud-platform/operations-management-suite). To learn more, see [Setup diagnostic logging](analysis-services-logging.md).
+Azure Analysis Services is integrated with Azure Monitor metrics, providing an extensive number of resource-specific metrics to help you monitor the performance and health of your servers. Record metrics with [resource platform logs](/azure/azure-monitor/essentials/platform-logs-overview). Monitor and send logs to [Azure Storage](https://azure.microsoft.com/services/storage/), stream them to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/), and export them to [Azure Monitor logs](https://azure.microsoft.com/services/log-analytics/), a service of the [Azure secure and well-managed cloud](https://www.microsoft.com/cloud-platform/operations-management-suite). To learn more, see [Monitor Analysis Services](monitor-analysis-services.md).
Azure Analysis Services also supports using [Dynamic Management Views (DMVs)](/analysis-services/instances/use-dynamic-management-views-dmvs-to-monitor-analysis-services). Based on SQL syntax, DMVs interface schema rowsets that return metadata and monitoring information about a server instance.
analysis-services Analysis Services Scale Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-scale-out.md
# Azure Analysis Services scale-out
-With scale-out, client queries can be distributed among multiple *query replicas* in a *query pool*, reducing response times during high query workloads. You can also separate processing from the query pool, ensuring client queries are not adversely affected by processing operations. Scale-out can be configured in Azure portal or by using the Analysis Services REST API.
+With scale-out, client queries can be distributed among multiple *query replicas* in a *query pool*, reducing response times during high query workloads. You can also separate processing from the query pool, ensuring client queries aren't adversely affected by processing operations. Scale-out can be configured in the Azure portal or by using the Analysis Services REST API.
-Scale-out is available for servers in the Standard pricing tier. Each query replica is billed at the same rate as your server. All query replicas are created in the same region as your server. The number of query replicas you can configure are limited by the region your server is in. To learn more, see [Availability by region](analysis-services-overview.md#availability-by-region). Scale-out does not increase the amount of available memory for your server. To increase memory, you need to upgrade your plan.
+Scale-out is available for servers in the Standard pricing tier. Each query replica is billed at the same rate as your server. All query replicas are created in the same region as your server. The number of query replicas you can configure is limited by the region your server is in. To learn more, see [Availability by region](analysis-services-overview.md#availability-by-region). Scale-out doesn't increase the amount of available memory for your server. To increase memory, you need to upgrade your plan.
## Why scale out? In a typical server deployment, one server serves as both processing server and query server. If the number of client queries against models on your server exceeds the Query Processing Units (QPU) for your server's plan, or model processing occurs at the same time as high query workloads, performance can decrease.
-With scale-out, you can create a query pool with up to seven additional query replica resources (eight total, including your *primary* server). You can scale the number of replicas in the query pool to meet QPU demands at critical times, and you can separate a processing server from the query pool at any time.
+With scale-out, you can create a query pool with up to seven more query replica resources (eight total, including your *primary* server). You can scale the number of replicas in the query pool to meet QPU demands at critical times, and you can separate a processing server from the query pool at any time.
-Regardless of the number of query replicas you have in a query pool, processing workloads are not distributed among query replicas. The primary server serves as the processing server. Query replicas serve only queries against the model databases synchronized between the primary server and each replica in the query pool.
+Regardless of the number of query replicas you have in a query pool, processing workloads aren't distributed among query replicas. The primary server serves as the processing server. Query replicas serve only queries against the model databases synchronized between the primary server and each replica in the query pool.
-When scaling out, it can take up to five minutes for new query replicas to be incrementally added to the query pool. When all new query replicas are up and running, new client connections are load balanced across resources in the query pool. Existing client connections are not changed from the resource they are currently connected to. When scaling in, any existing client connections to a query pool resource that is being removed from the query pool are terminated. Clients can reconnect to a remaining query pool resource.
+When you scale out, it can take up to five minutes for new query replicas to be incrementally added to the query pool. When all new query replicas are up and running, new client connections are load balanced across resources in the query pool. Existing client connections aren't changed from the resource they are currently connected to. When you scale in, any existing client connections to a query pool resource that is being removed from the query pool are terminated. Clients can reconnect to a remaining query pool resource.
## How it works
-When configuring scale-out the first time, model databases on your primary server are *automatically* synchronized with new replicas in a new query pool. Automatic synchronization occurs only once. During automatic synchronization, the primary server's data files (encrypted at rest in blob storage) are copied to a second location, also encrypted at rest in blob storage. Replicas in the query pool are then *hydrated* with data from the second set of files.
+When you configure scale-out for the first time, model databases on your primary server are *automatically* synchronized with new replicas in a new query pool. Automatic synchronization occurs only once. During automatic synchronization, the primary server's data files (encrypted at rest in blob storage) are copied to a second location, also encrypted at rest in blob storage. Replicas in the query pool are then *hydrated* with data from the second set of files.
-While an automatic synchronization is performed only when you scale-out a server for the first time, you can also perform a manual synchronization. Synchronizing assures data on replicas in the query pool match that of the primary server. When processing (refresh) models on the primary server, a synchronization must be performed *after* processing operations are completed. This synchronization copies updated data from the primary server's files in blob storage to the second set of files. Replicas in the query pool are then hydrated with updated data from the second set of files in blob storage.
+While an automatic synchronization is performed only when you scale out a server for the first time, you can also perform a manual synchronization. Synchronizing assures data on replicas in the query pool match that of the primary server. When processing (refresh) models on the primary server, a synchronization must be performed *after* processing operations are completed. This synchronization copies updated data from the primary server's files in blob storage to the second set of files. Replicas in the query pool are then hydrated with updated data from the second set of files in blob storage.
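If you script synchronization outside the portal, the following bash sketch assumes the scale-out sync endpoint of the Analysis Services REST API and uses hypothetical region, server, and database names; verify the endpoint and token audience for your environment before relying on it.

```bash
# Sketch: request a scale-out synchronization after processing completes.
# The region (westus), server (myserver), and database (AdventureWorks) names are hypothetical.
# The token audience for Azure Analysis Services is assumed to be https://*.asazure.windows.net.
TOKEN=$(az account get-access-token --resource "https://*.asazure.windows.net" --query accessToken -o tsv)

curl -X POST \
  -H "Authorization: Bearer $TOKEN" \
  "https://westus.asazure.windows.net/servers/myserver/models/AdventureWorks/sync"
```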
-When performing a subsequent scale-out operation, for example, increasing the number of replicas in the query pool from two to five, the new replicas are hydrated with data from the second set of files in blob storage. There is no synchronization. If you then perform a synchronization after scaling out, the new replicas in the query pool would be hydrated twice - a redundant hydration. When performing a subsequent scale-out operation, it's important to keep in mind:
+When you perform a subsequent scale-out operation, for example, increasing the number of replicas in the query pool from two to five, the new replicas are hydrated with data from the second set of files in blob storage. There's no synchronization. If you then perform a synchronization after scaling out, the new replicas in the query pool would be hydrated twice - a redundant hydration. When performing a subsequent scale-out operation, it's important to keep in mind:
-* Perform a synchronization *before the scale-out operation* to avoid redundant hydration of the added replicas. Concurrent synchronization and scale-out operations running at the same time are not allowed.
+* Perform a synchronization *before the scale-out operation* to avoid redundant hydration of the added replicas. Concurrent synchronization and scale-out operations running at the same time aren't allowed.
* When automating both processing *and* scale-out operations, it's important to first process data on the primary server, then perform a synchronization, and then perform the scale-out operation. This sequence assures minimal impact on QPU and memory resources. * During scale-out operations, all servers in the query pool, including the primary server, are temporarily offline.
-* Synchronization is allowed even when there are no replicas in the query pool. If you are scaling out from zero to one or more replicas with new data from a processing operation on the primary server, perform the synchronization first with no replicas in the query pool, and then scale-out. Synchronizing before scaling out avoids redundant hydration of the newly added replicas.
+* Synchronization is allowed even when there are no replicas in the query pool. If you're scaling out from zero to one or more replicas with new data from a processing operation on the primary server, perform the synchronization first with no replicas in the query pool, and then scale out. Synchronizing before scaling out avoids redundant hydration of the newly added replicas.
-* When deleting a model database from the primary server, it does not automatically get deleted from replicas in the query pool. You must perform a synchronization operation by using the [Sync-AzAnalysisServicesInstance](/powershell/module/az.analysisservices/sync-AzAnalysisServicesinstance) PowerShell command that removes the file/s for that database from the replica's shared blob storage location and then deletes the model database on the replicas in the query pool. To determine if a model database exists on replicas in the query pool but not on the primary server, ensure the **Separate the processing server from querying pool** setting is to **Yes**. Then use SSMS to connect to the primary server using the `:rw` qualifier to see if the database exists. Then connect to replicas in the query pool by connecting without the `:rw` qualifier to see if the same database also exists. If the database exists on replicas in the query pool but not on the primary server, run a sync operation.
+* When you delete a model database from the primary server, it doesn't automatically get deleted from replicas in the query pool. You must perform a synchronization operation by using the [Sync-AzAnalysisServicesInstance](/powershell/module/az.analysisservices/sync-AzAnalysisServicesinstance) PowerShell command that removes the files for that database from the replica's shared blob storage location and then deletes the model database on the replicas in the query pool. To determine if a model database exists on replicas in the query pool but not on the primary server, ensure the **Separate the processing server from querying pool** setting is set to **Yes**. Then use SQL Server Management Studio (SSMS) to connect to the primary server using the `:rw` qualifier to see if the database exists. Then connect to replicas in the query pool by connecting without the `:rw` qualifier to see if the same database also exists. If the database exists on replicas in the query pool but not on the primary server, run a sync operation.
-* When renaming a database on the primary server, there's an additional step necessary to ensure the database is properly synchronized to any replicas. After renaming, perform a synchronization by using the [Sync-AzAnalysisServicesInstance](/powershell/module/az.analysisservices/sync-AzAnalysisServicesinstance) command specifying the `-Database` parameter with the old database name. This synchronization removes the database and files with the old name from any replicas. Then perform another synchronization specifying the `-Database` parameter with the new database name. The second synchronization copies the newly named database to the second set of files and hydrates any replicas. These synchronizations cannot be performed by using the Synchronize model command in the portal.
+* When you rename a database on the primary server, there's another step necessary to ensure the database is properly synchronized to any replicas. After renaming, perform a synchronization by using the [Sync-AzAnalysisServicesInstance](/powershell/module/az.analysisservices/sync-AzAnalysisServicesinstance) command specifying the `-Database` parameter with the old database name. This synchronization removes the database and files with the old name from any replicas. Then perform another synchronization specifying the `-Database` parameter with the new database name. The second synchronization copies the newly named database to the second set of files and hydrates any replicas. These synchronizations can't be performed by using the Synchronize model command in the portal.
### Synchronization mode
By default, query replicas are rehydrated in full, not incrementally. Rehydratio
- Significant reduction in synchronization time. - Data across replicas are more likely to be consistent during the synchronization process. -- Because databases are kept online on all replicas throughout the synchronization process, clients do not need to reconnect.
+- Because databases are kept online on all replicas throughout the synchronization process, clients don't need to reconnect.
- The in-memory cache is updated incrementally with only the changed data, which can be faster than fully rehydrating the model. #### Setting ReplicaSyncMode
Use SSMS to set ReplicaSyncMode in Advanced Properties. The possible values are:
![RelicaSyncMode setting](media/analysis-services-scale-out/aas-scale-out-sync-mode.png)
-When setting **ReplicaSyncMode=2**, depending on how much of the cache needs to be updated, additional memory may be consumed by the query replicas. To keep the database online and available for queries, depending on how much of the data has changed, the operation can require up to *double the memory* on the replica because both the old and new segments are kept in memory simultaneously. Replica nodes have the same memory allocation as the primary node, and there is normally extra memory on the primary node for refresh operations, so it may be unlikely that the replicas would run out of memory. Additionally, the common scenario is that the database is incrementally updated on the primary node, and therefore the requirement for double the memory should be uncommon. If the Sync operation does encounter an out of memory error, it will retry using the default technique (attach/detach two at a time).
+When setting **ReplicaSyncMode=2**, depending on how much of the cache needs to be updated, more memory may be consumed by the query replicas. To keep the database online and available for queries, depending on how much of the data has changed, the operation can require up to *double the memory* on the replica because both the old and new segments are kept in memory simultaneously. Replica nodes have the same memory allocation as the primary node, and there's normally extra memory on the primary node for refresh operations, so it may be unlikely that the replicas would run out of memory. Additionally, the common scenario is that the database is incrementally updated on the primary node, and therefore the requirement for double the memory should be uncommon. If the Sync operation does encounter an out of memory error, it retries using the default technique (attach/detach two at a time).
### Separate processing from query pool
-For maximum performance for both processing and query operations, you can choose to separate your processing server from the query pool. When separated, new client connections are assigned to query replicas in the query pool only. If processing operations only take up a short amount of time, you can choose to separate your processing server from the query pool only for the amount of time it takes to perform processing and synchronization operations, and then include it back into the query pool. When separating the processing server from the query pool, or adding it back into the query pool can take up to five minutes for the operation to complete.
+For maximum performance for both processing and query operations, you can choose to separate your processing server from the query pool. When separated, new client connections are assigned to query replicas in the query pool only. If processing operations only take up a short amount of time, you can choose to separate your processing server from the query pool only for the amount of time it takes to perform processing and synchronization operations, and then include it back into the query pool. Separating the processing server from the query pool or adding it back into the query pool can take up to five minutes for the operation to complete.
## Monitor QPU usage
-To determine if scale-out for your server is necessary, [monitor your server](analysis-services-monitor.md) in Azure portal by using Metrics. If your QPU regularly maxes out, it means the number of queries against your models is exceeding the QPU limit for your plan. The Query pool job queue length metric also increases when the number of queries in the query thread pool queue exceeds available QPU.
+To determine if scale-out for your server is necessary, [monitor your server metrics](monitor-analysis-services.md#platform-metrics) in the Azure portal. If your QPU regularly maxes out, it means the number of queries against your models is exceeding the QPU limit for your plan. The Query pool job queue length metric also increases when the number of queries in the query thread pool queue exceeds available QPU.
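For a quick command-line check, the following Azure CLI sketch lists the same metrics; the server resource ID is a hypothetical placeholder, and the metric names `qpu_metric` and `QueryPoolJobQueueLength` are the ones discussed here.

```bash
# Sketch: list QPU and query pool job queue length for an Analysis Services server.
# Replace <server-resource-id> with the full ARM resource ID of your server.
az monitor metrics list \
  --resource "<server-resource-id>" \
  --metric qpu_metric QueryPoolJobQueueLength \
  --interval PT1M \
  --aggregation Average Maximum \
  --output table
```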
-Another good metric to watch is average QPU by ServerResourceType. This metric compares average QPU for the primary server with the query pool.
+Another good metric to watch is average QPU by ServerResourceType. This metric compares average QPU for the primary server with the query pool.
![Query scale out metrics](media/analysis-services-scale-out/aas-scale-out-monitor.png)
Another good metric to watch is average QPU by ServerResourceType. This metric c
### Detailed diagnostic logging
-Use Azure Monitor Logs for more detailed diagnostics of scaled out server resources. With logs, you can use Log Analytics queries to break out QPU and memory by server and replica. To learn more, see example queries in [Analysis Services diagnostics logging](analysis-services-logging.md#example-queries).
-
+Use Azure Monitor Logs for more detailed diagnostics of scaled out server resources. With logs, you can use Log Analytics queries to break out QPU and memory by server and replica. For more information, see [Analyze logs in Log Analytics workspace](monitor-analysis-services.md#analyze-logs-in-log-analytics-workspace). For example queries, see [Sample Kusto queries](monitor-analysis-services.md#sample-kusto-queries).
## Configure scale-out
Use Azure Monitor Logs for more detailed diagnostics of scaled out server resour
3. Click **Save** to provision your new query replica servers.
-When configuring scale-out for a server the first time, models on your primary server are automatically synchronized with replicas in the query pool. Automatic synchronization only occurs once, when you first configure scale-out to one or more replicas. Subsequent changes to the number of replicas on the same server *will not trigger another automatic synchronization*. Automatic synchronization will not occur again even if you set the server to zero replicas and then again scale-out to any number of replicas.
+When you configure scale-out for a server the first time, models on your primary server are automatically synchronized with replicas in the query pool. Automatic synchronization only occurs once, when you first configure scale-out to one or more replicas. Subsequent changes to the number of replicas on the same server *don't trigger another automatic synchronization*. Automatic synchronization doesn't occur again even if you set the server to zero replicas and then again scale out to any number of replicas.
## Synchronize
For SSMS, Visual Studio, and connection strings in PowerShell, Azure Function ap
## Scale-up, Scale-down vs. Scale-out
-You can change the pricing tier on a server with multiple replicas. The same pricing tier applies to all replicas. A scale operation will first bring down all replicas all at once then bring up all replicas on the new pricing tier.
+You can change the pricing tier on a server with multiple replicas. The same pricing tier applies to all replicas. A scale operation first brings down all replicas at once then brings up all replicas on the new pricing tier.
## Troubleshoot
-**Issue:** Users get error **Cannot find server '\<Name of the server>' instance in connection mode 'ReadOnly'.**
+**Issue:** Users get the error **Cannot find server '\<Name of the server>' instance in connection mode 'ReadOnly'.**
-**Solution:** When selecting the **Separate the processing server from the querying pool** option, client connections using the default connection string (without `:rw`) are redirected to query pool replicas. If replicas in the query pool are not yet online because synchronization has not yet been completed, redirected client connections can fail. To prevent failed connections, there must be at least two servers in the query pool when performing a synchronization. Each server is synchronized individually while others remain online. If you choose to not have the processing server in the query pool during processing, you can choose to remove it from the pool for processing, and then add it back into the pool after processing is complete, but prior to synchronization. Use Memory and QPU metrics to monitor synchronization status.
+**Solution:** When selecting the **Separate the processing server from the querying pool** option, client connections using the default connection string (without `:rw`) are redirected to query pool replicas. If replicas in the query pool aren't yet online because synchronization has not yet been completed, redirected client connections can fail. To prevent failed connections, there must be at least two servers in the query pool when performing a synchronization. Each server is synchronized individually while others remain online. If you choose to not have the processing server in the query pool during processing, you can choose to remove it from the pool for processing, and then add it back into the pool after processing is complete, but prior to synchronization. Use Memory and QPU metrics to monitor synchronization status.
## Related information
-[Monitor server metrics](analysis-services-monitor.md)
+[Monitor Azure Analysis Services](monitor-analysis-services.md)
[Manage Azure Analysis Services](analysis-services-manage.md)
analysis-services Monitor Analysis Services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/monitor-analysis-services-reference.md
+
+ Title: Monitoring data reference for Azure Analysis Services
+description: This article contains important reference material you need when you monitor Azure Analysis Services.
Last updated : 02/28/2024+++++++
+# Azure Analysis Services monitoring data reference
++
+See [Monitor Azure Analysis Services](monitor-analysis-services.md) for details on the data you can collect for Azure Analysis Services and how to use it.
++
+### Supported metrics for Microsoft.AnalysisServices/servers
+The following table lists the metrics available for the Microsoft.AnalysisServices/servers resource type.
+
+Analysis Services metrics have the dimension `ServerResourceType`.
++
+### Supported resource logs for Microsoft.AnalysisServices/servers
+
+When you set up logging for Analysis Services, you can select **Engine** or **Service** events to log.
+
+#### Engine
+
+The **Engine** category logs all [xEvents](/analysis-services/instances/monitor-analysis-services-with-sql-server-extended-events). You can't select individual events.
+
+|XEvent categories |Event name |
+|||
+|Security Audit | Audit Login |
+|Security Audit | Audit Logout |
+|Security Audit | Audit Server Starts And Stops |
+|Progress Reports | Progress Report Begin |
+|Progress Reports | Progress Report End |
+|Progress Reports | Progress Report Current |
+|Queries | Query Begin |
+|Queries | Query End |
+|Commands | Command Begin |
+|Commands | Command End |
+|Errors & Warnings | Error |
+|Discover | Discover End |
+|Notification | Notification |
+|Session | Session Initialize |
+|Locks | Deadlock |
+|Query Processing | VertiPaq SE Query Begin |
+|Query Processing | VertiPaq SE Query End |
+|Query Processing | VertiPaq SE Query Cache Match |
+|Query Processing | Direct Query Begin |
+|Query Processing | Direct Query End |
+
+#### Service
+
+The **Service** category logs the following events:
+
+|Operation name |Occurs when |
+|||
+|ResumeServer | Resume a server |
+|SuspendServer | Pause a server |
+|DeleteServer | Delete a server |
+|RestartServer | User restarts a server through SSMS or PowerShell |
+|GetServerLogFiles | User exports server log through PowerShell |
+|ExportModel | User exports a model in the portal by using Open in Visual Studio |
+
+### Analysis Services
+microsoft.analysisservices/servers
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics#columns)
+
+When you set up logging, selecting **AllMetrics** logs the [server metrics](#metrics) to the [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics) table. If you're using query [scale-out](analysis-services-scale-out.md) and need to separate metrics for each read replica, use the [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics) table instead, where **OperationName** is equal to **LogMetric**.
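As an illustration, you can run a quick per-replica check from the Azure CLI; this is a sketch, and the workspace GUID is a hypothetical placeholder.

```bash
# Sketch: return recent LogMetric rows; ServerName_s includes the replica number when scaled out.
az monitor log-analytics query \
  --workspace "<log-analytics-workspace-guid>" \
  --analytics-query 'AzureDiagnostics
    | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and OperationName == "LogMetric"
    | project TimeGenerated, ServerName_s, name_s, value_s
    | take 50' \
  --timespan PT1H
```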
+
+- [Microsoft.AnalysisServices resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftanalysisservices)
+
+## Related content
+
+- See [Monitor Analysis Services](monitor-analysis-services.md) for a description of monitoring Analysis Services.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+
analysis-services Monitor Analysis Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/monitor-analysis-services.md
+
+ Title: Monitor Azure Analysis Services
+description: Start here to learn how to monitor Azure Analysis Services.
Last updated : 02/28/2024+++++++
+# Monitor Azure Analysis Services
++
+Analysis Services also provides several non-Azure Monitor monitoring mechanisms:
+
+- SQL Server Profiler, installed with SQL Server Management Studio (SSMS), captures data about engine process events such as the start of a batch or a transaction, enabling you to monitor server and database activity. For more information, see [Monitor Analysis Services with SQL Server Profiler](/analysis-services/instances/use-sql-server-profiler-to-monitor-analysis-services).
+- Extended Events (xEvents) is a light-weight tracing and performance monitoring system that uses few system resources, making it an ideal tool for diagnosing problems on both production and test servers. For more information, see [Monitor Analysis Services with SQL Server Extended Events](/analysis-services/instances/monitor-analysis-services-with-sql-server-extended-events).
+- Dynamic Management Views (DMVs) use SQL syntax to interface schema rowsets that return metadata and monitoring information about server instances. For more information, see [Use Dynamic Management Views (DMVs) to monitor Analysis Services](/analysis-services/instances/use-dynamic-management-views-dmvs-to-monitor-analysis-services).
+
+For more information about the resource types for Analysis Services, see [Analysis Services monitoring data reference](monitor-analysis-services-reference.md).
++
+<a name="server-metrics"></a>
+For a list of available metrics for Analysis Services, see [Analysis Services monitoring data reference](monitor-analysis-services-reference.md#metrics).
+
+- For the available resource log categories, associated Log Analytics tables, and the log schemas for Analysis Services, see [Analysis Services monitoring data reference](monitor-analysis-services-reference.md#resource-logs).
+## Analysis Services resource logs
+
+When you set up logging for Analysis Services, you can select **Engine** or **Service** events to log, or select **AllMetrics** to log metrics data. For more information, see [Supported resource logs for Microsoft.AnalysisServices/servers](monitor-analysis-services-reference.md#supported-resource-logs-for-microsoftanalysisservicesservers).
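If you prefer to script this configuration, the following Azure CLI sketch creates a diagnostic setting with the categories described above; the resource ID and workspace ID are hypothetical placeholders.

```bash
# Sketch: send Engine and Service events plus AllMetrics to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name "aas-diagnostics" \
  --resource "<server-resource-id>" \
  --workspace "<log-analytics-workspace-id>" \
  --logs '[{"category":"Engine","enabled":true},{"category":"Service","enabled":true}]' \
  --metrics '[{"category":"AllMetrics","enabled":true}]'
```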
++++
+### Analyze Analysis Services metrics
+
+You can use Analysis Services metrics in Azure Monitor Metrics Explorer to help you monitor the performance and health of your servers. For example, you can monitor memory and CPU usage, number of client connections, and query resource consumption.
+
+To determine if scale-out for your server is necessary, monitor your server **QPU** and **Query pool job queue length** metrics. A good metric to watch is average QPU by ServerResourceType, which compares average QPU for the primary server with the query pool. For detailed instructions on how to scale out your server based on metrics data, see [Azure Analysis Services scale-out](analysis-services-scale-out.md).
+
+![Query scale out metrics](media/analysis-services-scale-out/aas-scale-out-monitor.png)
+
+For a complete listing of metrics collected for Analysis Services, see [Analysis Services monitoring data reference](monitor-analysis-services-reference.md#metrics).
+
+### Analyze logs in Log Analytics workspace
+
+Metrics and server events are integrated with xEvents in your Log Analytics workspace resource for side-by-side analysis. Log Analytics workspace can also be configured to receive events from other Azure services, providing a holistic view of diagnostic logging data across your architecture.
+
+To view your diagnostic data, in Log Analytics workspace, open **Logs** from the left menu.
+
+![Screenshot showing log Search options in the Azure portal.](./media/analysis-services-logging/aas-logging-open-log-search.png)
+
+In the query builder, expand **LogManagement** > **AzureDiagnostics**. AzureDiagnostics includes **Engine** and **Service** events. Notice a query is created on the fly. The **EventClass\_s** field contains xEvent names, which might look familiar if you use xEvents for on-premises logging. Select **EventClass\_s** or one of the event names, and Log Analytics workspace continues constructing a query. Be sure to save your queries to reuse later.
+
+<a name="example-queries"></a>
+
+The following queries are useful for monitoring your Analysis Services server.
+
+#### Example 1
+
+The following query returns durations for each query end/refresh end event for a model database and server. If scaled out, the results are broken out by replica because the replica number is included in ServerName_s. Grouping by RootActivityId_g reduces the row count retrieved from the Azure Diagnostics REST API and helps stay within the limits as described in Log Analytics Rate limits.
+
+```Kusto
+let window = AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and Resource =~ "MyServerName" and DatabaseName_s =~ "MyDatabaseName" ;
+window
+| where OperationName has "QueryEnd" or (OperationName has "CommandEnd" and EventSubclass_s == 38)
+| where extract(@"([^,]*)", 1,Duration_s, typeof(long)) > 0
+| extend DurationMs=extract(@"([^,]*)", 1,Duration_s, typeof(long))
+| project StartTime_t,EndTime_t,ServerName_s,OperationName,RootActivityId_g,TextData_s,DatabaseName_s,ApplicationName_s,Duration_s,EffectiveUsername_s,User_s,EventSubclass_s,DurationMs
+| order by StartTime_t asc
+```
+
+#### Example 2
+
+The following query returns memory and QPU consumption for a server. If scaled out, the results are broken out by replica because the replica number is included in ServerName_s.
+
+```Kusto
+let window = AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and Resource =~ "MyServerName";
+window
+| where OperationName == "LogMetric"
+| where name_s == "memory_metric" or name_s == "qpu_metric"
+| project ServerName_s, TimeGenerated, name_s, value_s
+| summarize avg(todecimal(value_s)) by ServerName_s, name_s, bin(TimeGenerated, 1m)
+| order by TimeGenerated asc
+```
+
+#### Example 3
+
+The following query returns the Rows read/sec Analysis Services engine performance counters for a server.
+
+```Kusto
+let window = AzureDiagnostics
+ | where ResourceProvider == "MICROSOFT.ANALYSISSERVICES" and Resource =~ "MyServerName";
+window
+| where OperationName == "LogMetric"
+| where parse_json(tostring(parse_json(perfobject_s).counters))[0].name == "Rows read/sec"
+| extend Value = tostring(parse_json(tostring(parse_json(perfobject_s).counters))[0].value)
+| project ServerName_s, TimeGenerated, Value
+| summarize avg(todecimal(Value)) by ServerName_s, bin(TimeGenerated, 1m)
+| order by TimeGenerated asc
+```
++
+### Analysis Services alert rules
+The following table lists some common and popular alert rules for Analysis Services.
+
+| Alert type | Condition | Description |
+|:|:|:|
+|Metric | Whenever the maximum qpu_metric is greater than dynamic threshold. | If your QPU regularly maxes out, it means the number of queries against your models is exceeding the QPU limit for your plan.|
+|Metric | Whenever the maximum QueryPoolJobQueueLength is greater than dynamic threshold. | The number of queries in the query thread pool queue exceeds available QPU.|
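As an example, an alert similar to the first row can be created from the Azure CLI with a dynamic threshold; this is a sketch with hypothetical names, and you should tune sensitivity and evaluation windows to your workload.

```bash
# Sketch: alert when maximum qpu_metric exceeds a dynamic threshold.
az monitor metrics alert create \
  --name "aas-qpu-dynamic" \
  --resource-group "<resource-group>" \
  --scopes "<server-resource-id>" \
  --condition "max qpu_metric > dynamic medium 2 of 4" \
  --description "QPU is regularly approaching the limit for the plan"
```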
++
+## Related content
+
+- See [Analysis Services monitoring data reference](monitor-analysis-services-reference.md) for a reference of the metrics, logs, and other important values created for Analysis Services.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Migration requires a three to six hour service window for App Service Environmen
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3. - All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 tier. - All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.
- - If you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the[migration-alternatives](migration-alternatives.md#migrate-manually).
+ - If you can't support downtime, see the [side-by-side migration feature](side-by-side-migrate.md) or the [migration alternatives](migration-alternatives.md#migrate-manually).
- The public addresses that are used by the App Service Environment change to the IPs generated during the IP generation step. As in the IP generation step, you can't scale, modify your App Service Environment, or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment are running on the new App Service Environment v3.
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
The streamed logs look like this:
Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command, ['az webapp list-runtimes --os linux'](/cli/azure/webapp#az-webapp-list-runtimes). If those images don't satisfy your needs, you can build and deploy a custom image.
+> [!NOTE]
+> Containers should target the x86-64 architecture. ARM64 isn't supported.
+>
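If you build images on an ARM64 workstation (for example, Apple silicon), you can still produce an x86-64 image with Docker buildx. The registry and image names in this sketch are hypothetical.

```bash
# Sketch: build and push a linux/amd64 (x86-64) image from an ARM64 machine.
docker buildx build \
  --platform linux/amd64 \
  --tag <registry-name>.azurecr.io/<image-name>:latest \
  --push .
```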
+ In this tutorial, you learn how to: > [!div class="checklist"]
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Previously updated : 01/10/2024 Last updated : 02/28/2024
Azure generates the activity log by default. The logs are preserved for 90 days
### Access log
-The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown below.
+The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown below.
#### For Application Gateway and WAF v2 SKU
+> [!NOTE]
+> For TLS/TCP proxy related information, visit [data reference](monitor-application-gateway-reference.md#tlstcp-proxy-logs).
+ |Value |Description | ||| |instanceId | Application Gateway instance that served the request. |
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
Application Gateway publishes data points to [Azure Monitor](../azure-monitor/ov
## Metrics supported by Application Gateway V2 SKU
+> [!NOTE]
+> For TLS/TCP proxy related information, visit [data reference](monitor-application-gateway-reference.md#tlstcp-proxy-metrics).
+ ### Timing metrics Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds.
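For example, you can retrieve a timing metric from the Azure CLI; this sketch uses a hypothetical gateway resource ID and assumes the v2 metric name `ApplicationGatewayTotalTime`.

```bash
# Sketch: average total time taken by Application Gateway to serve requests, per minute.
az monitor metrics list \
  --resource "<application-gateway-resource-id>" \
  --metric ApplicationGatewayTotalTime \
  --interval PT1M \
  --aggregation Average \
  --output table
```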
application-gateway How To Backend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-backend-mtls-gateway-api.md
This document helps set up an example application that uses the following resources from Gateway API. Steps are provided to: - Create a [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) resource with one HTTPS listener.-- Create an [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) resource that references a backend service.
+- Create an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) resource that references a backend service.
- Create a [BackendTLSPolicy](api-specification-kubernetes.md#alb.networking.azure.io/v1.BackendTLSPolicy) resource that has a client and CA certificate for the backend service referenced in the HTTPRoute. ## Background
See the following figure:
## Prerequisites
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
+1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application:
Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate backend mutual authentication (mTLS).
See the following figure:
This command creates the following on your cluster:
- - a namespace called `test-infra`
- - one service called `mtls-app` in the `test-infra` namespace
- - one deployment called `mtls-app` in the `test-infra` namespace
- - one config map called `mtls-app-nginx-cm` in the `test-infra` namespace
- - four secrets called `backend.com`, `frontend.com`, `gateway-client-cert`, and `ca.bundle` in the `test-infra` namespace
+ - A namespace called `test-infra`
+ - One service called `mtls-app` in the `test-infra` namespace
+ - One deployment called `mtls-app` in the `test-infra` namespace
+ - One config map called `mtls-app-nginx-cm` in the `test-infra` namespace
+ - Four secrets called `backend.com`, `frontend.com`, `gateway-client-cert`, and `ca.bundle` in the `test-infra` namespace
## Deploy the required Gateway API resources # [ALB managed deployment](#tab/alb-managed)
-Create a gateway:
+Create a gateway
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
EOF
1. Set the following environment variables
-```bash
-RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
-RESOURCE_NAME='alb-test'
+ ```bash
+ RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+ RESOURCE_NAME='alb-test'
-RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
-FRONTEND_NAME='frontend'
-az network alb frontend create -g $RESOURCE_GROUP -n $FRONTEND_NAME --alb-name $AGFC_NAME
-```
+ RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+ FRONTEND_NAME='frontend'
+ az network alb frontend create -g $RESOURCE_GROUP -n $FRONTEND_NAME --alb-name $RESOURCE_NAME
+ ```
2. Create a Gateway
-```bash
-kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
-kind: Gateway
-metadata:
- name: gateway-01
- namespace: test-infra
- annotations:
- alb.networking.azure.io/alb-id: $RESOURCE_ID
-spec:
- gatewayClassName: azure-alb-external
- listeners:
- - name: https-listener
- port: 443
- protocol: HTTPS
- allowedRoutes:
- namespaces:
- from: Same
- tls:
- mode: Terminate
- certificateRefs:
- - kind : Secret
- group: ""
- name: frontend.com
- addresses:
- - type: alb.networking.azure.io/alb-frontend
- value: $FRONTEND_NAME
-EOF
-```
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: frontend.com
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+ EOF
+ ```
-Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
```bash kubectl get gateway gateway-01 -n test-infra -o yaml ```
-Example output of successful gateway creation.
+Example output of successful gateway creation:
```yaml status:
status:
kind: HTTPRoute ```
-Once the gateway has been created, create an HTTPRoute resource.
+Once the gateway is created, create an HTTPRoute resource.
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: https-route
spec:
EOF ```
-Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+Once the HTTPRoute resource is created, ensure the route is _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
```bash kubectl get httproute https-route -n test-infra -o yaml ```
-Verify the status of the Application Gateway for Containers resource has been successfully updated.
+Verify the status of the Application Gateway for Containers resource is successfully updated.
```yaml status:
spec:
EOF ```
-Once the BackendTLSPolicy object has been created check the status on the object to ensure that the policy is valid.
+Once the BackendTLSPolicy object is created, check the status on the object to ensure that the policy is valid:
```bash kubectl get backendtlspolicy -n test-infra mtls-app-tls-policy -o yaml ```
-Example output of valid BackendTLSPolicy object creation.
+Example output of valid BackendTLSPolicy object creation:
```yaml status:
status:
## Test access to the application
-Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN:
```bash fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
application-gateway How To Header Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-gateway-api.md
Application Gateway for Containers allows you to rewrite HTTP headers of client
## Usage details
-Header rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPURLRewriteFilter) as defined by Kubernetes Gateway API.
+Header rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1.HTTPURLRewriteFilter) as defined by Kubernetes Gateway API.
## Background
Create a gateway:
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
FRONTEND_NAME='frontend'
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
This example also demonstrates addition of a new header called `AGC-Header-Add`
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: header-rewrite-route
application-gateway How To Multiple Site Hosting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-gateway-api.md
Title: Multiple site hosting with Application Gateway for Containers - Gateway API
+ Title: Multi-site hosting with Application Gateway for Containers - Gateway API
description: Learn how to host multiple sites with Application Gateway for Containers using the Gateway API.
Last updated 02/27/2024
-# Multiple site hosting with Application Gateway for Containers - Gateway API
+# Multi-site hosting with Application Gateway for Containers - Gateway API
This document helps you set up an example application that uses the resources from Gateway API to demonstrate hosting multiple sites on the same Kubernetes Gateway resource / Application Gateway for Containers frontend. Steps are provided to:+ - Create a [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) resource with one HTTP listener. - Create two [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) resources that each reference a unique backend service.
Application Gateway for Containers enables multi-site hosting by allowing you to
## Prerequisites
-1. If you follow the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If you follow the ALB managed deployment strategy, ensure provisioning of your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
- Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
- ```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
- ```
-
- This command creates the following on your cluster:
- - a namespace called `test-infra`
- - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+1. If you used the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If you used the ALB managed deployment strategy, ensure provisioning of your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application:<br>
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
+
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+
+ - A namespace called `test-infra`
+ - Two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - Two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Gateway API resources # [ALB managed deployment](#tab/alb-managed) 1. Create a Gateway+ ```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
EOF
1. Set the following environment variables
-```bash
-RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
-RESOURCE_NAME='alb-test'
+ ```bash
+ RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+ RESOURCE_NAME='alb-test'
-RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
-FRONTEND_NAME='frontend'
-```
+ RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+ FRONTEND_NAME='frontend'
+ ```
2. Create a Gateway
-```bash
-kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
-kind: Gateway
-metadata:
- name: gateway-01
- namespace: test-infra
- annotations:
- alb.networking.azure.io/alb-id: $RESOURCE_ID
-spec:
- gatewayClassName: azure-alb-external
- listeners:
- - name: http-listener
- port: 80
- protocol: HTTP
- allowedRoutes:
- namespaces:
- from: Same
- addresses:
- - type: alb.networking.azure.io/alb-frontend
- value: $FRONTEND_NAME
-EOF
-```
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http-listener
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+ EOF
+ ```
Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.+ ```bash kubectl get gateway gateway-01 -n test-infra -o yaml ``` Example output of successful gateway creation.+ ```yaml status: addresses:
status:
``` Once the gateway is created, create two HTTPRoute resources for `contoso.com` and `fabrikam.com` domain names. Each domain forwards traffic to a different backend service.+ ```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: contoso-route
spec:
- name: backend-v1 port: 8080
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: fabrikam-route
application-gateway How To Multiple Site Hosting Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-ingress-api.md
# Multi-site hosting with Application Gateway for Containers - Ingress API This document helps you set up an example application that uses the Ingress API to demonstrate hosting multiple sites on the same Kubernetes Ingress resource / Application Gateway for Containers frontend. Steps are provided to:+ - Create an [Ingress](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#ingressrule-v1-networking-k8s-io) resource with two hosts. ## Background
Application Gateway for Containers enables multi-site hosting by allowing you to
## Prerequisites
-1. If you follow the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If you follow the ALB managed deployment strategy, ensure provisioning of your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
- Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
- ```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
- ```
+1. If you used the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If you used the ALB managed deployment strategy, ensure provisioning of your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application:<br>
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.
+
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
- This command creates the following on your cluster:
- - a namespace called `test-infra`
- - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ This command creates the following on your cluster:
+
+ - A namespace called `test-infra`
+ - Two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - Two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Ingress resource # [ALB managed deployment](#tab/alb-managed) 1. Create an Ingress+ ```bash kubectl apply -f - <<EOF apiVersion: networking.k8s.io/v1
spec:
EOF ``` - # [Bring your own (BYO) deployment](#tab/byo) 1. Set the following environment variables
FRONTEND_NAME='frontend'
``` 2. Create an Ingress+ ```bash kubectl apply -f - <<EOF apiVersion: networking.k8s.io/v1
EOF
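For reference, the host-based rules of such an Ingress generally take the shape sketched below. This is only a sketch reusing the `ingress-01` name and the `backend-v1`/`backend-v2` services on port 8080 from the sample deployment; the `ingressClassName` value and any ALB-specific annotations are assumptions here and depend on your deployment strategy, so take them from the full examples above.

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-01
  namespace: test-infra
spec:
  ingressClassName: azure-alb-external  # assumed class name; confirm against the full examples for your deployment strategy
  rules:
  - host: contoso.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-v1
            port:
              number: 8080
  - host: fabrikam.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-v2
            port:
              number: 8080
EOF
```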
Once the ingress resource is created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.+ ```bash kubectl get ingress ingress-01 -n test-infra -o yaml ``` Example output of successful Ingress creation.+ ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress
curl -k --resolve contoso.com:80:$fqdnIp http://contoso.com
``` Via the response we should see:+ ```json { "path": "/",
curl -k --resolve fabrikam.com:80:$fqdnIp http://fabrikam.com
``` Via the response we should see:+ ```json { "path": "/",
application-gateway How To Ssl Offloading Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-gateway-api.md
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
```bash kubectl apply -f - <<EOF
- apiVersion: gateway.networking.k8s.io/v1beta1
+ apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
1. Set the following environment variables
-```bash
-RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
-RESOURCE_NAME='alb-test'
+ ```bash
+ RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+ RESOURCE_NAME='alb-test'
-RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
-FRONTEND_NAME='frontend'
-```
+ RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+ FRONTEND_NAME='frontend'
+ ```
2. Create a Gateway
-```bash
-kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
-kind: Gateway
-metadata:
- name: gateway-01
- namespace: test-infra
- annotations:
- alb.networking.azure.io/alb-id: $RESOURCE_ID
-spec:
- gatewayClassName: azure-alb-external
- listeners:
- - name: https-listener
- port: 443
- protocol: HTTPS
- allowedRoutes:
- namespaces:
- from: Same
- tls:
- mode: Terminate
- certificateRefs:
- - kind : Secret
- group: ""
- name: listener-tls-secret
- addresses:
- - type: alb.networking.azure.io/alb-frontend
- value: $FRONTEND_NAME
-EOF
-```
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: listener-tls-secret
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+ EOF
+ ```
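The `certificateRefs` entry above points at a Kubernetes secret named `listener-tls-secret`. If the sample deployment you applied doesn't already create that secret, or you're bringing your own certificate, a typical way to create an equivalent secret is sketched below; `server.crt` and `server.key` are placeholder file names for your own certificate and private key.

```bash
# Create the TLS secret referenced by the HTTPS listener.
# server.crt and server.key are placeholders for your certificate and key files.
kubectl create secret tls listener-tls-secret \
  --namespace test-infra \
  --cert=server.crt \
  --key=server.key
```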
Once the gateway is created, create an HTTPRoute resource.
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: https-route
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
This document helps set up an example application that uses the following resources from Gateway API: - [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) - creating a gateway with one http listener-- [HTTPRoute](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/httproute/) - creating an HTTP route that references two backend services having different weights
+- [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute/) - creating an HTTP route that references two backend services having different weights
## Background
Application Gateway for Containers enables you to set weights and shift traffic
## Prerequisites
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
+3. Deploy sample HTTP application:<br>
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support.
- ```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
- ```
+
+ ```bash
+ kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ ```
- This command creates the following on your cluster:
- - a namespace called `test-infra`
- - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ This command creates the following on your cluster:
+
+ - A namespace called `test-infra`
+ - Two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - Two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Gateway API resources
Create a gateway:
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
EOF
1. Set the following environment variables
-```bash
-RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
-RESOURCE_NAME='alb-test'
+ ```bash
+ RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+ RESOURCE_NAME='alb-test'
-RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
-FRONTEND_NAME='frontend'
-```
+ RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+ FRONTEND_NAME='frontend'
+ ```
2. Create a Gateway
-```bash
-kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
-kind: Gateway
-metadata:
- name: gateway-01
- namespace: test-infra
- annotations:
- alb.networking.azure.io/alb-id: $RESOURCE_ID
-spec:
- gatewayClassName: azure-alb-external
- listeners:
- - name: http
- port: 80
- protocol: HTTP
- allowedRoutes:
- namespaces:
- from: Same
- addresses:
- - type: alb.networking.azure.io/alb-frontend
- value: $FRONTEND_NAME
-EOF
-```
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: http
+ port: 80
+ protocol: HTTP
+ allowedRoutes:
+ namespaces:
+ from: Same
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+ EOF
+ ```
-Once the gateway resource has been created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+Once the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
```bash kubectl get gateway gateway-01 -n test-infra -o yaml
status:
kind: HTTPRoute ```
-Once the gateway has been created, create an HTTPRoute
+Once the gateway is created, create an HTTPRoute
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: traffic-split-route
spec:
EOF ```
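The spec of the traffic-split route is elided above. As a rough, illustrative sketch only (not the article's exact manifest), weighted splitting between the two sample services typically looks like the following; the 50/50 weights are placeholders you would adjust for your rollout.

```bash
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: traffic-split-route
  namespace: test-infra
spec:
  parentRefs:
  - name: gateway-01
  rules:
  - backendRefs:
    # Illustrative 50/50 split between the two sample backends.
    - name: backend-v1
      port: 8080
      weight: 50
    - name: backend-v2
      port: 8080
      weight: 50
EOF
```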
-Once the HTTPRoute resource has been created, ensure the route has been _Accepted_ and the Application Gateway for Containers resource has been _Programmed_.
+Once the HTTPRoute resource is created, ensure the route is _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
```bash kubectl get httproute traffic-split-route -n test-infra -o yaml
status:
## Test Access to the Application
-Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the command below to get the FQDN.
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN:
```bash fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
application-gateway How To Url Redirect Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-gateway-api.md
Application Gateway for Containers allows you to return a redirect response to t
## Usage details
-URL redirects take advantage of the [RequestRedirect rule filter](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1beta1.HTTPRequestRedirectFilter) as defined by Kubernetes Gateway API.
+URL redirects take advantage of the [RequestRedirect rule filter](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.HTTPRequestRedirectFilter) as defined by Kubernetes Gateway API.
## Redirection+ A redirect sets the response status code returned to clients to understand the purpose of the redirect. The following types of redirection are supported: -- 301 (Moved permanently): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource uses one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
+- 301 (Moved permanently): Indicates that the target resource is assigned a new permanent URI. Future references to this resource use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
- 302 (Found): Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests. ## Redirection capabilities
A redirect sets the response status code returned to clients to understand the p
- Hostname redirection matches the fully qualified domain name (fqdn) of the request. This is commonly observed in redirecting an old domain name to a new domain name; such as `contoso.com` to `fabrikam.com`. - Path redirection has two different variants: `prefix` and `full`.
- - `Prefix` redirection type will redirect all requests starting with a defined value. For example, a prefix of /shop would match /shop and any text after. For example, /shop, /shop/checkout, and /shop/item-a would all redirect to /shop as well.
- - `Full` redirection type matches an exact value. For example, /shop could redirect to /store, but /shop/checkout wouldn't redirect to /store.
+  - `Prefix` redirection type redirects all requests starting with a defined value. For example, a prefix of /shop matches /shop and any text after it, so /shop, /shop/checkout, and /shop/item-a would all redirect to /shop.
+  - `Full` redirection type matches an exact value. For example, /shop could redirect to /store, but /shop/checkout wouldn't redirect to /store. (A sketch of a path redirect follows this list.)
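For example, the _summer-promotion_ scenario described below (redirecting `contoso.com/summer-promotion` to `contoso.com/shop/category/5`) can be expressed with a `RequestRedirect` filter roughly as sketched here. The route name and the 302 status code are illustrative; the full, authoritative examples appear later in this article.

```bash
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: summer-promotion-redirect-sketch  # illustrative name
  namespace: test-infra
spec:
  parentRefs:
  - name: gateway-01
  hostnames:
  - "contoso.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /summer-promotion
    filters:
    - type: RequestRedirect
      requestRedirect:
        path:
          type: ReplaceFullPath
          replaceFullPath: /shop/category/5
        statusCode: 302
EOF
```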
The following figure illustrates an example of a request destined for _contoso.com/summer-promotion_ being redirected to _contoso.com/shop/category/5_. In addition, a second request initiated to contoso.com via http protocol is returned a redirect to initiate a new connection to its https variant.
The following figure illustrates an example of a request destined for _contoso.c
## Prerequisites
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
+1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application:
Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.
The following figure illustrates an example of a request destined for _contoso.c
```bash kubectl apply -f - <<EOF
- apiVersion: gateway.networking.k8s.io/v1beta1
+ apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
The following figure illustrates an example of a request destined for _contoso.c
```bash kubectl apply -f - <<EOF
- apiVersion: gateway.networking.k8s.io/v1beta1
+ apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
Create an HTTPRoute resource for `contoso.com` that handles traffic received via
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: https-contoso
Once the gateway is created, create an HTTPRoute resource for `contoso.com` with
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: http-to-https-contoso-redirect
Create an HTTPRoute resource for `contoso.com` that handles a redirect for the p
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: summer-promotion-redirect
application-gateway How To Url Redirect Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-ingress-api.md
URL redirects take advantage of the [RequestRedirect rule filter](https://gatewa
A redirect sets the response status code returned to clients to understand the purpose of the redirect. The following types of redirection are supported: -- 301 (Moved permanently): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
+- 301 (Moved permanently): Indicates that the target resource is assigned a new permanent URI. Any future references to this resource use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
- 302 (Found): Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests. ## Redirection capabilities
A redirect sets the response status code returned to clients to understand the p
- Hostname redirection matches the fully qualified domain name (fqdn) of the request. This is commonly observed in redirecting an old domain name to a new domain name; such as `contoso.com` to `fabrikam.com`. - Path redirection has two different variants: `prefix` and `full`.
- - `Prefix` redirection type will redirect all requests starting with a defined value. For example, a prefix of /shop would match /shop and any text after. For example, /shop, /shop/checkout, and /shop/item-a would all redirect to /shop as well.
+  - `Prefix` redirection type redirects all requests starting with a defined value. For example, a prefix of /shop matches /shop and any text after it, so /shop, /shop/checkout, and /shop/item-a would all redirect to /shop.
- `Full` redirection type matches an exact value. For example, /shop could redirect to /store, but /shop/checkout wouldn't redirect to /store. The following figure illustrates an example of a request destined for _contoso.com/summer-promotion_ being redirected to _contoso.com/shop/category/5_. In addition, a second request initiated to contoso.com via http protocol is returned a redirect to initiate a new connection to its https variant.
The following figure illustrates an example of a request destined for _contoso.c
## Prerequisites
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
+1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application:
Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.
application-gateway How To Url Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md
Application Gateway for Containers allows you to rewrite the URL of a client req
## Usage details
-URL Rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/references/spec/#gateway.networking.k8s.io/v1beta1.HTTPURLRewriteFilter) as defined by Kubernetes Gateway API.
+URL Rewrites take advantage of [filters](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.HTTPURLRewriteFilter) as defined by Kubernetes Gateway API.
## Background
URL rewrite enables you to translate an incoming request to a different URL when
The following figure illustrates an example of a request destined for _contoso.com/shop_ being rewritten to _contoso.com/ecommerce_. The request is initiated to the backend target by Application Gateway for Containers:
-[ ![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png) ](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox)
-
+[![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png)](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox)
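As a rough sketch of how the /shop-to-/ecommerce rewrite in the figure is expressed with the Gateway API `URLRewrite` filter (the route name and backend port are illustrative; the complete example appears later in this article):

```bash
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: rewrite-sketch  # illustrative name
  namespace: test-infra
spec:
  parentRefs:
  - name: gateway-01
  hostnames:
  - "contoso.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /shop
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /ecommerce
    backendRefs:
    - name: echo
      port: 80  # illustrative; use the echo service's actual port
EOF
```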
## Prerequisites
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
3. Deploy sample HTTP application Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.
The following figure illustrates an example of a request destined for _contoso.c
This command creates the following on your cluster:
- - a namespace called `test-infra`
- - one service called `echo` in the `test-infra` namespace
- - one deployment called `echo` in the `test-infra` namespace
- - one secret called `listener-tls-secret` in the `test-infra` namespace
+ - A namespace called `test-infra`
+ - One service called `echo` in the `test-infra` namespace
+ - One deployment called `echo` in the `test-infra` namespace
+ - One secret called `listener-tls-secret` in the `test-infra` namespace
## Deploy the required Gateway API resources
The following figure illustrates an example of a request destined for _contoso.c
```bash kubectl apply -f - <<EOF
- apiVersion: gateway.networking.k8s.io/v1beta1
+ apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
The following figure illustrates an example of a request destined for _contoso.c
```bash kubectl apply -f - <<EOF
- apiVersion: gateway.networking.k8s.io/v1beta1
+ apiVersion: gateway.networking.k8s.io/v1
kind: Gateway metadata: name: gateway-01
Once the gateway is created, create an HTTPRoute resource for `contoso.com`. Th
```bash kubectl apply -f - <<EOF
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute metadata: name: rewrite-example
application-gateway How To Url Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-ingress-api.md
The following figure illustrates a request destined for _contoso.com/shop_ being
## Prerequisites
-1. If following the BYO deployment strategy, ensure you have set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
-2. If following the ALB managed deployment strategy, ensure you have provisioned your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provisioned the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
-3. Deploy sample HTTP application
+1. If following the BYO deployment strategy, ensure you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md).
+2. If following the ALB managed deployment strategy, ensure you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and provision the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTP application:<br>
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing. ```bash kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml ``` This command creates the following on your cluster:
- - a namespace called `test-infra`
- - two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
- - two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - A namespace called `test-infra`
+ - Two services called `backend-v1` and `backend-v2` in the `test-infra` namespace
+ - Two deployments called `backend-v1` and `backend-v2` in the `test-infra` namespace
## Deploy the required Ingress API resources
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md
Similarly, if the *Application gateway total time* has a spike but the *Backend
|**Unhealthy host count**|Count|The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.| |**Requests per minute per Healthy Host**|Count|The average number of requests received by each healthy member in a backend pool in a minute. Specify the backend pool using the *BackendPool HttpSettings* dimension.|
-## Application Gateway layer 4 proxy monitoring
+### Backend health API
-### Layer 4 metrics
+See [Application Gateways - Backend Health](/rest/api/application-gateway/application-gateways/backend-health?tabs=HTTP) for details of the API call to retrieve the backend health of an application gateway.
+
+Sample Request:
+```output
+POST https://management.azure.com/subscriptions/subid/resourceGroups/rg/providers/Microsoft.Network/applicationGateways/appgw/backendhealth?api-version=2021-08-01
+```
+
+After sending this POST request, you should see an HTTP 202 Accepted response. In the response headers, find the Location header and send a new GET request using that URL.
+
+```output
+GET https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/region-name/operationResults/GUID?api-version=2021-08-01
+```
+
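If you prefer not to call the REST API directly, the same backend health information can also be retrieved with the Azure CLI. The resource group and gateway names below are placeholders for your own values.

```bash
# Retrieve backend health for an application gateway (rg and appgw are placeholders).
az network application-gateway show-backend-health \
  --resource-group rg \
  --name appgw \
  --output json
```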
+### Application Gateway TLS/TCP proxy monitoring
+
+#### TLS/TCP proxy metrics
With the layer 4 proxy feature now available in Application Gateway, there are some common metrics (which apply to both layer 7 and layer 4) and some layer 4 specific metrics. The following table describes all the metrics that are applicable for layer 4 usage. | Metric | Description | Type | Dimension | |:--|:|:-|:-|
-| Current Connections | The number of active connections: reading, writing, or waiting. The count of current connections established with Application Gateway. | Common | None |
-| New Connections per second | The average number of connections handled per second in last 1 minute. | Common | None |
-| Throughput | The rate of data flow (inBytes+ outBytes) in the last 1 minute. | Common | None |
-| Healthy host count | The number of healthy backend hosts. | Common | BackendSettingsPool |
-| Unhealthy host | The number of unhealthy backend hosts. | Common | BackendSettingsPool |
-| ClientRTT | Average round trip time between clients and Application Gateway. | Common | Listener |
-| Backend Connect Time | Time spent establishing a connection with a backend server. | Common | Listener, BackendServer, BackendPool, BackendSetting |
-| Backend First Byte Response Time | Time interval between start of establishing a connection to backend server and receiving the first byte of data (approximating processing time of backend server). | Common | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
-| Backend Session Duration | The total time of a backend connection. The average time duration from the start of a new connection to its termination. | L4 only | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
-| Connection Lifetime | The total time of a client connection to application gateway. The average time duration from the start of a new connection to its termination in milliseconds. | L4 only | Listener |
+| Current Connections | The number of active connections: reading, writing, or waiting. The count of current connections established with Application Gateway. | Common metric | None |
+| New Connections per second | The average number of connections handled per second during that minute. | Common metric | None |
+| Throughput | The rate of data flow (inBytes+ outBytes) during that minute. | Common metric | None |
+| Healthy host count | The number of healthy backend hosts. | Common metric | BackendSettingsPool |
+| Unhealthy host | The number of unhealthy backend hosts. | Common metric | BackendSettingsPool |
+| ClientRTT | Average round trip time between clients and Application Gateway. | Common metric | Listener |
+| Backend Connect Time | Time spent establishing a connection with a backend server. | Common metric | Listener, BackendServer, BackendPool, BackendSetting |
+| Backend First Byte Response Time | Time interval between start of establishing a connection to backend server and receiving the first byte of data (approximating processing time of backend server). | Common metric | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
+| Backend Session Duration | The total time of a backend connection. The average time duration from the start of a new connection to its termination. | L4-specific | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
+| Connection Lifetime | The total time of a client connection to application gateway. The average time duration from the start of a new connection to its termination in milliseconds. | L4-specific | Listener |
`*` BackendHttpSetting dimension includes both layer 7 and layer 4 backend settings.
-### Layer 4 logs
+#### TLS/TCP proxy logs
Application Gateway's Layer 4 proxy provides log data through access logs. These logs are only generated and published if they are configured in the diagnostic settings of your gateway. - Also see: [Supported categories for Azure Monitor resource logs](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways).
Application Gateway's Layer 4 proxy provides log data through access logs. The
| serverStatus |200 - session completed successfully. 400 - client data could not be parsed. 500 - internal server error. 502 - bad gateway. For example, when an upstream server could not be reached. 503 - service unavailable. For example, if access is limited by the number of connections. | | ResourceId |Application Gateway resource URI |
-### Layer 4 backend health
+### TLS/TCP proxy backend health
Application Gateway's layer 4 proxy provides the capability to monitor the health of individual members of the backend pools through the portal and REST API. ![Screenshot of backend health](./media/monitor-application-gateway-reference/backend-health.png)
-### REST API
-
-See [Application Gateways - Backend Health](/rest/api/application-gateway/application-gateways/backend-health?tabs=HTTP) for details of the API call to retrieve the backend health of an application gateway.
-Sample Request:
-``output
-POST
-https://management.azure.com/subscriptions/subid/resourceGroups/rg/providers/Microsoft.Network/
-applicationGateways/appgw/backendhealth?api-version=2021-08-01
-After
-``
-
-After sending this POST request, you should see an HTTP 202 Accepted response. In the response headers, find the Location header and send a new GET request using that URL.
-
-``output
-GET
-https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/region-name/operationResults/GUID?api-version=2021-08-01
-``
## Application Gateway v1 metrics
application-gateway Multiple Site Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/multiple-site-overview.md
description: This article provides an overview of the Azure Application Gateway
Previously updated : 02/26/2024 Last updated : 02/28/2024
In the Azure portal, under the multi-site listener, you must choose the **Multip
See [create multi-site using Azure PowerShell](tutorial-multiple-sites-powershell.md) or [using Azure CLI](tutorial-multiple-sites-cli.md) for the step-by-step guide on how to configure wildcard host names in a multi-site listener.
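As an illustrative CLI sketch only (all resource names and the frontend port name are placeholders), a multi-site listener with wildcard host names can typically be created as follows; see the linked tutorials for the full, authoritative steps.

```bash
# Create a multi-site HTTP listener with wildcard host names (placeholder resource names).
az network application-gateway http-listener create \
  --resource-group MyResourceGroup \
  --gateway-name MyAppGateway \
  --name multisite-listener \
  --frontend-port appGatewayFrontendPort \
  --host-names "*.contoso.com" "*.fabrikam.com"
```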
-## Multi-site listeners for Application Gateway layer 4 proxy
+## Multi-site listener for TLS and TCP protocol listeners
-Multi-site hosting enables you to configure more than one backend TLS or TCP-based application on the same port of application gateway. This can be achieved by using TLS listeners only. This allows you to configure a more efficient topology for your deployments by adding multiple backend applications on the same port using single application gateway. The traffic for each application can be directed to its own backend pool by providing domain names in the TLS listener.
-
-For example, you can create three multisite listeners each with its own domain (contoso.com, fabrikam.com, and *.adatum.com), and route them to their respective backend pools having different applications. All three domains must point to the frontend IP address of the application gateway. This feature is in preview phase for use with layer 4 proxy.
-
-### Feature information:
--- Multi-site listener allows you to add listeners using the same port number.-- For multisite TLS listeners, Application Gateway uses the Server Name Indication (SNI) value. SNI is primarily used to present clients with the domain server certificate and route a connection to the appropriate backend pool. This is done by picking the common name in TLS handshake data of an incoming connection.-- Application Gateway allows domain-based routing using multisite TLS listener. You can use wildcard characters like asterisk (*) and question mark (?) in the host name, and up to 5 domains per multi-site TLS listener. For example, *.contoso.com.-- The TCP connection inherently has no concept of hostname or domain name. Hence, with Layer 4 proxy the multisite listener isn't supported for TCP listeners.
+The multi-site feature is also available for the layer 4 proxy, but only for its TLS listeners. You can direct the traffic for each application to its own backend pool by providing domain names in the TLS listener. For the multi-site feature to work on TLS listeners, Application Gateway uses the Server Name Indication (SNI) value (clients primarily present the SNI extension to fetch the correct TLS certificate). A multi-site TLS listener picks this SNI value from the TLS handshake data of an incoming connection and routes that connection to the appropriate backend pool. Because a TCP connection inherently has no concept of a hostname or domain name, this capability isn't available for TCP listeners.
## Host headers and Server Name Indication (SNI)
application-gateway Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-bicep.md
description: In this quickstart, you learn how to use Bicep to create an Azure A
Previously updated : 04/14/2022 Last updated : 02/28/2024
In this quickstart, you use Bicep to create an Azure Application Gateway. Then y
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
+![Conceptual diagram of the quickstart setup.](./media/quick-create-portal/application-gateway-qs-resources.png)
+ ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
description: In this quickstart, you learn how to use the Azure portal to create
Previously updated : 11/28/2023 Last updated : 02/28/2024
In this quickstart, you use the Azure portal to create an [Azure Application Gateway](overview.md) and test it to make sure it works correctly. You will assign listeners to ports, create rules, and add resources to a backend pool. For the sake of simplicity, a simple setup is used with a public frontend IP address, a basic listener to host a single site on the application gateway, a basic request routing rule, and two virtual machines (VMs) in the backend pool.
-![Quickstart setup](./media/quick-create-portal/application-gateway-qs-resources.png)
+![Conceptual diagram of the quickstart setup.](./media/quick-create-portal/application-gateway-qs-resources.png)
For more information about the components of an application gateway, see [Application gateway components](application-gateway-components.md).
application-gateway Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-terraform.md
description: In this quickstart, you learn how to use Terraform to create an Azu
Previously updated : 09/26/2023 Last updated : 02/28/2024
In this quickstart, you use Terraform to create an Azure Application Gateway. Th
> * Create an Azure Windows Virtual Machine using [azurerm_windows_virtual_machine](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/windows_virtual_machine) > * Create an Azure Virtual Machine Extension using [azurerm_virtual_machine_extension](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine_extension)
+![Conceptual diagram of the quickstart setup.](./media/quick-create-portal/application-gateway-qs-resources.png)
+ ## Prerequisites - [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Title: Azure Automation runbook types
description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 02/22/2024 Last updated : 02/29/2024
The following are the current limitations and known issues with PowerShell runbo
- Executing child scripts using `.\child-runbook.ps1` is not supported.</br> **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from parent runbook. - When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version: 3.0.0 or higher, you can experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/) and [PackageManagement](/powershell/module/packagemanagement/) modules.
+- When you use the `New-AzAutomationVariable` cmdlet within the Az.Automation module to upload a variable of type **object**, the operation doesn't function as expected.
+
+ **Workaround**: Convert the object to a JSON string using the ConvertTo-Json cmdlet and then upload the variable with the JSON string as its value. This workaround ensures proper handling of the variable within the Azure Automation environment as a JSON string.
+
+  **Example** - Create a PowerShell object that stores information about Azure VMs
+
+ ```azurepowershell
+ # Retrieve Azure virtual machines with status information for the 'northeurope' region
+ $AzVM = Get-AzVM -Status | Where-Object {$_.Location -eq "northeurope"}
+
+ $VMstopatch = @($AzVM).Id
+ # Create an Azure Automation variable (This cmdlet will not fail, but the variable may not work as intended when used in the runbook.)
+ New-AzAutomationVariable -ResourceGroupName "mrg" -AutomationAccountName "mAutomationAccount2" -Name "complex1" -Encrypted $false -Value $VMstopatch
+
+ # Convert the object to a JSON string
+ $jsonString = $VMstopatch | ConvertTo-Json
+
+ # Create an Azure Automation variable with a JSON string value (works effectively within the automation runbook)
+ New-AzAutomationVariable -ResourceGroupName "mrg" -AutomationAccountName "mAutomationAccount2" -Name "complex1" -Encrypted $false -Value $jsonString
+ ```
+
# [PowerShell 5.1](#tab/lps51)
The following are the current limitations and known issues with PowerShell runbo
* A PowerShell runbook can fail if it tries to write a large amount of data to the output stream at once. You can typically work around this issue by having the runbook output just the information needed to work with large objects. For example, instead of using `Get-Process` with no limitations, you can have the cmdlet output just the required parameters as in `Get-Process | Select ProcessName, CPU`. * When you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version: 3.0.0 or higher, you may experience errors. To resolve the issue, ensure that you explicitly upload [PowerShellGet](/powershell/module/powershellget/) and [PackageManagement](/powershell/module/packagemanagement/) modules as well. * If you import module Az.Accounts with version 2.12.3 or newer, ensure that you import the **Newtonsoft.Json** v10 module explicitly if PowerShell 5.1 runbooks have a dependency on this version of the module. The workaround for this issue is to use PowerShell 7.2 runbooks.
+* When you use the `New-AzAutomationVariable` cmdlet within the Az.Automation module to upload a variable of type **object**, the operation doesn't function as expected.
+
+ **Workaround**: Convert the object to a JSON string using the ConvertTo-Json cmdlet and then upload the variable with the JSON string as its value. This workaround ensures proper handling of the variable within the Azure Automation environment as a JSON string.
+
+  **Example** - Create a PowerShell object that stores information about Azure VMs
+
+ ```azurepowershell
+ # Retrieve Azure virtual machines with status information for the 'northeurope' region
+ $AzVM = Get-AzVM -Status | Where-Object {$_.Location -eq "northeurope"}
+
+ $VMstopatch = @($AzVM).Id
+ # Create an Azure Automation variable (This cmdlet will not fail, but the variable may not work as intended when used in the runbook.)
+ New-AzAutomationVariable -ResourceGroupName "mrg" -AutomationAccountName "mAutomationAccount2" -Name "complex1" -Encrypted $false -Value $VMstopatch
+
+ # Convert the object to a JSON string
+ $jsonString = $VMstopatch | ConvertTo-Json
+
+ # Create an Azure Automation variable with a JSON string value (works effectively within the automation runbook)
+ New-AzAutomationVariable -ResourceGroupName "mrg" -AutomationAccountName "mAutomationAccount2" -Name "complex1" -Encrypted $false -Value $jsonString
+ ```
# [PowerShell 7.1](#tab/lps71)
The following are the current limitations and known issues with PowerShell runbo
- When you start PowerShell 7 runbook using the webhook, it auto-converts the webhook input parameter to an invalid JSON. - We recommend that you use [ExchangeOnlineManagement](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true) module version: 3.0.0 or lower because version: 3.0.0 or higher may lead to job failures. - If you import module Az.Accounts with version 2.12.3 or newer, ensure that you import the **Newtonsoft.Json** v10 module explicitly if PowerShell 7.1 runbooks have a dependency on this version of the module. The workaround for this issue is to use PowerShell 7.2 runbooks.
+- When you use the `New-AzAutomationVariable` cmdlet within the Az.Automation module to upload a variable of type **object**, the operation doesn't function as expected.
+
+ **Workaround**: Convert the object to a JSON string using the ConvertTo-Json cmdlet and then upload the variable with the JSON string as its value. This workaround ensures proper handling of the variable within the Azure Automation environment as a JSON string.
+
+  **Example** - Create a PowerShell object that stores information about Azure VMs
+
+ ```azurepowershell
+ # Retrieve Azure virtual machines with status information for the 'northeurope' region
+ $AzVM = Get-AzVM -Status | Where-Object {$_.Location -eq "northeurope"}
+
+ $VMstopatch = @($AzVM).Id
+ # Create an Azure Automation variable (This cmdlet will not fail, but the variable may not work as intended when used in the runbook.)
+ New-AzAutomationVariable -ResourceGroupName "mrg" -AutomationAccountName "mAutomationAccount2" -Name "complex1" -Encrypted $false -Value $VMstopatch
+
+ # Convert the object to a JSON string
+ $jsonString = $VMstopatch | ConvertTo-Json
+
+ # Create an Azure Automation variable with a JSON string value (works effectively within the automation runbook)
+ New-AzAutomationVariable -ResourceGroupName "mrg" -AutomationAccountName "mAutomationAccount2" -Name "complex1" -Encrypted $false -Value $jsonString
+ ```
## PowerShell Workflow runbooks
azure-app-configuration Howto Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-geo-replication.md
configurationBuilder.AddAzureAppConfiguration(options =>
``` > [!NOTE]
-> The automatic replica discovery support is available if you use version **7.1.0-preview** or later of any of the following packages.
+> The automatic replica discovery support is available if you use version **7.1.0** or later of any of the following packages.
> - `Microsoft.Extensions.Configuration.AzureAppConfiguration` > - `Microsoft.Azure.AppConfiguration.AspNetCore` > - `Microsoft.Azure.AppConfiguration.Functions.Worker`
azure-app-configuration Howto Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-set-up-private-access.md
This command will prompt your web browser to launch and load an Azure sign-in pa
1. Select **Next : Virtual Network >**.
- 1. Select an existing **Virtual network** to deploy the private endpoint to. If you don't have a virtual network, [create a virtual network](../private-link/create-private-endpoint-portal.md#create-a-virtual-network-and-bastion-host).
+ 1. Select an existing **Virtual network** to deploy the private endpoint to. If you don't have a virtual network, [create a virtual network](../private-link/create-private-endpoint-portal.md).
1. Select a **Subnet** from the list.
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
In this section, you will create a simple ASP.NET Core web application running i
</div> ```
-1. Create a file named *mysettings.json* at the root of your project directory, and enter the following content.
+1. Create a *config* directory in the root of your project and add a *mysettings.json* file to it with the following content.
```json {
In this section, you will create a simple ASP.NET Core web application running i
// ... ... // Add a JSON configuration source
- builder.Configuration.AddJsonFile("mysettings.json");
+ builder.Configuration.AddJsonFile("config/mysettings.json", reloadOnChange: true, optional: false);
var app = builder.Build();
Add following key-values to the App Configuration store and leave **Label** and
> - The ConfigMap will be reset based on the present data in your App Configuration store if it's deleted or modified by any other means. > - The ConfigMap will be deleted if the App Configuration Kubernetes Provider is uninstalled.
-2. Update the *deployment.yaml* file in the *Deployment* directory to use the ConfigMap `configmap-created-by-appconfig-provider` as a mounted data volume. It is important to ensure that the `volumeMounts.mountPath` matches the `WORKDIR` specified in your *Dockerfile*.
+2. Update the *deployment.yaml* file in the *Deployment* directory to use the ConfigMap `configmap-created-by-appconfig-provider` as a mounted data volume. It's important to ensure that the `volumeMounts.mountPath` matches the `WORKDIR` specified in your *Dockerfile* combined with the *config* directory created earlier.
```yaml apiVersion: apps/v1
Add following key-values to the App Configuration store and leave **Label** and
- containerPort: 80 volumeMounts: - name: config-volume
- mountPath: /app
+ mountPath: /app/config
volumes: - name: config-volume configMap: name: configmap-created-by-appconfig-provider
- items:
- - key: mysettings.json
- path: mysettings.json
``` 3. Run the following command to deploy the changes. Replace the namespace if you are using your existing AKS application.
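Once the updated deployment rolls out, you can optionally confirm that the generated file is mounted at the path the application reads from. The namespace and deployment names below are placeholders for your own values.

```bash
# Verify the ConfigMap-backed settings file is mounted where the app expects it.
kubectl exec -n <namespace> deploy/<deployment-name> -- cat /app/config/mysettings.json
```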
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
This article provides information on troubleshooting and resolving issues that c
### Logs
-For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same management machine that was used to run commands to deploy the Arc resource bridge. If there is a problem collecting logs, most likely the management machine is unable to reach the Appliance VM, and the network administrator needs to allow communication between the management machine to the Appliance VM.
+For issues encountered with Arc resource bridge, collect logs for further investigation using the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs) command. This command needs to be run from the same management machine that was used to run commands to deploy the Arc resource bridge. If there's a problem collecting logs, most likely the management machine is unable to reach the Appliance VM, and the network administrator needs to allow communication between the management machine to the Appliance VM.
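For example, on a VMware deployment, logs are typically collected from the management machine by pointing the command at the appliance VM IP. This is only a sketch; the IP is a placeholder and the exact subcommand depends on your fabric.

```bash
# Collect Arc resource bridge logs by targeting the appliance VM IP (placeholder value).
az arcappliance logs vmware --ip <appliance-vm-ip>
```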
The `az arcappliance logs` command requires SSH to the Azure Arc resource bridge VM. The SSH key is saved to the management machine. To use a different machine to run the logs command, make sure the following files are copied to the machine in the same location:
If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote Pow
Using `az arcappliance` commands from remote PowerShell isn't currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
-### Resource bridge configurations cannot be updated
+### Resource bridge configurations can't be updated
In this release, all the parameters are specified at time of creation. To update the Azure Arc resource bridge, you must delete it and redeploy it again.
To resolve this issue, delete the appliance and update the appliance YAML file.
### Appliance Network Unavailable
-If Arc resource bridge is experiencing a network communication problem or is offline, you may see an "Appliance Network Unavailable" error when trying to perform an action that interacts with the resource bridge or an extension operating on top of the bridge. In general, any network or infrastructure connectivity issue to the appliance VM may cause this error. This error can also surface as "Error while dialing dial tcp xx.xx.xxx.xx:55000: connect: no route to host" and this is typically a network communication problem. The problem could be that communication from the host to the Arc resource bridge VM needs to be opened with the help of your network administrator. It could be that there was a temporary network issue not allowing the host to reach the Arc resource bridge VM and once the network issue is resolved, you can retry the operation. You may also need to check that the appliance VM for Arc resource bridge is not stopped. In the case of Azure Stack HCI, the host storage may be full which has caused the appliance VM to pause and the storage will need to be addressed.
+If Arc resource bridge is experiencing a network communication problem or is offline, you may see an "Appliance Network Unavailable" error when trying to perform an action that interacts with the resource bridge or an extension operating on top of the bridge. In general, any network or infrastructure connectivity issue to the appliance VM may cause this error. This error can also surface as "Error while dialing dial tcp xx.xx.xxx.xx:55000: connect: no route to host", which typically indicates a network communication problem. Communication from the host to the Arc resource bridge VM may need to be opened with the help of your network administrator, or there may have been a temporary network issue that prevented the host from reaching the Arc resource bridge VM; once the network issue is resolved, you can retry the operation. You may also need to check that the appliance VM for Arc resource bridge isn't stopped. In the case of Azure Stack HCI, the host storage may be full, which causes the appliance VM to pause; the storage issue needs to be addressed.
### Connection closed before server preface received
When you run the Azure CLI commands, the following error might be returned: *The
When using the `az arcappliance createConfig` or `az arcappliance run` command, there will be an interactive experience which shows the list of the VMware entities where user can select to deploy the virtual appliance. This list will show all user-created resource pools along with default cluster resource pools, but the default host resource pools aren't listed.
-When the appliance is deployed to a host resource pool, there is no high availability if the host hardware fails. Because of this, we recommend that you don't try to deploy the appliance in a host resource pool.
+When the appliance is deployed to a host resource pool, there's no high availability if the host hardware fails. Because of this, we recommend that you don't try to deploy the appliance in a host resource pool.
### Resource bridge status "Offline" and `provisioningState` "Failed"
When trying to deploy Arc resource bridge, you might see an error that contains
If you receive an error that contains `Not able to connect to https://example.url.com`, check with your network administrator to ensure your network allows all of the required firewall and proxy URLs to deploy Arc resource bridge. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md).
-### .local not supported
+### Http2 server sent GOAWAY
+
+When trying to deploy Arc resource bridge, you might receive an error message similar to:
+
+`"errorResponse": "{\n\"message\": \"Post \\\"https://region.dp.kubernetesconfiguration.azure.com/azure-arc-appliance-k8sagents/GetLatestHelmPackagePath?api-version=2019-11-01-preview\\u0026releaseTrain=stable\\\": http2: server sent GOAWAY and closed the connection; LastStreamID=1, ErrCode=NO_ERROR, debug=\\\"\\\"\"\n}"`
+
+This occurs when a firewall or proxy has SSL/TLS inspection enabled and blocks http2 calls from the machine used to deploy the resource bridge. To confirm this is the problem, run the following PowerShell cmdlet to invoke the web request with http2 (requires PowerShell version 7 or above), replacing the region in the URL and api-version (ex:2019-11-01) with values from the error:
+`Invoke-WebRequest -HttpVersion 2.0 -UseBasicParsing -Uri https://region.dp.kubernetesconfiguration.azure.com/azure-arc-appliance-k8sagents/GetLatestHelmPackagePath?api-version=2019-11-01-preview"&"releaseTrain=stable -Method Post -Verbose`
+
+If the result is `The response ended prematurely while waiting for the next frame from the server`, then the http2 call is being blocked and needs to be allowed. Work with your network administrator to disable the SSL/TLS inspection to allow http2 calls from the machine used to deploy the bridge.
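As an optional cross-check (a sketch only, reusing the placeholder region and api-version from the command above and the same PowerShell version requirement), the same request over HTTP/1.1 should complete normally if only http2 is being blocked:

```powershell
# If this HTTP/1.1 request returns a normal HTTP response (even an error status code)
# instead of ending prematurely, the inspection device is interfering specifically with http2.
Invoke-WebRequest -HttpVersion 1.1 -SkipHttpErrorCheck -UseBasicParsing -Uri "https://region.dp.kubernetesconfiguration.azure.com/azure-arc-appliance-k8sagents/GetLatestHelmPackagePath?api-version=2019-11-01-preview&releaseTrain=stable" -Method Post -Verbose
```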
+
+### .local not supported
When trying to set the configuration for Arc resource bridge, you might receive an error message similar to: `"message": "Post \"https://esx.lab.local/52b-bcbc707ce02c/disk-0.vmdk\": dial tcp: lookup esx.lab.local: no such host"`
For clarity, "management machine" refers to the machine where deployment CLI com
To resolve the error, one or more network misconfigurations might need to be addressed. Follow the steps below to address the most common reasons for this error.
-1. When there is a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig could be empty if the deploy command didn't complete). Problems collecting logs are most likely due to the management machine being unable to reach the Appliance VM.
+1. When there's a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig could be empty if the deploy command didn't complete). Problems collecting logs are most likely due to the management machine being unable to reach the Appliance VM.
Once logs are collected, extract the folder and open kva.log. Review the kva.log for more information on the failure to help pinpoint the cause of the KVA timeout error.
azure-arc Switch To New Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-version.md
Title: Switch to the new version
-description: Learn to switch to the new version of VMware vSphere and use its capabilities
+description: Learn how to switch to the new version of Azure Arc-enabled VMware vSphere and use its capabilities.
Previously updated : 11/15/2023 Last updated : 02/28/2024
On August 21, 2023, we rolled out major changes to **Azure Arc-enabled VMware vSphere**. By switching to the new version, you can use all the Azure management services that are available for Arc-enabled Servers.
+If you onboarded to Azure Arc-enabled VMware vSphere before **August 21, 2023**, and your VMs were Azure-enabled, you'll encounter the following breaking changes:
+
+- For the VMs with Arc agents, starting from **February 27, 2024**, you'll no longer be able to perform any Azure management service-related operations.
+- From **March 15, 2024**, you'll no longer be able to perform any operations on the VMs, except the **Remove from Azure** operation.
+
+To continue using these machines, follow these instructions to switch to the new version.
+ > [!NOTE] > If you're new to Arc-enabled VMware vSphere, you'll be able to leverage the new capabilities by default. To get started with the new version, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). ## Switch to the new version (Existing customer)
-If you've onboarded to **Azure Arc-enabled VMware** before August 21, 2023, for VMs that are Azure-enabled, follow these steps to switch to the new version:
+If you onboarded to **Azure Arc-enabled VMware** before August 21, 2023, for VMs that are Azure-enabled, follow these steps to switch to the new version:
>[!Note] >If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc).
azure-cache-for-redis Cache Configure Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure-role-based-access-control.md
The following list contains some examples of permission strings for various scen
## Configure a custom data access policy for your application
-1. In the Azure portal, select the Azure Cache for Redis instance that you want to configure Microsoft Entra token based authentication for.
+1. In the Azure portal, select the Azure Cache for Redis instance where you want to configure Microsoft Entra token-based authentication.
1. From the Resource menu, select **(PREVIEW) Data Access configuration**.
azure-cache-for-redis Cache How To Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-encryption.md
-# Configure disk encryption for Azure Cache for Redis instances using customer managed keys (preview)
+# Configure disk encryption for Azure Cache for Redis instances using customer managed keys
Data in a Redis server is stored in memory by default. This data isn't encrypted. You can implement your own encryption on the data before writing it to the cache. In some cases, data can reside on-disk, either due to the operations of the operating system, or because of deliberate actions to persist data using [export](cache-how-to-import-export-data.md) or [data persistence](cache-how-to-premium-persistence.md).
Azure Cache for Redis offers platform-managed keys (PMKs), also known as Microsof
| Tier | Basic, Standard, Premium | Enterprise, Enterprise Flash | |:-:||| |Microsoft managed keys (MMK) | Yes | Yes |
-|Customer managed keys (CMK) | No | Yes (preview) |
+|Customer managed keys (CMK) | No | Yes |
> [!WARNING] > By default, all Azure Cache for Redis tiers use Microsoft managed keys to encrypt disks mounted to cache instances. However, in the Basic and Standard tiers, the C0 and C1 SKUs do not support any disk encryption.
azure-cache-for-redis Cache Tutorial Functions Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md
You need to install `Microsoft.Azure.WebJobs.Extensions.Redis`, the NuGet packag
Install this package by going to the **Terminal** tab in VS Code and entering the following command: ```terminal
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --version 0.3.1-preview
``` ## Configure the cache
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
ms.custom: subject-rbac-steps
Perform the steps in this article in sequence to install the Start/Stop VMs v2 feature. After completing the setup process, configure the schedules to customize it to your requirements.
-## Permissions considerations
+## Permissions and Policy considerations
Keep the following considerations in mind before and during deployment:
Keep the following considerations in mind before and during deployment:
+ When managing a Start/Stop v2 solution, you should consider the permissions of users to the Start/Stop v2 solution, particularly when whey don't have permission to directly modify the target virtual machines. ++ When you deploy the Start/Stop v2 solution to a new or existing resource group, a tag named **SolutionName** with a value of **StartStopV2** is added to resource group and to its resources that are deployed by Start/Stop v2. Any other tags on these resources are removed. If you have an Azure policy that denies management operations based on resource tags, you must allow management operations for resources that contain only this tag.++ ## Deploy feature The deployment is initiated from the [Start/Stop VMs v2 GitHub organization](https://github.com/microsoft/startstopv2-deployments/blob/main/README.md). While this feature is intended to manage all of your VMs in your subscription across all resource groups from a single deployment within the subscription, you can install another instance of it based on the operations model or requirements of your organization. It also can be configured to centrally manage VMs across multiple subscriptions.
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
The **Category labels** settings enable you to customize font setting such as fo
:::image type="content" source="./media/power-bi-visual/category-labels-example.png" alt-text="A screenshot showing the category labels on an Azure Maps map in Power BI." lightbox="./media/power-bi-visual/category-labels-example.png":::
-## Adding a cluster bubble layer
-
-Cluster bubble layers enable you to use enhanced data aggregation capabilities based on different zoom levels. Cluster bubble layers are designed to optimize the visualization and analysis of data by allowing dynamic adjustments to granularity as users zoom in or out on the map.
--
-Azure Maps Power BI visual offers a range of configuration options to provide flexibility when customizing the appearance of cluster bubbles. With parameters like cluster bubble size, color, text size, text color, border color, and border width, you can tailor the visual representation of clustered data to align with your reporting needs.
--
-| Setting | Description | Values |
-||||
-| Bubble Size | The size of each cluster bubble. Default: 12 px | 1-50 px |
-| Cluster Color | Fill color of each cluster bubble. | |
-| Text Size | The size of the number indicating the quantity of clustered bubbles. Default: 18 px.| 1-60 px|
-| Text Color | Text color of the number displayed in the cluster bubbles.| |
-| Border Color | The color of the bubbles outline. | |
-| Border Width | The width of the border in pixels. Default: 2 px | 1-20 px |
- ## Next steps Change how your data is displayed on the map: > [!div class="nextstepaction"]
-> [Add a 3D column layer](power-bi-visual-add-3d-column-layer.md)
+> [Add a cluster bubble layer](power-bi-visual-cluster-bubbles.md)
> [!div class="nextstepaction"]
-> [Add a heat map layer](power-bi-visual-add-heat-map-layer.md)
+> [Add a 3D column layer](power-bi-visual-add-3d-column-layer.md)
Add more context to the map:
Add more context to the map:
> [!div class="nextstepaction"] > [Add a tile layer](power-bi-visual-add-tile-layer.md)
-> [!div class="nextstepaction"]
-> [Show real-time traffic](power-bi-visual-show-real-time-traffic.md)
- Customize the visual: > [!div class="nextstepaction"]
azure-maps Power Bi Visual Cluster Bubbles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-cluster-bubbles.md
+
+ Title: Add a cluster bubble layer to an Azure Maps Power BI visual
+
+description: In this article, you learn how to use the cluster bubble layer in an Azure Maps Power BI visual.
++ Last updated : 02/27/2024+++++
+# Add a cluster bubble layer
+
+Cluster bubble layers enable you to use enhanced data aggregation capabilities based on different zoom levels. Cluster bubble layers are designed to optimize the visualization and analysis of data by allowing dynamic adjustments to granularity as users zoom in or out on the map.
++
+Azure Maps Power BI visual offers a range of configuration options to provide flexibility when customizing the appearance of cluster bubbles. With parameters like cluster bubble size, color, text size, text color, border color, and border width, you can tailor the visual representation of clustered data to align with your reporting needs.
++
+| Setting | Description | Values |
+||||
+| Bubble Size | The size of each cluster bubble. Default: 12 px | 1-50 px |
+| Cluster Color | Fill color of each cluster bubble. | |
+| Text Size | The size of the number indicating the quantity of clustered bubbles. Default: 18 px. | 1-60 px |
+| Text Color | Text color of the number displayed in the cluster bubbles. | |
+| Border Color | The color of the bubble's outline. | |
+| Border Width | The width of the border in pixels. Default: 2 px | 1-20 px |
+
+## Next steps
+
+Change how your data is displayed on the map:
+
+> [!div class="nextstepaction"]
+> [Add a 3D column layer](power-bi-visual-add-3d-column-layer.md)
+
+> [!div class="nextstepaction"]
+> [Add a heat map layer](power-bi-visual-add-heat-map-layer.md)
+
+Add more context to the map:
+
+> [!div class="nextstepaction"]
+> [Add a reference layer](power-bi-visual-add-reference-layer.md)
+
+> [!div class="nextstepaction"]
+> [Add a tile layer](power-bi-visual-add-tile-layer.md)
+
+> [!div class="nextstepaction"]
+> [Show real-time traffic](power-bi-visual-show-real-time-traffic.md)
+
+Customize the visual:
+
+> [!div class="nextstepaction"]
+> [Tips and tricks for color formatting in Power BI](/power-bi/visuals/service-tips-and-tricks-for-color-formatting)
+
+> [!div class="nextstepaction"]
+> [Customize visualization titles, backgrounds, and legends](/power-bi/visuals/power-bi-visualization-customize-title-background-and-legend)
azure-monitor Alerts Create Log Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-log-alert-rule.md
description: This article shows you how to create a new log search alert rule.
Previously updated : 02/22/2024 Last updated : 02/28/2024
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log search alert rule.":::
- For sample log search alert queries that query ARG or ADX, see [Log search alert query samples](./alerts-log-alert-query-samples.md)
+ For sample log search alert queries that query ARG or ADX, see [Log search alert query samples](./alerts-log-alert-query-samples.md)
+
+    For limitations, see:
+    * [Cross-service query limitations](https://learn.microsoft.com/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy#limitations)
+    * [Combine Azure Resource Graph tables with a Log Analytics workspace](https://learn.microsoft.com/azure/azure-monitor/logs/azure-monitor-data-explorer-proxy#combine-azure-resource-graph-tables-with-a-log-analytics-workspace) limitations
+    * Not supported in Azure Government clouds
1. Select **Run** to run the alert. 1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**. 1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information.
azure-monitor Alerts Log Alert Query Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-alert-query-samples.md
This query finds virtual machines marked as critical that had a heartbeat more t
arg("").resourcechanges | extend changeTime = todatetime(properties.changeAttributes.timestamp), changeType = tostring(properties.changeType),targetResourceType = tostring(properties.targetResourceType),
- changedBy = tostring(properties.changeAttributes.changedBy)
+ changedBy = tostring(properties.changeAttributes.changedBy),
+ createdResource = tostring(properties.targetResourceId)
| where changeType == "Create" and changeTime <ago(1h)
- | project changeTime,targetResourceId,changedBy
+ | project changeTime,createdResource,changedBy
+ } ```
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
To enable telemetry collection with Application Insights, only the application s
|App setting name | Definition | Value | |--|:|-:|
-|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~3` |
+|ApplicationInsightsAgent_EXTENSION_VERSION | Main extension, which controls runtime monitoring. | `~2` for Windows or `~3` for Linux |
|XDT_MicrosoftApplicationInsights_Mode | In default mode, only essential features are enabled to ensure optimal performance. | `disabled` or `recommended`. | |XDT_MicrosoftApplicationInsights_PreemptSdk | For ASP.NET Core apps only. Enables Interop (interoperation) with the Application Insights SDK. Loads the extension side by side with the SDK and uses it to send telemetry. (Disables the Application Insights SDK.) |`1`|
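As an illustration only (the resource group and app names are placeholders), the extension version app setting can be applied with the Azure CLI from PowerShell:

```powershell
# Apply the runtime-monitoring extension version app setting (value shown is for Windows plans)
az webapp config appsettings set `
    --resource-group "<resource-group>" `
    --name "<app-name>" `
    --settings "ApplicationInsightsAgent_EXTENSION_VERSION=~2"
```

Use `~3` instead for Linux plans, per the table above.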
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
The structure of a Log Analytics workspace is described in [Log Analytics worksp
| traces | AppTraces | Detailed logs (traces) emitted through application code/logging frameworks recorded via `TrackTrace()`. | > [!CAUTION]
-> Wait for new telemetry in Log Analytics before relying on it. After starting the migration, telemetry first goes to Classic Application Insights. Aim to switch to Log Analytics within 24 hours, avoiding data loss or double writing. Once done, Log Analytics solely captures new telemetry.
+> Wait for new telemetry in Log Analytics before relying on it. After starting the migration, telemetry first goes to Classic Application Insights. Telemetry ingestion is switched to Log Analytics within 24 hours. Once done, Log Analytics solely captures new telemetry.
### Table schemas
azure-monitor Release And Work Item Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md
You can use the `CreateReleaseAnnotation` PowerShell script to create annotation
[parameter(Mandatory = $true)][string]$releaseName, [parameter(Mandatory = $false)]$releaseProperties = @() )+
+ # Function to ensure all Unicode characters in a JSON string are properly escaped
+ function Convert-UnicodeToEscapeHex {
+ param (
+ [parameter(Mandatory = $true)][string]$JsonString
+ )
+ $JsonObject = ConvertFrom-Json -InputObject $JsonString
+ foreach ($property in $JsonObject.PSObject.Properties) {
+ $name = $property.Name
+ $value = $property.Value
+ if ($value -is [string]) {
+ $value = [regex]::Unescape($value)
+ $OutputString = ""
+ foreach ($char in $value.ToCharArray()) {
+ $dec = [int]$char
+ if ($dec -gt 127) {
+ $hex = [convert]::ToString($dec, 16)
+ $hex = $hex.PadLeft(4, '0')
+ $OutputString += "\u$hex"
+ }
+ else {
+ $OutputString += $char
+ }
+ }
+ $JsonObject.$name = $OutputString
+ }
+ }
+ return ConvertTo-Json -InputObject $JsonObject -Compress
+ }
$annotation = @{ Id = [GUID]::NewGuid();
You can use the `CreateReleaseAnnotation` PowerShell script to create annotation
Properties = ConvertTo-Json $releaseProperties -Compress }
- $body = (ConvertTo-Json $annotation -Compress) -replace '(\\+)"', '$1$1"' -replace "`"", "`"`""
+ $annotation = ConvertTo-Json $annotation -Compress
+ $annotation = Convert-UnicodeToEscapeHex -JsonString $annotation
+
+ $body = $annotation -replace '(\\+)"', '$1$1"' -replace "`"", "`"`""
+ az rest --method put --uri "$($aiResourceId)/Annotations?api-version=2015-05-01" --body "$($body) " # Use the following command for Linux Azure DevOps Hosts or other PowerShell scenarios
azure-monitor Azure Monitor Data Explorer Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-monitor-data-explorer-proxy.md
More use cases:
- Use a tag to determine whether VMs should be running 24x7 or should be shut down at night. - Show alerts on any server that contains a certain number of cores.
-### Combine Azure Resource Graph tables with a Log Analytics workspace
-
-Use the `union` command to combine cluster tables with a Log Analytics workspace.
-
-For example:
-
-```kusto
-union AzureActivity, arg("").Resources
-| take 10
-```
-```kusto
-let CL1 = arg("").Resources ;
-union AzureActivity, CL1 | take 10
-```
-
-When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator) instead of union, you need to use a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to combine the data in Azure Resource Graph with data in the Log Analytics workspace. Use `Hint.remote={Direction of the Log Analytics Workspace}`. For example:
-
-```kusto
-Perf | where ObjectName == "Memory" and (CounterName == "Available MBytes Memory")
-| extend _ResourceId = replace_string(replace_string(replace_string(_ResourceId, 'microsoft.compute', 'Microsoft.Compute'), 'virtualmachines','virtualMachines'),"resourcegroups","resourceGroups")
-| join hint.remote=left (arg("").Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project _ResourceId=id, tags) on _ResourceId | project-away _ResourceId1 | where tostring(tags.env) == "prod"
-```
- ## Create an alert based on a cross-service query To create a new alert rule based on a cross-service query, follow the steps in [Create a new alert rule](../alerts/alerts-create-new-alert-rule.md), selecting your Log Analytics workspace on the **Scope** tab.
To create a new alert rule based on a cross-service query, follow the steps in [
* Database names are case sensitive. * Identifying the Timestamp column in the cluster isn't supported. The Log Analytics Query API won't pass the time filter. * Cross-service queries support data retrieval only.
-* [Private Link](../logs/private-link-security.md) (private endpoints) and [IP restrictions](/azure/data-explorer/security-network-restrict-public-access) do not support cross-service queries.
-* `mv-expand` is limited to 2000 records.
-* Azure Monitor Logs does not support the `external_table()` function, which lets you query external tables in Azure Data Explorer. To query an external table, define `external_table(<external-table-name>)` as a parameterless function in Azure Data Explorer. You can then call the function using the expression `adx("").<function-name>`.
+* [Private Link](../logs/private-link-security.md) (private endpoints) and [IP restrictions](/azure/data-explorer/security-network-restrict-public-access) don't support cross-service queries.
+* `mv-expand` is limited to 2,000 records.
+* Azure Monitor Logs doesn't support the `external_table()` function, which lets you query external tables in Azure Data Explorer. To query an external table, define `external_table(<external-table-name>)` as a parameterless function in Azure Data Explorer. You can then call the function using the expression `adx("").<function-name>`.
### Azure Resource Graph cross-service query limitations
-* Microsoft Sentinel does not support cross-service queries to Azure Resource Graph.
+* Microsoft Sentinel doesn't support cross-service queries to Azure Resource Graph.
* When you query Azure Resource Graph data from Azure Monitor:
- * The query returns the first 1000 records only.
+ * The query returns the first 1,000 records only.
* Azure Monitor doesn't return Azure Resource Graph query errors. * The Log Analytics query editor marks valid Azure Resource Graph queries as syntax errors. * These operators aren't supported: `smv-apply()`, `rand()`, `arg_max()`, `arg_min()`, `avg()`, `avg_if()`, `countif()`, `sumif()`, `percentile()`, `percentiles()`, `percentilew()`, `percentilesw()`, `stdev()`, `stdevif()`, `stdevp()`, `variance()`, `variancep()`, `varianceif()`. +
+### Combine Azure Resource Graph tables with a Log Analytics workspace
+
+Use the `union` command to combine cluster tables with a Log Analytics workspace.
+
+For example:
+
+```kusto
+union AzureActivity, arg("").Resources
+| take 10
+```
+```kusto
+let CL1 = arg("").Resources ;
+union AzureActivity, CL1 | take 10
+```
+
+When you use the [`join` operator](/azure/data-explorer/kusto/query/joinoperator) instead of union, you need to use a [`hint`](/azure/data-explorer/kusto/query/joinoperator#join-hints) to combine the data in Azure Resource Graph with data in the Log Analytics workspace. Use `Hint.remote={Direction of the Log Analytics Workspace}`. For example:
+
+```kusto
+Perf | where ObjectName == "Memory" and (CounterName == "Available MBytes Memory")
+| extend _ResourceId = replace_string(replace_string(replace_string(_ResourceId, 'microsoft.compute', 'Microsoft.Compute'), 'virtualmachines','virtualMachines'),"resourcegroups","resourceGroups")
+| join hint.remote=left (arg("").Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project _ResourceId=id, tags) on _ResourceId | project-away _ResourceId1 | where tostring(tags.env) == "prod"
+```
+ ## Next steps * [Write queries](/azure/data-explorer/write-queries) * [Perform cross-resource log queries in Azure Monitor](../logs/cross-workspace-query.md)
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
If the data export rule includes an unsupported table, the configuration will su
| AzureAttestationDiagnostics | | | AzureBackupOperations | | | AzureDevOpsAuditing | |
-| AzureDiagnostics | |
| AzureLoadTestingOperation | | | AzureMetricsV2 | | | BehaviorAnalytics | |
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
Authorization: Bearer <token>
- Moving a cluster to another resource group or subscription isn't currently supported.
+- Moving a cluster to another region isn't supported.
+ - Cluster update shouldn't include both identity and key identifier details in the same operation. In case you need to update both, the update should be in two consecutive operations. - Lockbox isn't currently available in China.
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* Canada Central * Canada East * Central India
+* Central US
* East Asia * East US 2 * France Central * Germany West Central * Japan East
+* Japan West
* North Central US * North Europe * Southeast Asia
Standard storage with cool access is supported for the following regions:
* US Gov Texas * US Gov Virginia * West US
+* West US 2
* West US 3 ## Effects of cool access on data
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
The identity that your deployment script uses needs to be authorized to work wit
With Microsoft.Resources/deploymentScripts version 2023-08-01, you can run deployment scripts in private networks with some additional configurations. - Create a user-assigned managed identity, and specify it in the `identity` property. To assign the identity, see [Identity](#identity).-- Create a storage account in the private network, and specify the deployment script to use the existing storage account. To specify an existing storage account, see [Use existing storage account](#use-existing-storage-account). Some additional configuration is required for the storage account.
+- Create a storage account, and specify the deployment script to use the existing storage account. To specify an existing storage account, see [Use existing storage account](#use-existing-storage-account). Some additional configuration is required for the storage account.
1. Open the storage account in the [Azure portal](https://portal.azure.com). 1. From the left menu, select **Access Control (IAM)**, and then select the **Role assignments** tab.
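As a minimal sketch of the prerequisites above (assuming the Az.ManagedServiceIdentity and Az.Storage modules; all names are placeholders), the identity and storage account can be created with PowerShell:

```powershell
# Create the user-assigned managed identity referenced in the deployment script's identity property
New-AzUserAssignedIdentity -ResourceGroupName "<resource-group>" -Name "<identity-name>" -Location "<location>"

# Create the storage account that the deployment script reuses
New-AzStorageAccount -ResourceGroupName "<resource-group>" -Name "<storageaccountname>" -Location "<location>" -SkuName Standard_LRS
```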
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-security-integration.md
Title: Integrate Microsoft Defender for Cloud with Azure VMware Solution
description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from the workload protection dashboard. Previously updated : 11/27/2023 Last updated : 2/28/2024
azure-vmware Vmware Hcx Mon Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vmware-hcx-mon-guidance.md
Title: VMware HCX Mobility Optimized Networking (MON) guidance
description: Learn about Azure VMware Solution-specific use cases for Mobility Optimized Networking (MON). Previously updated : 12/20/2023 Last updated : 2/28/2024
>[!NOTE] >
-> VMware HCX Mobility Optimized Networking is officially supported by VMware and Azure VMware Solutions from HCX version 4.1.0.
+> VMware HCX Mobility Optimized Networking is officially supported by VMware and Azure VMware Solution from HCX version 4.1.0.
>[!IMPORTANT] >
> >[Limitations for any HCX deployment including MON](https://docs.vmware.com/en/VMware-HCX/4.2/hcx-user-guide/GUID-BEC26054-D560-46D0-98B4-7FF09501F801.html) >
->VMware HCX Mobility Optimized Networking (MON) is not supported with the use of a 3rd party gateway. It may only be used with the T1 gateway directly connected to the T0 gateway with no network virtual appliance (NVA). You may be able to make this configuration function, but we do not support it.
+>VMware HCX Mobility Optimized Networking (MON) is not supported with the use of a 3rd party gateway. It may only be used with the T1 gateway directly connected to the T0 gateway without a network virtual appliance (NVA). You may be able to make this configuration function, but we do not support it.
[HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.2/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) is an optional feature to enable when using [HCX Network Extensions (NE)](configure-hcx-network-extension.md). MON provides optimal traffic routing under certain scenarios to prevent network tromboning between the on-premises and cloud-based resources on extended networks.
In this scenario, we assume a VM from on-premises is migrated to Azure VMware So
>[!IMPORTANT] >The main point here is to plan and avoid asymmetric traffic flows carefully.
-By default and without using MON, a VM in Azure VMware Solution on a stretched network without MON can communicate back to on-premises using the ExpressRoute preferred path. Based on customer use cases, one should evaluate how a VM on an Azure VMware Solution stretched segment enabled with MON should be traversing back to on-premises, either over the NE or the T0 gateway via the ExpressRoute while keeping traffic flows symmetric.
+By default and without using MON, a VM in Azure VMware Solution on a stretched network without MON can communicate back to on-premises using the ExpressRoute preferred path. Based on customer use-cases, one should evaluate how a VM on an Azure VMware Solution stretched segment enabled with MON should be traversing back to on-premises, either over the Network Extension or the T0 gateway via the ExpressRoute while keeping traffic flows symmetric.
-If choosing the NE path for example, the MON policy routes have to specifically address the subnet on the on-premises side; otherwise, the 0.0.0.0/0 default route is used. Policy routes can be found under the NE segment, selecting advanced. By default, all RFC1918 IP addresses are included in the MON policy routes definition.
+If choosing the NE path for example, the MON policy routes have to specifically address the subnet at the on-premises side; otherwise, the 0.0.0.0/0 default route is used. Policy routes can be found under the NE segment, by selecting advanced.
+By default, all RFC 1918 IP addresses are included in the MON policy routes definition.
++
+This results in all RFC 1918 egress traffic being tunneled over the NE path and all internet and public traffic being routed to the T0 gateway.
+ Policy routes are evaluated only if the VM gateway is migrated to the cloud. The effect of this configuration is that any matching subnets for the destination get tunneled over the NE appliance. If not matched, they get routed through the T0 gateway. >[!NOTE]
->Special consideration for using MON in Azure VMware Solution is to give the /32 routes advertised over BGP to its peers; this includes on-premises and Azure over the ExpressRoute connection. For example, a VM in Azure learns the path to an Azure VMware Solution VM on an Azure VMware Solution MON enabled segment. Once the return traffic is sent back to the T0 gateway as expected, if the return subnet is an RFC1918 match, traffic is forced over the NE instead of the T0. Then egresses over the ExpressRoute back to Azure on the on-premises side. This can cause confusion for stateful firewalls in the middle and asymmetric routing behavior. It's also a good idea to determine how VMs on NE MON segments will need to access the internet, either via the T0 in Azure VMware Solution or only through the NE back to on-premises. In general, all of the default policy routes should be removed to avoid asymmetric traffic. Only enable policy routes if the network infrastructure as been configured in such a way to account for and prevent asymmetric traffic.
+>Special consideration for using MON in Azure VMware Solution is to give the /32 routes advertised over BGP to its peers; this includes on-premises and Azure over the ExpressRoute connection. For example, a VM in Azure learns the path to an Azure VMware Solution VM on an Azure VMware Solution MON enabled segment. Once the return traffic is sent back to the T0 gateway as expected, if the return subnet is an RFC 1918 match, traffic is forced over the NE instead of the T0. It then egresses over the ExpressRoute from the on-premises side back to Azure. This can cause confusion for stateful firewalls in the middle and asymmetric routing behavior. It's also a good idea to determine how VMs on NE MON segments will need to access the internet, either via the T0 gateway in Azure VMware Solution or only through the NE back to on-premises. In general, all of the default policy routes should be removed to avoid asymmetric traffic. Only enable policy routes if the network infrastructure has been configured in such a way to account for and prevent asymmetric traffic.
+
+The MON policy routes can be deleted with none defined. This results in all egress traffic being routed to the T0 gateway.
++
+The MON policy routes can be updated with a default route (0.0.0.0/0). This results in all egress traffic being tunneled over the NE path.
-As outlined in the above diagram, the importance is to match a policy route to each required subnet. Otherwise, the traffic gets routed over the T0 and not the NE.
+As outlined in the above diagrams, it's important to match a policy route to each required subnet. Otherwise, the traffic gets routed over the T0 and not tunneled over the NE.
To learn more about policy routes, see [Mobility Optimized Networking Policy Routes](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-F45B1DB5-C640-4A75-AEC5-45C58B1C9D63.html).
backup Azure Kubernetes Service Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md
Title: Troubleshoot Azure Kubernetes Service backup description: Symptoms, causes, and resolutions of the Azure Kubernetes Service backup and restore operations. Previously updated : 12/28/2023 Last updated : 02/28/2024 - ignite-2023
This error code can appear while you enable AKS backup to store backups in a vau
3. Create a backup policy for operational tier backup (only snapshots for the AKS cluster).
+## AKS backup and restore jobs completed with warnings
+
+### UserErrorPVSnapshotDisallowedByPolicy
+
+**Error code**: UserErrorPVSnapshotDisallowedByPolicy
+
+**Cause**: An Azure policy assigned to the subscription prevents the CSI driver from taking the volume snapshot.
+
+**Recommended action**: Remove the Azure policy that blocks the disk snapshot operation, and then perform an on-demand backup.
+
+### UserErrorPVSnapshotLimitReached
+
+**Error code**: UserErrorPVSnapshotLimitReached
+
+**Cause**: There's a limit on the number of snapshots of a Persistent Volume that can exist at a point in time. For Azure Disk-based Persistent Volumes, the limit is *500 snapshots*. This error appears when snapshots for specific Persistent Volumes aren't taken because the number of existing snapshots exceeds the supported limit.
+
+**Recommended action**: Update the Backup Policy to reduce the retention duration and wait for older recovery points to be deleted by the Backup vault.
+
+### CSISnapshottingTimedOut
+
+**Error code**: CSISnapshottingTimedOut
+
+**Cause**: The snapshot failed because the CSI driver timed out while fetching the snapshot handle.
+
+**Recommended action**: Review the logs and retry the operation to get successful snapshots by running an on-demand backup, or wait for the next scheduled backup.
+
+### UserErrorHookExecutionFailed
+
+**Error code**: UserErrorHookExecutionFailed
+
+**Cause**: Hooks configured to run along with backup and restore operations encountered an error and weren't applied successfully.
+
+**Recommended action**: Review the logs, update the hooks, and then retry the backup or restore operation.
+
+### UserErrorNamespaceNotFound
+
+**Error code**: UserErrorNamespaceNotFound
+
+**Cause**: A namespace provided in the backup configuration is missing when backups are performed. Either the namespace was provided incorrectly or it has been deleted.
+
+**Recommended action**: Check that the namespaces to be backed up are provided correctly.
+
+### UserErrorPVCHasNoVolume
+
+**Error code**: UserErrorPVCHasNoVolume
+
+**Cause**: The Persistent Volume Claim (PVC) in context doesn't have a Persistent Volume attached to it, so the PVC won't be backed up.
+
+**Recommended action**: Attach a volume to the PVC, if it needs to be backed up.
+
+### UserErrorPVCNotBoundToVolume
+
+**Error code**: UserErrorPVCNotBoundToVolume
+
+**Cause**: The PVC in context is in the *Pending* state and doesn't have a Persistent Volume attached to it, so the PVC won't be backed up.
+
+**Recommended action**: Attach a volume to the PVC, if it needs to be backed up.
+
+### UserErrorPVNotFound
+
+**Error code**: UserErrorPVNotFound
+
+**Cause**: The underlying storage medium for the Persistent Volume is missing.
+
+**Recommended action**: Check and attach a new Persistent Volume with the actual storage medium attached.
+
+### UserErrorStorageClassMissingForPVC
+
+**Error code**: UserErrorStorageClassMissingForPVC
+
+**Cause**: AKS backup checks the storage class being used and skips taking snapshots of the Persistent Volume because the class is unavailable.
+
+**Recommended action**: Update the PVC specifications with the storage class used.
+
+### UserErrorSourceandTargetClusterCRDVersionMismatch
+
+**Error code**: UserErrorSourceandTargetClusterCRDVersionMismatch
+
+**Cause**: The source AKS cluster and the target AKS cluster used during restore have different versions of the *FlowSchema* and *PriorityLevelConfiguration* CRs. Some Kubernetes resources aren't restored due to the mismatch in cluster versions.
+
+**Recommended action**: Use the same cluster version for the target cluster as the source cluster, or manually apply the CRs.
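For the PVC-related warnings above (such as UserErrorPVCHasNoVolume and UserErrorPVCNotBoundToVolume), a quick way to check binding status is with kubectl, assuming cluster credentials are already configured:

```powershell
# A PVC with STATUS "Pending" or an empty VOLUME column has no Persistent Volume bound,
# so it's skipped by the backup.
kubectl get pvc --all-namespaces
```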
+ ## Next steps - [About Azure Kubernetes Service (AKS) backup](azure-kubernetes-service-backup-overview.md)
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
- devx-track-azurecli - ignite-2023 Previously updated : 02/27/2024 Last updated : 02/28/2024
To enable Trusted Access between Backup vault and AKS cluster, use the following
Learn more about [other commands related to Trusted Access](../aks/trusted-access-feature.md#trusted-access-feature-overview).
+## Monitor AKS backup jobs completed with warnings
+
+When a scheduled or an on-demand backup or restore operation is performed, a job is created corresponding to the operation to track its progress. In case of a failure, these jobs allow you to identify error codes and fix issues to run a successful job later.
+
+For AKS backup, backup and restore jobs can show the status **Completed with Warnings**. This status appears when a backup or restore operation isn't fully successful because of issues in user-defined configurations or the internal state of the workload.
++
+For example, if a backup job for an AKS cluster completes with the status **Completed with Warnings**, a restore point will be created, but it might not have been able to back up all the resources in the cluster as per the backup configuration. The job will show warning details, providing the *issues* and *resources* that were impacted during the operation.
+
+To view these warnings, select **View Details** next to **Warning Details**.
++
+Learn [how to identify and resolve the error](azure-kubernetes-service-backup-troubleshoot.md#aks-backup-extension-installation-error-resolutions).
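Jobs and their warning status can also be listed outside the portal; the following sketch assumes the Azure CLI `dataprotection` extension is installed and uses placeholder names:

```powershell
# List backup and restore jobs for the Backup vault, including jobs that completed with warnings
az dataprotection job list --resource-group "<resource-group>" --vault-name "<backup-vault-name>" --output table
```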
+ ## Next steps - [Back up Azure Kubernetes Service cluster](azure-kubernetes-service-cluster-backup.md)
batch Plan To Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/plan-to-manage-costs.md
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculato
![Screenshot showing the your estimate section and main options available for Azure Batch.](media/plan-to-manage-costs/batch-pricing-calculator-overview.png)
- You can learn more about the cost of running virtual machines from the [Plan to manage costs for virtual machines documentation](../virtual-machines/plan-to-manage-costs.md).
+ You can learn more about the cost of running virtual machines from the [Plan to manage costs for virtual machines documentation](../virtual-machines/cost-optimization-plan-to-manage-costs.md).
## Understand the full billing model for Azure Batch
Azure Batch is a free service. There are no costs for Batch itself. However, the
Although Batch itself is a free service, many of the underlying resources that run your workloads aren't. These include: - [Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/windows/)
- - To learn more about the costs associated with virtual machines, see the [How you're charged for virtual machines section of Plan to manage costs for virtual machines](../virtual-machines/plan-to-manage-costs.md#how-youre-charged-for-virtual-machines).
+ - To learn more about the costs associated with virtual machines, see the [How you're charged for virtual machines section of Plan to manage costs for virtual machines](../virtual-machines/cost-optimization-plan-to-manage-costs.md#how-youre-charged-for-virtual-machines).
- Each VM in a pool created with [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) has an associated OS disk that uses Azure-managed disks. Azure-managed disks have an additional cost, and other disk performance tiers have different costs as well. - Storage - When applications are deployed to Batch node virtual machines using [application packages](batch-application-packages.md), you're billed for the Azure Storage resources that your application packages consume. You're also billed for the storage of any input or output files, such as resource files and other log data.
communication-services Record Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md
zone_pivot_groups: acs-plat-web-ios-android-windows
- Optional: Completion of the [quickstart to add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md). ::: zone pivot="platform-web" ::: zone-end ::: zone pivot="platform-android"
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
The following tables describe how to configure a collection of NSG allow rules.
|--|--|--|--|--|--| | TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `80`, `31080` | Allow your Client IPs to access Azure Container Apps when using HTTP. `31080` is the port on which the Container Apps Environment Edge Proxy responds to the HTTP traffic. It is behind the internal load balancer. | | TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `443`, `31443` | Allow your Client IPs to access Azure Container Apps when using HTTPS. `31443` is the port on which the Container Apps Environment Edge Proxy responds to the HTTPS traffic. It is behind the internal load balancer. |
-| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30000-32676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
+| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30000-32767`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
# [Consumption only environment](#tab/consumption-only)
The following tables describe how to configure a collection of NSG allow rules.
|--|--|--|--|--|--| | TCP | Your client IPs | \* | Your container app's subnet<sup>1</sup> | `80`, `443` | Allow your Client IPs to access Azure Container Apps. Use port `80` for HTTP and `443` for HTTPS. | | TCP | Your client IPs | \* | The `staticIP` of your container app environment | `80`, `443` | Allow your Client IPs to access Azure Container Apps. Use port `80` for HTTP and `443` for HTTPS. |
-| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30000-32676`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
+| TCP | AzureLoadBalancer | \* | Your container app's subnet | `30000-32767`<sup>2</sup> | Allow Azure Load Balancer to probe backend pools. |
| TCP | Your container app's subnet | \* | Your container app's subnet | \* | Required to allow the container app envoy sidecar to connect to envoy service. |
The following tables describe how to configure a collection of NSG allow rules.
| Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. | | TCP | Your container app's subnet | \* | `AzureActiveDirectory` | `443` | If you're using managed identity, this is required. | | TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Only required when using Azure Monitor. Allows outbound calls to Azure Monitor. |
+| TCP and UDP | Your container app's subnet | \* | `168.63.129.16` | `53` | Enables the environment to use Azure DNS to resolve the hostname. |
# [Consumption only environment](#tab/consumption-only)
The following tables describe how to configure a collection of NSG allow rules.
| UDP | Your container app's subnet | \* | \* | `123` | NTP server. | | Any | Your container app's subnet | \* | Your container app's subnet | \* | Allow communication between IPs in your container app's subnet. | | TCP | Your container app's subnet | \* | `AzureMonitor` | `443` | Only required when using Azure Monitor. Allows outbound calls to Azure Monitor. |
+| TCP and UDP | Your container app's subnet | \* | `168.63.129.16` | `53` | Enables the environment to use Azure DNS to resolve the hostname. |
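As an example only (the rule name, priority, and subnet prefix are placeholders), the Azure DNS rule added above could be created with the Azure CLI:

```powershell
# Allow the Container Apps subnet to reach Azure DNS (168.63.129.16) on port 53.
# The "*" protocol covers both TCP and UDP, as required by the rule in the table.
az network nsg rule create `
    --resource-group "<resource-group>" `
    --nsg-name "<nsg-name>" `
    --name "allow-azure-dns" `
    --priority 400 `
    --direction Outbound `
    --access Allow `
    --protocol "*" `
    --source-address-prefixes "<container-apps-subnet-prefix>" `
    --destination-address-prefixes 168.63.129.16 `
    --destination-port-ranges 53
```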
container-apps Jobs Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md
Job executions output logs to the logging provider that you configured for the C
] ``` - ## Clean up resources If you're not going to continue to use this application, run the following command to delete the resource group along with all the resources created in this quickstart.
container-apps Jobs Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-portal.md
Next, create an environment for your container app.
The logs show the output of the job execution. It may take a few minutes for the logs to appear. - ## Clean up resources If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group.
container-instances Monitor Azure Container Instances Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances-reference.md
Title: Monitoring Azure Container Instances data reference
-description: Important reference material needed when you monitor Azure Container Instances
-
+ Title: Monitoring data reference for Container Instances
+description: This article contains important reference material you need when you monitor Container Instances.
Last updated : 02/27/2024+ + - Previously updated : 06/06/2022
-# Monitoring Azure Container Instances data reference
-
-See [Monitoring Azure Container Instances](monitor-azure-container-instances.md) for details on collecting and analyzing monitoring data for Azure Container Instances.
+# Container Instances monitoring data reference
-## Metrics
+<!-- Intro. Required. -->
-This section lists all the automatically collected platform metrics collected for Azure Container Instances.
+See [Monitor Container Instances](monitor-azure-container-instances.md) for details on the data you can collect for Container Instances and how to use it.
-|Metric Type | Resource Provider / Type Namespace<br/> and link to individual metrics |
-|-|--|
-| Container Instances | [Microsoft.ContainerInstance/containerGroups](/azure/azure-monitor/platform/metrics-supported#microsoftcontainerinstancecontainergroups) |
+<!-- ## Metrics. Required section. -->
-## Metric dimensions
+### Supported metrics for Microsoft.ContainerInstance/containerGroups
+The following table lists the metrics available for the Microsoft.ContainerInstance/containerGroups resource type.
-For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
+### Supported metrics for Microsoft.ContainerInstance/containerScaleSets
+The following table lists the metrics available for the Microsoft.ContainerInstance/containerScaleSets resource type.
-Azure Container Instances has the following dimension associated with its metrics.
+<!-- ## Metric dimensions. Required section. -->
| Dimension Name | Description | | - | -- | | **containerName** | The name of the container. The name must be between 1 and 63 characters long. It can contain only lowercase letters numbers, and dashes. Dashes can't begin or end the name, and dashes can't be consecutive. The name must be unique in its resource group. |
-## Activity log
+<!-- ## Resource logs. Required section. -->
-The following table lists the operations that Azure Container Instances may record in the Activity log. This is a subset of the possible entries you might find in the activity log. You can also find this information in the [Azure role-based access control (RBAC) Resource provider operations documentation](../role-based-access-control/resource-provider-operations.md#microsoftcontainerinstance).
-
-| Operation | Description |
-|:|:|
-| Microsoft.ContainerInstance/register/action | Registers the subscription for the container instance resource provider and enables the creation of container groups. |
-| Microsoft.ContainerInstance/containerGroupProfiles/read | Get all container group profiles. |
-| Microsoft.ContainerInstance/containerGroupProfiles/write | Create or update a specific container group profile. |
-| Microsoft.ContainerInstance/containerGroupProfiles/delete | Delete the specific container group profile. |
-| Microsoft.ContainerInstance/containerGroups/read | Get all container groups. |
-| Microsoft.ContainerInstance/containerGroups/write | Create or update a specific container group. |
-| Microsoft.ContainerInstance/containerGroups/delete | Delete the specific container group. |
-| Microsoft.ContainerInstance/containerGroups/restart/action | Restarts a specific container group. This log only captures customer-intiated restarts, not restarts initiated by Azure Container Instances infrastructure. |
-| Microsoft.ContainerInstance/containerGroups/stop/action | Stops a specific container group. Compute resources will be deallocated and billing will stop. |
-| Microsoft.ContainerInstance/containerGroups/start/action | Starts a specific container group. |
-| Microsoft.ContainerInstance/containerGroups/containers/exec/action | Exec into a specific container. |
-| Microsoft.ContainerInstance/containerGroups/containers/attach/action | Attach to the output stream of a container. |
-| Microsoft.ContainerInstance/containerGroups/containers/buildlogs/read | Get build logs for a specific container. |
-| Microsoft.ContainerInstance/containerGroups/containers/logs/read | Get logs for a specific container. |
-| Microsoft.ContainerInstance/containerGroups/detectors/read | List Container Group Detectors |
-| Microsoft.ContainerInstance/containerGroups/operationResults/read | Get async operation result |
-| Microsoft.ContainerInstance/containerGroups/outboundNetworkDependenciesEndpoints/read | List Container Group Detectors |
-| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the container group. |
-| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the container group. |
-| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for container group. |
-| Microsoft.ContainerInstance/locations/deleteVirtualNetworkOrSubnets/action | Notifies Microsoft.ContainerInstance that virtual network or subnet is being deleted. |
-| Microsoft.ContainerInstance/locations/cachedImages/read | Gets the cached images for the subscription in a region. |
-| Microsoft.ContainerInstance/locations/capabilities/read | Get the capabilities for a region. |
-| Microsoft.ContainerInstance/locations/operationResults/read | Get async operation result |
-| Microsoft.ContainerInstance/locations/operations/read | List the operations for Azure Container Instance service. |
-| Microsoft.ContainerInstance/locations/usages/read | Get the usage for a specific region. |
-| Microsoft.ContainerInstance/operations/read | List the operations for Azure Container Instance service. |
-| Microsoft.ContainerInstance/serviceassociationlinks/delete | Delete the service association link created by Azure Container Instance resource provider on a subnet. |
+### Supported resource logs for Microsoft.ContainerInstance/containerGroups
-See [all the possible resource provider operations in the activity log](../role-based-access-control/resource-provider-operations.md).
+<!-- ## Azure Monitor Logs tables. Required section. -->
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+Container Instances has two table schemas, a legacy schema for Log Analytics and a new schema that supports diagnostic settings. Diagnostic settings is in public preview in the Azure portal. You can use either or both schemas at the same time.
-## Schemas
+### Legacy Log Analytics tables
-The following schemas are in use by Azure Container Instances.
+The following *_CL* tables represent the legacy Log Analytics integration. Users provide the Log Analytics workspace ID and key in the Container Group payload.
> [!NOTE]
-> Some of the columns listed below only exist as part of the schema, and won't have any data emitted in logs. These columns are denoted below with a description of 'Empty'.
+> Some of the columns in the following list exist only as part of the schema, and don't have any data emitted in logs. These columns are denoted with a description of 'Empty'.
-### ContainerInstanceLog_CL
+#### ContainerInstanceLog_CL
| Column | Type | Description | |-|-|-|
The following schemas are in use by Azure Container Instances.
|_ResourceId|string|A unique identifier for the resource that the record is associated with| |_SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
-### ContainerEvent_CL
+#### ContainerEvent_CL
|Column|Type|Description| |-|-|-|
The following schemas are in use by Azure Container Instances.
|_ResourceId|string|A unique identifier for the resource that the record is associated with| |_SubscriptionId|string|A unique identifier for the subscription that the record is associated with|
-## See also
+### Azure Monitor Log Analytics tables
+
+The newer tables require use of a diagnostic setting to route information to Log Analytics. Diagnostic settings for Container Instances in the Azure portal is in public preview. The table names are similar, but without the _CL, and some columns are different. For more information, see [Use diagnostic settings](container-instances-log-analytics.md#using-diagnostic-settings).
+
+#### Container Instances
+Microsoft.ContainerInstance/containerGroups
+- [ContainerInstanceLog](/azure/azure-monitor/reference/tables/containerinstancelog)
+- [ContainerEvent](/azure/azure-monitor/reference/tables/containerevent)
+
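As a quick orientation, the following sketch lists the most recent entries in the newer `ContainerEvent` table. It assumes a diagnostic setting already routes container group logs to a Log Analytics workspace, and it relies only on the `TimeGenerated` column that every Azure Monitor table includes; check other column names against the table reference linked above.

```kusto
// Most recent container group events from the diagnostic-settings (non-_CL) table.
// Assumes a diagnostic setting already routes logs to a Log Analytics workspace.
ContainerEvent
| where TimeGenerated > ago(1h)
| sort by TimeGenerated desc
| take 50
```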
+<!-- ## Activity log. Required section. -->
+
+The following table lists a subset of the operations that Azure Container Instances may record in the Activity log. For the complete listing, see [Microsoft.ContainerInstance resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerinstance).
+
+| Operation | Description |
+|:|:|
+| Microsoft.ContainerInstance/register/action | Registers the subscription for the container instance resource provider and enables the creation of container groups. |
+| Microsoft.ContainerInstance/containerGroupProfiles/read | Get all container group profiles. |
+| Microsoft.ContainerInstance/containerGroupProfiles/write | Create or update a specific container group profile. |
+| Microsoft.ContainerInstance/containerGroupProfiles/delete | Delete the specific container group profile. |
+| Microsoft.ContainerInstance/containerGroups/read | Get all container groups. |
+| Microsoft.ContainerInstance/containerGroups/write | Create or update a specific container group. |
+| Microsoft.ContainerInstance/containerGroups/delete | Delete the specific container group. |
+| Microsoft.ContainerInstance/containerGroups/restart/action | Restarts a specific container group. This log only captures customer-initiated restarts, not restarts initiated by Azure Container Instances infrastructure. |
+| Microsoft.ContainerInstance/containerGroups/stop/action | Stops a specific container group. Compute resources are deallocated and billing stops. |
+| Microsoft.ContainerInstance/containerGroups/start/action | Starts a specific container group. |
+| Microsoft.ContainerInstance/containerGroups/containers/exec/action | Exec into a specific container. |
+| Microsoft.ContainerInstance/containerGroups/containers/attach/action | Attach to the output stream of a container. |
+| Microsoft.ContainerInstance/containerGroups/containers/buildlogs/read | Get build logs for a specific container. |
+| Microsoft.ContainerInstance/containerGroups/containers/logs/read | Get logs for a specific container. |
+| Microsoft.ContainerInstance/containerGroups/detectors/read | List Container Group Detectors |
+| Microsoft.ContainerInstance/containerGroups/operationResults/read | Get async operation result |
+| Microsoft.ContainerInstance/containerGroups/outboundNetworkDependenciesEndpoints/read | List Container Group Detectors |
+| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for the container group. |
+| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the container group. |
+| Microsoft.ContainerInstance/containerGroups/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for container group. |
+| Microsoft.ContainerInstance/locations/deleteVirtualNetworkOrSubnets/action | Notifies Microsoft.ContainerInstance that virtual network or subnet is being deleted. |
+| Microsoft.ContainerInstance/locations/cachedImages/read | Gets the cached images for the subscription in a region. |
+| Microsoft.ContainerInstance/locations/capabilities/read | Get the capabilities for a region. |
+| Microsoft.ContainerInstance/locations/operationResults/read | Get async operation result |
+| Microsoft.ContainerInstance/locations/operations/read | List the operations for Azure Container Instance service. |
+| Microsoft.ContainerInstance/locations/usages/read | Get the usage for a specific region. |
+| Microsoft.ContainerInstance/operations/read | List the operations for Azure Container Instance service. |
+| Microsoft.ContainerInstance/serviceassociationlinks/delete | Delete the service association link created by Azure Container Instance resource provider on a subnet. |
+
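If the activity log is also routed to a Log Analytics workspace, the operations in the preceding table can be reviewed with a query like the following sketch. It uses the standard `AzureActivity` table and its `OperationNameValue`, `Caller`, and `ActivityStatusValue` columns; adjust the operation filter to the events you care about.

```kusto
// Recent Container Instances operations recorded in the activity log.
// Assumes the activity log is exported to a Log Analytics workspace.
AzureActivity
| where OperationNameValue startswith "Microsoft.ContainerInstance/"
| project TimeGenerated, Caller, OperationNameValue, ActivityStatusValue
| sort by TimeGenerated desc
```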
+## Related content
-- See [Monitoring Azure Container Instances](monitor-azure-container-instances.md) for a description of monitoring Azure Container Instances.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Container Instances](monitor-azure-container-instances.md) for a description of monitoring Container Instances.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
container-instances Monitor Azure Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/monitor-azure-container-instances.md
- Title: Monitoring Azure Container Instances
-description: Start here to learn how to monitor Azure Container Instances
+ Title: Monitor Azure Container Instances
+description: Start here to learn how to monitor Azure Container Instances.
Previously updated : 06/06/2022 Last updated : 02/27/2024
-# Monitoring Azure Container Instances
+# Monitor Azure Container Instances
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+<!-- Intro. Required. -->
-This article describes the monitoring data generated by Azure Container Instances. Azure Container Instances includes built-in support for [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+<!-- ## Resource types. Required section. -->
+For more information about the resource types for Azure Container Instances, see [Container Instances monitoring data reference](monitor-azure-container-instances-reference.md).
-## Monitoring overview page in Azure portal
+<!-- ## Data storage. Required section. Optionally, add service-specific information about storing your monitoring data after the include. -->
-The **Overview** page in the Azure portal for each container instance includes a brief view of resource usage and telemetry.
+<!-- METRICS SECTION START -->
- ![Graphs of resource usage displayed on Container Instance overview page, PNG.](./media/monitor-azure-container-instances/overview-monitoring-data.png)
+<!-- ## Platform metrics. Required section. -->
-## Monitoring data
+For a list of available metrics for Container Instances, see [Container Instances monitoring data reference](monitor-azure-container-instances-reference.md#metrics).
-Azure Container Instances collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
+All metrics for Container Instances are in the namespace **Container group standard metrics**. In a container group with multiple containers, you can filter on the **containerName** dimension to acquire metrics from a specific container within the group.
-See [Monitoring *Azure Container Instances* data reference](monitor-azure-container-instances-reference.md) for detailed information on the metrics and logs metrics created by Azure Container Instances.
+Containers generate similar data as other Azure resources, but they require a containerized agent to collect required data. For more information about container metrics for Container Instances, see [Monitor container resources in Azure Container Instances](container-instances-monitor.md).
-## Collection and routing
+<!-- METRICS SECTION END -->
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+<!-- LOGS SECTION START -->
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+<!-- ## Resource logs. Required section.-->
-See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect.
+- For more information about how to get log data for Container Instances, see [Retrieve container logs and events in Azure Container Instances](container-instances-get-logs.md).
+- For the available resource log categories, associated Log Analytics tables, and the logs schemas for Container Instances, see [Container Instances monitoring data reference](monitor-azure-container-instances-reference.md#resource-logs).
-The metrics and logs you can collect are discussed in the following sections.
+<!-- ## Activity log. Required section. Optionally, add service-specific information about your activity log after the include. -->
-## Analyzing metrics
+<!-- LOGS SECTION END -->
-You can analyze metrics for *Azure Container Instances* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
+<!-- ANALYSIS SECTION START -->
-For a list of the platform metrics collected for Azure Container Instances, see [Monitoring Azure Container Instances data reference metrics](monitor-azure-container-instances-reference.md#metrics).
+<!-- ## Analyze data. Required section. -->
-All metrics for Azure Container Instances are in the namespace **Container group standard metrics**. In a container group with multiple containers, you can additionally filter on the [dimension](monitor-azure-container-instances-reference.md#metric-dimensions) **containerName** to acquire metrics from a specific container within the group.
+<!-- ### External tools. Required section. -->
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+### Analyze Container Instances logs
-### View operation level metrics for Azure Container Instances
+You can use Log Analytics to analyze and query container instance logs, and you can also enable diagnostic settings as a preview feature in the Azure portal. Log Analytics and diagnostic settings have slightly different table schemas to use for queries. Once you enable diagnostic settings, you can use either or both schemas at the same time.
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+For detailed information and instructions for querying logs, see [Container group and instance logging with Azure Monitor logs](container-instances-log-analytics.md). For more information about diagnostic settings, see [Use diagnostic settings](container-instances-log-analytics.md#using-diagnostic-settings).
-1. Select **Monitor** from the left-hand navigation bar, and select **Metrics**.
+For the Azure Monitor logs table schemas for Container Instances, see [Container Instances monitoring data reference](monitor-azure-container-instances-reference.md#azure-monitor-logs-tables).
- ![Screenshot of metrics tab under Monitor on the Azure portal, PNG.](./media/monitor-azure-container-instances/azure-monitor-metrics-pane.png)
+<!-- ### Sample Kusto queries. Required section. If you have sample Kusto queries for your service, add them after the include. -->
-1. On the **Select a scope** page, choose your **subscription** and **resource group**. Under **Refine scope**, choose **Container instances** for **Resource type**. Pick one of your container instances from the list and select **Apply**.
+The following query examples use the legacy Log Analytics log tables. The basic structure of a query is the source table, `ContainerInstanceLog_CL` or `ContainerEvent_CL`, followed by a series of operators separated by the pipe character (`|`). You can chain several operators to refine the results and perform advanced functions.
- ![Screenshot of selecting scope for metrics analysis on the Azure portal, PNG.](./media/monitor-azure-container-instances/select-a-scope.png)
-
-1. Next, you can pick a metric to view from the list of available metrics. Here, we choose **CPU Usage** and use **Avg** as the aggregation value.
-
- ![Screenshot of selecting CPU Usage metric, PNG.](./media/monitor-azure-container-instances/select-a-metric.png)
-
-### Add filters to metrics
-
-In a scenario where you have a container group with multiple containers, you may find it useful to apply a filter on the metric dimension **containerName**. This will allow you to view metrics by container as opposed to an aggregate of the group as a whole.
-
- ![Screenshot of filtering metrics by container name in a container group, PNG.](./media/monitor-azure-container-instances/apply-a-filter.png)
-
-## Analyzing logs
-
-Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md) The schema for Azure Container Instances resource logs is found in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#schemas).
-
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of Azure platform log that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics. You can see a list of the kinds of operations that will be logged in the [Azure Container Instances Data Reference](monitor-azure-container-instances-reference.md#activity-log)
-
-### Sample Kusto queries
-
-Azure Monitor logs includes an extensive [query language][query_lang] for pulling information from potentially thousands of lines of log output.
-
-The basic structure of a query is the source table (in this article, `ContainerInstanceLog_CL` or `ContainerEvent_CL`) followed by a series of operators separated by the pipe character (`|`). You can chain several operators to refine the results and perform advanced functions.
+In the newer table schema for diagnostic settings, the table names appear without *_CL*, and some columns are different. If you have diagnostic settings enabled, you can use either or both tables.
To see example query results, paste the following query into the query text box, and select the **Run** button to execute the query. This query displays all log entries whose "Message" field contains the word "warn":
-```query
+```kusto
ContainerInstanceLog_CL
| where Message contains "warn"
```

More complex queries are also supported. For example, this query displays only those log entries for the "mycontainergroup001" container group generated within the last hour:

-```query
+```kusto
ContainerInstanceLog_CL
| where (ContainerGroup_s == "mycontainergroup001")
| where (TimeGenerated > ago(1h))
```
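If diagnostic settings are enabled, an equivalent sketch against the newer schema looks like the following. It assumes the `ContainerInstanceLog` table exposes a comparable `Message` column; verify column names against the table reference before relying on them.

```kusto
// The same "warn" search against the newer diagnostic-settings table (no _CL suffix).
// Assumes a Message column comparable to the legacy schema; verify against the table reference.
ContainerInstanceLog
| where TimeGenerated > ago(1h)
| where Message contains "warn"
```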
-> [!IMPORTANT]
-> When you select **Logs** from the Azure Container Instances menu, Log Analytics is opened with the query scope set to the current Azure Container Instances. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+<!-- ANALYSIS SECTION END -->
+
+<!-- ALERTS SECTION START -->
+
+<!-- ## Alerts. Required section. -->
+
-For a list of common queries for Azure Container Instances, see the [Log Analytics queries interface](../azure-monitor/logs/queries.md).
+<!-- ### Container Instances alert rules. Required section.-->
-## Alerts
+### Container Instances alert rules
+The following table lists common and recommended alert rules for Container Instances.
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/alerts/alerts-metric-overview), [logs](/azure/azure-monitor/alerts/alerts-unified-log), and the [activity log](/azure/azure-monitor/alerts/activity-log-alerts). Different types of alerts have benefits and drawbacks.
+| Alert type | Condition | Description |
+|:|:|:|
+| Metrics | vCPU usage, memory usage, or network input and output utilization exceeding a certain threshold | Depending on the function of the container, setting an alert for when the metric exceeds an expected threshold may be useful. |
+| Activity logs | Container Instances operations like create, update, and delete | See the [Container Instances monitoring data reference](monitor-azure-container-instances-reference.md#activity-log) for a list of activities you can track. |
+| Log alerts | `stdout` and `stderr` outputs in the logs | Use custom log search to set alerts for specific outputs that appear in logs. |
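For the log alert row in the preceding table, a custom log search alert is typically backed by a query like the following sketch against the legacy table. The "error" search term and the five-minute window are illustrative placeholders to adjust for your workload.

```kusto
// Candidate query to back a custom log search alert on container output.
// The "error" search term and the five-minute window are illustrative placeholders.
ContainerInstanceLog_CL
| where TimeGenerated > ago(5m)
| where Message contains "error"
| summarize FailureCount = count() by ContainerGroup_s
```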
-For Azure Container Instances, there are three categories for alerting:
+<!-- ### Advisor recommendations. Required section. -->
-* **Activity logs** - You can set alerts for Azure Container Instances operations like create, update, and delete. See the [Monitoring Azure Container Instances data reference](monitor-azure-container-instances-reference.md#activity-log) for a list of activities you can track.
-* **Metrics** - You can set alerts for vCPU usage, memory usage, and network input and output utilization. Depending on the function of the container you deploy, you may want to monitor different metrics. For example, if you don't expect your container's memory usage to exceed a certain threshold, setting an alert for when memory usage exceeds it may be useful.
-* **Custom log search** - You can set alerts for specific outputs in logs. For example, you can use these alerts to robustly capture stdout and stderr by setting alerts for when those outputs appear in the logs.
+<!-- ALERTS SECTION END -->
-## Next steps
+## Related content
-* See the [Monitoring Azure Container Instances data reference](monitor-azure-container-instances-reference.md) for a reference of the metrics, logs, and other important values created by Azure Container Instances.
-* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Container Instances monitoring data reference](monitor-azure-container-instances-reference.md) for a reference of the metrics, logs, and other important values created for Container Instances.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
cosmos-db Cmk Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cmk-troubleshooting-guide.md
This troubleshooting guide shows you how to restore access when running into the
## Default Identity is unauthorized to access the Azure Key Vault key
-### Reason for error?
+### Reason for error
You see this error when the default identity associated with the Azure Cosmos DB account is no longer authorized to perform a get, wrap, or unwrap call against the Key Vault.
After assigning the permissions, wait upwards to one hour for the account to sto
## Microsoft Entra Token Acquisition error
-### Reason for error?
+### Reason for error
You see this error when Azure Cosmos DB is unable to obtain the default identity's Microsoft Entra access token. The token is used to communicate with the Azure Key Vault in order to wrap and unwrap the data encryption key.
After updating the account's default identity, you need to wait upwards to one h
## Azure Key Vault Resource not found
-### Reason for error?
+### Reason for error
You see this error when the Azure Key Vault or the specified key isn't found.
Check if the Azure Key Vault or the specified key exist and restore them if acci
## Invalid Azure Cosmos DB default identity
-### Reason for error?
+### Reason for error
The Azure Cosmos DB account goes into revoke state if it doesn't have any of these identity types set as a default identity:
Make sure that your default identity is that of a valid Azure resource and has a
## Improper Syntax Detected on the Key Vault URI Property
-### Reason for error?
+### Reason for error
You see this error when internal validation detects that the Key Vault URI property on the Azure Cosmos DB account is different than expected.
Once the _KeyVaultKeyUri_ property has been updated, wait upwards to one hour fo
## Internal unwrapping procedure error
-### Reason for error?
+### Reason for error
You see the error message when the Azure Cosmos DB service is unable to unwrap the key properly.
If either the Key Vault or the Customer Managed Key has been recently
## Unable to Resolve the Key Vault's DNS
-### Reason for error?
+### Reason for error
You see the error message when the Key Vault DNS name can't be resolved. The error may indicate that there's a major issue within the Azure Key Vault service that blocks Cosmos DB from accessing your key.
cosmos-db Cosmos Db Vs Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/cosmos-db-vs-mongodb-atlas.md
Previously updated : 06/03/2023 Last updated : 02/27/2024 # Comparing MongoDB Atlas and Azure Cosmos DB for MongoDB
Last updated 06/03/2023
## Azure Cosmos DB for MongoDB vs MongoDB Atlas
-| Feature | Azure Cosmos DB for MongoDB | Vendor-managed MongoDB Atlas |
+| Feature | Azure Cosmos DB for MongoDB | MongoDB Atlas by MongoDB, Inc |
|||-| | MongoDB wire protocol | Yes | Yes | | Compatible with MongoDB tools and drivers | Yes | Yes |
-| Global Distribution | Yes, [globally distributed](../distribute-data-globally.md) with automatic and fast data replication across any number of Azure regions | Yes, globally distributed with manual and scheduled data replication across any number of cloud providers or regions |
-| SLA covers cloud platform | Yes | No. "Services, hardware, or software provided by a third party, such as cloud platform services on which MongoDB Atlas runs are not covered" |
-| 99.999% availability SLA | [Yes](../high-availability.md) | No |
-| Instantaneous Scaling | Yes, [database instantaneously scales](../provision-throughput-autoscale.md) with zero performance impact on your applications | No, requires 1+ hours to vertically scale up and 24+ hours to vertically scale down. Performance impact during scale up may be noticeable |
-| True active-active clusters | Yes, with [multi-primary writes](./how-to-configure-multi-region-write.md). Data for the same shard can be written to multiple regions | No |
-| Vector Search for AI applications | Yes, with [Azure Cosmos DB for MongoDB vCore Vector Search](./vcore/vector-search.md) | Yes |
-| Vector Search in Free Tier | Yes, with [Azure Cosmos DB for MongoDB vCore Vector Search](./vcore/vector-search.md) | No |
+| Global Distribution | Yes, [globally distributed](../distribute-data-globally.md) with automatic and fast data replication across any number of Azure regions | Yes, globally distributed with automatic and fast data replication across supported cloud providers or regions |
+| 99.999% availability SLA | [Yes](../high-availability.md) | No. MongoDB Atlas offers a 99.995% availability SLA |
+| SLA covers cloud platform | Yes | No. For more details, read the MongoDB Atlas SLA |
+| Instantaneous and automatic scaling | Yes. Azure Cosmos DB RU-based deployments [automatically and instantaneously scale 10x with zero performance impact](../provision-throughput-autoscale.md) on your applications. Scaling of vCore-based instances is managed by users | Scaling of Atlas dedicated instances is managed by users, or happens automatically after the workload is analyzed over a day |
+| Multi-region writes (also known as multi-master) | Yes. With multi-region writes, customers can update any document in any region, enabling the 99.999% availability SLA | Yes. With multi-region zones, customers can configure different write regions per shard. Data within a single shard is writable in a single region. |
+| Limitless scale | Azure Cosmos DB provides the ability to scale RUs up to and beyond a billion requests per second, with unlimited storage, fully managed, as a service. vCore-based Azure Cosmos DB for MongoDB deployments support scaling through sharding | MongoDB Atlas deployments support scaling through sharding. |
+| Independent scaling for throughput and storage | Yes, with RU-based Azure Cosmos DB for MongoDB | No |
+| Vector Search for AI applications | Yes, with [vCore-based Azure Cosmos DB for MongoDB](./vcore/vector-search.md) | Yes, with MongoDB Atlas dedicated instances |
| Integrated text search, geospatial processing | Yes | Yes |
-| Free tier | [1,000 request units (RUs) and 25 GB storage forever](../try-free.md). Prevents you from exceeding limits if you want. Azure Cosmos DB for MognoDB vCore offers Free Tier with 32GB storage forever. | Yes, with 512 MB storage |
+| Free tier | [1,000 request units (RUs) and 25 GB storage forever](../try-free.md). Prevents you from exceeding limits if you want. vCore-based Azure Cosmos DB for MongoDB offers Free Tier with 32GB storage forever. | Yes, with 512 MB storage |
| Live migration | Yes | Yes |
-| Azure Integrations | Native [first-party integrations](./integrations-overview.md) with Azure services such as Azure Functions, Azure Logic Apps, Azure Stream Analytics, and Power BI and more | Limited number of third party integrations |
-| Choice of instance configuration | Yes, with [Azure Cosmos DB for MongoDB vCore](./vcore/introduction.md) | Yes |
-| Expert Support | Microsoft, with 24x7 support for Azure Cosmos DB. One support team to address all of your Azure products | MongoDB, Inc. with 24x7 support for MongoDB Atlas. Need separate support teams depending on the product. Support plans costs rise significantly depending on response time chosen |
-| Support for MongoDB multi-document ACID transactions | Yes, with [Azure Cosmos DB for MongoDB vCore](./vcore/introduction.md) | Yes |
+| Azure Integrations | [Native first-party integrations with Azure services](./integrations-overview.md) | Third party integrations, including some native Azure services |
+| Choice of instance configuration | Yes, with [vCore-based Azure Cosmos DB for MongoDB](./vcore/introduction.md) | Yes |
+| Expert Support | 24x7 support provided by Microsoft for Azure Cosmos DB. An Azure Support contract covers all Azure products, including Azure Cosmos DB, which allows you to work with one support team without additional support costs | 24x7 support provided by MongoDB for MongoDB Atlas with various SLA options available |
+| Support for MongoDB multi-document ACID transactions | Yes, with [vCore-based Azure Cosmos DB for MongoDB](./vcore/introduction.md) | Yes |
| JSON data type support | BSON (Binary JSON) | BSON (Binary JSON) |
-| Support for MongoDB aggregation pipeline | Yes | Yes |
+| Support for MongoDB aggregation pipeline | Yes. Supports MongoDB wire protocol v5.0 in RU-based and v6.0 in vCore-based deployments | Yes |
| Maximum document size | 16 MB | 16 MB | | JSON schema for data governance controls | Currently in development | Yes | | Integrated text search | Yes | Yes | | Integrated querying of data in cloud object storage | Yes, with Synapse Link | Yes | | Blend data with joins and unions for analytics queries | Yes | Yes | | Performance recommendations | Yes, with native Microsoft tools | Yes |
-| Replica set configuration | Yes, with [Azure Cosmos DB for MongoDB vCore](./vcore/introduction.md) | Yes |
-| Automatic sharding support | Yes | Limited, since the number of shards must be configured by the developer. Extra costs apply for additional configuration servers. |
+| Replica set configuration | Yes, with [vCore-based Azure Cosmos DB for MongoDB](./vcore/introduction.md) | Yes |
+| Sharding support | Azure Cosmos DB supports automatic, server-side sharding. It manages shard creation, placement, and balancing automatically | Multiple sharding methodologies supported to fit various use cases. Sharding strategy can be changed without impacting the application |
| Pause and resume clusters | Currently in development | Yes |
-| Data explorer | Yes, using native Azure tooling and MongoDB tooling such as Robo3T | Yes |
+| Data explorer | Yes, using native Azure tooling and Azure Cosmos DB Explorer. Support for third party tools such as Robo3T | Yes, using native MongoDB tools such as Compass and Atlas Data Explorer. Support for third party tools such as Robo3T |
| Cloud Providers | Azure. MongoDB wire protocol compatibility enables you to remain vendor-agnostic | Azure, AWS, and Google Cloud | | SQL-based connectivity | Yes | Yes |
-| Native data visualization without 3rd party BI tools | Yes, using Power BI | Yes |
+| Native data visualization without 3rd party BI tools | Yes, using Power BI | Yes, with Atlas Charts |
| Database supported in on-premises and hybrid deployments | No | Yes | | Embeddable database with sync for mobile devices | No, due to low user demand | Yes | | Granular role-based access control | Yes | Yes |
Last updated 06/03/2023
| Client-side field level encryption | Yes | Yes | | LDAP Integration | Yes | Yes | | Database-level auditing | Yes | Yes |
+| Multi-document ACID transactions across collections and partitions | Yes | Yes |
+| Continuous backup with on-demand restore | Yes | Yes |
## Next steps
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/free-tier.md
Azure Cosmos DB for MongoDB vCore now introduces a new SKU, the "Free Tier," ena
boasting command and feature parity with a regular Azure Cosmos DB for MongoDB vCore account. It makes it easy for you to get started, develop, test your applications, or even run small production workloads for free. With Free Tier, you get a dedicated MongoDB cluster with 32-GB storage, perfect
-for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the East US, West Europe and Southeast Asia regions.
+for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the West Europe, Southeast Asia, East US and East US 2 regions.
## Get started
specify your storage requirements, and you're all set. Rest assured, your data,
## Restrictions
-* For a given subscription, only one free tier account is permissible in a region.
-* Free tier is currently available in East US, West Europe and Southeast Asia regions only.
+* For a given subscription, only one free tier account is permissible.
+* Free tier is currently available in West Europe, Southeast Asia, East US and East US 2 regions only.
* High availability, Azure Active Directory (Azure AD) and Diagnostic Logging are not supported.
cosmos-db Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/security.md
Let's dig into each one in detail.
|Security requirement|Azure Cosmos DB's security approach| |||
-|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB for MongoDB vCore supports policy driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems. However, they're expanded so that an Azure Cosmos DB for MongoDB vCore cluster is only accessible from an approved set of machines or cloud services. <br><br>Azure Cosmos DB for MongoDB vCore enables you to enable a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB for MongoDB vCore. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br> You can use [virtual network service tags](../../../virtual-network/service-tags-overview.md) to achieve network isolation and protect your Azure Cosmos DB for MongoDB vCore resources from the general Internet. Use service tags in place of specific IP addresses when you create security rules. By specifying the service tag name (for example, AzureCosmosDB) in the appropriate source or destination field of a rule, you can allow or deny the traffic for the corresponding service.|
+|Network security|Using an IP firewall is the first layer of protection to secure your database. Azure Cosmos DB for MongoDB vCore supports policy-driven IP-based access controls for inbound firewall support. The IP-based access controls are similar to the firewall rules used by traditional database systems. However, they're expanded so that an Azure Cosmos DB for MongoDB vCore cluster is only accessible from an approved set of machines or cloud services. <br><br>Azure Cosmos DB for MongoDB vCore lets you allow a specific IP address (168.61.48.0), an IP range (168.61.48.0/8), and combinations of IPs and ranges. <br><br>All requests originating from machines outside this allowed list are blocked by Azure Cosmos DB for MongoDB vCore. Requests from approved machines and cloud services then must complete the authentication process to be given access control to the resources.<br><br>|
|Local replication|Even within a single data center, Azure Cosmos DB for MongoDB vCore replicates the data using LRS. HA-enabled clusters also have another layer of replication between a primary and secondary node, thus guaranteeing a 99.995% [availability SLA](https://azure.microsoft.com/support/legal/sla/cosmos-db).| |Automated online backups|Azure Cosmos DB for MongoDB vCore databases are backed up regularly and stored in a geo redundant store. | |Restore deleted data|The automated online backups can be used to recover data you may have accidentally deleted up to ~7 days after the event. |
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
description: You use Cost Management + Billing features to conduct billing admin
Previously updated : 08/07/2023 Last updated : 02/28/2024
Organizing and allocating costs are critical to ensuring invoices are routed to
- **Subscriptions** and **resource groups** are the lowest level at which you can organize your cloud solutions. At Microsoft, every product ΓÇô sometimes even limited to a single region ΓÇô is managed within its own subscription. It simplifies cost governance but requires more overhead for subscription management. Most organizations use subscriptions for business units and separating dev/test from production or other environments, then use resource groups for the products. It complicates cost management because resource group owners don't have a way to manage cost across resource groups. On the other hand, it's a straightforward way to understand who's responsible for most resource-based charges. Keep in mind that not all charges come from resources and some don't have resource groups or subscriptions associated with them. It also changes as you move to MCA billing accounts. - **Resource tags** are the only way to add your own business context to cost details and are perhaps the most flexible way to map resources to applications, business units, environments, owners, etc. For more information, see [How tags are used in cost and usage data](./costs/understand-cost-mgt-data.md#how-tags-are-used-in-cost-and-usage-data) for limitations and important considerations.
-In addition to organizing resources and subscriptions using the subscription hierarchy and metadata (tags), Cost Management also offers the ability to *move* or split shared costs via cost allocation rules. Cost allocation doesn't change the invoice. Cost allocation simply moves charges from one subscription, resource group, or tag to another subscription, resource group, or tag. The goal of cost allocation is to split and move shared costs to reduce overhead. And, to more accurately report on where charges are ultimately coming from (albeit indirectly), which should drive more complete accountability. For more information, see [Allocate Azure costs](./costs/allocate-costs.md).
+Cost allocation is the set of practices for dividing up a consolidated invoice, or for billing the people responsible for its various component parts. It's the process of assigning costs to different groups within an organization based on their consumption of resources and application of benefits. By providing visibility into costs to the groups responsible for them, cost allocation helps organizations track and optimize their spending, improve budgeting and forecasting, and increase accountability and transparency. For more information, see [Cost allocation](./costs/cost-allocation-introduction.md).
How you organize and allocate costs plays a huge role in how people within your organization can manage and optimize costs. Be sure to plan ahead and revisit your allocation strategy yearly.
cost-management-billing Capabilities Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-allocation.md
Cost allocation is usually an afterthought and requires some level of cleanup wh
- Consider a top-down approach that prioritizes getting departmental costs in place before optimizing at the lowest project and environment level. You may want to implement it in phases, depending on how broad and deep your organization is. - Enable [tag inheritance in Cost Management](../costs/enable-tag-inheritance.md) to copy subscription and resource group tags in cost data only. It doesn't change tags on your resources. - Use Azure Policy to [enforce your tagging strategy](../../azure-resource-manager/management/tag-policies.md), automate the application of tags at scale, and track compliance status. Use compliance as a KPI for your tagging strategy.
- - If you need to move costs between subscriptions, resource groups, or add or change tags, [configure allocation rules in Cost Management](../costs/allocate-costs.md). Cost allocation is covered in detail at [Managing shared costs](capabilities-shared-cost.md).
+ - If you need to move costs between subscriptions, resource groups, or add or change tags, [configure allocation rules in Cost Management](../costs/allocate-costs.md). For more information about cost allocation in Microsoft Cost Management, see [Introduction to cost allocation](../costs/cost-allocation-introduction.md). Cost allocation is covered in detail at [Managing shared costs](capabilities-shared-cost.md).
- Consider [grouping related resources together with the ΓÇ£cm-resource-parentΓÇ¥ tag](../costs/group-filter.md#group-related-resources-in-the-resources-view) to view costs together in Cost analysis. - Distribute responsibility for any remaining change to scale out and drive efficiencies. - Make note of any unallocated costs or costs that should be split but couldn't be. You cover it as part of [Managing shared costs](capabilities-shared-cost.md).
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
data factory from the resources list.
4. Select + **New** under **Managed private endpoints**. 5. Select the **Private Link Service** tile from the list and select **Continue**. 6. Enter the name of private endpoint and select **myPrivateLinkService** in private link service list.
-7. Add the `<FQDN>,<port>` of your target on-premises SQL Server. By default, port is 1433.
+7. Add the `<FQDN>` of your target on-premises SQL Server.
:::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-6.png" alt-text="Screenshot that shows the private endpoint settings.":::
databox-online Azure Stack Edge Connect Powershell Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-connect-powershell-interface.md
Previously updated : 04/14/2022 Last updated : 02/27/2024 # Manage an Azure Stack Edge Pro FPGA device via Windows PowerShell + Azure Stack Edge Pro FPGA solution lets you process data and send it over the network to Azure. This article describes some of the configuration and management tasks for your Azure Stack Edge Pro FPGA device. You can use the Azure portal, local web UI, or the Windows PowerShell interface to manage your device. This article focuses on the tasks you do using the PowerShell interface.
databox-online Azure Stack Edge Contact Microsoft Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-contact-microsoft-support.md
Previously updated : 06/09/2021 Last updated : 02/27/2024
[!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-fpga-databox-gateway-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-fpga-databox-gateway-sku.md)] + This article applies to Azure Stack Edge and Azure Data Box Gateway both of which are managed by the Azure Stack Edge / Azure Data Box Gateway service. If you encounter any issues with your service, you can create a service request for technical support. This article walks you through: * How to create a support request.
databox-online Azure Stack Edge Create Iot Edge Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-create-iot-edge-module.md
Previously updated : 08/06/2019 Last updated : 02/27/2024 # Develop a C# IoT Edge module to move files with Azure Stack Edge Pro FPGA + This article steps you through how to create an IoT Edge module for deployment with your Azure Stack Edge Pro FPGA device. Azure Stack Edge Pro FPGA is a storage solution that allows you to process data and send it over network to Azure. You can use Azure IoT Edge modules with your Azure Stack Edge Pro FPGA to transform the data as it moved to Azure. The module used in this article implements the logic to copy a file from a local share to a cloud share on your Azure Stack Edge Pro FPGA device.
databox-online Azure Stack Edge Deploy Add Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-add-shares.md
Previously updated : 01/04/2021 Last updated : 02/27/2024 # Customer intent: As an IT admin, I need to understand how to add and connect to shares on Azure Stack Edge Pro FPGA so I can use it to transfer data to Azure. # Tutorial: Transfer data with Azure Stack Edge Pro FPGA + This tutorial describes how to add and connect to shares on your Azure Stack Edge Pro FPGA device. After you've added the shares, Azure Stack Edge Pro FPGA can transfer data to Azure. This procedure can take around 10 minutes to complete.
databox-online Azure Stack Edge Deploy Configure Compute Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-configure-compute-advanced.md
Previously updated : 01/06/2021 Last updated : 02/27/2024 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro FPGA for advanced deployment flow so I can use it to transform the data before sending it to Azure. # Tutorial: Transform data with Azure Stack Edge Pro FPGA for advanced deployment flow + This tutorial describes how to configure a compute role for an advanced deployment flow on your Azure Stack Edge Pro FPGA device. After you configure the compute role, Azure Stack Edge Pro FPGA can transform data before sending it to Azure. Compute can be configured for simple or advanced deployment flow on your device.
databox-online Azure Stack Edge Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-configure-compute.md
Previously updated : 01/06/2021 Last updated : 02/27/2024 # Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro FPGA so I can use it to transform the data before sending it to Azure. # Tutorial: Transform the data with Azure Stack Edge Pro FPGA + This tutorial describes how to configure a compute role on your Azure Stack Edge Pro FPGA device. After you configure the compute role, Azure Stack Edge Pro FPGA can transform data before sending it to Azure. This procedure can take around 10 to 15 minutes to complete.
databox-online Azure Stack Edge Deploy Connect Setup Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-connect-setup-activate.md
Previously updated : 03/28/2019 Last updated : 02/27/2024 # Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro FPGA so I can use it to transfer data to Azure.
-# Tutorial: Connect, set up, and activate Azure Stack Edge Pro FPGA
+# Tutorial: Connect, set up, and activate Azure Stack Edge Pro FPGA
+ This tutorial describes how you can connect to, set up, and activate your Azure Stack Edge Pro FPGA device by using the local web UI.
databox-online Azure Stack Edge Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-install.md
Previously updated : 01/17/2020 Last updated : 02/27/2024 # Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro FPGA in datacenter so I can use it to transfer data to Azure. # Tutorial: Install Azure Stack Edge Pro FPGA + This tutorial describes how to install a Azure Stack Edge Pro FPGA physical device. The installation procedure involves unpacking, rack mounting, and cabling the device. The installation can take around two hours to complete.
databox-online Azure Stack Edge Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-deploy-prep.md
Previously updated : 07/23/2021 Last updated : 02/27/2024 # Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro FPGA so I can use it to transfer data to Azure.
-# Tutorial: Prepare to deploy Azure Stack Edge Pro FPGA
+# Tutorial: Prepare to deploy Azure Stack Edge Pro FPGA
+ This is the first tutorial in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro FPGA. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
databox-online Azure Stack Edge Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-limits.md
Previously updated : 10/12/2020 Last updated : 02/27/2024 # Azure Stack Edge limits + Consider these limits as you deploy and operate your Microsoft Azure Stack Edge Pro GPU or Azure Stack Edge Pro FPGA solution. ## Azure Stack Edge service limits
databox-online Azure Stack Edge Manage Access Power Connectivity Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-manage-access-power-connectivity-mode.md
Previously updated : 06/24/2019 Last updated : 02/27/2024 # Manage access, power, and connectivity mode for your Azure Stack Edge Pro FPGA + This article describes how to manage the access, power, and connectivity mode for your Azure Stack Edge Pro FPGA. These operations are performed via the local web UI or the Azure portal. In this article, you learn how to:
databox-online Azure Stack Edge Manage Bandwidth Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-manage-bandwidth-schedules.md
Previously updated : 03/22/2019 Last updated : 02/27/2024 # Use the Azure portal to manage bandwidth schedules on your Azure Stack Edge Pro FPGA + This article describes how to manage users on your Azure Stack Edge Pro FPGA. Bandwidth schedules allow you to configure network bandwidth usage across multiple time-of-day schedules. These schedules can be applied to the upload and download operations from your device to the cloud. You can add, modify, or delete the bandwidth schedules for your Azure Stack Edge Pro FPGA via the Azure portal.
databox-online Azure Stack Edge Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-manage-compute.md
Previously updated : 01/06/2021 Last updated : 02/27/2024 # Manage compute on your Azure Stack Edge Pro FPGA + This article describes how to manage compute on your Azure Stack Edge Pro FPGA. You can manage the compute via the Azure portal or via the local web UI. Use the Azure portal to manage modules, triggers, and compute configuration, and the local web UI to manage compute settings. In this article, you learn how to:
databox-online Azure Stack Edge Manage Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-manage-shares.md
Previously updated : 01/04/2021 Last updated : 02/27/2024 # Use the Azure portal to manage shares on Azure Stack Edge Pro FPGA + This article describes how to manage shares on your Azure Stack Edge Pro FPGA device. You can manage the Azure Stack Edge Pro FPGA device via the Azure portal or via the local web UI. Use the Azure portal to add, delete, refresh shares, or sync storage key for storage account associated with the shares. ## About shares
databox-online Azure Stack Edge Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-manage-users.md
Previously updated : 01/05/2021 Last updated : 02/27/2024 # Use the Azure portal to manage users on your Azure Stack Edge Pro FPGA + This article describes how to manage users on your Azure Stack Edge Pro FPGA device. You can manage the Azure Stack Edge Pro FPGA via the Azure portal or via the local web UI. Use the Azure portal to add, modify, or delete users. In this article, you learn how to:
databox-online Azure Stack Edge Migrate Fpga Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-migrate-fpga-gpu.md
Previously updated : 07/07/2023 Last updated : 02/27/2024 # Migrate workloads from an Azure Stack Edge Pro FPGA to an Azure Stack Edge Pro 2 or Azure Stack Edge GPU device + This article describes how to migrate workloads and data from an Azure Stack Edge Pro FPGA device to an Azure Stack Edge Pro 2 or Pro GPU device. The migration process begins with selection of a new device, a migration plan, and a review of migration considerations. The migration procedure gives detailed steps ending with verification and device cleanup. [!INCLUDE [Azure Stack Edge Pro FPGA end-of-life](../../includes/azure-stack-edge-fpga-eol.md)]
databox-online Azure Stack Edge Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-monitor.md
Previously updated : 10/11/2021 Last updated : 02/27/2024 # Monitor your Azure Stack Edge device [!INCLUDE [applies-to-GPU-and-pro-r-mini-r-and-fpga-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-fpga-sku.md)] + This article describes how to monitor your Azure Stack Edge device. To monitor your device, you can use the Azure portal or the local web UI. Use the Azure portal to view metrics, view device events, and configure and manage alerts. Use the local web UI on your physical device to view the hardware status of the various device components. In this article, you learn how to:
databox-online Azure Stack Edge Return Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-return-device.md
Previously updated : 04/27/2023 Last updated : 02/27/2024
[!INCLUDE [applies-to-pro-fpga](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-fpga-sku.md)] + This article describes how to wipe the data and then return your Azure Stack Edge device. After you've returned the device, you can also delete the resource associated with the device. In this article, you learn how to:
databox-online Azure Stack Edge Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-security.md
Previously updated : 08/21/2019 Last updated : 02/27/2024 # Azure Stack Edge security and data protection + Security is a major concern when you're adopting a new technology, especially if the technology is used with confidential or proprietary data. Azure Stack Edge helps you ensure that only authorized entities can view, modify, or delete your data. This article describes the Azure Stack Edge security features that help protect each of the solution components and the data stored in them.
databox-online Azure Stack Edge System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-system-requirements.md
Previously updated : 04/26/2021 Last updated : 02/27/2024 # Azure Stack Edge Pro FPGA system requirements + This article describes the important system requirements for your Microsoft Azure Stack Edge Pro FPGA solution and for the clients connecting to Azure Stack Edge Pro FPGA. We recommend that you review the information carefully before you deploy your Azure Stack Edge Pro FPGA. You can refer back to this information as necessary during the deployment and subsequent operation. The system requirements for the Azure Stack Edge Pro FPGA include:
databox-online Azure Stack Edge Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-compliance.md
Previously updated : 04/12/2023 Last updated : 02/27/2024 # Azure Stack Edge Pro FPGA technical specifications + The hardware components of your Microsoft Azure Stack Edge Pro FPGA device adhere to the technical specifications and regulatory standards outlined in this article. The technical specifications describe the Power supply units (PSUs), storage capacity, enclosures, and environmental standards. ## Compute, memory specifications
databox-online Azure Stack Edge Technical Specifications Power Cords Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-power-cords-regional.md
Previously updated : 03/09/2023 Last updated : 02/27/2024
[!INCLUDE [applies-to-gpu-and-pro-fpga-sku](../../includes/azure-stack-edge-applies-to-gpu-pro-fpga-sku.md)] + Your Azure Stack Edge device will need a power cord that will vary depending on your Azure region. ## Supported power cords
databox-online Azure Stack Edge Troubleshoot Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-troubleshoot-iot-edge.md
Previously updated : 06/08/2021 Last updated : 02/27/2024 # Troubleshoot IoT Edge issues on your Azure Stack Edge FPGA device [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)] + This article describes how to troubleshoot compute-related errors on an Azure Stack Edge FPGA device by reviewing IoT Edge agent runtime responses. ## IoT Edge runtime responses
databox-online Azure Stack Edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-troubleshoot.md
Previously updated : 06/08/2021 Last updated : 02/24/2024 # Troubleshoot your Azure Stack Edge Pro FPGA issues + This article describes how to troubleshoot issues on your Azure Stack Edge Pro FPGA. In this article, you learn how to:
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
A Defender for Endpoint tenant is automatically created when you use Defender f
- **Moving subscriptions:** If you've moved your Azure subscription between Azure tenants, some manual preparatory steps are required before Defender for Cloud will deploy Defender for Endpoint. For full details, [contact Microsoft support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
-## Next Steps
+## Next step
-[Enable the Microsoft Defender for Endpoint integration](enable-defender-for-endpoint.md).
+[Enable the Microsoft Defender for Endpoint integration](enable-defender-for-endpoint.md)
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
To learn how to apply JIT to your VMs using the Azure portal (either Defender fo
Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.
-## Why JIT VM access is the solution
+## Why JIT VM access is the solution
As with all cybersecurity prevention techniques, your goal should be to reduce the attack surface. In this case, that means having fewer open ports, especially management ports.
To solve this dilemma, Microsoft Defender for Cloud offers JIT. With JIT, you ca
## How JIT operates with network resources in Azure and AWS
-In Azure, you can block inbound traffic on specific ports by enabling just-in-time VM access. Defender for Cloud ensures "deny all inbound traffic" rules exist for your selected ports in the [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) and [Azure Firewall rules](../firewall/rule-processing.md). These rules restrict access to your Azure VMs' management ports and defend them from attack.
+In Azure, you can block inbound traffic on specific ports by enabling just-in-time VM access. Defender for Cloud ensures "deny all inbound traffic" rules exist for your selected ports in the [network security group](../virtual-network/network-security-groups-overview.md#security-rules) (NSG) and [Azure Firewall rules](../firewall/rule-processing.md). These rules restrict access to your Azure VMs' management ports and defend them from attack.
If other rules already exist for the selected ports, then those existing rules take priority over the new "deny all inbound traffic" rules. If there are no existing rules on the selected ports, then the new rules take top priority in the NSG and Azure Firewall.
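As an illustration only (the NSG name, resource group, rule name, and priority below are example values, not names Defender for Cloud uses), this is the kind of inbound deny rule being described, expressed with the Az.Network cmdlets:

```azurepowershell
# Illustration: a deny rule for RDP (3389), similar in effect to the rules Defender for Cloud manages.
$nsg = Get-AzNetworkSecurityGroup -Name "myVmNsg" -ResourceGroupName "myResourceGroup"
$nsg | Add-AzNetworkSecurityRuleConfig -Name "ExampleDenyRdpInbound" -Access Deny -Direction Inbound `
    -Priority 1000 -Protocol "*" -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 3389 |
    Set-AzNetworkSecurityGroup
```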
When a user requests access to a VM, Defender for Cloud checks that the user has
## How Defender for Cloud identifies which VMs should have JIT applied
-The following diagram shows the logic that Defender for Cloud applies when deciding how to categorize your supported VMs:
+The following diagram shows the logic that Defender for Cloud applies when deciding how to categorize your supported VMs:
### [**Azure**](#tab/defender-for-container-arch-aks)+ [![Just-in-time (JIT) virtual machine (VM) logic flow.](media/just-in-time-explained/jit-logic-flow.png)](media/just-in-time-explained/jit-logic-flow.png#lightbox) ### [**AWS**](#tab/defender-for-container-arch-eks)+ :::image type="content" source="media/just-in-time-explained/aws-jit-logic-flow.png" alt-text="A chart that explains the logic flow for the AWS Just in time (J I T) virtual machine (V M) logic flow.":::
-When Defender for Cloud finds a machine that can benefit from JIT, it adds that machine to the recommendation's **Unhealthy resources** tab.
+When Defender for Cloud finds a machine that can benefit from JIT, it adds that machine to the recommendation's **Unhealthy resources** tab.
![Just-in-time (JIT) virtual machine (VM) access recommendation.](./media/just-in-time-explained/unhealthy-resources.png)
-## Next steps
+## Next step
This page explained why just-in-time (JIT) virtual machine (VM) access should be used. To learn how to enable JIT and request access to your JIT-enabled VMs:
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
In this article, you learn how to include JIT in your security program, includin
## Prerequisites -- JIT requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) to be enabled on the subscription.
+- JIT requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features) to be enabled on the subscription.
- **Reader** and **SecurityReader** roles can both view the JIT status and parameters.
In this article, you learn how to include JIT in your security program, includin
| To enable a user to: | Permissions to set| | | |
- |Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
+ |Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> | |Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li>|
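As a minimal sketch (the role name and assignable scope are placeholders), the "Request JIT access to a VM" actions from the table above can be wrapped into a custom role with Azure PowerShell:

```azurepowershell
# Sketch: a custom role limited to requesting JIT access, built from the actions listed above.
$role = Get-AzRoleDefinition -Name "Reader"   # clone an existing definition object as a template
$role.Id = $null
$role.Name = "JIT Access Requester (example)"
$role.Description = "Can request just-in-time access to a VM, and nothing else."
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action")
$role.Actions.Add("Microsoft.Security/locations/jitNetworkAccessPolicies/*/read")
$role.Actions.Add("Microsoft.Compute/virtualMachines/read")
$role.Actions.Add("Microsoft.Network/networkInterfaces/*/read")
$role.Actions.Add("Microsoft.Network/publicIPAddresses/read")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzRoleDefinition -Role $role
```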
In this article, you learn how to include JIT in your security program, includin
- To set up JIT on your Amazon Web Service (AWS) VM, you need to [connect your AWS account](quickstart-onboard-aws.md) to Microsoft Defender for Cloud. > [!TIP]
- > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
-
+ > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
> [!NOTE]
-> In order to successfully create a custom JIT policy, the policy name, together with the targeted VM name, must not exceed a total of 56 characters.
+> In order to successfully create a custom JIT policy, the policy name, together with the targeted VM name, must not exceed a total of 56 characters.
## Work with JIT VM access using Microsoft Defender for Cloud
From Defender for Cloud, you can enable and configure the JIT VM access.
1. Open the **Workload protections** and, in the advanced protections, select **Just-in-time VM access**.
-1. In the **Not configured** virtual machines tab, mark the VMs to protect with JIT and select **Enable JIT on VMs**.
+1. In the **Not configured** virtual machines tab, mark the VMs to protect with JIT and select **Enable JIT on VMs**.
The JIT VM access page opens listing the ports that Defender for Cloud recommends protecting: - 22 - SSH - 3389 - RDP
- - 5985 - WinRM
+ - 5985 - WinRM
- 5986 - WinRM To customize the JIT access:
To edit the existing JIT rules for a VM:
1. Open the **Workload protections** and, in the advanced protections, select **Just-in-time VM access**.
-1. In the **Configured** virtual machines tab, right-click on a VM and select **Edit**.
+1. In the **Configured** virtual machines tab, right-click on a VM and select **Edit**.
1. In the **JIT VM access configuration**, you can either edit the list of ports or select **Add** to add a new custom port. 1. When you finish editing the ports, select **Save**.
-### Request access to a JIT-enabled VM from Microsoft Defender for Cloud
+### Request access to a JIT-enabled VM from Microsoft Defender for Cloud
When a VM has JIT enabled, you have to request access to connect to it. You can request access in any of the supported ways, regardless of how you enabled JIT.
You can enable JIT on a VM from the Azure virtual machines pages of the Azure po
> [!TIP] > If a VM already has JIT enabled, the VM configuration page shows that JIT is enabled. You can use the link to open the JIT VM access page in Defender for Cloud to view and change the settings.
-1. From the [Azure portal](https://portal.azure.com), search for and select **Virtual machines**.
+1. From the [Azure portal](https://portal.azure.com), search for and select **Virtual machines**.
1. Select the virtual machine you want to protect with JIT. 1. In the menu, select **Configuration**.
-1. Under **Just-in-time access**, select **Enable just-in-time**.
+1. Under **Just-in-time access**, select **Enable just-in-time**.
By default, just-in-time access for the VM uses these settings:
You can enable JIT on a VM from the Azure virtual machines pages of the Azure po
1. From Defender for Cloud's menu, select **Just-in-time VM access**.
- 1. From the **Configured** tab, right-click on the VM to which you want to add a port, and select **Edit**.
+ 1. From the **Configured** tab, right-click on the VM to which you want to add a port, and select **Edit**.
![Editing a JIT VM access configuration in Microsoft Defender for Cloud.](./media/just-in-time-access-usage/jit-policy-edit-security-center.png)
The following PowerShell commands create this JIT configuration:
``` 1. Insert the VM just-in-time VM access rules into an array:
-
+ ```azurepowershell $JitPolicyArr=@($JitPolicy) ``` 1. Configure the just-in-time VM access rules on the selected VM:
-
+ ```azurepowershell Set-AzJitNetworkAccessPolicy -Kind "Basic" -Location "LOCATION" -Name "default" -ResourceGroupName "RESOURCEGROUP" -VirtualMachine $JitPolicyArr ```
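For reference, the `$JitPolicy` value used in the steps above is a hashtable describing the VM's resource ID and the ports to protect. A minimal sketch follows; the resource ID, ports, and durations are example values:

```azurepowershell
# Example shape of a JIT policy object for one VM; adjust the resource ID, ports, and durations.
$JitPolicy = (@{
    id = "/subscriptions/SUBSCRIPTIONID/resourceGroups/RESOURCEGROUP/providers/Microsoft.Compute/virtualMachines/VMNAME"
    ports = (@{
        number = 22                             # SSH
        protocol = "*"
        allowedSourceAddressPrefix = @("*")     # consider restricting to known address ranges
        maxRequestAccessDuration = "PT3H"       # ISO 8601 duration: 3 hours
    },
    @{
        number = 3389                           # RDP
        protocol = "*"
        allowedSourceAddressPrefix = @("*")
        maxRequestAccessDuration = "PT3H"
    })
})
$JitPolicyArr = @($JitPolicy)
```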
Run the following commands in PowerShell:
```azurepowershell $JitPolicyArr=@($JitPolicyVm1) ```
-
+ 1. Send the access request (use the resource ID from step 1) ```azurepowershell
Learn more in the [PowerShell cmdlet documentation](/powershell/scripting/develo
#### Enable JIT on your VMs using the REST API
-The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
+The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
Learn more at [JIT network access policies](/rest/api/defenderforcloud/jit-network-access-policies). #### Request access to a JIT-enabled VM using the REST API
-The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
+The just-in-time VM access feature can be used via the Microsoft Defender for Cloud API. Use this API to get information about configured VMs, add new ones, request access to a VM, and more.
Learn more at [JIT network access policies](/rest/api/defenderforcloud/jit-network-access-policies).
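As a hedged illustration of the API shape (the subscription ID is a placeholder; confirm the current API version in the linked reference), existing JIT policies in a subscription can be listed with `Invoke-AzRestMethod`:

```azurepowershell
# Sketch: list JIT network access policies for a subscription through the Defender for Cloud REST API.
$subscriptionId = "<subscription-id>"
$path = "/subscriptions/$subscriptionId/providers/Microsoft.Security/jitNetworkAccessPolicies?api-version=2020-01-01"
$response = Invoke-AzRestMethod -Method GET -Path $path
($response.Content | ConvertFrom-Json).value | Select-Object name, location
```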
You can gain insights into VM activities using log search. To view the logs:
1. From **Just-in-time VM access**, select the **Configured** tab. 1. For the VM that you want to audit, open the ellipsis menu at the end of the row.
-
+ 1. Select **Activity Log** from the menu. ![Select just-in-time JIT activity log.](./media/just-in-time-access-usage/jit-select-activity-log.png)
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 02/07/2024 Last updated : 02/26/2024 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
|Date | Update | |-|-|
+| February 28 | [Updated security policy management expands support to AWS and GCP](#updated-security-policy-management-expands-support-to-aws-and-gcp) |
| February 26 | [Cloud support for Defender for Containers](#cloud-support-for-defender-for-containers) | | February 20 | [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | | February 18| [Open Container Initiative (OCI) image format specification support](#open-container-initiative-oci-image-format-specification-support) | | February 13 | [AWS container vulnerability assessment powered by Trivy retired](#aws-container-vulnerability-assessment-powered-by-trivy-retired) | | February 8 | [Recommendations released for preview: four recommendations for Azure Stack HCI resource type](#recommendations-released-for-preview-four-recommendations-for-azure-stack-hci-resource-type) |
+### Updated security policy management expands support to AWS and GCP
+
+February 28, 2024
+
+The updated experience for managing security policies, initially released in Preview for Azure, is expanding its support to cross cloud (AWS and GCP) environments. This Preview release includes:
+- Managing [regulatory compliance standards](update-regulatory-compliance-packages.md) in Defender for Cloud across Azure, AWS, and GCP environments.
+- The same cross-cloud interface experience for creating and managing [Microsoft Cloud Security Benchmark (MCSB) custom recommendations](manage-mcsb.md).
+- The updated experience is applied to AWS and GCP for [creating custom recommendations with a KQL query](create-custom-recommendations.md).
+ ### Cloud support for Defender for Containers February 26, 2024
See the [list of security recommendations](recommendations-reference.md).
January 31, 2024
-A new insight for Azure DevOps repositories has been added to the Cloud Security Explorer to indicate whether repositories are active. This insight indicates that the code repository is not archived or disabled, meaning that write access to code, builds, and pull requests is still available for users. Archived and disabled repositories might be considered lower priority as the code is not typically used in active deployments.
+A new insight for Azure DevOps repositories has been added to the Cloud Security Explorer to indicate whether repositories are active. This insight indicates that the code repository is not archived or disabled, meaning that write access to code, builds, and pull requests is still available for users. Archived and disabled repositories might be considered lower priority as the code isn't typically used in active deployments.
-To test out the query through Cloud Security Explorer, use [this query link](https://ms.portal.azure.com#view/Microsoft_Azure_Security/SecurityGraph.ReactView/query/%7B%22type%22%3A%22securitygraphquery%22%2C%22version%22%3A2%2C%22properties%22%3A%7B%22source%22%3A%7B%22type%22%3A%22datasource%22%2C%22properties%22%3A%7B%22sources%22%3A%5B%7B%22type%22%3A%22entity%22%2C%22properties%22%3A%7B%22source%22%3A%22azuredevopsrepository%22%7D%7D%5D%2C%22conditions%22%3A%7B%22type%22%3A%22conditiongroup%22%2C%22properties%22%3A%7B%22operator%22%3A%22and%22%2C%22conditions%22%3A%5B%7B%22type%22%3A%22insights%22%2C%22properties%22%3A%7B%22name%22%3A%226b8f221b-c0ce-48e3-9fbb-16f917b1c095%22%7D%7D%5D%7D%7D%7D%7D%7D%7D)
+To test out the query through Cloud Security Explorer, use [this query link](https://ms.portal.azure.com#view/Microsoft_Azure_Security/SecurityGraph.ReactView/query/%7B%22type%22%3A%22securitygraphquery%22%2C%22version%22%3A2%2C%22properties%22%3A%7B%22source%22%3A%7B%22type%22%3A%22datasource%22%2C%22properties%22%3A%7B%22sources%22%3A%5B%7B%22type%22%3A%22entity%22%2C%22properties%22%3A%7B%22source%22%3A%22azuredevopsrepository%22%7D%7D%5D%2C%22conditions%22%3A%7B%22type%22%3A%22conditiongroup%22%2C%22properties%22%3A%7B%22operator%22%3A%22and%22%2C%22conditions%22%3A%5B%7B%22type%22%3A%22insights%22%2C%22properties%22%3A%7B%22name%22%3A%226b8f221b-c0ce-48e3-9fbb-16f917b1c095%22%7D%7D%5D%7D%7D%7D%7D%7D%7D).
### Deprecation of security alerts and update of security alerts to informational severity level
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: Assign regulatory compliance standards in Microsoft Defender for Cloud description: Learn how to assign regulatory compliance standards in Microsoft Defender for Cloud. Previously updated : 11/20/2023 Last updated : 02/26/2024
In Defender for Cloud, you assign security standards to specific scopes such as
Defender for Cloud continually assesses the environment-in-scope against standards. Based on assessments, it shows in-scope resources as being compliant or noncompliant with the standard, and provides remediation recommendations.
-This article describes how to add regulatory compliance standards as security standards in an Azure subscriptions, AWS account, or GCP project.
+This article describes how to add regulatory compliance standards as security standards in an Azure subscription, AWS account, or GCP project.
## Before you start
defender-for-iot Defender Iot Firmware Analysis Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/defender-iot-firmware-analysis-rbac.md
To upload firmware images:
## Invite third parties to interact with your firmware analysis results You might want to invite someone to interact solely with your firmware analysis results, without allowing access to other parts of your organization (like other resource groups within your subscription). To allow this type of access, invite the user as a Firmware Analysis Admin at the FirmwareAnalysisRG Resource Group level.
-To invite a third party, follow the [Assign Azure roles to external guest users using the Azure portal](../../../articles/role-based-access-control/role-assignments-external-users.md#add-a-guest-user-to-your-directory) tutorial.
+To invite a third party, follow the [Assign Azure roles to external users using the Azure portal](../../../articles/role-based-access-control/role-assignments-external-users.md#invite-an-external-user-to-your-directory) tutorial.
* In step 3, navigate to the **FirmwareAnalysisRG** Resource Group. * In step 7, select the **Firmware Analysis Admin** role.
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
# Discover Enterprise IoT devices with an Enterprise IoT network sensor (Public preview)
+> [!IMPORTANT]
+> Registering a new Enterprise IoT network sensor as described in this article is no longer available. For customers with the Azure Consumption Revenue (ACR) or legacy license, Defender for IoT maintains existing Enterprise IoT network sensors.
+ This article describes how to register an Enterprise IoT network sensor in Microsoft Defender for IoT. Microsoft Defender XDR customers with an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft Defender XDR or Defender for IoT. You'll also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft Defender XDR for the newly discovered devices.
This section describes how to register an Enterprise IoT sensor in Defender for
**To register a sensor in the Azure portal**:
-1. Go to **Defender for IoT** > **Sites and sensors**, and then select **Onboard sensor** > **EIoT**.
+1. Go to **Defender for IoT** > **Sites and sensors**, and then select **Onboard sensor** > **EIoT**.
1. On the **Set up Enterprise IoT Security** page, enter the following details, and then select **Register**:
dns Dns Private Resolver Get Started Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-bicep.md
Title: 'Quickstart: Create an Azure DNS Private Resolver - Bicep'
description: Learn how to create Azure DNS Private Resolver. This article is a step-by-step quickstart to create and manage your first Azure DNS Private Resolver using Bicep. -- Previously updated : 10/07/2022++ Last updated : 02/28/2024
This quickstart describes how to use Bicep to create Azure DNS Private Resolver.
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
+The following figure summarizes the general setup used. Subnet address ranges used in templates are slightly different than those shown in the figure.
+
+![Conceptual figure displaying components of the private resolver.](./media/dns-resolver-getstarted-portal/resolver-components.png)
+ ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
This Bicep file is configured to create a:
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.network/azure-dns-private-resolver/main.bicep":::
-Seven resources have been defined in this template:
+Seven resources are defined in this template:
- [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks) - [**Microsoft.Network/dnsResolvers**](/azure/templates/microsoft.network/dnsresolvers)
Remove-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup
## Next steps
-In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains
+In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains.
- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 11/03/2023 Last updated : 02/28/2024 -+ #Customer intent: As an experienced network administrator, I want to create an Azure private DNS resolver, so I can resolve host names on my private virtual networks.
Azure DNS Private Resolver enables you to query Azure DNS private zones from an
## In this article: - Two VNets are created: **myvnet** and **myvnet2**.-- An Azure DNS Private Resolver is created in the first VNet with an inbound endpoint at **10.0.0.4**.
+- An Azure DNS Private Resolver is created in the first VNet with an inbound endpoint at **10.0.0.4**.
- A DNS forwarding ruleset is created for the private resolver. - The DNS forwarding ruleset is linked to the second VNet. - Example rules are added to the DNS forwarding ruleset.
An Azure subscription is required.
Before you can use **Microsoft.Network** services with your Azure subscription, you must register the **Microsoft.Network** namespace:
-1. Select the **Subscription** blade in the Azure portal, and then choose your subscription by selecting on it.
+1. Select the **Subscription** blade in the Azure portal, and then choose your subscription.
2. Under **Settings** select **Resource Providers**. 3. Select **Microsoft.Network** and then select **Register**.
First, create or choose an existing resource group to host the resources for you
Next, add a virtual network to the resource group that you created, and configure subnets.
-1. In the Azure portal, search for and select **Virtual networks**.
-2. On the **Virtual networks** page, select **Create**.
-3. On the **Basics** tab, select the resource group you just created, enter **myvnet** for the virtual network name, and select the **Region** that is the same as your resource group.
-4. Select the **IP Addresses** tab and enter an **IPv4 address space** of 10.0.0.0/16. This address range might be entered by default.
-5. Select the **default** subnet.
-6. Enter the following values on the **Edit subnet** page:
- - Name: snet-inbound
- - IPv4 address range: 10.0.0.0/16
- - Starting address: 10.0.0.0
- - Size: /28 (16 IP addresses)
- - Select **Save**
-7. Select **Add a subnet** and enter the following values on the **Add a subnet** page:
- - Subnet purpose: Default
- - Name: snet-outbound
- - IPv4 address range: 10.0.0.0/16
- - Starting address: 10.0.1.0
- - Size: /28 (16 IP addresses)
- - Select **Add**
-8. Select the **Review + create** tab and then select **Create**.
+1. Select the resource group you created, select **Create**, select **Networking** from the list of categories, and then next to **Virtual network**, select **Create**.
+2. On the **Basics** tab, enter a name for the new virtual network and select the **Region** that is the same as your resource group.
+3. On the **IP Addresses** tab, modify the **IPv4 address space** to be 10.0.0.0/8.
+4. Select **Add subnet** and enter the subnet name and address range:
+ - Subnet name: snet-inbound
+ - Subnet address range: 10.0.0.0/28
+ - Select **Add** to add the new subnet.
+5. Select **Add subnet** and configure the outbound endpoint subnet:
+ - Subnet name: snet-outbound
+ - Subnet address range: 10.1.1.0/28
+ - Select **Add** to add this subnet.
+6. Select **Review + create** and then select **Create**.
![create virtual network](./media/dns-resolver-getstarted-portal/virtual-network.png) ## Create a DNS resolver inside the virtual network
-1. In the Azure portal, search for **DNS Private Resolvers**.
+1. Open the Azure portal and search for **DNS Private Resolvers**.
2. Select **DNS Private Resolvers**, select **Create**, and then on the **Basics** tab for **Create a DNS Private Resolver** enter the following: - Subscription: Choose the subscription name you're using. - Resource group: Choose the name of the resource group that you created.
Next, add a virtual network to the resource group that you created, and configur
![create resolver - basics](./media/dns-resolver-getstarted-portal/dns-resolver.png) 3. Select the **Inbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myinboundendpoint).
-4. Next to **Subnet**, select the inbound endpoint subnet you created (ex: snet-inbound, 10.0.0.0/28).
-5. Next to **IP address assignment**, select **Static**.
-6. Next to IP address, enter **10.0.0.4** and then select **Save**.
-
- > [!NOTE]
- > You can choose a static or dynamic IP address for the inbound endpoint. A dynamic IP address is used by default. Typically the first available [non-reserved](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) IP address is assigned (example: 10.0.0.4). This dynamic IP address does not change unless the endpoint is deleted and reprovisioned (for example using a different subnet). In this example **Static** is selected and the first available IP address is entered.
-
+4. Next to **Subnet**, select the inbound endpoint subnet you created (ex: snet-inbound, 10.0.0.0/28) and then select **Save**.
5. Select the **Outbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myoutboundendpoint). 6. Next to **Subnet**, select the outbound endpoint subnet you created (ex: snet-outbound, 10.1.1.0/28) and then select **Save**. 7. Select the **Ruleset** tab, select **Add a ruleset**, and enter the following: - Ruleset name: Enter a name for your ruleset (ex: **myruleset**).
- - Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint).
+ - Endpoints: Select the outbound endpoint that you created (ex: myoutboundendpoint).
8. Under **Rules**, select **Add** and enter your conditional DNS forwarding rules. For example: - Rule name: Enter a rule name (ex: contosocom). - Domain Name: Enter a domain name with a trailing dot (ex: contoso.com.). - Rule State: Choose **Enabled** or **Disabled**. The default is enabled.
- - Under **Destination** enter a desired destination IPv4 address (ex: 11.0.1.4).
+ - Select **Add a destination** and enter a desired destination IPv4 address (ex: 11.0.1.4).
- If desired, select **Add a destination** again to add another destination IPv4 address (ex: 11.0.1.5). - When you're finished adding destination IP addresses, select **Add**. 9. Select **Review and Create**, and then select **Create**. ![create resolver - ruleset](./media/dns-resolver-getstarted-portal/resolver-ruleset.png)
- This example has only one conditional forwarding rule, but you can create many. Edit the rules to enable or disable them as needed. You can also add or edit rules and rulesets at any time after deployment.
+ This example has only one conditional forwarding rule, but you can create many. Edit the rules to enable or disable them as needed.
- After selecting **Create**, the new DNS resolver begins deployment. This process might take a minute or two. The status of each component is displayed during deployment.
+ ![Screenshot of Create resolver - review.](./media/dns-resolver-getstarted-portal/resolver-review.png)
+
+ After selecting **Create**, the new DNS resolver will begin deployment. This process might take a minute or two. The status of each component is displayed during deployment.
![create resolver - status](./media/dns-resolver-getstarted-portal/resolver-status.png)
Create a second virtual network to simulate an on-premises or other environment.
2. Select **Create**, and then on the **Basics** tab select your subscription and choose the same resource group that you have been using in this guide (ex: myresourcegroup). 3. Next to **Name**, enter a name for the new virtual network (ex: myvnet2). 4. Verify that the **Region** selected is the same region used previously in this guide (ex: West Central US).
-5. Select the **IP Addresses** tab and edit the default IP address space. Replace the address space with a simulated on-premises address space (ex: 10.1.0.0/16).
-6. Select and edit the **default** subnet:
- - Subnet purpose: Default
- - Name: backendsubnet
- - Subnet address range: 10.1.0.0/16
- - Starting address: 10.1.0.0
- - Size: /24 (256 addresses)
-7. Select **Save**, select **Review + create**, and then select **Create**.
+5. Select the **IP Addresses** tab and edit the default IP address space. Replace the address space with a simulated on-premises address space (ex: 12.0.0.0/8).
+6. Select **Add subnet** and enter the following:
+ - Subnet name: backendsubnet
+ - Subnet address range: 12.2.0.0/24
+7. Select **Add**, select **Review + create**, and then select **Create**.
- ![second vnet create](./media/dns-resolver-getstarted-portal/vnet-create.png)
+ ![Screenshot showing creation of a second vnet.](./media/dns-resolver-getstarted-portal/vnet-create.png)
## Link your forwarding ruleset to the second virtual network
-> [!NOTE]
-> In this procedure, a forwarding ruleset is linked to a VNet that was created earlier to simulate an on-premises environment. It is not possible to create a ruleset link to non-Azure resources. The purpose of the following procedure is only to demonstrate how ruleset links can be added or deleted. To understand how a private resolver can be used to resolve on-premises names, see [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md).
- To apply your forwarding ruleset to the second virtual network, you must create a virtual link. 1. Search for **DNS forwarding rulesets** in the Azure services list and select your ruleset (ex: **myruleset**).
-2. Under **Settings**, select **Virtual Network Links**
- - The link **myvnet-link** is already present. This was created automatically when the ruleset was provisioned.
-3. Select **Add**, choose **myvnet2** from the **Virtual Network** drop-down list. Use the default **Link Name** of **myvnet2-link**.
+2. Select **Virtual Network Links**, select **Add**, choose **myvnet2** and use the default Link Name **myvnet2-link**.
3. Select **Add** and verify that the link was added successfully. You might need to refresh the page. ![Screenshot of ruleset virtual network links.](./media/dns-resolver-getstarted-portal/ruleset-links.png)
Add or remove specific rules in your DNS forwarding ruleset as desired, such as:
Individual rules can be deleted or disabled. In this example, a rule is deleted.
-1. Search for **DNS Forwarding Rulesets** in the Azure Services list and select it.
-2. Select the ruleset you previously configured (ex: **myruleset**) and then under **Settings** select **Rules**.
+1. Search for **Dns Forwarding Rulesets** in the Azure Services list and select it.
+2. Select the ruleset you previously configured (ex: **myruleset**) and then select **Rules**.
3. Select the **contosocom** sample rule that you previously configured, select **Delete**, and then select **OK**. ### Add rules to the forwarding ruleset
Add three new conditional forwarding rules to the ruleset.
- Rule Name: **Internal** - Domain Name: **internal.contoso.com.** - Rule State: **Enabled**
-4. Under **Destination IP address** enter 10.1.0.5, and then select **Add**.
+4. Under **Destination IP address** enter 192.168.1.2, and then select **Add**.
5. On the **myruleset | Rules** page, select **Add**, and enter the following rule data: - Rule Name: **Wildcard** - Domain Name: **.** (enter only a dot)
Add three new conditional forwarding rules to the ruleset.
In this example: - 10.0.0.4 is the resolver's inbound endpoint. -- 10.1.0.5 is an on-premises DNS server.
+- 192.168.1.2 is an on-premises DNS server.
- 10.5.5.5 is a protective DNS service. ## Test the private resolver
dns Dns Private Resolver Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-powershell.md
description: In this quickstart, you learn how to create and manage your first p
Previously updated : 11/30/2023 Last updated : 02/28/2024 -+ #Customer intent: As an experienced network administrator, I want to create an Azure private DNS resolver, so I can resolve host names on my private virtual networks.
This article walks you through the steps to create your first private DNS zone a
Azure DNS Private Resolver is a new service that enables you to query Azure DNS private zones from an on-premises environment and vice versa without deploying VM based DNS servers. For more information, including benefits, capabilities, and regional availability, see [What is Azure DNS Private Resolver](dns-private-resolver-overview.md).
+The following figure summarizes the setup used in this article:
+
+![Conceptual figure displaying components of the private resolver.](./media/dns-resolver-getstarted-portal/resolver-components.png)
+ ## Prerequisites If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Connect PowerShell to Azure cloud.
Connect-AzAccount -Environment AzureCloud ```
-If multiple subscriptions are present, the first subscription ID is used. To specify a different subscription ID, use the following command.
+If multiple subscriptions are present, the first subscription ID will be used. To specify a different subscription ID, use the following command.
```Azure PowerShell Select-AzSubscription -SubscriptionObject (Get-AzSubscription -SubscriptionId <your-sub-id>)
New-AzResourceGroup -Name myresourcegroup -Location westcentralus
Create a virtual network in the resource group that you created. ```Azure PowerShell
-New-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.0.0.0/16"
+New-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.0.0.0/8"
``` Create a DNS resolver in the virtual network that you created.
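A sketch of creating the resolver with the Az.DnsResolver module, reusing the names from this quickstart (shown here for context; the quickstart's own command may differ slightly):

```azurepowershell
# Sketch: create the DNS private resolver in the myvnet virtual network created above.
$vnet = Get-AzVirtualNetwork -Name myvnet -ResourceGroupName myresourcegroup
New-AzDnsResolver -Name mydnsresolver -ResourceGroupName myresourcegroup -Location westcentralus -VirtualNetworkId $vnet.Id
```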
Create an inbound endpoint to enable name resolution from on-premises or another
> [!TIP] > Using PowerShell, you can specify the inbound endpoint IP address to be dynamic or static.<br>
-> If the endpoint IP address is specified as dynamic, the address does not change unless the endpoint is deleted and reprovisioned. Typically the same IP address is assigned again during reprovisioning.<br>
+> If the endpoint IP address is specified as dynamic, the address does not change unless the endpoint is deleted and reprovisioned. Typically the same IP address will be assigned again during reprovisioning.<br>
> If the endpoint IP address is static, it can be specified and reused if the endpoint is reprovisioned. The IP address that you choose can't be a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
-#### Dynamic IP address
- The following commands provision a dynamic IP address: ```Azure PowerShell $ipconfig = New-AzDnsResolverIPConfigurationObject -PrivateIPAllocationMethod Dynamic -SubnetId /subscriptions/<your sub id>/resourceGroups/myresourcegroup/providers/Microsoft.Network/virtualNetworks/myvnet/subnets/snet-inbound New-AzDnsResolverInboundEndpoint -DnsResolverName mydnsresolver -Name myinboundendpoint -ResourceGroupName myresourcegroup -Location westcentralus -IpConfiguration $ipconfig ```
-#### Static IP address
- Use the following commands to specify a static IP address. Do not use both the dynamic and static sets of commands. You must specify an IP address in the subnet that was created previously. The IP address that you choose can't be a [reserved IP address in the subnet](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets).
$virtualNetworkLink.ToJsonString()
Create a second virtual network to simulate an on-premises or other environment. ```Azure PowerShell
-$vnet2 = New-AzVirtualNetwork -Name myvnet2 -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "10.1.0.0/16"
+$vnet2 = New-AzVirtualNetwork -Name myvnet2 -ResourceGroupName myresourcegroup -Location westcentralus -AddressPrefix "12.0.0.0/8"
$vnetlink2 = New-AzDnsForwardingRulesetVirtualNetworkLink -DnsForwardingRulesetName $dnsForwardingRuleset.Name -ResourceGroupName myresourcegroup -VirtualNetworkLinkName "vnetlink2" -VirtualNetworkId $vnet2.Id -SubscriptionId <your sub id> ```
$targetDNS2 = New-AzDnsResolverTargetDnsServerObject -IPAddress 192.168.1.3 -Por
$targetDNS3 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.0.0.4 -Port 53 $targetDNS4 = New-AzDnsResolverTargetDnsServerObject -IPAddress 10.5.5.5 -Port 53 $forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Internal" -DomainName "internal.contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer @($targetDNS1,$targetDNS2)
-$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "azure.contoso.com." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3
+$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "AzurePrivate" -DomainName "azure.contoso.com" -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS3
$forwardingrule = New-AzDnsForwardingRulesetForwardingRule -ResourceGroupName myresourcegroup -DnsForwardingRulesetName myruleset -Name "Wildcard" -DomainName "." -ForwardingRuleState "Enabled" -TargetDnsServer $targetDNS4 ```
dns Dns Private Resolver Get Started Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-template.md
description: Learn how to create Azure DNS Private Resolver. This article is a s
Previously updated : 07/17/2023 Last updated : 02/28/2024
This quickstart describes how to use an Azure Resource Manager template (ARM tem
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
+The following figure summarizes the general setup used. Subnet address ranges used in templates are slightly different than those shown in the figure.
+
+![Conceptual figure displaying components of the private resolver.](./media/dns-resolver-getstarted-portal/resolver-components.png)
+ If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fazure-dns-private-resolver%2Fazuredeploy.json)
This template is configured to create a:
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.network/azure-dns-private-resolver/azuredeploy.json":::
-Seven resources have been defined in this template:
+Seven resources are defined in this template:
- [**Microsoft.Network/virtualnetworks**](/azure/templates/microsoft.network/virtualnetworks) - [**Microsoft.Network/dnsResolvers**](/azure/templates/microsoft.network/dnsresolvers)
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri
## Next steps
-In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains
+In this quickstart, you created a virtual network and DNS private resolver. Now configure name resolution for Azure and on-premises domains.
- [Resolve Azure and on-premises domains](private-resolver-hybrid-dns.md)
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 02/16/2024 Last updated : 02/27/2024
The following table shows connectivity locations and the service providers for e
| **Milan** | [IRIDEOS](https://irideos.it/en/data-centers/) | 1 | Italy North | Supported | Colt<br/>Equinix<br/>Fastweb<br/>IRIDEOS<br/>Retelit<br/>Vodafone | | **Milan2** | [DATA4](https://www.data4group.com/it/data-center-a-milano-italia/) | 1 | Italy North | Supported | | | **Minneapolis** | [Cologix MIN1](https://www.cologix.com/data-centers/minneapolis/min1/) and [Cologix MIN3](https://www.cologix.com/data-centers/minneapolis/min3/) | 1 | n/a | Supported | Cologix<br/>Megaport |
-| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/) | 1 | n/a | Supported | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>RISQ<br/>Telus<br/>Zayo |
+| **Montreal** | [Cologix MTL3](https://www.cologix.com/data-centers/montreal/mtl3/)<br/>[Cologix MTL7](https://cologix.com/data-centers/montreal/mtl7/) | 1 | n/a | Supported | Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Fibrenoire<br/>Megaport<br/>RISQ<br/>Telus<br/>Zayo |
| **Mumbai** | Tata Communications | 2 | West India | Supported | BSNL<br/>British Telecom<br/>DE-CIX<br/>Global CloudXchange (GCX)<br/>InterCloud<br/>Reliance Jio<br/>Sify<br/>Tata Communications<br/>Verizon | | **Mumbai2** | Airtel | 2 | West India | Supported | Airtel<br/>Sify<br/>Orange<br/>Vodafone Idea | | **Munich** | [EdgeConneX](https://www.edgeconnex.com/locations/europe/munich/) | 1 | n/a | Supported | Colt<br/>DE-CIX<br/>Megaport |
frontdoor Integrate Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/integrate-storage-account.md
description: This article shows you how to use Azure Front Door to deliver high-
-+ Last updated 08/22/2023
hdinsight Hdinsight Using Spark Query Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-using-spark-query-hbase.md
description: Use the Spark HBase Connector to read and write data from a Spark c
Previously updated : 01/04/2024 Last updated : 02/27/2024 # Use Apache Spark to read and write Apache HBase data
__NOTE__: Before proceeding, make sure you've added the Spark cluster's storag
|Property | Value | |||
- |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/hbasesparkconnectorscript/connector-hbase.sh`|
+ |Bash script URI|`https://hdiconfigactions2.blob.core.windows.net/hbasesparkconnect/connector-hbase.sh`|
|Node type(s)|Region|
- |Parameters|`-s SECONDARYS_STORAGE_URL`|
+ |Parameters|`-s SECONDARYS_STORAGE_URL -d "DOMAIN_NAME"`|
|Persisted|yes|
- * `SECONDARYS_STORAGE_URL` is the url of the Spark side default storage. Parameter Example: `-s wasb://sparkcon-2020-08-03t18-17-37-853z@sparkconhdistorage.blob.core.windows.net`
+ * `SECONDARYS_STORAGE_URL` is the URL of the Spark cluster's default storage. Parameter example: `-s wasb://sparkcon-2020-08-03t18-17-37-853z@sparkconhdistorage.blob.core.windows.net -d "securehadooprc"`
2. Use Script Action on your Spark cluster to apply the changes with the following considerations: |Property | Value | |||
- |Bash script URI|`https://hdiconfigactions.blob.core.windows.net/hbasesparkconnectorscript/connector-spark.sh`|
+ |Bash script URI|`https://hdiconfigactions2.blob.core.windows.net/hbasesparkconnect/connector-spark.sh`|
|Node type(s)|Head, Worker, Zookeeper|
- |Parameters|`-s "SPARK-CRON-SCHEDULE"` (optional) `-h "HBASE-CRON-SCHEDULE"` (optional)|
+ |Parameters|`-s "SPARK-CRON-SCHEDULE" (optional) -h "HBASE-CRON-SCHEDULE" (optional) -d "DOMAIN_NAME" (mandatory)`|
|Persisted|yes| * You can specify how often you want this cluster to automatically check for updates. Default: -s "*/1 * * * *" -h 0 (In this example, the Spark cron runs every minute, while the HBase cron doesn't run)
- * Since HBase cron isn't set up by default, you need to rerun this script when perform scaling to your HBase cluster. If your HBase cluster scales often, you may choose to set up HBase cron job automatically. For example: `-h "*/30 * * * *"` configures the script to perform checks every 30 minutes. This will run HBase cron schedule periodically to automate downloading of new HBase information on the common storage account to local node.
+ * Since HBase cron isn't set up by default, you need to rerun this script when you scale your HBase cluster. If your HBase cluster scales often, you can choose to set up the HBase cron job automatically. For example: `-s '*/1 * * * *' -h '*/30 * * * *' -d "securehadooprc"` configures the script to perform checks every 30 minutes. This runs the HBase cron schedule periodically to automate downloading of new HBase information from the common storage account to the local node.
+
+>[!NOTE]
+>These scripts work only on HDI 5.0 and HDI 5.1 clusters.
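As an optional illustration (the cluster name and `-s`/`-d` values are placeholders, and the node-type mapping is an assumption: the portal's *Region* node type is treated here as the worker nodes), the HBase-side script action can also be submitted from Azure PowerShell:

```azurepowershell
# Sketch: submit the HBase-side connector script as a persisted script action from PowerShell.
# Check Get-Help Submit-AzHDInsightScriptAction for the -NodeTypes values accepted by your module version.
Submit-AzHDInsightScriptAction `
    -ClusterName "myhbasecluster" `
    -Name "connector-hbase" `
    -Uri "https://hdiconfigactions2.blob.core.windows.net/hbasesparkconnect/connector-hbase.sh" `
    -NodeTypes WorkerNode `
    -Parameters '-s wasb://<container>@<account>.blob.core.windows.net -d "securehadooprc"' `
    -PersistOnSuccess
```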
hdinsight Apache Spark Zeppelin Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-zeppelin-notebook.md
description: Step-by-step instructions on how to use Zeppelin notebooks with Apa
Previously updated : 12/14/2023 Last updated : 02/27/2024 # Use Apache Zeppelin notebooks with Apache Spark cluster on Azure HDInsight
This action saves the notebook as a JSON file in your download location.
> If you want the notebook to be available even after cluster deletion, you can try to use Azure file storage (using the SMB protocol) and link it to a local path. For more details, see [Mount SMB Azure file share on Linux](/azure/storage/files/storage-how-to-use-files-linux) > > After mounting it, you can modify the Zeppelin configuration `zeppelin.notebook.dir` to the mounted path in the Ambari UI.
+>
+> - Using an SMB file share as GitNotebookRepo storage isn't recommended for Zeppelin version 0.10.1.
## Use `Shiro` to Configure Access to Zeppelin Interpreters in Enterprise Security Package (ESP) Clusters
iot-operations Howto Develop Dapr Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/howto-develop-dapr-apps.md
To start, create a yaml file with the following definitions:
- name: mq-dapr-app image: <YOUR_DAPR_APPLICATION>
- # Container for the Dapr Pub/sub component
- - name: aio-mq-pubsub-pluggable
- image: ghcr.io/azure/iot-mq-dapr-components/pubsub:latest
- volumeMounts:
- - name: dapr-unix-domain-socket
- mountPath: /tmp/dapr-components-sockets
- - name: mqtt-client-token
- mountPath: /var/run/secrets/tokens
- - name: aio-ca-trust-bundle
- mountPath: /var/run/certs/aio-mq-ca-cert/
-
- # Container for the Dapr State store component
- - name: aio-mq-statestore-pluggable
- image: ghcr.io/azure/iot-mq-dapr-components/statestore:latest
+ # Container for the pluggable component
+ - name: aio-mq-components
+ image: ghcr.io/azure/iot-mq-dapr-components:latest
volumeMounts: - name: dapr-unix-domain-socket mountPath: /tmp/dapr-components-sockets
To start, create a yaml file with the following definitions:
pod/dapr-workload created NAME READY STATUS RESTARTS AGE ...
- dapr-workload 4/4 Running 0 30s
+ dapr-workload 3/3 Running 0 30s
``` ## Troubleshooting
iot-operations Tutorial Event Driven With Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/develop/tutorial-event-driven-with-dapr.md
To start, create a yaml file that uses the following definitions:
- name: mq-event-driven-dapr image: ghcr.io/azure-samples/explore-iot-operations/mq-event-driven-dapr:latest
- # Container for the Pub/sub component
- - name: aio-mq-pubsub-pluggable
- image: ghcr.io/azure/iot-mq-dapr-components/pubsub:latest
- volumeMounts:
- - name: dapr-unix-domain-socket
- mountPath: /tmp/dapr-components-sockets
- - name: mqtt-client-token
- mountPath: /var/run/secrets/tokens
- - name: aio-ca-trust-bundle
- mountPath: /var/run/certs/aio-mq-ca-cert/
-
- # Container for the State Management component
- - name: aio-mq-statestore-pluggable
- image: ghcr.io/azure/iot-mq-dapr-components/statestore:latest
+ # Container for the pluggable component
+ - name: aio-mq-components
+ image: ghcr.io/azure/iot-mq-dapr-components:latest
volumeMounts: - name: dapr-unix-domain-socket mountPath: /tmp/dapr-components-sockets
iot-operations Howto Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-connectivity/howto-configure-authentication.md
kubectl create configmap client-ca --from-file=client_ca.pem -n azure-iot-operat
To check the root CA certificate is properly imported, run `kubectl describe configmap`. The result shows the same base64 encoding of the PEM certificate file. ```console
-$ kubectl describe configmap client-ca
+$ kubectl describe configmap client-ca -n azure-iot-operations
Name: client-ca Namespace: azure-iot-operations
iot-operations Howto Configure Availability Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-connectivity/howto-configure-availability-scale.md
If you don't specify settings, default values are used. The following table show
| Name | Required | Format | Default| Description | | | -- | - | -|-|
-| `brokerRef` | true | String |N/A |The associated broker |
-| `diagnosticServiceEndpoint` | true | String |N/A |An endpoint to send metrics/ traces to |
-| `enableMetrics` | false | Boolean |true |Enable or disable broker metrics |
-| `enableTracing` | false | Boolean |true |Enable or disable tracing |
-| `logLevel` | false | String | `info` |Log level. `trace`, `debug`, `info`, `warn`, or `error` |
-| `enableSelfCheck` | false | Boolean |true |Component that periodically probes the health of broker |
-| `enableSelfTracing` | false | Boolean |true |Automatically traces incoming messages at a frequency of 1 every `selfTraceFrequencySeconds` |
-| `logFormat` | false | String | `text` |Log format in `json` or `text` |
+| `diagnosticServiceEndpoint` | true | String | N/A |An endpoint to send metrics/ traces to |
+| `enableMetrics` | false | Boolean | true |Enable or disable broker metrics |
+| `enableTracing` | false | Boolean | true |Enable or disable tracing |
+| `logLevel` | false | String |`info` |Log level. `trace`, `debug`, `info`, `warn`, or `error` |
+| `enableSelfCheck` | false | Boolean | true |Component that periodically probes the health of broker |
+| `enableSelfTracing` | false | Boolean | true |Automatically traces incoming messages at a frequency of 1 every `selfTraceFrequencySeconds` |
+| `logFormat` | false | String |`text` |Log format in `json` or `text` |
| `metricUpdateFrequencySeconds` | false | Integer | 30 |The frequency to send metrics to diagnostics service endpoint, in seconds | | `selfCheckFrequencySeconds` | false | Integer | 30 |How often the probe sends test messages|
-| `selfCheckTimeoutSeconds` | false | Integer | 15 |Timeout interval for probe messages |
-| `selfTraceFrequencySeconds` | false | Integer |30 |How often to automatically trace external messages if `enableSelfTracing` is true |
-| `spanChannelCapacity` | false | Integer |1000 |Maximum number of spans that selftest can store before sending to the diagnostics service |
+| `selfCheckTimeoutSeconds` | false | Integer | 15 |Timeout interval for probe messages |
+| `selfTraceFrequencySeconds` | false | Integer | 30 |How often to automatically trace external messages if `enableSelfTracing` is true |
+| `spanChannelCapacity` | false | Integer | 1000 |Maximum number of spans that selftest can store before sending to the diagnostics service |
| `probeImage` | true | String |mcr.microsoft.com/azureiotoperations/diagnostics-probe:0.1.0-preview | Image used for self check |
key-vault How To Export Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/how-to-export-certificate.md
$certificateName = '<YourCert>'
$password = '<YourPwd>' $pfxSecret = Get-AzKeyVaultSecret -VaultName $vaultName -Name $certificateName -AsPlainText
-$secretByte = [Convert]::FromBase64String($pfxSecret)
-$x509Cert = New-Object Security.Cryptography.X509Certificates.X509Certificate2
-$x509Cert.Import($secretByte, $null, [Security.Cryptography.X509Certificates.X509KeyStorageFlags]::Exportable)
-$pfxFileByte = $x509Cert.Export([Security.Cryptography.X509Certificates.X509ContentType]::Pkcs12, $password)
+$certBytes = [Convert]::FromBase64String($pfxSecret)
# Write to a file
-[IO.File]::WriteAllBytes("KeyVaultcertificate.pfx", $pfxFileByte)
+Set-Content -Path cert.pfx -Value $certBytes -AsByteStream
```
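Optionally, as a quick check (this assumes the downloaded PFX isn't password protected, which is how Key Vault returns the certificate secret), load the exported file to confirm it parses:

```azurepowershell
# Optional verification sketch: load the exported PFX and show basic certificate details.
$pfxPath = Join-Path $PWD "cert.pfx"
$check = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList $pfxPath
$check.Subject
$check.NotAfter
```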
key-vault Authentication Requests And Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/authentication-requests-and-responses.md
Title: Authentication, requests and responses description: Learn how Azure Key Vault uses JSON-formatted requests and responses and about required authentication for using a key vault. -+
Here are the URL suffixes used to access each type of object
|Secrets|/secrets| |Certificates| /certificates| |Storage account keys|/storageaccounts
-||
Azure Key Vault supports JSON formatted requests and responses. Requests to the Azure Key Vault are directed to a valid Azure Key Vault URL using HTTPS with some URL parameters and JSON encoded request and response bodies.
key-vault Overview Vnet Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-vnet-service-endpoints.md
Here's a list of trusted services that are allowed to access a key vault if the
| Azure Import/Export| [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure Information Protection|Allow access to tenant key for [Azure Information Protection.](/azure/information-protection/what-is-information-protection)| | Azure Machine Learning|[Secure Azure Machine Learning in a virtual network](../../machine-learning/how-to-secure-workspace-vnet.md)|
+| Azure Policy Scan| Control plane policies for secrets, keys stored in data plane |
| Azure Resource Manager template deployment service|[Pass secure values during deployment](../../azure-resource-manager/templates/key-vault-parameter.md).|
| Azure Service Bus|[Allow access to a key vault for customer-managed keys scenario](../../service-bus-messaging/configure-customer-managed-key.md)|
| Azure SQL Database|[Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Azure Synapse Analytics](/azure/azure-sql/database/transparent-data-encryption-byok-overview).|
lab-services How To Attach Detach Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-detach-shared-image-gallery.md
The Azure Lab Services service principal needs to have the [Owner](/azure/role-b
To attach a compute gallery to a lab plan, assign the [Owner](/azure/role-based-access-control/built-in-roles#owner) role to the service principal with application ID `c7bb12bf-0b39-4f7f-9171-f418ff39b76a`.
-If your Azure account is a guest user, your Azure account needs to have the [Directory Readers](/azure/active-directory/roles/permissions-reference#directory-readers) role to perform the role assignment. Learn about [role assignments for guest users](/azure/role-based-access-control/role-assignments-external-users#guest-user-cannot-browse-users-groups-or-service-principals-to-assign-roles).
+If your Azure account is a guest user, your Azure account needs to have the [Directory Readers](/azure/active-directory/roles/permissions-reference#directory-readers) role to perform the role assignment. Learn about [role assignments for external users](/azure/role-based-access-control/role-assignments-external-users#external-user-cannot-browse-users-groups-or-service-principals-to-assign-roles).
# [Azure CLI](#tab/azure-cli)
logic-apps Quickstart Create Logic Apps Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-visual-studio-code.md
Title: Quickstart - Create integration workflows with Azure Logic Apps in Visual Studio Code
-description: Create and manage workflow definitions with multi-tenant Azure Logic Apps in Visual Studio Code.
+description: Create and manage workflow definitions with multitenant Azure Logic Apps in Visual Studio Code.
ms.suite: integration
Last updated 01/04/2024
#Customer intent: As a developer, I want to create my first automated workflow by using Azure Logic Apps while working in Visual Studio Code
-# Quickstart: Create and manage logic app workflow definitions with multi-tenant Azure Logic Apps and Visual Studio Code
+# Quickstart: Create and manage logic app workflow definitions with multitenant Azure Logic Apps and Visual Studio Code
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-This quickstart shows how to create and manage logic app workflows that help you automate tasks and processes that integrate apps, data, systems, and services across organizations and enterprises by using multi-tenant [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and Visual Studio Code. You can create and edit the underlying workflow definitions, which use JavaScript Object Notation (JSON), for logic apps through a code-based experience. You can also work on existing logic apps that are already deployed to Azure. For more information about multi-tenant versus single-tenant model, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+This quickstart shows how to create and manage logic app workflows that help you automate tasks and processes that integrate apps, data, systems, and services across organizations and enterprises by using multitenant [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and Visual Studio Code. You can create and edit the underlying workflow definitions, which use JavaScript Object Notation (JSON), for logic apps through a code-based experience. You can also work on existing logic apps that are already deployed to Azure. For more information about multitenant versus single-tenant model, review [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md).
Although you can perform these same tasks in the [Azure portal](https://portal.azure.com) and in Visual Studio, you can get started faster in Visual Studio Code when you're already familiar with logic app definitions and want to work directly in code. For example, you can disable, enable, delete, and refresh already created logic apps. Also, you can work on logic apps and integration accounts from any development platform where Visual Studio Code runs, such as Linux, Windows, and Mac.
For this article, you can create the same logic app from this [quickstart](../lo
Before you start, make sure that you have these items:
-* If you don't have an Azure account and subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/).
+* If you don't have an Azure account and subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* Basic knowledge about [logic app workflow definitions](../logic-apps/logic-apps-workflow-definition-language.md) and their structure as described with JSON
Before you start, make sure that you have these items:
![Select "Sign in to Azure"](./media/quickstart-create-logic-apps-visual-studio-code/sign-in-azure-visual-studio-code.png)
- 1. If sign in takes longer than usual, Visual Studio Code prompts you to sign in through a Microsoft authentication website by providing you a device code. To sign in with the code instead, select **Use Device Code**.
+ 1. If sign-in takes longer than usual, Visual Studio Code prompts you to sign in through a Microsoft authentication website by providing you with a device code. To sign in with the code instead, select **Use Device Code**.
![Continue with device code instead](./media/quickstart-create-logic-apps-visual-studio-code/use-device-code-prompt.png)
logic-apps Quickstart Create Logic Apps With Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-logic-apps-with-visual-studio.md
Title: Quickstart - Create integration workflows with multi-tenant Azure Logic Apps in Visual Studio
-description: Create automated integration workflows with multi-tenant Azure Logic Apps and Visual Studio Code.
+ Title: Quickstart - Create integration workflows with multitenant Azure Logic Apps in Visual Studio
+description: Create automated integration workflows with multitenant Azure Logic Apps and Visual Studio Code.
ms.suite: integration
Last updated 01/04/2024
#Customer intent: As a developer, I want to create my first automated workflow by using Azure Logic Apps while working in Visual Studio
-# Quickstart: Create automated integration workflows with multi-tenant Azure Logic Apps and Visual Studio
+# Quickstart: Create automated integration workflows with multitenant Azure Logic Apps and Visual Studio
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-This quickstart shows how to design, develop, and deploy automated workflows that integrate apps, data, systems, and services across enterprises and organizations by using multi-tenant [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and Visual Studio. Although you can perform these tasks in the Azure portal, Visual Studio lets you add your logic apps to source control, publish different versions, and create Azure Resource Manager templates for different deployment environments. For more information about multi-tenant versus single-tenant model, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+This quickstart shows how to design, develop, and deploy automated workflows that integrate apps, data, systems, and services across enterprises and organizations by using multitenant [Azure Logic Apps](logic-apps-overview.md) and Visual Studio. Although you can perform these tasks in the Azure portal, Visual Studio lets you add your logic apps to source control, publish different versions, and create Azure Resource Manager templates for different deployment environments. For more information about multitenant versus single-tenant model, review [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md).
If you're new to Azure Logic Apps and just want the basic concepts, try the [quickstart for creating an example Consumption logic app workflow in the Azure portal](quickstart-create-example-consumption-workflow.md). The workflow designer works similarly in both the Azure portal and Visual Studio. In this quickstart, you create the same logic app workflow with Visual Studio as the Azure portal quickstart. You can also learn to [create an example logic app workflow in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md), and [create and manage logic app workflows using the Azure CLI](quickstart-logic-apps-azure-cli.md). This logic app workflow monitors a website's RSS feed and sends email for each new item in that feed. Your finished logic app workflow looks like the following high-level workflow:
-![Screenshot that shows the high-level workflow of a finished logic app workflow.](./media/quickstart-create-logic-apps-with-visual-studio/high-level-workflow-overview.png)
+![Screenshot shows high-level view for example logic app workflow.](./media/quickstart-create-logic-apps-with-visual-studio/high-level-workflow-overview.png)
<a name="prerequisites"></a> ## Prerequisites
-* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/). If you have an Azure Government subscription, follow these additional steps to [set up Visual Studio for Azure Government Cloud](#azure-government).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you have an Azure Government subscription, follow these additional steps to [set up Visual Studio for Azure Government Cloud](#azure-government).
* Download and install these tools, if you don't have them already:
- * [Visual Studio 2019, 2017, or 2015 - Community edition](https://aka.ms/download-visual-studio), which is free. The Azure Logic Apps extension is currently unavailable for Visual Studio 2022. This quickstart uses Visual Studio Community 2017.
+ * [Visual Studio 2019 or 2017 - Community edition](https://aka.ms/download-visual-studio), which is free. This quickstart uses Visual Studio Community 2017.
> [!IMPORTANT]
- > If you use Visual Studio 2019 or 2017, make sure that you select the **Azure development** workload.
+ >
+ > If you use Visual Studio 2019 or 2017, make sure that you select the **Azure development** workload.
+ >
+ > The Azure Logic Apps extension is unavailable for Visual Studio 2022.
* [Microsoft Azure SDK for .NET (2.9.1 or later)](https://azure.microsoft.com/downloads/). Learn more about [Azure SDK for .NET](/dotnet/azure/intro). * [Azure PowerShell](https://github.com/Azure/azure-powershell#installation)
- * The corresponding Azure Logic Apps Tools for the Visual Studio extension, currently unavailable for Visual Studio 2022:
+ * The corresponding Azure Logic Apps Tools for the Visual Studio extension, which is unavailable for Visual Studio 2022:
* [Visual Studio 2019](https://aka.ms/download-azure-logic-apps-tools-visual-studio-2019) * [Visual Studio 2017](https://aka.ms/download-azure-logic-apps-tools-visual-studio-2017)
- * [Visual Studio 2015](https://aka.ms/download-azure-logic-apps-tools-visual-studio-2015)
-
- You can either download and install Azure Logic Apps Tools directly from the Visual Studio Marketplace, or learn [how to install this extension from inside Visual Studio](/visualstudio/ide/finding-and-using-visual-studio-extensions). Make sure that you restart Visual Studio after you finish installing.
+ You can download and install Azure Logic Apps Tools directly from the Visual Studio Marketplace, or learn [how to install this extension from inside Visual Studio](/visualstudio/ide/finding-and-using-visual-studio-extensions). Make sure that you restart Visual Studio after you finish installing.
* Access to the web while using the embedded workflow designer
- The designer needs an internet connection to create resources in Azure and to read properties and data from connectors in your logic app.
+ The designer needs an internet connection to create resources in Azure and to read properties and data from connectors in your logic app workflow.
* An email account that's supported by Azure Logic Apps, such as Outlook for Microsoft 365, Outlook.com, or Gmail. For other providers, review the [connectors list here](/connectors/). This example uses Office 365 Outlook. If you use a different provider, the overall steps are the same, but your UI might slightly differ. > [!IMPORTANT]
- > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps.
+ >
+ > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic app workflows.
> If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application). > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
In this quickstart, you create the same logic app workflow with Visual Studio as
## Set up Visual Studio for Azure Government
-### Visual Studio 2017
-
-You can use the [Azure Environment Selector Visual Studio extension](https://devblogs.microsoft.com/azuregov/introducing-the-azure-environment-selector-visual-studio-extension/), which you can download and install from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=SteveMichelotti.AzureEnvironmentSelector).
- ### Visual Studio 2019 To work with Azure Government subscriptions in Azure Logic Apps, you need to [add a discovery endpoint for Azure Government Cloud to Visual Studio](../azure-government/documentation-government-connect-vs.md). However, *before you sign in to Visual Studio with your Azure Government account*, you need to rename the JSON file that's generated after you add the discovery endpoint by following these steps:
To revert this setup, delete the JSON file at the following location, and restar
`%localappdata%\.IdentityService\AadConfigurations\AadProvider.Configuration.json`
+### Visual Studio 2017
+
+You can use the [Azure Environment Selector Visual Studio extension](https://devblogs.microsoft.com/azuregov/introducing-the-azure-environment-selector-visual-studio-extension/), which you can download and install from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=SteveMichelotti.AzureEnvironmentSelector).
+ <a name="create-resource-group-project"></a> ## Create Azure resource group project
To get started, create an [Azure Resource Group project](../azure-resource-manag
1. On the **File** menu, select **New** > **Project**. (Keyboard: Ctrl + Shift + N)
- ![Screenshot showing Visual Studio "File" menu with "New" > "Project" selected.](./media/quickstart-create-logic-apps-with-visual-studio/create-new-visual-studio-project.png)
+ ![Screenshot shows Visual Studio, File menu with selected options for New, Project.](./media/quickstart-create-logic-apps-with-visual-studio/create-new-visual-studio-project.png)
1. Under **Installed**, select **Visual C#** or **Visual Basic**. Select **Cloud** > **Azure Resource Group**. Name your project, for example:
- ![Screenshot showing how to create Azure Resource Group project.](./media/quickstart-create-logic-apps-with-visual-studio/create-azure-cloud-service-project.png)
+ ![Screenshot shows how to create Azure Resource Group project.](./media/quickstart-create-logic-apps-with-visual-studio/create-azure-cloud-service-project.png)
> [!NOTE]
+ >
> Resource group names can contain only letters, numbers, > periods (`.`), underscores (`_`), hyphens (`-`), and > parentheses (`(`, `)`), but can't *end* with periods (`.`).
To get started, create an [Azure Resource Group project](../azure-resource-manag
1. From the template list, select the **Logic App** template. Select **OK**.
- ![Screenshot showing the "Logic App" template selected.](./media/quickstart-create-logic-apps-with-visual-studio/select-logic-app-template.png)
+ ![Screenshot shows selected Logic App template.](./media/quickstart-create-logic-apps-with-visual-studio/select-logic-app-template.png)
After Visual Studio creates your project, Solution Explorer opens and shows your solution. In your solution, the **LogicApp.json** file not only stores your logic app definition but is also an Azure Resource Manager template that you can use for deployment.
- ![Screenshot showing Solution Explorer with new logic app solution and deployment file.](./media/quickstart-create-logic-apps-with-visual-studio/logic-app-solution-created.png)
+ ![Screenshot shows Solution Explorer with new logic app solution and deployment file.](./media/quickstart-create-logic-apps-with-visual-studio/logic-app-solution-created.png)
-## Create blank logic app
+## Create blank logic app workflow
When you have your Azure Resource Group project, create your logic app with the **Blank Logic App** template. 1. In Solution Explorer, open the **LogicApp.json** file's shortcut menu. Select **Open With Logic App Designer**. (Keyboard: Ctrl + L)
- ![Screenshot showing the workflow designer with the opened logic app .json file.](./media/quickstart-create-logic-apps-with-visual-studio/open-logic-app-designer.png)
+ ![Screenshot shows workflow designer with opened logic app .json file.](./media/quickstart-create-logic-apps-with-visual-studio/open-logic-app-designer.png)
> [!TIP]
+ >
> If you don't have this command in Visual Studio 2019, check that you have the latest updates for Visual Studio.
- Visual Studio prompts you for your Azure subscription and an Azure resource group for creating and deploying resources for your logic app and connections.
+ Visual Studio prompts you for your Azure subscription and an Azure resource group for creating and deploying resources for your logic app workflow and connections.
1. For **Subscription**, select your Azure subscription. For **Resource group**, select **Create New** to create another Azure resource group.
When you have your Azure Resource Group project, create your logic app with the
| User account | Fabrikam <br> sophia-owen@fabrikam.com | The account that you used when you signed in to Visual Studio | | **Subscription** | Pay-As-You-Go <br> (sophia-owen@fabrikam.com) | The name for your Azure subscription and associated account | | **Resource Group** | MyLogicApp-RG <br> (West US) | The Azure resource group and location for storing and deploying your logic app's resources |
- | **Location** | **Same as Resource Group** | The location type and specific location for deploying your logic app. The location type is either an Azure region or an existing [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment.md). <p>For this quickstart, keep the location type set to **Region** and the location set to **Same as Resource Group**. <p>**Note**: After you create your resource group project, you can [change the location type and the location](manage-logic-apps-with-visual-studio.md#change-location), but different location type affects your logic app in various ways. |
- ||||
+ | **Location** | **Same as Resource Group** | The location type and specific location for deploying your logic app resource. The location type is either an Azure region or an existing [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment.md). <p>For this quickstart, keep the location type set to **Region** and the location set to **Same as Resource Group**. <p>**Note**: After you create your resource group project, you can [change the location type and the location](manage-logic-apps-with-visual-studio.md#change-location), but different location type affects your logic app in various ways. |
1. The workflow designer opens a page that shows an introduction video and commonly used triggers. Scroll down past the video and triggers to **Templates**, and select **Blank Logic App**.
- ![Screenshot showing "Blank Logic App" selected.](./media/quickstart-create-logic-apps-with-visual-studio/choose-blank-logic-app-template.png)
+ ![Screenshot shows selected template named Blank Logic App.](./media/quickstart-create-logic-apps-with-visual-studio/choose-blank-logic-app-template.png)
-## Build logic app workflow
+## Build your workflow
-Next, add an RSS [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) that fires when a new feed item appears. Every logic app starts with a trigger, which fires when specific criteria is met. Each time the trigger fires, the Azure Logic Apps engine creates a logic app instance that runs your workflow.
+Next, add an RSS [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) that fires when a new feed item appears. Every workflow starts with a trigger, which fires when specific criteria are met. Each time the trigger fires, the Azure Logic Apps engine creates a logic app workflow instance that runs your workflow.
-1. In workflow designer, under the search box, select **All**. In the search box, enter "rss". From the triggers list, select this trigger: **When a feed item is published**
+1. On the workflow designer, [follow these general steps to add the **RSS** trigger named **When a feed item is published**](quickstart-create-example-consumption-workflow.md?tabs=consumption#add-rss-trigger).
- ![Screenshot showing workflow designer with RSS trigger selected.](./media/quickstart-create-logic-apps-with-visual-studio/add-trigger-logic-app.png)
+1. Finish building the workflow by [following these general steps to add the **Office 365 Outlook** action named **Send an email**](quickstart-create-example-consumption-workflow.md#add-email-action), then return to this article.
-1. After the trigger appears in the designer, finish building the logic app workflow by following the workflow steps in the [Azure portal quickstart](../logic-apps/quickstart-create-example-consumption-workflow.md#add-rss-trigger), then return to this article. When you're done, your logic app looks like this example:
+ When you're done, your workflow looks like this example:
- ![Screenshot showing finished logic app workflow.](./media/quickstart-create-logic-apps-with-visual-studio/finished-logic-app-workflow.png)
+ ![Screenshot shows finished logic app workflow.](./media/quickstart-create-logic-apps-with-visual-studio/finished-logic-app-workflow.png)
1. Save your Visual Studio solution. (Keyboard: Ctrl + S)
Next, add an RSS [trigger](../logic-apps/logic-apps-overview.md#logic-app-concep
## Deploy logic app to Azure
-Before you can run and test your logic app, deploy the app to Azure from Visual Studio.
+Before you can run and test your workflow, deploy the app to Azure from Visual Studio.
1. In Solution Explorer, on your project's shortcut menu, select **Deploy** > **New**. If prompted, sign in with your Azure account.
- ![Screenshot showing project menu with "Deploy" > "New" selected.](./media/quickstart-create-logic-apps-with-visual-studio/create-logic-app-deployment.png)
+ ![Screenshot shows project menu with selected options for Deploy, New.](./media/quickstart-create-logic-apps-with-visual-studio/create-logic-app-deployment.png)
1. For this deployment, keep the default Azure subscription, resource group, and other settings. Select **Deploy**.
- ![Screenshot showing project deployment box with "Deploy" selected.](./media/quickstart-create-logic-apps-with-visual-studio/select-azure-subscription-resource-group-deployment.png)
+ ![Screenshot shows project deployment box with selected option named Deploy.](./media/quickstart-create-logic-apps-with-visual-studio/select-azure-subscription-resource-group-deployment.png)
1. If the **Edit Parameters** box appears, provide a resource name for your logic app. Save your settings.
- ![Screenshot showing "Edit Parameters" box with resource name for logic app.](./media/quickstart-create-logic-apps-with-visual-studio/edit-parameters-deployment.png)
+ ![Screenshot shows Edit Parameters box with resource name for logic app.](./media/quickstart-create-logic-apps-with-visual-studio/edit-parameters-deployment.png)
When deployment starts, your app's deployment status appears in the Visual Studio **Output** window. If the status doesn't appear, open the **Show output from** list, and select your Azure resource group.
- ![Screenshot showing "Output" window with deployment status output.](./media/quickstart-create-logic-apps-with-visual-studio/logic-app-output-window.png)
+ ![Screenshot shows Output window with deployment status output.](./media/quickstart-create-logic-apps-with-visual-studio/logic-app-output-window.png)
If your selected connectors need input from you, a PowerShell window opens in the background and prompts for any necessary passwords or secret keys. After you enter this information, deployment continues.
- ![Screenshot showing PowerShell window with prompt to provide connection credentials.](./media/quickstart-create-logic-apps-with-visual-studio/logic-apps-powershell-window.png)
+ ![Screenshot shows PowerShell window with prompt to provide connection credentials.](./media/quickstart-create-logic-apps-with-visual-studio/logic-apps-powershell-window.png)
- After deployment finishes, your logic app is live in the Azure portal and runs on your specified schedule (every minute). If the trigger finds new feed items, the trigger fires and creates a workflow instance that runs your logic app workflow's actions. Your workflow sends email for each new item. Otherwise, if the trigger doesn't find new items, the trigger doesn't fire and "skips" instantiating the workflow. Your workflow waits until the next interval before checking.
+ After deployment finishes, your logic app is live in the Azure portal and runs on your specified schedule (every minute). If the trigger finds new feed items, the trigger fires and creates a workflow instance that runs the workflow's actions. Your workflow sends email for each new item. Otherwise, if the trigger doesn't find new items, the trigger doesn't fire and "skips" instantiating the workflow. Your workflow waits until the next interval before checking.
Here are sample emails that this workflow sends. If you don't get any emails, check your junk email folder.
- ![Outlook sends email for each new RSS item](./media/quickstart-create-logic-apps-with-visual-studio/outlook-email.png)
+ ![Screenshot shows example Outlook email sent for each new RSS item](./media/quickstart-create-logic-apps-with-visual-studio/outlook-email.png)
-Congratulations, you've successfully built and deployed your logic app workflow with Visual Studio. To manage your logic app workflow and review its run history, see [Manage logic apps with Visual Studio](manage-logic-apps-with-visual-studio.md).
+Congratulations, you've successfully built and deployed your logic app workflow with Visual Studio. To manage your logic app workflow and review the run history, see [Manage logic apps with Visual Studio](manage-logic-apps-with-visual-studio.md).
## Add new logic app
When you have an existing Azure Resource Group project, you can add a new blank
1. To add a resource to the template file, select **Add Resource** at the top of the JSON Outline window. Or in the JSON Outline window, open the **resources** shortcut menu, and select **Add New Resource**.
- ![Screenshot showing the "JSON Outline" window.](./media/quickstart-create-logic-apps-with-visual-studio/json-outline-window-add-resource.png)
+ ![Screenshot shows window named JSON Outline.](./media/quickstart-create-logic-apps-with-visual-studio/json-outline-window-add-resource.png)
-1. In the **Add Resource** dialog box, in the search box, find `logic app`, and select **Logic App**. Name your logic app resource, and select **Add**.
+1. In the **Add Resource** dialog box, in the search box, find **logic app**, and select **Logic App**. Name your logic app resource, and select **Add**.
- ![Screenshot showing steps to add resource.](./media/quickstart-create-logic-apps-with-visual-studio/add-logic-app-resource.png)
+ ![Screenshot shows steps to add resource.](./media/quickstart-create-logic-apps-with-visual-studio/add-logic-app-resource.png)
## Clean up resources
When you're done with your logic app, delete the resource group that contains yo
1. On the **Overview** page, select **Delete resource group**. Enter the resource group name as confirmation, and select **Delete**.
- ![Screenshot showing "Resource groups" > "Overview" > "Delete resource group" selected.](./media/quickstart-create-logic-apps-with-visual-studio/clean-up-resources.png)
+ ![Screenshot shows selected options for Resource groups, Overview, Delete resource group.](./media/quickstart-create-logic-apps-with-visual-studio/clean-up-resources.png)
1. Delete the Visual Studio solution from your local computer. ## Next steps
-In this article, you built, deployed, and ran your logic app workflow with Visual Studio. To learn about managing and performing advanced deployment for logic apps with Visual Studio, see these articles:
+In this article, you built, deployed, and ran your logic app workflow with Visual Studio. To learn about managing and performing advanced deployment for logic apps with Visual Studio, see the following article:
> [!div class="nextstepaction"]
-> [Manage logic apps with Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md)
+> [Manage logic apps with Visual Studio](manage-logic-apps-with-visual-studio.md)
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
Previously updated : 08/17/2022 Last updated : 02/27/2024
Machine learning models are powerful in identifying patterns in data and making
Practitioners have become increasingly focused on using historical data to inform their future decisions and business interventions. For example, how would the revenue be affected if a corporation pursued a new pricing strategy? Would a new medication improve a patient's condition, all else equal?
-The *causal inference* component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort, and on an individual level. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from an intervention. Collectively, these functionalities allow decision-makers to apply new policies and effect real-world change.
+The *causal inference* component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort, and on an individual level. It also helps construct promising interventions by simulating feature responses to various interventions and creating rules to determine which population cohorts would benefit from an intervention. Collectively, these functionalities allow decision-makers to apply new policies and drive real-world change.
The capabilities of this component come from the [EconML](https://github.com/Microsoft/EconML) package. It estimates heterogeneous treatment effects from observational data via the [double machine learning](https://econml.azurewebsites.net/spec/estimation/dml.html) technique.
machine-learning Concept Counterfactual Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-counterfactual-analysis.md
Previously updated : 08/17/2022 Last updated : 02/27/2024
machine-learning Concept Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-scorecard.md
Previously updated : 11/09/2022 Last updated : 02/27/2024
machine-learning How To Kubernetes Inference Routing Azureml Fe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md
Previously updated : 08/31/2022 Last updated : 02/05/2024
replicas = ceil(concurrentRequests / maxReqPerContainer)
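As a quick worked illustration with hypothetical numbers (not taken from the article), the calculation looks like this:

```python
import math

# Hypothetical figures: 100 expected concurrent requests, and each azureml-fe
# container configured to handle at most 10 requests at a time.
concurrent_requests = 100
max_requests_per_container = 10

replicas = math.ceil(concurrent_requests / max_requests_per_container)
print(replicas)  # 10
```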
### Performance of azureml-fe
-The `azureml-fe` can reach 5K requests per second (QPS) with good latency, having an overhead not exceeding 3ms on average and 15ms at 99% percentile.
+The `azureml-fe` can reach 5K requests per second (QPS) with good latency, with an overhead not exceeding 3 ms on average and 15 ms at the 99th percentile.
>[!Note]
AKS cluster is deployed with one of the following two network models:
* Kubenet networking - The network resources are typically created and configured as the AKS cluster is deployed. * Azure Container Networking Interface (CNI) networking - The AKS cluster is connected to an existing virtual network resource and configurations.
-For Kubenet networking, the network is created and configured properly for Azure Machine Learning service. For the CNI networking, you need to understand the connectivity requirements and ensure DNS resolution and outbound connectivity for AKS inferencing. For example, you may be using a firewall to block network traffic.
+For Kubenet networking, the network is created and configured properly for the Azure Machine Learning service. For CNI networking, you need to understand the connectivity requirements and ensure DNS resolution and outbound connectivity for AKS inferencing. For example, you might need additional steps if you use a firewall to block network traffic.
The following diagram shows the connectivity requirements for AKS inferencing. Black arrows represent actual communication, and blue arrows represent the domain names. You may need to add entries for these hosts to your firewall or to your custom DNS server.
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Previously updated : 09/23/2022 Last updated : 02/22/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
You should use v2 if you're starting a new machine learning project or workflow.
* Responsible AI dashboard * Registry of assets
-A new v2 project can reuse existing resources like workspaces and compute and existing assets like models and environments created using v1.
+A new v2 project can reuse existing v1 resources like workspaces and compute and existing assets like models and environments created using v1.
Some feature gaps in v2 include:
You should then ensure the features you need in v2 meet your organization's requ
> [!IMPORTANT] > New features in Azure Machine Learning will only be launched in v2.
-## Should I upgrade existing code to v2
-
-You can reuse your existing assets in your v2 workflows. For instance a model created in v1 can be used to perform Managed Inferencing in v2.
-
-Optionally, if you want to upgrade specific parts of your existing code to v2, please refer to the comparison links provided in the details of each resource or asset in the rest of this document.
- ## Which v2 API should I use? In v2 interfaces via REST API, CLI, and Python SDK are available. The interface you should use depends on your scenario and preferences.
In v2 interfaces via REST API, CLI, and Python SDK are available. The interface
|CLI|Recommended for automation with CI/CD or per personal preference. Allows quick iteration with YAML files and straightforward separation between Azure Machine Learning and ML model code.| |Python SDK|Recommended for complicated scripting (for example, programmatically generating large pipeline jobs) or per personal preference. Allows quick iteration with YAML files or development solely in Python.|
-## Can I use v1 and v2 together?
+## Mapping of Python SDK v1 to v2
-v1 and v2 can co-exist in a workspace. You can reuse your existing assets in your v2 workflows. For instance a model created in v1 can be used to perform Managed Inferencing in v2. Resources like workspace, compute, and datastore work across v1 and v2, with exceptions. A user can call the v1 Python SDK to change a workspace's description, then using the v2 CLI extension change it again. Jobs (experiments/runs/pipelines in v1) can be submitted to the same workspace from the v1 or v2 Python SDK. A workspace can have both v1 and v2 model deployment endpoints.
+See each of the following articles for a code comparison between SDK v1 and SDK v2.
-### Using v1 and v2 code together
-We do not recommend using the v1 and v2 SDKs together in the same code. It is technically possible to use v1 and v2 in the same code because they use different Azure namespaces. However, there are many classes with the same name across these namespaces (like Workspace, Model) which can cause confusion and make code readability and debuggability challenging.
+|Resources and assets |Article |
+|||
+|Workspace | [Workspace management in SDK v1 and SDK v2](migrate-to-v2-resource-workspace.md) |
+|Datastore | [Datastore management in SDK v1 and SDK v2](migrate-to-v2-resource-datastore.md) |
+|Data | [Data assets in SDK v1 and v2](migrate-to-v2-assets-data.md) |
+|Compute | [Compute management in SDK v1 and SDK v2](migrate-to-v2-resource-compute.md) |
+|Training | [Run a script](migrate-to-v2-command-job.md) |
+|Training | [Local runs](migrate-to-v2-local-runs.md) |
+|Training | [Hyperparameter tuning](migrate-to-v2-execution-hyperdrive.md) |
+|Training | [Parallel Run](migrate-to-v2-execution-parallel-run-step.md) |
+|Training | [Pipelines](migrate-to-v2-execution-pipeline.md) |
+|Training | [AutoML](migrate-to-v2-execution-automl.md) |
+| Models | [Model management in SDK v1 and SDK v2](migrate-to-v2-assets-model.md) |
+| Deployment | [Upgrade deployment endpoints to SDK v2](migrate-to-v2-deploy-endpoints.md) |
-> [!IMPORTANT]
-> If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md?view=azureml-api-2&preserve-view=true) for details.
## Resources and assets in v1 and v2
Object storage datastore types created with v1 are fully available for use in v2
For a comparison of SDK v1 and v2 code, see [Datastore management in SDK v1 and SDK v2](migrate-to-v2-resource-datastore.md).
-### Compute
+### Data (datasets in v1)
-Compute of type `AmlCompute` and `ComputeInstance` are fully available for use in v2.
+Datasets are renamed to data assets. *Backwards compatibility* is provided, which means you can use V1 Datasets in V2. When you consume a V1 Dataset in a V2 job, it's automatically mapped into a V2 type as follows:
-For a comparison of SDK v1 and v2 code, see [Compute management in SDK v1 and SDK v2](migrate-to-v2-resource-compute.md).
+* V1 FileDataset = V2 Folder (`uri_folder`)
+* V1 TabularDataset = V2 Table (`mltable`)
-### Endpoint and deployment (endpoint and web service in v1)
+Note that *forwards compatibility* is **not** provided, which means you **cannot** use V2 data assets in V1.
-With SDK/CLI v1, you can deploy models on ACI or AKS as web services. Your existing v1 model deployments and web services will continue to function as they are, but Using SDK/CLI v1 to deploy models on ACI or AKS as web services is now considered as **legacy**. For new model deployments, we recommend upgrading to v2. In v2, we offer [managed endpoints or Kubernetes endpoints](./concept-endpoints.md?view=azureml-api-2&preserve-view=true). The following table guides our recommendation:
+For more information about handling data in v2, see [Read and write data in a job](how-to-read-write-data-v2.md?view=azureml-api-2&preserve-view=true).
-|Endpoint type in v2|Upgrade from|Notes|
-|-|-|-|
-|Local|ACI|Quick test of model deployment locally; not for production.|
-|Managed online endpoint|ACI, AKS|Enterprise-grade managed model deployment infrastructure with near real-time responses and massive scaling for production.|
-|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively parallel batch processing for production.|
-|Azure Kubernetes Service (AKS)|ACI, AKS|Manage your own AKS cluster(s) for model deployment, giving flexibility and granular control at the cost of IT overhead.|
-|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-premises, giving flexibility and granular control at the cost of IT overhead.|
+For a comparison of SDK v1 and v2 code, see [Data assets in SDK v1 and v2](migrate-to-v2-assets-data.md).
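To make the v1-to-v2 mapping concrete, here's a minimal, hedged sketch (not taken from the migration guide) that registers a v2 folder (`uri_folder`) data asset with the v2 Python SDK. The workspace details, asset name, and datastore path are hypothetical placeholders; an `mltable` asset is registered the same way with `AssetTypes.MLTABLE`.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data
from azure.identity import DefaultAzureCredential

# Hypothetical workspace details; replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# A v2 folder data asset: the type that a v1 FileDataset maps to when consumed in v2.
folder_asset = Data(
    name="my-folder-asset",
    type=AssetTypes.URI_FOLDER,
    path="azureml://datastores/workspaceblobstore/paths/my-data/",
    description="Example folder (uri_folder) data asset.",
)
ml_client.data.create_or_update(folder_asset)
```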
-For a comparison of SDK v1 and v2 code, see [Upgrade deployment endpoints to SDK v2](migrate-to-v2-deploy-endpoints.md).
-For migration steps from your existing ACI web services to managed online endpoints, see our [upgrade guide article](migrate-to-v2-managed-online-endpoints.md) and [blog](https://aka.ms/acimoemigration).
+
+### Compute
+
+Compute of type `AmlCompute` and `ComputeInstance` are fully available for use in v2.
+
+For a comparison of SDK v1 and v2 code, see [Compute management in SDK v1 and SDK v2](migrate-to-v2-resource-compute.md).
### Jobs (experiments, runs, pipelines in v1)
You can continue to use designer to build pipelines using classic prebuilt compo
You cannot build a pipeline using both existing designer classic prebuilt components and v2 custom components.
-### Data (datasets in v1)
-
-Datasets are renamed to data assets. *Backwards compatibility* is provided, which means you can use V1 Datasets in V2. When you consume a V1 Dataset in a V2 job you will notice they are automatically mapped into V2 types as follows:
-* V1 FileDataset = V2 Folder (`uri_folder`)
-* V1 TabularDataset = V2 Table (`mltable`)
+### Model
-It should be noted that *forwards compatibility* is **not** provided, which means you **cannot** use V2 data assets in V1.
+Models created from v1 can be used in v2.
-This article talks more about handling data in v2 - [Read and write data in a job](how-to-read-write-data-v2.md?view=azureml-api-2&preserve-view=true)
+For a comparison of SDK v1 and v2 code, see [Model management in SDK v1 and SDK v2](migrate-to-v2-assets-model.md)
-For a comparison of SDK v1 and v2 code, see [Data assets in SDK v1 and v2](migrate-to-v2-assets-data.md).
+### Endpoint and deployment (endpoint and web service in v1)
-### Model
+With SDK/CLI v1, you can deploy models on ACI or AKS as web services. Your existing v1 model deployments and web services continue to function as they are, but using SDK/CLI v1 to deploy models on ACI or AKS as web services is now considered **legacy**. For new model deployments, we recommend upgrading to v2. In v2, we offer [managed endpoints or Kubernetes endpoints](./concept-endpoints.md?view=azureml-api-2&preserve-view=true). The following table guides our recommendation:
-Models created from v1 can be used in v2.
+|Endpoint type in v2|Upgrade from|Notes|
+|-|-|-|
+|Local|ACI|Quick test of model deployment locally; not for production.|
+|Managed online endpoint|ACI, AKS|Enterprise-grade managed model deployment infrastructure with near real-time responses and massive scaling for production.|
+|Managed batch endpoint|ParallelRunStep in a pipeline for batch scoring|Enterprise-grade managed model deployment infrastructure with massively parallel batch processing for production.|
+|Azure Kubernetes Service (AKS)|ACI, AKS|Manage your own AKS cluster(s) for model deployment, giving flexibility and granular control at the cost of IT overhead.|
+|Azure Arc Kubernetes|N/A|Manage your own Kubernetes cluster(s) in other clouds or on-premises, giving flexibility and granular control at the cost of IT overhead.|
-For a comparison of SDK v1 and v2 code, see [Model management in SDK v1 and SDK v2](migrate-to-v2-assets-model.md)
+For a comparison of SDK v1 and v2 code, see [Upgrade deployment endpoints to SDK v2](migrate-to-v2-deploy-endpoints.md).
+For migration steps from your existing ACI web services to managed online endpoints, see our [upgrade guide article](migrate-to-v2-managed-online-endpoints.md) and [blog](https://aka.ms/acimoemigration).
### Environment
A key paradigm with v2 is serializing machine learning entities as YAML files fo
You can obtain a YAML representation of any entity with the CLI via `az ml <entity> show --output yaml`. Note that this output will have system-generated properties, which can be ignored or deleted.
+## Should I upgrade existing v1 code to v2?
+
+You can reuse your existing v1 assets in your v2 workflows. For instance, a model created in v1 can be used to perform Managed Inferencing in v2.
+
+Optionally, if you want to upgrade specific parts of your existing v1 code to v2, refer to the comparison links provided in this document.
+
+## Can I use v1 and v2 together?
+
+v1 and v2 can co-exist in a workspace. You can reuse your existing assets in your v2 workflows. For instance, a model created in v1 can be used to perform Managed Inferencing in v2. Resources like workspace, compute, and datastore work across v1 and v2, with exceptions. A user can call the v1 Python SDK to change a workspace's description, and then change it again by using the v2 CLI extension. Jobs (experiments/runs/pipelines in v1) can be submitted to the same workspace from the v1 or v2 Python SDK. A workspace can have both v1 and v2 model deployment endpoints.
+
+### Using v1 and v2 code together
+
+We don't recommend using the v1 and v2 SDKs together in the same code. It's technically possible to use v1 and v2 in the same code because they use different Azure namespaces. However, many classes share the same name across these namespaces (like Workspace and Model), which can cause confusion and make the code harder to read and debug.
+
+> [!IMPORTANT]
+> If your workspace uses a private endpoint, it will automatically have the `v1_legacy_mode` flag enabled, preventing usage of v2 APIs. See [how to configure network isolation with v2](how-to-configure-network-isolation-with-v2.md?view=azureml-api-2&preserve-view=true) for details.
+++ ## Next steps - [Get started with the CLI (v2)](how-to-configure-cli.md?view=azureml-api-2&preserve-view=true)
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
Last updated 10/10/2022 -+ # Deploy MLflow models in batch deployments
In this article, learn how to deploy [MLflow](https://www.mlflow.org) models to
## About this example
-This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions. This example uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
+This example shows how you can deploy an MLflow model to a batch endpoint to perform batch predictions. It uses an MLflow model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we're using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. The prediction is an integer value from 0 (no presence) to 1 (presence).
The model has been trained using an `XGBoost` classifier, and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
You can follow along this sample in the following notebooks. In the cloned repos
Follow these steps to deploy an MLflow model to a batch endpoint for running batch inference over new data:
-1. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
+1. Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.
# [Azure CLI](#tab/cli)
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=register_model)]
-1. Before moving any forward, we need to make sure the batch deployments we are about to create can run on some infrastructure (compute). Batch deployments can run on any Azure Machine Learning compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we are going to work on an Azure Machine Learning compute cluster called `cpu-cluster`. Let's verify the compute exists on the workspace or create it otherwise.
+1. Before moving forward, we need to make sure the batch deployments we're about to create can run on some infrastructure (compute). Batch deployments can run on any Azure Machine Learning compute that already exists in the workspace. That means that multiple batch deployments can share the same compute infrastructure. In this example, we're going to work on an Azure Machine Learning compute cluster called `cpu-cluster`. Let's verify that the compute exists in the workspace, or create it otherwise.
# [Azure CLI](#tab/cli)
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
> [!IMPORTANT] > Configure `timeout` in your deployment based on how long it takes for your model to run inference on a single batch. The bigger the batch size, the longer this value has to be. Remember that `mini_batch_size` indicates the number of files in a batch, not the number of samples. When working with tabular data, each file may contain multiple rows, which increases the time it takes for the batch endpoint to process each file. Use high values in those cases to avoid timeout errors.
-7. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+7. Although you can invoke a specific deployment inside of an endpoint, you'll usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. That deployment is called the "default" deployment. This approach lets you change the default deployment, and therefore the model serving the deployment, without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
# [Azure CLI](#tab/cli)
Follow these steps to deploy an MLflow model to a batch endpoint for running bat
## Testing out the deployment
-For testing our endpoint, we are going to use a sample of unlabeled data located in this repository and that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in various locations.
+For testing our endpoint, we're going to use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we're going to upload it to an Azure Machine Learning data store. Specifically, we're going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in various locations.
1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset or if you want to use a different input type.
For testing our endpoint, we are going to use a sample of unlabeled data located
> [!TIP]
- > Notice how we are not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Since our endpoint only has one deployment, then that one is the default one. You can target an specific deployment by indicating the argument/parameter `deployment_name`.
+ > Notice how we're not indicating the deployment name in the invoke operation. That's because the endpoint automatically routes the job to the default deployment. Because our endpoint has only one deployment, that deployment is the default one. You can target a specific deployment by indicating the argument/parameter `deployment_name`.
3. A batch job is started as soon as the command returns. You can monitor the status of the job until it finishes:
Output predictions are generated in the `predictions.csv` file as indicated in t
The file is structured as follows:
-* There is one row per each data point that was sent to the model. For tabular data, this means that one row is generated for each row in the input files and hence the number of rows in the generated file (`predictions.csv`) equals the sum of all the rows in all the processed files. For other data types, there is one row per each processed file.
+* There is one row for each data point that was sent to the model. For tabular data, this means that the file (`predictions.csv`) contains one row for every row present in each of the processed files. For other data types (for example, images, audio, or text), there is one row for each processed file.
-* Two columns are indicated:
-
- * The file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data. For any given file, predictions are returned in the same order they appear in the input file so you can rely on the row number to match the corresponding prediction.
- * The prediction associated with the input data. This value is returned "as-is" it was provided by the model's `predict().` function.
+* The following columns are in the file (in order):
+  * `row` (optional), the corresponding row index in the input data file. This only applies if the input data is tabular. Predictions are returned in the same order they appear in the input file, so you can rely on the row number to match the corresponding prediction.
+  * `prediction`, the prediction associated with the input data. This value is returned as-is, exactly as it was provided by the model's `predict()` function.
+ * `file_name`, the file name where the data was read from. In tabular data, use this field to know which prediction belongs to which input data.
You can download the results of the job by using the job name:
Once the file is downloaded, you can open it using your favorite tool. The follo
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/heart-classifier-mlflow/mlflow-for-batch-tabular.ipynb?name=read_outputs)]
-> [!WARNING]
-> The file `predictions.csv` may not be a regular CSV file and can't be read correctly using `pandas.read_csv()` method.
- The output looks as follows:
-| file | prediction |
-| -- | -- |
-| heart-unlabeled-0.csv | 0 |
-| heart-unlabeled-0.csv | 1 |
-| ... | 1 |
-| heart-unlabeled-3.csv | 0 |
+|row | prediction | file |
+|--| -- | -- |
+| 0 | 0 | heart-unlabeled-0.csv |
+| 1 | 1 | heart-unlabeled-0.csv |
+| 2 | 0 | heart-unlabeled-0.csv |
+| ... | ... | ... |
+| 307 | 0 | heart-unlabeled-3.csv |
> [!TIP] > Notice that in this example the input data was tabular data in `CSV` format and there were 4 different input files (heart-unlabeled-0.csv, heart-unlabeled-1.csv, heart-unlabeled-2.csv and heart-unlabeled-3.csv).
Azure Machine Learning supports deploying MLflow models to batch endpoints witho
Batch Endpoints distribute work at the file level, for both structured and unstructured data. As a consequence, only [URI file](reference-yaml-data.md) and [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. For tabular data, batch endpoints don't take into account the number of rows inside of each file when distributing the work. > [!WARNING]
-> Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
+> Nested folder structures are not explored during inference. If you're partitioning your data using folders, make sure to flatten the structure beforehand.
Batch deployments will call the `predict` function of the MLflow model once per file. For CSV files containing multiple rows, this may put memory pressure on the underlying compute and may increase the time it takes for the model to score a single file (especially for expensive models like large language models). If you encounter several out-of-memory exceptions or time-out entries in logs, consider splitting the data into smaller files with fewer rows, or implementing batching at the row level inside the model/scoring script.
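As an illustration of the row-level batching idea (not code from this article), a scoring helper could chunk each CSV before calling the model:

```python
import pandas as pd

def predict_in_chunks(model, csv_path: str, chunk_size: int = 5000) -> pd.Series:
    """Score a large CSV in fixed-size chunks to limit memory pressure.

    `model` is assumed to expose a pandas-friendly predict() method,
    as MLflow pyfunc models do.
    """
    results = []
    for chunk in pd.read_csv(csv_path, chunksize=chunk_size):
        results.append(pd.Series(model.predict(chunk)))
    return pd.concat(results, ignore_index=True)
```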
The following data types are supported for batch inference when deploying MLflow
| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match tensors shape if available. If no signature is available, tensors of type `np.uint8` are inferred. For additional guidance read [Considerations for MLflow models processing images](how-to-image-processing-batch.md#considerations-for-mlflow-models-processing-images). | > [!WARNING]
-> Be advised that any unsupported file that may be present in the input data will make the job to fail. You will see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.avro'. File type 'avro' is not supported."*.
+> Be advised that any unsupported file that may be present in the input data will make the job fail. You'll see an error entry as follows: *"ERROR:azureml:Error processing input file: '/mnt/batch/tasks/.../a-given-file.avro'. File type 'avro' is not supported."*.
### Signature enforcement for MLflow models
Batch deployments only support deploying MLflow models with a `pyfunc` flavor. I
MLflow models can be deployed to batch endpoints without indicating a scoring script in the deployment definition. However, you can opt to provide this file (usually referred to as the *batch driver*) to customize how inference is executed.
-You will typically select this workflow when:
+You'll typically select this workflow when:
> [!div class="checklist"] > * You need to process a file type not supported by MLflow batch deployments. > * You need to customize the way the model is run, for instance, to use a specific flavor to load it with `mlflow.<flavor>.load_model()`.
You will typically select this workflow when:
> * Your model can't process each file at once because of memory constraints and it needs to read it in chunks. > [!IMPORTANT]
-> If you choose to indicate a scoring script for an MLflow model deployment, you will also have to specify the environment where the deployment will run.
+> If you choose to indicate a scoring script for an MLflow model deployment, you'll also have to specify the environment where the deployment will run.
### Steps
Use the following steps to deploy an MLflow model with a custom scoring script.
b. Go to the section __Models__.
- c. Select the model you are trying to deploy and click on the tab __Artifacts__.
+ c. Select the model you're trying to deploy and click on the tab __Artifacts__.
d. Take note of the folder that is displayed. This folder was indicated when the model was registered.
Use the following steps to deploy an MLflow model with a custom scoring script.
:::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/heart-classifier-mlflow/deployment-custom/code/batch_driver.py" :::
-1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included on it see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format)). We are going then to build the environment using the conda dependencies from the file. However, __we need also to include__ the package `azureml-core` which is required for Batch Deployments.
+1. Let's create an environment where the scoring script can be executed. Since our model is MLflow, the conda requirements are also specified in the model package (for more details about MLflow models and the files included in them, see [The MLmodel format](concept-mlflow-models.md#the-mlmodel-format)). We'll then build the environment using the conda dependencies from the file. However, __we also need to include__ the package `azureml-core`, which is required for Batch Deployments.
> [!TIP] > If your model is already registered in the model registry, you can download/copy the `conda.yml` file associated with your model by going to [Azure Machine Learning studio](https://ml.azure.com) > Models > Select your model from the list > Artifacts. Open the root folder in the navigation and select the `conda.yml` file listed. Click on Download or copy its content.
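As a hedged sketch of that step with the Python SDK (v2), the environment could be defined from the conda file like this; the environment name, file path, and base image are assumptions, and the conda file is assumed to already list `azureml-core`:

```python
from azure.ai.ml.entities import Environment

# Define the environment from the model's conda file plus a base image.
# The conda file is assumed to have been edited to list `azureml-core` under
# its pip dependencies; names and paths are placeholders.
environment = Environment(
    name="heart-classifier-batch-env",
    conda_file="deployment-custom/environment/conda.yaml",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
)
ml_client.environments.create_or_update(environment)
```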
machine-learning How To Bulk Test Evaluate Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-bulk-test-evaluate-flow.md
If an evaluation method uses Large Language Models (LLMs) to measure the perform
After you finish the input mapping, select on **"Next"** to review your settings and select on **"Submit"** to start the batch run with evaluation.
+> [!NOTE]
+> Batch runs have a maximum duration of 10 hours. If a batch run exceeds this limit, it will be terminated and marked as failed. We advise monitoring your Large Language Model (LLM) capacity to avoid throttling. If necessary, consider reducing the size of your data. If you continue to experience issues or need further assistance, don't hesitate to reach out to our product team through the feedback form or support request.
+ ## View the evaluation result and metrics After submission, you can find the submitted batch run in the run list tab in prompt flow page. Select a run to navigate to the run detail page.
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
Previously updated : 05/11/2022 Last updated : 02/27/2024 #Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models. # Connect to storage services on Azure with datastores - [!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)] [!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)] In this article, learn how to connect to data storage services on Azure with Azure Machine Learning datastores and the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro).
-Datastores securely connect to your storage service on Azure without putting your authentication credentials and the integrity of your original data source at risk. They store connection information, like your subscription ID and token authorization in your [Key Vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace, so you can securely access your storage without having to hard code them in your scripts. You can create datastores that connect to [these Azure storage solutions](#supported-data-storage-service-types).
+Datastores securely connect to your storage service on Azure without putting your authentication credentials or the integrity of your original data source at risk. A datastore stores connection information - for example, your subscription ID or token authorization - in the [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace. With a datastore, you can securely access your storage without hard-coding connection information in your scripts. You can create datastores that connect to [these Azure storage solutions](#supported-data-storage-service-types).
-To understand where datastores fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
+For information about how datastores fit into the overall Azure Machine Learning data access workflow, visit the [Securely access data](concept-data.md#data-workflow) article.
-For a low code experience, see how to use the [Azure Machine Learning studio to create and register datastores](how-to-connect-data-ui.md#create-datastores).
+To learn how to connect to a data storage resource with a UI, visit [Connect to data storage with the studio UI](how-to-connect-data-ui.md#create-datastores).
>[!TIP]
-> This article assumes you want to connect to your storage service with credential-based authentication credentials, like a service principal or a shared access signature (SAS) token. Keep in mind, if credentials are registered with datastores, all users with workspace *Reader* role are able to retrieve these credentials. [Learn more about workspace *Reader* role.](../how-to-assign-roles.md#default-roles).
+> This article assumes that you will connect to your storage service with credential-based authentication - for example, a service principal or a shared access signature (SAS) token. Note that if credentials are registered with datastores, all users with the workspace *Reader* role can retrieve those credentials. For more information, visit [Manage roles in your workspace](../how-to-assign-roles.md#default-roles).
>
-> If this is a concern, learn how to [Connect to storage services with identity based access](../how-to-identity-based-data-access.md).
+> For more information about identity-based data access, visit [Identity-based data access to storage services (v1)](../how-to-identity-based-data-access.md).
## Prerequisites -- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/)
-- An Azure storage account with a [supported storage type](#supported-data-storage-service-types).
+- An Azure storage account with a [supported storage type](#supported-data-storage-service-types)
-- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro).
+- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro)
- An Azure Machine Learning workspace.
- Either [create an Azure Machine Learning workspace](../quickstart-create-resources.md) or use an existing one via the Python SDK.
+ [Create an Azure Machine Learning workspace](../quickstart-create-resources.md), or use an existing workspace via the Python SDK
- Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
+ Import the `Workspace` and `Datastore` classes, and load your subscription information from the `config.json` file with the `from_config()` function. By default, the function looks for the JSON file in the current directory, but you can also specify a path parameter to point to the file with `from_config(path="your/file/path")`:
```Python import azureml.core
For a low code experience, see how to use the [Azure Machine Learning studio to
ws = Workspace.from_config() ```
- When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. The `workspaceblobstore` is used to store workspace artifacts and your machine learning experiment logs. It's also set as the **default datastore** and can't be deleted from the workspace. The `workspacefilestore` is used to store notebooks and R scripts authorized via [compute instance](../concept-compute-instance.md#accessing-files).
-
- > [!NOTE]
- > Azure Machine Learning designer will create a datastore named **azureml_globaldatasets** automatically when you open a sample in the designer homepage. This datastore only contains sample datasets. Please **do not** use this datastore for any confidential data access.
+ Workspace creation automatically registers an Azure blob container and an Azure file share as datastores for the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. The `workspaceblobstore` stores workspace artifacts and your machine learning experiment logs. It serves as the **default datastore** and can't be deleted from the workspace. The `workspacefilestore` stores notebooks and R scripts authorized via [compute instance](../concept-compute-instance.md#accessing-files).
+ > [!NOTE]
+ > Azure Machine Learning designer automatically creates a datastore named **azureml_globaldatasets** when you open a sample in the designer homepage. This datastore only contains sample datasets. Please **do not** use this datastore for any confidential data access.
## Supported data storage service types
-Datastores currently support storing connection information to the storage services listed in the following matrix.
+Datastores can currently store connection information for the storage services listed in this matrix:
> [!TIP]
-> **For unsupported storage solutions** (those not listed in the table below), you may run into issues connecting and working with your data. We suggest you [move your data](#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. Doing this may also help with additional scenarios, like saving data egress cost during ML experiments.
+> **For unsupported storage solutions** (those not listed in the following table), you might encounter issues as you connect and work with your data. We suggest that you [move your data](#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. This can also help with other scenarios, such as saving data egress costs during machine learning experiments.
| Storage&nbsp;type | Authentication&nbsp;type | [Azure&nbsp;Machine&nbsp;Learning studio](https://ml.azure.com/) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; Python SDK](/python/api/overview/azure/ml/intro) | [Azure&nbsp;Machine&nbsp;Learning CLI](reference-azure-machine-learning-cli.md) | [Azure&nbsp;Machine&nbsp;Learning&nbsp; REST API](/rest/api/azureml/) | VS Code ||||||
Datastores currently support storing connection information to the storage servi
* MySQL is only supported for pipeline [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep). * Databricks is only supported for pipeline [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep). - ### Storage guidance
-We recommend creating a datastore for an [Azure Blob container](../../storage/blobs/storage-blobs-introduction.md). Both standard and premium storage are available for blobs. Although premium storage is more expensive, its faster throughput speeds might improve the speed of your training runs, particularly if you train against a large dataset. For information about the cost of storage accounts, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=machine-learning-service).
+We recommend creation of a datastore for an [Azure Blob container](../../storage/blobs/storage-blobs-introduction.md). Both standard and premium storage are available for blobs. Although premium storage is more expensive, its faster throughput speeds might improve the speed of your training runs, especially if you train against a large dataset. For information about storage account costs, visit the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=machine-learning-service).
-[Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) is built on top of Azure Blob storage and designed for enterprise big data analytics. A fundamental part of Data Lake Storage Gen2 is the addition of a [hierarchical namespace](../../storage/blobs/data-lake-storage-namespace.md) to Blob storage. The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access.
+[Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) is built on top of Azure Blob storage. It's designed for enterprise big data analytics. As part of Data Lake Storage Gen2, Blob storage features a [hierarchical namespace](../../storage/blobs/data-lake-storage-namespace.md). The hierarchical namespace organizes objects/files into a hierarchy of directories for efficient data access.
## Storage access and permissions
-To ensure you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage container. This access depends on the authentication credentials used to register the datastore.
+To ensure you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage container. This access depends on the authentication credentials used to register the datastore.
> [!NOTE]
-> This guidance also applies to [datastores created with identity-based data access](../how-to-identity-based-data-access.md).
+> This guidance also applies to [datastores created with identity-based data access](../how-to-identity-based-data-access.md).
-### Virtual network
+### Virtual network
-Azure Machine Learning requires extra configuration steps to communicate with a storage account that is behind a firewall or within a virtual network. If your storage account is behind a firewall, you can [add your client's IP address to an allowlist](../../storage/common/storage-network-security.md#managing-ip-network-rules) via the Azure portal.
+To communicate with a storage account located behind a firewall or within a virtual network, Azure Machine Learning requires extra configuration steps. For a storage account located behind a firewall, you can [add your client's IP address to an allowlist](../../storage/common/storage-network-security.md#managing-ip-network-rules) with the Azure portal.
-Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe and to enable data being displayed in your workspace, [use a private endpoint with your workspace](../how-to-configure-private-link.md).
+Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe, and to enable display of data in your workspace, [use a private endpoint with your workspace](../how-to-configure-private-link.md).
-**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet of the storage. You can [use a compute instance/cluster in the same virtual network](how-to-secure-training-vnet.md).
+**For Python SDK users**: To access your data on a compute target with your training script, you must locate the compute target inside the same virtual network and subnet as the storage. You can [use a compute instance/cluster in the same virtual network](how-to-secure-training-vnet.md).
-**For Azure Machine Learning studio users**, several features rely on the ability to read data from a dataset, such as dataset previews, profiles, and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](../how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
+**For Azure Machine Learning studio users**: Several features rely on the ability to read data from a dataset - for example, dataset previews, profiles, and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](../how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
> [!NOTE]
-> If your data storage is an Azure SQL Database behind a virtual network, be sure to set *Deny public access* to **No** via the [Azure portal](https://portal.azure.com/) to allow Azure Machine Learning to access the storage account.
+> For data stored in an Azure SQL Database behind a virtual network, set *Deny public access* to **No** with the [Azure portal](https://portal.azure.com/), to allow Azure Machine Learning to access the storage account.
### Access validation > [!WARNING]
-> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the Azure Machine Learning Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+> Cross tenant access to storage accounts is not supported. If your scenario needs cross tenant access, reach out to the Azure Machine Learning Data Support team alias at **amldatasupport@microsoft.com** for assistance with a custom code solution.
-**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and the user provided principal (username, service principal, or SAS token) has access to the specified storage.
+**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and that the user-provided principal (username, service principal, or SAS token) can access the specified storage.
-**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore; but if you just want to change your default datastore, then validation doesn't happen.
+**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore. However, if you only want to change your default datastore, then validation doesn't happen.
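For example, in a hedged SDK v1 sketch with placeholder names, a download triggers validation while changing the default datastore doesn't:

```python
from azureml.core import Datastore, Workspace

ws = Workspace.from_config()
datastore = Datastore.get(ws, datastore_name="azblobsdk")  # hypothetical datastore name

# Downloading reads from the underlying storage, so access is validated here.
datastore.download(target_path="./local-data", prefix="data/", overwrite=True)

# Changing the default datastore only updates workspace metadata; no validation occurs.
ws.set_default_datastore("azblobsdk")
```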
To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal in the corresponding `register_azure_*()` method of the datastore type you want to create. The [storage type matrix](#supported-data-storage-service-types) lists the supported authentication types that correspond to each datastore type.
-You can find account key, SAS token, and service principal information on your [Azure portal](https://portal.azure.com).
+You can find account key, SAS token, and service principal information in the [Azure portal](https://portal.azure.com).
-* If you plan to use an account key or SAS token for authentication, select **Storage Accounts** on the left pane, and choose the storage account that you want to register.
- * The **Overview** page provides information such as the account name, container, and file share name.
- * For account keys, go to **Access keys** on the **Settings** pane.
- * For SAS tokens, go to **Shared access signatures** on the **Settings** pane.
+* To use an account key or SAS token for authentication, select **Storage Accounts** on the left pane, and choose the storage account that you want to register
+ * The **Overview** page provides account name, file share name, container, etc. information
+ * For account keys, go to **Access keys** on the **Settings** pane
+ * For SAS tokens, go to **Shared access signatures** on the **Settings** pane
-* If you plan to use a service principal for authentication, go to your **App registrations** and select which app you want to use.
- * Its corresponding **Overview** page will contain required information like tenant ID and client ID.
+* To use a service principal for authentication, go to your **App registrations** and select the app you want to use
+ * The corresponding **Overview** page of the selected app contains required information - for example, tenant ID and client ID
> [!IMPORTANT]
-> If you need to change your access keys for an Azure Storage account (account key or SAS token), be sure to sync the new credentials with your workspace and the datastores connected to it. Learn how to [sync your updated credentials](../how-to-change-storage-access-key.md).
+> To change your access keys for an Azure Storage account (account key or SAS token), sync the new credentials with your workspace and the datastores connected to it. For more information, visit [sync your updated credentials](../how-to-change-storage-access-key.md).
### Permissions
-For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader). An account SAS token defaults to no permissions.
-* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects.
-
-* For data **write access**, write and add permissions also are required.
+For Azure blob container and Azure Data Lake Gen 2 storage, ensure that your authentication credentials have **Storage Blob Data Reader** access. For more information, visit [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader). An account SAS token defaults to no permissions.
+* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects
+* Data **write access** also requires write and add permissions
## Create and register datastores
-When you register an Azure storage solution as a datastore, you automatically create and register that datastore to a specific workspace. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+Registration of an Azure storage solution as a datastore automatically creates and registers that datastore to a specific workspace. Review [storage access & permissions](#storage-access-and-permissions) in this document for guidance about virtual network scenarios, and where to find required authentication credentials.
-Within this section are examples for how to create and register a datastore via the Python SDK for the following storage types. The parameters provided in these examples are the **required parameters** to create and register a datastore.
+This section offers examples that describe how to create and register a datastore via the Python SDK for these storage types. The parameters shown in these examples are the **required parameters** to create and register a datastore:
* [Azure blob container](#azure-blob-container) * [Azure file share](#azure-file-share) * [Azure Data Lake Storage Generation 2](#azure-data-lake-storage-generation-2)
- To create datastores for other supported storage services, see the [reference documentation for the applicable `register_azure_*` methods](/python/api/azureml-core/azureml.core.datastore.datastore#methods).
+ To create datastores for other supported storage services, visit the [reference documentation for the applicable `register_azure_*` methods](/python/api/azureml-core/azureml.core.datastore.datastore#methods).
-If you prefer a low code experience, see [Connect to data with Azure Machine Learning studio](how-to-connect-data-ui.md).
->[!IMPORTANT]
-> If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](../../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
+To learn how to connect to a data storage resource with a UI, visit [Connect to data with Azure Machine Learning studio](how-to-connect-data-ui.md).
+>[!IMPORTANT]
+> If you unregister and re-register a datastore with the same name, and the re-registration fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created before October 2020. For information that describes how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](../../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
> [!NOTE]
-> Datastore name should only consist of lowercase letters, digits and underscores.
+> A datastore name should only contain lowercase letters, digits and underscores.
### Azure blob container
-To register an Azure blob container as a datastore, use [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-blob-container-workspace--datastore-name--container-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false--blob-cache-timeout-none--grant-workspace-access-false--subscription-id-none--resource-group-none-).
+To register an Azure blob container as a datastore, use the [`register_azure_blob_container()`](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-blob-container) method.
-The following code creates and registers the `blob_datastore_name` datastore to the `ws` workspace. This datastore accesses the `my-container-name` blob container on the `my-account-name` storage account, by using the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+This code sample creates and registers the `blob_datastore_name` datastore to the `ws` workspace. The datastore uses the provided account access key to access the `my-container-name` blob container on the `my-account-name` storage account. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance about virtual network scenarios, and where to find required authentication credentials.
```Python blob_datastore_name='azblobsdk' # Name of the datastore to workspace
blob_datastore = Datastore.register_azure_blob_container(workspace=ws,
### Azure file share
-To register an Azure file share as a datastore, use [`register_azure_file_share()`](/python/api/azureml-core/azureml.core.datastore%28class%29#register-azure-file-share-workspace--datastore-name--file-share-name--account-name--sas-token-none--account-key-none--protocol-none--endpoint-none--overwrite-false--create-if-not-exists-false--skip-validation-false-).
+To register an Azure file share as a datastore, use the [`register_azure_file_share()`](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-file-share) method.
-The following code creates and registers the `file_datastore_name` datastore to the `ws` workspace. This datastore accesses the `my-fileshare-name` file share on the `my-account-name` storage account, by using the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+This code sample creates and registers the `file_datastore_name` datastore to the `ws` workspace. The datastore uses the `my-fileshare-name` file share on the `my-account-name` storage account, with the provided account access key. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance about virtual network scenarios, and where to find required authentication credentials.
```Python file_datastore_name='azfilesharesdk' # Name of the datastore to workspace
file_datastore = Datastore.register_azure_file_share(workspace=ws,
### Azure Data Lake Storage Generation 2
-For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore.datastore#register-azure-data-lake-gen2-workspace--datastore-name--filesystem--account-name--tenant-id--client-id--client-secret--resource-url-none--authority-url-none--protocol-none--endpoint-none--overwrite-false-) to register a credential datastore connected to an Azure DataLake Gen 2 storage with [service principal permissions](../../active-directory/develop/howto-create-service-principal-portal.md).
+For an Azure Data Lake Storage Generation 2 (ADLS Gen 2) datastore, use the [register_azure_data_lake_gen2()](/python/api/azureml-core/azureml.core.datastore%28class%29#azureml-core-datastore-register-azure-data-lake-gen2) method to register a credential datastore connected to an Azure Data Lake Gen 2 storage with [service principal permissions](../../active-directory/develop/howto-create-service-principal-portal.md).
-In order to utilize your service principal, you need to [register your application](../../active-directory/develop/app-objects-and-service-principals.md) and grant the service principal data access via either Azure role-based access control (Azure RBAC) or access control lists (ACL). Learn more about [access control set up for ADLS Gen 2](../../storage/blobs/data-lake-storage-access-control-model.md).
+To use your service principal, you must [register your application](../../active-directory/develop/app-objects-and-service-principals.md) and grant the service principal data access via either Azure role-based access control (Azure RBAC) or access control lists (ACL). For more information, visit [access control set up for ADLS Gen 2](../../storage/blobs/data-lake-storage-access-control-model.md).
-The following code creates and registers the `adlsgen2_datastore_name` datastore to the `ws` workspace. This datastore accesses the file system `test` in the `account_name` storage account, by using the provided service principal credentials.
-Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
+This code creates and registers the `adlsgen2_datastore_name` datastore to the `ws` workspace. This datastore accesses the file system `test` in the `account_name` storage account, through use of the provided service principal credentials. Review the [storage access & permissions](#storage-access-and-permissions) section for guidance on virtual network scenarios, and where to find required authentication credentials.
```python adlsgen2_datastore_name = 'adlsgen2datastore'
adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(workspace=ws,
client_secret=client_secret) # the secret of service principal ``` -- ## Create datastores with other Azure tools
-In addition to creating datastores with the Python SDK and the studio, you can also use Azure Resource Manager templates or the Azure Machine Learning VS Code extension.
-
+In addition to datastore creation with the Python SDK and the studio, you can also create datastores with Azure Resource Manager templates or the Azure Machine Learning VS Code extension.
### Azure Resource Manager
-There are several templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) that can be used to create datastores.
-
-For information on using these templates, see [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
+You can use several templates at [https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.machinelearningservices) to create datastores. For information about these templates, visit [Use an Azure Resource Manager template to create a workspace for Azure Machine Learning](../how-to-create-workspace-template.md).
### VS Code extension
-If you prefer to create and manage datastores using the Azure Machine Learning VS Code extension, visit the [VS Code resource management how-to guide](../how-to-manage-resources-vscode.md#datastores) to learn more.
+For more information about creation and management of datastores with the Azure Machine Learning VS Code extension, visit the [VS Code resource management how-to guide](../how-to-manage-resources-vscode.md#datastores).
## Use data in your datastores
-After you create a datastore, [create an Azure Machine Learning dataset](how-to-create-register-datasets.md) to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training.
-
-With datasets, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services for model training on a compute target. [Learn more about how to train ML models with datasets](how-to-train-with-datasets.md).
--
+After datastore creation, [create an Azure Machine Learning dataset](how-to-create-register-datasets.md) to interact with your data. A dataset packages your data into a lazily evaluated consumable object for machine learning tasks, like training. With datasets, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services for model training on a compute target. [Learn more about how to train ML models with datasets](how-to-train-with-datasets.md).
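A minimal SDK v1 sketch, assuming a hypothetical `heart-data` folder in the default datastore, might look like this:

```python
from azureml.core import Dataset

datastore = ws.get_default_datastore()

# Reference every CSV under a (hypothetical) folder in the datastore as a file dataset.
file_dataset = Dataset.File.from_files(path=(datastore, "heart-data/*.csv"))

# Parse the same files into a tabular dataset and register it for reuse in training.
tabular_dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "heart-data/*.csv"))
tabular_dataset = tabular_dataset.register(workspace=ws, name="heart-data", create_new_version=True)
```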
## Get datastores from your workspace
To get a specific datastore registered in the current workspace, use the [`get()
# Get a named datastore from the current workspace datastore = Datastore.get(ws, datastore_name='your datastore name') ```
-To get the list of datastores registered with a given workspace, you can use the [`datastores`](/python/api/azureml-core/azureml.core.workspace%28class%29#datastores) property on a workspace object:
+To get the list of datastores registered with a given workspace, use the [`datastores`](/python/api/azureml-core/azureml.core.workspace%28class%29#datastores) property on a workspace object:
```Python # List all datastores registered in the current workspace
for name, datastore in datastores.items():
print(name, datastore.datastore_type) ```
-To get the workspace's default datastore, use this line:
+This code sample shows how to get the default datastore of the workspace:
```Python datastore = ws.get_default_datastore() ```
-You can also change the default datastore with the following code. This ability is only supported via the SDK.
+You can also change the default datastore with this code sample. Only the SDK supports this ability:
```Python ws.set_default_datastore(new_default_datastore)
You can also change the default datastore with the following code. This ability
## Access data during scoring
-Azure Machine Learning provides several ways to use your models for scoring. Some of these methods don't provide access to datastores. Use the following table to understand which methods allow you to access datastores during scoring:
+Azure Machine Learning provides several ways to use your models for scoring. Some of these methods provide no access to datastores. The following table describes which methods allow access to datastores during scoring:
| Method | Datastore access | Description | | -- | :--: | -- | | [Batch prediction](../tutorial-pipeline-batch-scoring-classification.md) | Γ£ö | Make predictions on large quantities of data asynchronously. | | [Web service](how-to-deploy-and-where.md) | &nbsp; | Deploy models as a web service. |
-For situations where the SDK doesn't provide access to datastores, you might be able to create custom code by using the relevant Azure SDK to access the data. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) is a client library that you can use to access data stored in blobs or files.
+When the SDK doesn't provide access to datastores, you might be able to create custom code with the relevant Azure SDK to access the data. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) client library can access data stored in blobs or files.
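For instance, here's a minimal sketch using the newer `azure-storage-blob` package (not the exact library linked above), with placeholder account, container, and blob names:

```python
from azure.storage.blob import BlobServiceClient

# Connect directly to the storage account behind the datastore (placeholder values).
service = BlobServiceClient(
    account_url="https://<my-account-name>.blob.core.windows.net",
    credential="<account-key-or-sas-token>",
)
container = service.get_container_client("<my-container-name>")

# Download one blob to a local file for use during scoring.
with open("input-data.csv", "wb") as local_file:
    local_file.write(container.download_blob("scoring/input-data.csv").readall())
```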
## Move data to supported Azure storage solutions
-Azure Machine Learning supports accessing data from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL. If you're using unsupported storage, we recommend that you move your data to supported Azure storage solutions by using [Azure Data Factory and these steps](../../data-factory/quickstart-hello-world-copy-data-tool.md). Moving data to supported storage can help you save data egress costs during machine learning experiments.
+Azure Machine Learning supports accessing data from
+
+- Azure Blob storage
+- Azure Files
+- Azure Data Lake Storage Gen1
+- Azure Data Lake Storage Gen2
+- Azure SQL Database
+- Azure Database for PostgreSQL
+
+If you use unsupported storage, we recommend that you use [Azure Data Factory and these steps](../../data-factory/quickstart-hello-world-copy-data-tool.md) to move your data to supported Azure storage solutions. Moving data to supported storage can help you save data egress costs during machine learning experiments.
-Azure Data Factory provides efficient and resilient data transfer with more than 80 prebuilt connectors at no extra cost. These connectors include Azure data services, on-premises data sources, Amazon S3 and Redshift, and Google BigQuery.
+Azure Data Factory provides efficient and resilient data transfer, with more than 80 prebuilt connectors, at no extra cost. These connectors include Azure data services, on-premises data sources, Amazon S3 and Redshift, and Google BigQuery.
## Next steps
-* [Create an Azure machine learning dataset](how-to-create-register-datasets.md)
+* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md)
* [Train a model](how-to-set-up-training-targets.md) * [Deploy a model](how-to-deploy-and-where.md)
managed-grafana How To Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-set-up-private-access.md
In this guide, you'll learn how to disable public access to your Azure Managed G
Public access is enabled by default when you create an Azure Managed Grafana workspace. Disabling public access prevents all traffic from accessing the resource unless you go through a private endpoint. > [!NOTE]
-> When private access (preview) is enabled, pinging charts using the [*Pin to Grafana*](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana) feature will no longer work as the Azure portal canΓÇÖt access an Azure Managed Grafana workspace on a private IP address.
+> When private access (preview) is enabled, pinging charts using the [*Pin to Grafana*](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana) feature will no longer work as the Azure portal can't access an Azure Managed Grafana workspace on a private IP address.
### [Portal](#tab/azure-portal)
Once you have disabled public access, set up a [private endpoint](../private-lin
1. Select **Next : Virtual Network >**.
- 1. Select an existing **Virtual network** to deploy the private endpoint to. If you don't have a virtual network, [create a virtual network](../private-link/create-private-endpoint-portal.md#create-a-virtual-network-and-bastion-host).
+ 1. Select an existing **Virtual network** to deploy the private endpoint to. If you don't have a virtual network, [create a virtual network](../private-link/create-private-endpoint-portal.md).
1. Select a **Subnet** from the list.
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. Azure Datab
| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | France South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Germany North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Israel Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Italy North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
One advantage of running your workload in Azure is its global reach. Azure Datab
| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Norway East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | Norway West | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| Poland Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| Qatar Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| South Africa West | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Spain Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
operator-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/overview.md
Title: What is Azure Operator Insights?
-description: Azure Operator Insights is an Azure service for monitoring and analyzing data from multiple sources
+description: Azure Operator Insights is an Azure service for monitoring and analyzing data from multiple sources.
High scale ingestion to handle large amounts of network data from operator data
- Pipelines managed for all operators, leading to economies of scale dropping the price. - Operator privacy module. - Operator compliance including handling retention policies. -- Common data model with open standards such as parquet and delta lake for easy integration with other Microsoft and non-Microsoft services.
+- Common data model with open standards such as Apache Parquet for easy integration with other Microsoft and non-Microsoft services.
- High speed analytics to enable fast data exploration and correlation between different data sets produced by disaggregated 5G multi-vendor networks. The result is that the operator has a lower total cost of ownership but greater insight into their network than equivalent on-premises or cloud chemistry set platforms.
The result is that the operator has a lower total cost of ownership but higher i
Azure Operator Insights requires two separate types of resources. - _Ingestion agents_ in your network collect data from your network and upload them to Data Products in Azure.-- _Data Product_ resources in Azure process the data provided by ingestion agents, enrich it and make it available to you.
+- _Data Product_ resources in Azure process the data provided by ingestion agents, enrich it, and make it available to you.
- You can use prebuilt dashboards provided by the Data Product or build your own in Azure Data Explorer. Azure Data Explorer also allows you to query your data directly, analyze it in Power BI or use it with Logic Apps. For more information, see [Data visualization in Data Products](concept-data-visualization.md). - Data Products provide [metrics for monitoring the quality of your data](concept-data-quality-monitoring.md). - Data Products are designed for specific types of source data and provide specialized processing for that source data. For more information, see [Data types](concept-data-types.md).
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
New-AzNetworkCloudTrunkedNetwork -Name "<YourTrunkedNetworkName>" `
#### Create a cloud services network
-Your VM requires at least one cloud services network. You need the egress endpoints that you want to add to the proxy for your VM to access. This list should include any domains needed to pull images or access data, such as `.azurecr.io` or `.docker.io`.
+To create an Operator Nexus virtual machine (VM) or Operator Nexus Kubernetes cluster, you must have a cloud services network. Without this network, you can't create a VM or cluster.
+
+While the cloud services network automatically enables access to essential platform endpoints, you need to add others, such as docker.io, if your application requires them. Configuring the cloud services network proxy is a crucial step in ensuring a successful connection to your desired endpoints. To achieve this, you can add the egress endpoints to the cloud services network during the initial creation or as an update, using the `--additional-egress-endpoints` parameter. While wildcards for the URL endpoints might seem convenient, they aren't recommended for security reasons. For example, if you want to configure the proxy to allow image pulls from any repository hosted on docker.io, you can specify `.docker.io` as an endpoint.
+
+The egress endpoints must comply with the domain name structures and hostname specifications outlined in RFC 1034, RFC 1035, and RFC 1123. Valid domain names consist of alphanumeric characters and hyphens (not at the start or end), and can have subdomains separated by dots. Here are a few examples that demonstrate compliant naming conventions for domain names and hostnames.
+
+- `contoso.com`: The base domain, serving as a second-level domain under the .com top-level domain.
+- `sales.contoso.com`: A subdomain of contoso.com, serving as a third-level domain under the .com top-level domain.
+- `web-server-1.contoso.com`: A hostname for a specific web server, using hyphens to separate the words and the numeral.
+- `api.v1.contoso.com`: Incorporates two subdomains (`v1` and `api`) above the base domain contoso.com.
### [Azure CLI](#tab/azure-cli)
New-AzNetworkCloudServicesNetwork -CloudServicesNetworkName "<YourCloudServicesN
+After setting up the cloud services network, you can use it to create a VM or cluster that can connect to the egress endpoints you have specified. Remember that the proxy only works with HTTPS.
+ #### Using the proxy to reach outside of the virtual machine
-Once you have created your VM or Kubernetes cluster with this cloud services network, you can use the proxy to reach outside of the virtual machine. Proxy is useful if you need to access resources outside of the virtual machine, such as pulling images or accessing data.
+After creating your Operator Nexus VM or Operator Nexus Kubernetes cluster with this cloud services network, you also need to set the appropriate environment variables within the VM to use the tenant proxy and reach outside of the virtual machine. This tenant proxy is useful if you need to access resources outside of the virtual machine, such as managing packages or installing software.
To use the proxy, you need to set the following environment variables: ```bash
-export HTTP_PROXY=http://169.254.0.11:3128
-export http_proxy=http://169.254.0.11:3128
export HTTPS_PROXY=http://169.254.0.11:3128 export https_proxy=http://169.254.0.11:3128 ```
-Once you have set the environment variables, your virtual machine should be able to reach outside of the virtual network using the proxy.
+After setting the proxy environment variables, your virtual machine will be able to reach the configured egress endpoints.
-In order to reach the desired endpoints, you need to add the required egress endpoints to the cloud services network. Egress endpoints can be added using the `--additional-egress-endpoints` parameter when creating the network. Be sure to include any domains needed to pull images or access data, such as `.azurecr.io` or `.docker.io`.
+> [!NOTE]
+> For security reasons, HTTP isn't supported when you use the proxy to access resources outside of the virtual machine. HTTPS is required for secure communication when managing packages or installing software on the Operator Nexus VM or Operator Nexus Kubernetes cluster with this cloud services network.
> [!IMPORTANT] > When using a proxy, it's also important to set the `no_proxy` environment variable properly. This variable can be used to specify domains or IP addresses that shouldn't be accessed through the proxy. If not set properly, it can cause issues while accessing services, such as the Kubernetes API server or cluster IP. Make sure to include the IP address or domain name of the Kubernetes API server and any cluster IP addresses in the `no_proxy` variable.
In order to reach the desired endpoints, you need to add the required egress end
When you're creating a Nexus Kubernetes cluster, you can schedule the cluster onto specific racks or distribute it across multiple racks. This technique can improve resource utilization and fault tolerance.
-If you don't specify a zone when you're creating a Nexus Kubernetes cluster, the Azure Operator Nexus platform automatically implements a default anti-affinity rule. This rule aims to prevent scheduling the cluster VM on a node that already has a VM from the same cluster, but it's a best-effort approach and can't make guarantees.
+If you don't specify a zone when you're creating a Nexus Kubernetes cluster, the Azure Operator Nexus platform automatically implements a default anti-affinity rule that spreads the cluster VMs across racks and bare metal nodes. The rule also aims to avoid scheduling a cluster VM on a node that already hosts a VM from the same cluster, but it's a best-effort approach and can't make guarantees.
To get the list of available zones in the Azure Operator Nexus instance, you can use the following command:
operator-nexus Reference Operator Nexus Fabric Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-operator-nexus-fabric-skus.md
+
+ Title: Azure Operator Nexus Fabric SKUs
+description: SKU options for Azure Operator Nexus Network Fabric
+ Last updated : 02/26/2024+++++
+# Azure Operator Nexus Fabric SKUs
+
+Azure Operator Nexus Fabric SKUs are designed to streamline the procurement and deployment processes by offering standardized bills of materials (BOM), topologies, wiring, and workflows. Microsoft designs and prevalidates each SKU in collaboration with OEM vendors to ensure seamless integration and optimal performance for operators.
+
+Operator Nexus Fabric SKUs offer a comprehensive range of options, allowing operators to tailor their deployments according to their specific requirements. With prevalidated configurations and standardized BOMs, the procurement and deployment processes are streamlined, ensuring efficiency and performance across the board.
+
+The following table outlines the various configurations of Operator Nexus Fabric SKUs, catering to different use-cases and functionalities required by operators.
+
+| **S.No** | **Use-Case** | **Network Fabric SKU ID** | **Description** |
+|--|--|--|--|
+| 1 | Multi Rack Near-Edge | M4-A400-A100-C16-ab | <ul><li>Support 400-Gbps link between Operator Nexus fabric CEs and Provider Edge PEs</li><li>Support up to four compute rack deployment and aggregator rack</li><li>Each compute rack can have up to 16 compute servers</li><li>One Network Packet Broker</li></ul> |
+| 2 | Multi Rack Near-Edge | M8-A400-A100-C16-ab | <ul><li>Support 400-Gbps link between Operator Nexus fabric CEs and Provider Edge PEs </li><li>Support up to eight compute rack deployment and aggregator rack </li><li>Each compute rack can have up to 16 compute servers </li><li>One Network Packet Broker for deployment size between one and four compute racks. Two network packet brokers for deployment size of five to eight compute racks </li></ul> |
+| 3 | Multi Rack Near-Edge | M8-A100-A25-C16-aa | <ul><li>Support 100-Gbps link between Operator Nexus fabric CEs and Provider Edge PEs </li><li>Support up to eight compute rack deployment and aggregator rack </li><li>Each compute rack can have up to 16 compute servers </li><li>One Network Packet Broker for deployment size between one and four compute racks. Two network packet brokers for deployment size of five to eight compute racks </li></ul> |
+| 4 | Single Rack Near-Edge | S-A100-A25-C12-aa | <ul><li>Supports 100-Gbps link between Operator Nexus fabric CEs and Provider Edge PEs </li><li>Single rack with shared aggregator and compute rack </li><li>Each compute rack can have up to 12 compute servers </li><li>One Network Packet Broker </li></ul> |
+
+The BOM for each SKU requires:
+
+- A pair of Customer Edge (CE) devices
+- For the multi-rack SKUs, a pair of Top-of-Rack (TOR) switches per deployed rack
+- One management switch per deployed rack
+- One or more Network Packet Broker (NPB) devices (see the preceding table)
+- A terminal server
+- Cables and optics
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Azure Backup and Azure Database for PostgreSQL flexible server services have bui
- In preview, LTR restore is currently available as RestoreasFiles to storage accounts. RestoreasServer capability will be added in the future.
- In preview, you can perform LTR backups for all databases; single database backup support will be added in the future.
+- LTR backup is currently not supported for CMK-enabled servers. This capability will be added in the future.
For more information about performing a long term backup, visit the [how-to guide](../../backup/backup-azure-database-postgresql-flex.md).
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
Title: Extensions
description: Learn about the available PostgreSQL extensions in Azure Database for PostgreSQL - Flexible Server. Previously updated : 1/8/2024 Last updated : 2/20/2024
The following extensions are available in Azure Database for PostgreSQL flexible
|[pg_prewarm](https://www.postgresql.org/docs/13/pgprewarm.html) |Prewarm relation data |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
|[pg_repack](https://reorg.github.io/pg_repack/) |Lets you remove bloat from tables and indexes |1.4.7 |1.4.7 |1.4.7 |1.4.7 |1.4.7 |1.4.7 |
|[pg_squeeze](https://github.com/cybertec-postgresql/pg_squeeze) |A tool to remove unused space from a relation. |1.6 |1.5 |1.5 |1.5 |1.5 |1.5 |
-|[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) |Track execution statistics of all SQL statements executed |1.1 |1.8 |1.8 |1.8 |1.7 |1.6 |
+|[pg_stat_statements](https://www.postgresql.org/docs/13/pgstatstatements.html) |Track execution statistics of all SQL statements executed |1.10 |1.10 |1.9 |1.8 |1.7 |1.6 |
|[pg_trgm](https://www.postgresql.org/docs/13/pgtrgm.html) |Text similarity measurement and index searching based on trigrams |1.6 |1.5 |1.5 |1.5 |1.4 |1.4 |
|[pg_visibility](https://www.postgresql.org/docs/13/pgvisibility.html) |Examine the visibility map (VM) and page-level visibility info |1.2 |1.2 |1.2 |1.2 |1.2 |1.2 |
|[pgaudit](https://www.pgaudit.org/) |Provides auditing functionality |N/A |1.7 |1.6.2 |1.5 |1.4 |1.3.1 |
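If you want to confirm which version of an extension, such as `pg_stat_statements`, is available or installed on your server, here's a minimal sketch using `psql` (the server and user names are placeholders):

```bash
# Query the catalog for the default and installed versions of an extension.
psql "host=<servername>.postgres.database.azure.com port=5432 dbname=postgres user=<username> sslmode=require" \
  -c "SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'pg_stat_statements';"
```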
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP address subnets of the Azure Database f
| **Region name** |**Current Gateway IP address**| **Gateway IP address subnets** |
|:-|:--|:--|
-| Australia Central | 20.36.105.32 | 20.36.105.32/29 |
-| Australia Central2 | 20.36.113.32 | 20.36.113.32/29 |
-| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29 |
-| Australia South East |13.77.49.33 |13.77.49.32/29 |
-| Brazil South | 191.233.201.8, 191.233.200.16 | 191.233.200.32/29, 191.234.144.32/29|
-| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29|
-| Canada East |40.69.105.32 | 40.69.105.32/29 |
-| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29
-| China East | 52.130.112.139 | 52.130.112.136/29|
-| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29|
-| China North | 52.130.128.89| 52.130.128.88/29 |
-| China North 2 |40.73.50.0 | 52.130.40.64/29|
-| East Asia |13.75.33.20, 13.75.33.21 | 13.75.32.192/29, 13.75.33.192/29|
-| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29|
-| East US 2 |52.167.105.38, 40.70.144.38|104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29|
-| France Central |40.79.129.1 | 40.79.136.32/29, 40.79.144.32/29 |
-| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29|
-| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29|
-| India Central |20.192.96.33 | 104.211.86.32/29, 20.192.96.32/29|
-| India South | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29|
-| India West | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29 |
-| Japan East | 40.79.184.8, 40.79.192.23|13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29 |
-| Japan West |40.74.96.6| 40.74.96.32/29 |
-| Korea Central | 52.231.17.13 | 20.194.64.32/29,20.44.24.32/29, 52.231.16.32/29 |
-| Korea South |52.231.145.3| 52.231.145.0/29 |
-| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.192/29|
-| North Europe |52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29 |
-| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29 |
-| South Africa West |102.133.24.0 | 102.133.25.32/29|
-| South Central US | 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29|
-| South East Asia | 23.98.80.12, 40.78.233.2| 13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29 |
+| Australia Central | 20.36.105.32 | 20.36.105.32/29, 20.53.48.96/27 |
+| Australia Central2 | 20.36.113.32 | 20.36.113.32/29, 20.53.56.32/27 |
+| Australia East | 13.70.112.32 | 13.70.112.32/29, 40.79.160.32/29, 40.79.168.32/29, 20.53.46.128/27 |
+| Australia South East |13.77.49.33 |13.77.49.32/29, 104.46.179.160/27|
+| Brazil South | 191.233.201.8, 191.233.200.16 | 191.234.153.32/27, 191.234.152.32/27, 191.234.157.136/29, 191.233.200.32/29, 191.234.144.32/29, 191.234.142.160/27|
+|Brazil Southeast|191.233.48.2|191.233.48.32/29, 191.233.15.160/27|
+| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27|
+| Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 |
+| Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27|
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27|
+| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27|
+| China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 |
+| China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27|
+| East Asia |13.75.33.20, 13.75.33.21 | 20.205.77.176/29, 20.205.83.224/29, 20.205.77.200/29, 13.75.32.192/29, 13.75.33.192/29, 20.195.72.32/27|
+| East US | 40.71.8.203, 40.71.83.113|20.42.65.64/29, 20.42.73.0/29, 52.168.116.64/29, 20.62.132.160/27|
+| East US 2 |52.167.105.38, 40.70.144.38| 104.208.150.192/29, 40.70.144.192/29, 52.167.104.192/29, 20.62.58.128/27|
+| France Central |40.79.129.1 | 40.79.128.32/29, 40.79.136.32/29, 40.79.144.32/29, 20.43.47.192/27 |
+| France South |40.79.176.40 | 40.79.176.40/29, 40.79.177.32/29, 52.136.185.0/27|
+| Germany North| 51.116.56.0| 51.116.57.32/29, 51.116.54.96/27|
+| Germany West Central | 51.116.152.0 | 51.116.152.32/29, 51.116.240.32/29, 51.116.248.32/29, 51.116.149.32/27|
+| India Central |20.192.96.33 | 40.80.48.32/29, 104.211.86.32/29, 20.192.96.32/29, 20.192.43.160/27|
+| India South | 40.78.192.32| 40.78.192.32/29, 40.78.193.32/29, 52.172.113.96/27|
+| India West | 104.211.144.32| 104.211.144.32/29, 104.211.145.32/29, 52.136.53.160/27|
+| Japan East | 40.79.184.8, 40.79.192.23| 13.78.104.32/29, 40.79.184.32/29, 40.79.192.32/29, 20.191.165.160/27 |
+| Japan West |40.74.96.6| 20.18.179.192/29, 40.74.96.32/29, 20.189.225.160/27 |
+| Jio India Central| 20.192.233.32|20.192.233.32/29, 20.192.48.32/27|
+| Jio India West|20.193.200.32|20.193.200.32/29, 20.192.167.224/27|
+| Korea Central | 52.231.17.13 | 20.194.64.32/29, 20.44.24.32/29, 52.231.16.32/29, 20.194.73.64/27|
+| Korea South |52.231.145.3| 52.231.151.96/27, 52.231.151.88/29, 52.231.145.0/29, 52.147.112.160/27 |
+| North Central US | 52.162.104.35, 52.162.104.36 | 52.162.105.200/29, 20.125.171.192/29, 52.162.105.192/29, 20.49.119.32/27|
+| North Europe |52.138.224.6, 52.138.224.7 |13.69.233.136/29, 13.74.105.192/29, 52.138.229.72/29, 52.146.133.128/27 |
+|Norway East|51.120.96.0|51.120.208.32/29, 51.120.104.32/29, 51.120.96.32/29, 51.120.232.192/27|
+|Norway West|51.120.216.0|51.120.217.32/29, 51.13.136.224/27|
+| South Africa North | 102.133.152.0 | 102.133.120.32/29, 102.133.152.32/29, 102.133.248.32/29, 102.133.221.224/27 |
+| South Africa West |102.133.24.0 | 102.133.25.32/29, 102.37.80.96/27|
+| South Central US | 20.45.120.0 |20.45.121.32/29, 20.49.88.32/29, 20.49.89.32/29, 40.124.64.136/29, 20.65.132.160/27|
+| South East Asia | 23.98.80.12, 40.78.233.2|13.67.16.192/29, 23.98.80.192/29, 40.78.232.192/29, 20.195.65.32/27 |
+| Sweden Central|51.12.96.32|51.12.96.32/29, 51.12.232.32/29, 51.12.224.32/29, 51.12.46.32/27|
+| Sweden South|51.12.200.32|51.12.201.32/29, 51.12.200.32/29, 51.12.198.32/27|
| Switzerland North |51.107.56.0 |51.107.56.32/29, 51.103.203.192/29, 20.208.19.192/29, 51.107.242.32/27|
-| Switzerland West | 51.107.152.0| 51.107.153.32/29|
-| UAE Central | 20.37.72.64| 20.37.72.96/29, 20.37.73.96/29 |
-| UAE North |65.52.248.0 |40.120.72.32/29, 65.52.248.32/29 |
-| UK South | 51.105.64.0|51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29|
-| UK West |51.140.208.98 |51.140.208.96/29, 51.140.209.32/29 |
-| West Central US |13.71.193.34 | 13.71.193.32/29 |
-| West Europe | 13.69.105.208,104.40.169.187|104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29|
-| West US |13.86.216.212, 13.86.217.212 |13.86.217.224/29|
-| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29|
-| West US 3 |20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 |
+| Switzerland West | 51.107.152.0| 51.107.153.32/29, 51.107.250.64/27|
+| UAE Central | 20.37.72.64| 20.37.72.96/29, 20.37.73.96/29, 20.37.71.64/27 |
+| UAE North |65.52.248.0 |20.38.152.24/29, 40.120.72.32/29, 65.52.248.32/29, 20.38.143.64/27 |
+| UK South | 51.105.64.0|51.105.64.32/29, 51.105.72.32/29, 51.140.144.32/29, 51.143.209.224/27|
+| UK West |51.140.208.98 |51.140.208.96/29, 51.140.209.32/29, 20.58.66.128/27 |
+| West Central US |13.71.193.34 | 13.71.193.32/29, 20.69.0.32/27 |
+| West Europe | 13.69.105.208,104.40.169.187|104.40.169.32/29, 13.69.112.168/29, 52.236.184.32/29, 20.61.99.192/27|
+| West US |13.86.216.212, 13.86.217.212 |20.168.163.192/29, 13.86.217.224/29, 20.66.3.64/27|
+| West US 2 | 13.66.136.192 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29, 20.51.9.128/27|
+| West US 3 |20.150.184.2 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29, 20.150.241.128/27 |
## Frequently asked questions
-### What you need to know about this planned maintenance?
+### What you need to know about this planned maintenance
This is a DNS change only, so it's transparent to clients. While the IP address for the FQDN is changed in the DNS server, the local DNS cache refreshes within 5 minutes, and the operating system does this automatically. After the local DNS refresh, all new connections connect to the new IP address, while existing connections remain connected to the old IP address with no interruption until the old IP addresses are fully decommissioned. The old IP address takes roughly three to four weeks to be decommissioned; therefore, it should have no effect on client applications.
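To check which gateway IP address your client currently resolves for your server, here's a quick sketch (the server name is a placeholder):

```bash
# Resolve the server FQDN to see which gateway IP the local DNS cache currently returns.
nslookup <your-server-name>.postgres.database.azure.com
```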
private-link Create Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-portal.md
description: In this quickstart, learn how to create a private endpoint using th
Previously updated : 06/13/2023 Last updated : 02/26/2024 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint on a SQL server so that I can securely connect to it.
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
Previously updated : 06/14/2023 Last updated : 02/26/2024 #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint by using Azure PowerShell.
remote-rendering Get Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/get-information.md
Title: Get information about conversions
-description: Get information about conversions
+description: Get information about conversions.
Last updated 03/05/2020
## Information about a conversion: The result file
-When the conversion service converts an asset, it writes a summary of any issues into a "result file".
-For example, if a file `buggy.gltf` is converted, the output container will contain a file called `buggy.result.json`.
+When the conversion service converts an asset, it writes a summary of any issues into a result file.
+For example, if a file `buggy.gltf` is converted, the output container contains a file called `buggy.result.json`.
The result file lists any errors and warnings that occurred during the conversion and gives a result summary, which is one of `succeeded`, `failed` or `succeeded with warnings`. The result file is structured as a JSON array of objects, each of which has a string property that is one of `warning`, `error`, `internal warning`, `internal error`, and `result`.
-There will be at most one error (either `error` or `internal error`) and there will always be one `result`.
+There is at most one error (either `error` or `internal error`) and there's always one `result`.
## Example *result* file
However, since there was a missing texture, the resulting arrAsset may not be as
## Information about a converted model: The info file
-The arrAsset file produced by the conversion service is solely intended for consumption by the rendering service. There may be times, however, when you want to access information about a model without starting a rendering session. To support this workflow, the conversion service places a JSON file beside the arrAsset file in the output container. For example, if a file `buggy.gltf` is converted, the output container will contain a file called `buggy.info.json` beside the converted asset `buggy.arrAsset`. It contains information about the source model, the converted model, and about the conversion itself.
+The arrAsset file produced by the conversion service is solely intended for consumption by the rendering service. There may be times, however, when you want to access information about a model without starting a rendering session. To support this workflow, the conversion service places a JSON file beside the arrAsset file in the output container. For example, if a file `buggy.gltf` is converted, the output container contains a file called `buggy.info.json` beside the converted asset `buggy.arrAsset`. It contains information about the source model, the converted model, and about the conversion itself.
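As a sketch (the storage account, container, and JSON field path are assumptions based on the sections described in this article), you can download the info file from the output container with the Azure CLI and read a value with `jq`:

```bash
# Download the conversion info file from the output blob container (names are placeholders).
az storage blob download \
  --account-name <outputstorageaccount> \
  --container-name <outputcontainer> \
  --name buggy.info.json \
  --file buggy.info.json \
  --auth-mode login

# Read a value from the inputStatistics section (requires jq; the field path is assumed from this article).
jq '.inputStatistics.numFaces' buggy.info.json
```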
## Example *info* file
Here's an example *info* file produced by converting a file called `buggy.gltf`:
This section contains the provided filenames. * `input`: The name of the source file.
-* `output`: The name of the output file, when the user has specified a non-default name.
+* `output`: The name of the output file, when the user has specified a nondefault name.
### The *conversionSettings* section
It contains the following information:
* `numOverrides`: The number of override entries read from the material override file. * `numOverriddenMaterials`: The number of materials that were overridden.
+This section isn't present for point cloud conversions.
+
### The *inputStatistics* section

This section provides information about the source scene. There will often be discrepancies between the values in this section and the equivalent values in the tool that created the source model. Such differences are expected, because the model gets modified during the export and conversion steps.
+The content of this section is different for triangular meshes and point clouds.
+
+# [Triangular meshes](#tab/TriangularMeshes)
+ * `numMeshes`: The number of mesh parts, where each part can reference a single material.
-* `numFaces`: The total number of triangles/points in the whole model. This number contributes to the primitive limit in the [standard rendering server size](../../reference/vm-sizes.md#how-the-renderer-evaluates-the-number-of-primitives).
+* `numFaces`: The total number of triangles in the whole model. This number contributes to the primitive limit in the [standard rendering server size](../../reference/vm-sizes.md#how-the-renderer-evaluates-the-number-of-primitives).
* `numVertices`: The total number of vertices in the whole model.
* `numMaterial`: The total number of materials in the whole model.
* `numFacesSmallestMesh`: The number of triangles/points in the smallest mesh of the model.
This section provides information about the source scene. There will often be di
* `numMeshUsagesInScene`: The number of times nodes reference meshes. More than one node may reference the same mesh.
* `maxNodeDepth`: The maximum depth of the nodes within the scene graph.
+# [Point clouds](#tab/PointClouds)
+
+For point cloud conversions, this section contains only a single entry:
+
+* `numPoints`: The total number of points in the input model.
++ ### The *outputInfo* section This section records general information about the generated output.
This section records general information about the generated output.
### The *outputStatistics* section
-This section records information calculated from the converted asset.
+This section records information calculated from the converted asset. Again, the section holds different information for triangular meshes and point clouds.
+
+# [Triangular meshes](#tab/TriangularMeshes)
* `numMeshPartsCreated`: The number of meshes in the arrAsset. It can differ from `numMeshes` in the `inputStatistics` section, because instancing is affected by the conversion process.
* `numMeshPartsInstanced`: The number of meshes that are reused in the arrAsset.
* `recenteringOffset`: When the `recenterToOrigin` option in the [ConversionSettings](configure-model-conversion.md) is enabled, this value is the translation that would move the converted model back to its original position.
* `boundingBox`: The bounds of the model.
+# [Point clouds](#tab/PointClouds)
+
+* `numPoints`: The overall number of points in the converted model. This number contributes to the primitive limit in the [standard rendering server size](../../reference/vm-sizes.md#how-the-renderer-evaluates-the-number-of-primitives).
+* `recenteringOffset`: When the `recenterToOrigin` option in the [ConversionSettings](configure-model-conversion.md) is enabled, this value is the translation that would move the converted model back to its original position.
+* `boundingBox`: The bounds of the model.
++
## Deprecated features

The conversion service writes the files `stdout.txt` and `stderr.txt` to the output container, and these files used to be the only source of warnings and errors.
role-based-access-control Classic Administrators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/classic-administrators.md
To add a guest user as a Co-Administrator, follow the same steps as in the previ
For more information, about how to add a guest user to your directory, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
-Before you remove a guest user from your directory, you should first remove any role assignments for that guest user. For more information, see [Remove a guest user from your directory](./role-assignments-external-users.md#remove-a-guest-user-from-your-directory).
+Before you remove a guest user from your directory, you should first remove any role assignments for that guest user. For more information, see [Remove an external user from your directory](./role-assignments-external-users.md#remove-an-external-user-from-your-directory).
### Differences for guest users
Guest users that have been assigned the Co-Administrator role might see some dif
You would expect that user B could manage everything. The reason for this difference is that the Microsoft account is added to the subscription as a guest user instead of a member user. Guest users have different default permissions in Microsoft Entra ID as compared to member users. For example, member users can read other users in Microsoft Entra ID and guest users cannot. Member users can register new service principals in Microsoft Entra ID and guest users cannot.
-If a guest user needs to be able to perform these tasks, a possible solution is to assign the specific Microsoft Entra roles the guest user needs. For example, in the previous scenario, you could assign the [Directory Readers](../active-directory/roles/permissions-reference.md#directory-readers) role to read other users and assign the [Application Developer](../active-directory/roles/permissions-reference.md#application-developer) role to be able to create service principals. For more information about member and guest users and their permissions, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md). For more information about granting access for guest users, see [Assign Azure roles to external guest users using the Azure portal](role-assignments-external-users.md).
+If a guest user needs to be able to perform these tasks, a possible solution is to assign the specific Microsoft Entra roles the guest user needs. For example, in the previous scenario, you could assign the [Directory Readers](../active-directory/roles/permissions-reference.md#directory-readers) role to read other users and assign the [Application Developer](../active-directory/roles/permissions-reference.md#application-developer) role to be able to create service principals. For more information about member and guest users and their permissions, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md). For more information about granting access for guest users, see [Assign Azure roles to external users using the Azure portal](role-assignments-external-users.md).
Note that the [Azure built-in roles](../role-based-access-control/built-in-roles.md) are different than the [Microsoft Entra roles](../active-directory/roles/permissions-reference.md). The built-in roles don't grant any access to Microsoft Entra ID. For more information, see [Understand the different roles](../role-based-access-control/rbac-and-directory-admin-roles.md).
role-based-access-control Role Assignments External Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-external-users.md
Title: Assign Azure roles to external guest users using the Azure portal - Azure RBAC
+ Title: Assign Azure roles to external users using the Azure portal - Azure RBAC
description: Learn how to grant access to Azure resources for users external to an organization using the Azure portal and Azure role-based access control (Azure RBAC). Previously updated : 06/07/2023 Last updated : 02/28/2024
-# Assign Azure roles to external guest users using the Azure portal
-[Azure role-based access control (Azure RBAC)](overview.md) allows better security management for large organizations and for small and medium-sized businesses working with external collaborators, vendors, or freelancers that need access to specific resources in your environment, but not necessarily to the entire infrastructure or any billing-related scopes. You can use the capabilities in [Microsoft Entra B2B](../active-directory/external-identities/what-is-b2b.md) to collaborate with external guest users and you can use Azure RBAC to grant just the permissions that guest users need in your environment.
+# Assign Azure roles to external users using the Azure portal
+
+[Azure role-based access control (Azure RBAC)](overview.md) allows better security management for large organizations and for small and medium-sized businesses working with external collaborators, vendors, or freelancers that need access to specific resources in your environment, but not necessarily to the entire infrastructure or any billing-related scopes. You can use the capabilities in [Microsoft Entra B2B](../active-directory/external-identities/what-is-b2b.md) to collaborate with external users and you can use Azure RBAC to grant just the permissions that external users need in your environment.
## Prerequisites
To assign Azure roles or remove role assignments, you must have:
- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
-## When would you invite guest users?
+## When would you invite external users?
-Here are a couple example scenarios when you might invite guest users to your organization and grant permissions:
+Here are a couple example scenarios when you might invite users to your organization and grant permissions:
- Allow an external self-employed vendor that only has an email account to access your Azure resources for a project. - Allow an external partner to manage certain resources or an entire subscription.
Here are a couple example scenarios when you might invite guest users to your or
## Permission differences between member users and guest users
-Native members of a directory (member users) have different permissions than users invited from another directory as a B2B collaboration guest (guest users). For example, members user can read almost all directory information while guest users have restricted directory permissions. For more information about member users and guest users, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md).
+Users of a directory with member type (member users) have different permissions by default than users invited from another directory as a B2B collaboration guest (guest users). For example, member users can read almost all directory information while guest users have restricted directory permissions. For more information about member users and guest users, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md).
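To see which users in your directory are guests, here's a minimal Azure CLI sketch (assumes you have sufficient directory permissions):

```bash
# List users whose userType is Guest, showing display name and user principal name.
az ad user list \
  --filter "userType eq 'Guest'" \
  --query "[].{name:displayName, upn:userPrincipalName}" \
  --output table
```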
-## Add a guest user to your directory
+## Invite an external user to your directory
-Follow these steps to add a guest user to your directory using the Microsoft Entra ID page.
+Follow these steps to invite an external user to your directory in Microsoft Entra ID.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. For more information, see [Configure external collaboration settings](../active-directory/external-identities/external-collaboration-settings-configure.md).
+1. Make sure your organization's external collaboration settings are configured such that you're allowed to invite external users. For more information, see [Configure external collaboration settings](../active-directory/external-identities/external-collaboration-settings-configure.md).
+
+1. Select **Microsoft Entra ID** > **Users**.
-1. Click **Microsoft Entra ID** > **Users** > **New guest user**.
+1. Select **New user** > **Invite external user**.
- ![Screenshot of New guest user feature in Azure portal.](./media/role-assignments-external-users/invite-guest-user.png)
+ :::image type="content" source="media/role-assignments-external-users/invite-external-user.png" alt-text="Screenshot of Invite external user page in Azure portal." lightbox="media/role-assignments-external-users/invite-external-user.png":::
-1. Follow the steps to add a new guest user. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory).
+1. Follow the steps to invite an external user. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory).
-After you add a guest user to the directory, you can either send the guest user a direct link to a shared app, or the guest user can click the accept invitation link in the invitation email.
+After you invite an external user to the directory, you can either send the external user a direct link to a shared app, or the external user can select the accept invitation link in the invitation email.
-![Screenshot of guest user invite email.](./media/role-assignments-external-users/invite-email.png)
-For the guest user to be able to access your directory, they must complete the invitation process.
+For the external user to be able to access your directory, they must complete the invitation process.
-![Screenshot of guest user invite review permissions.](./media/role-assignments-external-users/invite-review-permissions.png)
For more information about the invitation process, see [Microsoft Entra B2B collaboration invitation redemption](../active-directory/external-identities/redemption-experience.md).
-## Assign a role to a guest user
+## Assign a role to an external user
-In Azure RBAC, to grant access, you assign a role. To assign a role to a guest user, you follow [same steps](role-assignments-portal.md) as you would for a member user, group, service principal, or managed identity. Follow these steps assign a role to a guest user at different scopes.
+In Azure RBAC, to grant access, you assign a role. To assign a role to an external user, you follow the [same steps](role-assignments-portal.md) as you would for a member user, group, service principal, or managed identity. Follow these steps to assign a role to an external user at different scopes.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the Search box at the top, search for the scope you want to grant access to. For example, search for **Management groups**, **Subscriptions**, **Resource groups**, or a specific resource.
-1. Click the specific resource for that scope.
+1. Select the specific resource for that scope.
-1. Click **Access control (IAM)**.
+1. Select **Access control (IAM)**.
The following shows an example of the Access control (IAM) page for a resource group.
- ![Screenshot of Access control (IAM) page for a resource group.](./media/shared/rg-access-control.png)
+ :::image type="content" source="./media/shared/rg-access-control.png" alt-text="Screenshot of Access control (IAM) page for a resource group." lightbox="./media/shared/rg-access-control.png":::
-1. Click the **Role assignments** tab to view the role assignments at this scope.
+1. Select the **Role assignments** tab to view the role assignments at this scope.
-1. Click **Add** > **Add role assignment**.
+1. Select **Add** > **Add role assignment**.
- If you don't have permissions to assign roles, the Add role assignment option will be disabled.
+ If you don't have permissions to assign roles, the **Add role assignment** option will be disabled.
- ![Screenshot of Add > Add role assignment menu.](./media/shared/add-role-assignment-menu.png)
+ :::image type="content" source="./media/shared/add-role-assignment-menu.png" alt-text="Screenshot of Add > Add role assignment menu." lightbox="./media/shared/add-role-assignment-menu.png":::
- The Add role assignment page opens.
+ The **Add role assignment** page opens.
1. On the **Role** tab, select a role such as **Virtual Machine Contributor**.
- ![Screenshot of Add role assignment page with Roles tab.](./media/shared/roles.png)
+ :::image type="content" source="./media/shared/roles.png" alt-text="Screenshot of Add role assignment page with Roles tab." lightbox="./media/shared/roles.png":::
1. On the **Members** tab, select **User, group, or service principal**.
- ![Screenshot of Add role assignment page with Members tab.](./media/shared/members.png)
+ :::image type="content" source="./media/shared/members.png" alt-text="Screenshot of Add role assignment page with Members tab." lightbox="./media/shared/members.png":::
-1. Click **Select members**.
+1. Select **Select members**.
-1. Find and select the guest user. If you don't see the user in the list, you can type in the **Select** box to search the directory for display name or email address.
+1. Find and select the external user. If you don't see the user in the list, you can type in the **Select** box to search the directory for display name or email address.
You can type in the **Select** box to search the directory for display name or email address.
- ![Screenshot of Select members pane.](./media/role-assignments-external-users/select-members.png)
+ :::image type="content" source="./media/role-assignments-external-users/select-members.png" alt-text="Screenshot of Select members pane." lightbox="./media/role-assignments-external-users/select-members.png":::
-1. Click **Select** to add the guest user to the Members list.
+1. Select **Select** to add the external user to the Members list.
-1. On the **Review + assign** tab, click **Review + assign**.
+1. On the **Review + assign** tab, select **Review + assign**.
- After a few moments, the guest user is assigned the role at the selected scope.
+ After a few moments, the external user is assigned the role at the selected scope.
- ![Screenshot of role assignment for Virtual Machine Contributor.](./media/role-assignments-external-users/access-control-role-assignments.png)
+ :::image type="content" source="./media/role-assignments-external-users/access-control-role-assignments.png" alt-text="Screenshot of role assignment for Virtual Machine Contributor." lightbox="./media/role-assignments-external-users/access-control-role-assignments.png":::
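If you prefer the Azure CLI over the portal, here's a minimal sketch of an equivalent role assignment (the object ID, subscription, and resource group values are placeholders):

```bash
# Assign the Virtual Machine Contributor role to the external user at a resource group scope.
az role assignment create \
  --assignee-object-id <external-user-object-id> \
  --assignee-principal-type User \
  --role "Virtual Machine Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```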
-## Assign a role to a guest user not yet in your directory
+## Assign a role to an external user not yet in your directory
-To assign a role to a guest user, you follow [same steps](role-assignments-portal.md) as you would for a member user, group, service principal, or managed identity.
+To assign a role to an external user, you follow the [same steps](role-assignments-portal.md) as you would for a member user, group, service principal, or managed identity.
-If the guest user is not yet in your directory, you can invite the user directly from the Select members pane.
+If the external user is not yet in your directory, you can invite the user directly from the Select members pane.
1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the Search box at the top, search for the scope you want to grant access to. For example, search for **Management groups**, **Subscriptions**, **Resource groups**, or a specific resource.
-1. Click the specific resource for that scope.
+1. Select the specific resource for that scope.
-1. Click **Access control (IAM)**.
+1. Select **Access control (IAM)**.
-1. Click **Add** > **Add role assignment**.
+1. Select **Add** > **Add role assignment**.
- If you don't have permissions to assign roles, the Add role assignment option will be disabled.
+ If you don't have permissions to assign roles, the **Add role assignment** option will be disabled.
- ![Screenshot of Add > Add role assignment menu.](./media/shared/add-role-assignment-menu.png)
+ :::image type="content" source="./media/shared/add-role-assignment-menu.png" alt-text="Screenshot of Add > Add role assignment menu." lightbox="./media/shared/add-role-assignment-menu.png":::
- The Add role assignment page opens.
+ The **Add role assignment** page opens.
1. On the **Role** tab, select a role such as **Virtual Machine Contributor**. 1. On the **Members** tab, select **User, group, or service principal**.
- ![Screenshot of Add role assignment page with Members tab.](./media/shared/members.png)
+ :::image type="content" source="./media/shared/members.png" alt-text="Screenshot of Add role assignment page with Members tab." lightbox="./media/shared/members.png":::
-1. Click **Select members**.
+1. Select **Select members**.
1. In the **Select** box, type the email address of the person you want to invite and select that person.
- ![Screenshot of Invite guest user in Select members pane.](./media/role-assignments-external-users/select-members-new-guest.png)
+ :::image type="content" source="./media/role-assignments-external-users/select-members-new-guest.png" alt-text="Screenshot of invite external user in Select members pane." lightbox="./media/role-assignments-external-users/select-members-new-guest.png":::
-1. Click **Select** to add the guest user to the Members list.
+1. Select **Select** to add the external user to the Members list.
-1. On the **Review + assign** tab, click **Review + assign** to add the guest user to your directory, assign the role, and send an invite.
+1. On the **Review + assign** tab, select **Review + assign** to add the external user to your directory, assign the role, and send an invite.
After a few moments, you'll see a notification of the role assignment and information about the invite.
- ![Screenshot of role assignment and invited user notification.](./media/role-assignments-external-users/invited-user-notification.png)
+ :::image type="content" source="./media/role-assignments-external-users/invited-user-notification.png" alt-text="Screenshot of role assignment and invited user notification." lightbox="./media/role-assignments-external-users/invited-user-notification.png":::
-1. To manually invite the guest user, right-click and copy the invitation link in the notification. Don't click the invitation link because it starts the invitation process.
+1. To manually invite the external user, right-click and copy the invitation link in the notification. Don't select the invitation link because it starts the invitation process.
The invitation link will have the following format: `https://login.microsoftonline.com/redeem?rd=https%3a%2f%2finvitations.microsoft.com%2fredeem%2f%3ftenant%3d0000...`
-1. Send the invitation link to the guest user to complete the invitation process.
+1. Send the invitation link to the external user to complete the invitation process.
For more information about the invitation process, see [Microsoft Entra B2B collaboration invitation redemption](../active-directory/external-identities/redemption-experience.md).
-## Remove a guest user from your directory
+## Remove an external user from your directory
-Before you remove a guest user from a directory, you should first remove any role assignments for that guest user. Follow these steps to remove a guest user from a directory.
+Before you remove an external user from a directory, you should first remove any role assignments for that external user. Follow these steps to remove an external user from a directory.
-1. Open **Access control (IAM)** at a scope, such as management group, subscription, resource group, or resource, where the guest user has a role assignment.
+1. Open **Access control (IAM)** at a scope, such as management group, subscription, resource group, or resource, where the external user has a role assignment.
-1. Click the **Role assignments** tab to view all the role assignments.
+1. Select the **Role assignments** tab to view all the role assignments.
-1. In the list of role assignments, add a check mark next to the guest user with the role assignment you want to remove.
+1. In the list of role assignments, add a check mark next to the external user with the role assignment you want to remove.
- ![Screenshot of selected role assignment to remove.](./media/role-assignments-external-users/remove-role-assignment-select.png)
+ :::image type="content" source="./media/role-assignments-external-users/remove-role-assignment-select.png" alt-text="Screenshot of selected role assignment to remove." lightbox="./media/role-assignments-external-users/remove-role-assignment-select.png":::
-1. Click **Remove**.
+1. Select **Remove**.
- ![Screenshot of Remove role assignment message.](./media/shared/remove-role-assignment.png)
+ :::image type="content" source="./media/shared/remove-role-assignment.png" alt-text="Screenshot of Remove role assignment message." lightbox="./media/shared/remove-role-assignment.png":::
-1. In the remove role assignment message that appears, click **Yes**.
+1. In the remove role assignment message that appears, select **Yes**.
-1. Click the **Classic administrators** tab.
+1. Select the **Classic administrators** tab.
-1. If the guest user has a Co-Administrator assignment, add a check mark next to the guest user and click **Remove**.
+1. If the external user has a Co-Administrator assignment, add a check mark next to the external user and select **Remove**.
-1. In the left navigation bar, click **Microsoft Entra ID** > **Users**.
+1. In the left navigation bar, select **Microsoft Entra ID** > **Users**.
-1. Click the guest user you want to remove.
+1. Select the external user you want to remove.
-1. Click **Delete**.
+1. Select **Delete**.
- ![Screenshot of deleting guest user.](./media/role-assignments-external-users/delete-guest-user.png)
+ :::image type="content" source="./media/role-assignments-external-users/delete-guest-user.png" alt-text="Screenshot of deleting an external user." lightbox="./media/role-assignments-external-users/delete-guest-user.png":::
-1. In the delete message that appears, click **Yes**.
+1. In the delete message that appears, select **Yes**.
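Here's a minimal Azure CLI sketch of the same cleanup (the object ID, role, and scope values are placeholders):

```bash
# Remove the external user's role assignment at the scope, then delete the user from the directory.
az role assignment delete \
  --assignee <external-user-object-id> \
  --role "Virtual Machine Contributor" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>

az ad user delete --id <external-user-object-id>
```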
## Troubleshoot
-### Guest user cannot browse the directory
+### External user cannot browse the directory
-Guest users have restricted directory permissions. For example, guest users cannot browse the directory and cannot search for groups or applications. For more information, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md).
+External users have restricted directory permissions. For example, external users can't browse the directory and can't search for groups or applications. For more information, see [What are the default user permissions in Microsoft Entra ID?](../active-directory/fundamentals/users-default-permissions.md).
-![Screenshot of guest user cannot browse users in a directory.](./media/role-assignments-external-users/directory-no-users.png)
-If a guest user needs additional privileges in the directory, you can assign a Microsoft Entra role to the guest user. If you really want a guest user to have full read access to your directory, you can add the guest user to the [Directory Readers](../active-directory/roles/permissions-reference.md#directory-readers) role in Microsoft Entra ID. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
+If an external user needs additional privileges in the directory, you can assign a Microsoft Entra role to the external user. If you really want an external user to have full read access to your directory, you can add the external user to the [Directory Readers](../active-directory/roles/permissions-reference.md#directory-readers) role in Microsoft Entra ID. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
-![Screenshot of assigning Directory Readers role.](./media/role-assignments-external-users/directory-roles.png)
-### Guest user cannot browse users, groups, or service principals to assign roles
+### External user cannot browse users, groups, or service principals to assign roles
-Guest users have restricted directory permissions. Even if a guest user is an [Owner](built-in-roles.md#owner) at a scope, if they try to assign a role to grant someone else access, they cannot browse the list of users, groups, or service principals.
+External users have restricted directory permissions. Even if an external user is an [Owner](built-in-roles.md#owner) at a scope, if they try to assign a role to grant someone else access, they can't browse the list of users, groups, or service principals.
-![Screenshot of guest user cannot browse security principals to assign roles.](./media/role-assignments-external-users/directory-no-browse.png)
-If the guest user knows someone's exact sign-in name in the directory, they can grant access. If you really want a guest user to have full read access to your directory, you can add the guest user to the [Directory Readers](../active-directory/roles/permissions-reference.md#directory-readers) role in Microsoft Entra ID. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
+If the external user knows someone's exact sign-in name in the directory, they can grant access. If you really want an external user to have full read access to your directory, you can add the external user to the [Directory Readers](../active-directory/roles/permissions-reference.md#directory-readers) role in Microsoft Entra ID. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
-### Guest user cannot register applications or create service principals
+### External user cannot register applications or create service principals
-Guest users have restricted directory permissions. If a guest user needs to be able to register applications or create service principals, you can add the guest user to the [Application Developer](../active-directory/roles/permissions-reference.md#application-developer) role in Microsoft Entra ID. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
+External users have restricted directory permissions. If an external user needs to be able to register applications or create service principals, you can add the external user to the [Application Developer](../active-directory/roles/permissions-reference.md#application-developer) role in Microsoft Entra ID. For more information, see [Add Microsoft Entra B2B collaboration users in the Azure portal](../active-directory/external-identities/add-users-administrator.md).
-![Screenshot of guest user cannot register applications.](./media/role-assignments-external-users/directory-access-denied.png)
-### Guest user does not see the new directory
+### External user does not see the new directory
-If a guest user has been granted access to a directory, but they do not see the new directory listed in the Azure portal when they try to switch in their **Directories** page, make sure the guest user has completed the invitation process. For more information about the invitation process, see [Microsoft Entra B2B collaboration invitation redemption](../active-directory/external-identities/redemption-experience.md).
+If an external user has been granted access to a directory, but they don't see the new directory listed in the Azure portal when they try to switch in their **Directories** page, make sure the external user has completed the invitation process. For more information about the invitation process, see [Microsoft Entra B2B collaboration invitation redemption](../active-directory/external-identities/redemption-experience.md).
-### Guest user does not see resources
+### External user does not see resources
-If a guest user has been granted access to a directory, but they do not see the resources they have been granted access to in the Azure portal, make sure the guest user has selected the correct directory. A guest user might have access to multiple directories. To switch directories, in the upper left, click **Settings** > **Directories**, and then click the appropriate directory.
+If an external user has been granted access to a directory, but they don't see the resources they have been granted access to in the Azure portal, make sure the external user has selected the correct directory. An external user might have access to multiple directories. To switch directories, in the upper left, select **Settings** > **Directories**, and then select the appropriate directory.
-![Screenshot of Portal setting Directories section in Azure portal.](./media/role-assignments-external-users/directory-switch.png)
## Next steps
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
The guest user doesn't have permissions to the resource at the selected scope.
**Solution**
-Check that the guest user is assigned a role with least privileged permissions to the resource at the selected scope. For more information, [Assign Azure roles to external guest users using the Azure portal](role-assignments-external-users.md).
+Check that the guest user is assigned a role with least privileged permissions to the resource at the selected scope. For more information, see [Assign Azure roles to external users using the Azure portal](role-assignments-external-users.md).
### Symptom - Unable to create a support request
If you're a Microsoft Entra Global Administrator and you don't have access to a
## Next steps -- [Troubleshoot for guest users](role-assignments-external-users.md#troubleshoot)
+- [Troubleshoot for external users](role-assignments-external-users.md#troubleshoot)
- [Assign Azure roles using the Azure portal](role-assignments-portal.md) - [View activity logs for Azure RBAC changes](change-history-report.md)
sap Exchange Online Integration Sap Email Outbound https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/exchange-online-integration-sap-email-outbound.md
# Exchange Online Integration for Email-Outbound from SAP NetWeaver
-Sending emails from your SAP backend is a standard feature widely distributed for use cases such as alerting for batch jobs, SAP workflow state changes or invoice distribution. Many customers established the setup using [Exchange Server On-Premises](/exchange/exchange-server). With a shift to [Microsoft 365](https://www.microsoft.com/microsoft-365) and [Exchange Online](/exchange/exchange-online) comes a set of cloud-native approaches impacting that setup.
+Sending emails from your SAP backend is a standard feature widely distributed for use cases such as alerting for batch jobs, SAP workflow state changes or invoice distribution. Many customers established the setup using [Exchange Server on-premises](/exchange/exchange-server). With a shift to [Microsoft 365](https://www.microsoft.com/microsoft-365) and [Exchange Online](/exchange/exchange-online) comes a set of cloud-native approaches impacting that setup.
This article describes the setup for **outbound** email-communication from NetWeaver-based SAP systems to Exchange Online. That applies to SAP ECC, S/4HANA, SAP RISE managed, and any other NetWeaver based system.
Existing implementations relied on SMTP Auth and elevated trust relationship bec
Follow our standard [guide](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365) to understand the general configuration of a "device" that wants to send email via Microsoft 365. > [!IMPORTANT]
-> Microsoft disabled Basic Authentication for Exchange online as of 2020 for newly created Microsoft 365 tenants. In addition to that, the feature gets disabled for existing tenants with no prior usage of Basic Authentication starting October 2020. See our developer [blog](https://devblogs.microsoft.com/microsoft365dev/deferred-end-of-support-date-for-basic-authentication-in-exchange-online/) for reference.
-
-> [!IMPORTANT]
-> SMTP Auth was exempted from the Basic Auth feature sunset
-process. However, this is a security risk for your estate, so we advise
-against it. See the latest [post](https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-and-exchange-online-september-2021-update/ba-p/2772210)
-by our Exchange Team on the matter.
-
-> [!IMPORTANT]
-> Current OAuth support for SMTP is described on our [Exchange Server documentation for legacy protocols](/exchange/client-developer/legacy-protocols/how-to-authenticate-an-imap-pop-smtp-application-by-using-oauth).
+> SMTP Auth was exempted from the [Basic Auth feature sunset process](https://devblogs.microsoft.com/microsoft365dev/deferred-end-of-support-date-for-basic-authentication-in-exchange-online/) for continued support. However, this is a security risk for your estate. See the latest [post](https://techcommunity.microsoft.com/t5/exchange-team-blog/basic-authentication-and-exchange-online-september-2021-update/ba-p/2772210) by our Exchange Online team on the matter.
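Before configuring the SAP side, you can verify that the host can reach Exchange Online's SMTP client submission endpoint and negotiate STARTTLS. Here's a quick sketch (assumes the `openssl` client is available on the host):

```bash
# Test connectivity and TLS negotiation against the Exchange Online client submission endpoint.
openssl s_client -connect smtp.office365.com:587 -starttls smtp -crlf
```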
## Setup considerations
This will enable SMTP AUTH for that individual user in Exchange Online that you
3. Restart ICM service from SMICM transaction and make sure SMTP service is active.
- :::image type="content" source="media/exchange-online-integration/scot-smicm-sec-1-3.png" alt-text="Screenshot of ICM setting":::
+ :::image type="content" source="media/exchange-online-integration/scot-smicm-sec-1-3.png" alt-text="Screenshot of ICM setting.":::
4. Activate SAPConnect service in SICF transaction.
This will enable SMTP AUTH for that individual user in Exchange Online that you
:::image type="content" source="media/exchange-online-integration/scot-smtp-security-serttings-sec-1-5.png" alt-text="SMTP security config":::
- Coming back to the previous screen: Click on "Set" button and check "Internet" under "Supported Address Types". Using the wildcard "\*" option will allow to send emails to all domains without restriction.
+    Back on the previous screen, select the "Set" button and check "Internet" under "Supported Address Types". Using the wildcard "\*" option allows you to send emails to all domains without restriction.
:::image type="content" source="media/exchange-online-integration/scot-smtp-address-type-sec-1-5.png" alt-text="SMTP address type":::
sentinel Deploy Power Platform Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/deploy-power-platform-solution.md
+
+ Title: Deploy the Microsoft Sentinel solution for Microsoft Power Platform
+description: Learn how to deploy the Microsoft Power Platform solution for Microsoft Sentinel.
++++ Last updated : 02/28/2024
+#CustomerIntent: As a security engineer, I want to ingest Power Platform activity logs into Microsoft Sentinel for security monitoring, detect related threats, and respond to incidents.
++
+# Deploy the Microsoft Sentinel solution for Microsoft Power Platform
+
+The Microsoft Sentinel solution for Power Platform allows you to monitor and detect suspicious or malicious activities in your Power Platform environment. The solution collects activity logs from different Power Platform components and inventory data. For more information, see [Microsoft Sentinel solution for Microsoft Power Platform overview](power-platform-solution-overview.md).
+
+> [!IMPORTANT]
+> - The Microsoft Sentinel solution for Power Platform is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - The solution is a premium offering. Pricing information will be available before the solution becomes generally available.
+> - Provide feedback for this solution by completing this survey: [https://aka.ms/SentinelPowerPlatformSolutionSurvey](https://aka.ms/SentinelPowerPlatformSolutionSurvey).
+
+## Prerequisites
+
+- The Microsoft Sentinel solution is enabled.
+- You have a defined Microsoft Sentinel workspace and have read and write permissions to the workspace.
+- Your organization uses Power Platform to create and use Power Apps.
+- You can create an Azure Function App with the `Microsoft.Web/Sites`, `Microsoft.Web/ServerFarms`, `Microsoft.Insights/Components`, and `Microsoft.Storage/StorageAccounts` permissions.
+- You can create [Data Collection Rules/Endpoints](/azure/azure-monitor/essentials/data-collection-rule-overview) with permissions to:
+  - Create `Microsoft.Insights/DataCollectionEndpoints` and `Microsoft.Insights/DataCollectionRules` resources.
+  - Assign the Monitoring Metrics Publisher role to the Azure Function.
+- Audit logging is enabled in Microsoft Purview. For more information, see [Turn auditing on or off for Microsoft Purview](/microsoft-365/compliance/audit-log-enable-disable).
+- For the Power Platform inventory connector, have the following resources and configurations set up.
+ - Storage account to use with Azure Data Lake Storage Gen2. For more information, see [Create a storage account to use with Azure Data Lake Storage Gen2](/azure/storage/blobs/create-data-lake-storage-account).
+ - Blob service endpoint URL for the storage account. For more information, see [Get service endpoints for the storage account](/azure/storage/common/storage-account-get-info?tabs=portal#get-service-endpoints-for-the-storage-account).
+ - Power Platform self-service analytics configured to use the Azure Data Lake Storage Gen2 storage account. This process can take up to 48 hours to activate. For more information, see [Set up Microsoft Power Platform self-service analytics to export Power Platform inventory and usage data](/power-platform/admin/self-service-analytics). Review the prerequisites and requirements for the Power Platform self-service analytics feature. The requirements include that you enable public access to the storage account and that you have the permissions required to set up the data export.
+  - Permissions to assign the Storage Blob Data Reader role to the Azure Function.
++
+Enabling the Power Platform inventory data connector is recommended but not required to fully deploy the Microsoft Power Platform solution. For more information, see [Power Platform inventory data connector](#power-platform-inventory-data-connector).
+
+## Install the Power Platform solution in Microsoft Sentinel
+
+Install the solution from the content hub in Microsoft Sentinel by using the following steps.
+
+1. In the Azure portal, search for and select **Microsoft Sentinel**.
+1. Select the Microsoft Sentinel workspace where you're planning to deploy the solution.
+1. Under **Content management**, select **Content hub**.
+1. Search for and select **Power Platform**.
+1. Select **Install**.
+1. On the solution details page, select **Create**.
+1. On the **Basics** tab, enter the subscription, resource group, and workspace to deploy the solution.
+1. Select **Review + create** > **Create** to deploy the solution.
+
+## Enable the data connectors
+
+In Microsoft Sentinel, enable the six data connectors to collect activity logs and inventory data from the Power Platform components.
+
+### Power Platform inventory data connector
+
+The Power Platform inventory data connector allows you to resolve the GUIDs for Power Platform and Power Apps environments in the incident details to the human-readable names that appear in the Power Platform admin center and the Power Apps maker portal. We recommend enabling this data connector, but it isn't required to fully deploy the Microsoft Power Platform solution.
+
+To optimize ingestion, the Power Platform inventory data connector ingests a full data set every 7 days and incremental updates daily. The incremental updates include only inventory assets that changed since the previous day.
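+
+If you want to confirm that inventory data is arriving after you enable the connector, a query along the following lines can help. This is a minimal sketch that assumes the `PowerApps_CL` inventory table listed in the solution overview; the same pattern applies to the other inventory tables.
+
+```kusto
+// Show the most recent ingestion time for Power Apps inventory data
+PowerApps_CL
+| summarize LastIngested = max(TimeGenerated)
+```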
+
+To collect Power Apps and Power Automate inventory data, deploy the Azure Resource Manager template to create a function app. To complete the deployment, you need the blob service URL for your Azure Data Lake Storage Gen2 storage account. After you create the function app, grant the managed identity for the function app access to the storage account.
++
+1. In Microsoft Sentinel, under **Configuration**, select **Data connectors**.
+1. Search for and select **Power Platform Inventory (using Azure Functions)**.
+1. Select **Open connector page**.
+1. If you didn't enable the Power Platform self-service analytics feature, follow steps 1 and 2 under **Configuration**.
+1. Under **Configuration** > **Step 3 - Azure Resource Manager (ARM) Template**, select **Deploy to Azure**.
+1. Follow all the steps in the Azure Resource Manager template deployment wizard and select **Review + create** > **Create**.
+1. If you don't have the required permissions for role assignments during the Resource Manager template deployment, follow steps 4 and 5 under **Configuration**.
+
+### Other data connectors
+
+Connect each of the remaining data connectors by completing the following steps.
+
+1. In Microsoft Sentinel, under **Configuration**, select **Data connectors**.
+1. Search for and select the data connectors in the solution that you need to connect, such as **Microsoft Power Apps**.
+1. Select **Open connector page** > **Connect**.
+1. Repeat these steps for each of the following data connectors that are part of the Power Platform solution.
+ - **Microsoft Power Automate**
+ - **Microsoft Power Platform Connectors**
+ - **Microsoft Power Platform DLP**
+ - **Microsoft Power Platform Admin Activity**
+ - **Microsoft Dataverse**
+
+## Enable auditing in your Microsoft Dataverse environment
+
+Dataverse activity logging is available only for production Dataverse environments. Other types of environments, such as sandbox, don't support activity logging. For more information, see [Microsoft Dataverse and model-driven apps activity logging requirements](/power-platform/admin/enable-use-comprehensive-auditing#requirements). Dataverse activity logging isn't enabled by default. Enable auditing at the global level for Dataverse and for each Dataverse entity.
+
+### Audit at the global level
+
+In your Dataverse environment, go to **Settings** > **Audit settings**. Under **Auditing**, select all three checkboxes.
+
+- **Start auditing**
+- **Log access**
+- **Read logs**
+
+For more information about these steps, see [Manage Dataverse auditing](/power-platform/admin/manage-dataverse-auditing#startstop-auditing-for-an-environment-and-set-retention-policy).
+
+### Audit Dataverse entities
+
+Enable detailed auditing on each of the Dataverse entities. To enable auditing on default entities, import a Power Platform managed solution. To enable auditing on custom entities, you must manually enable detailed auditing on each of the custom entities.
+
+#### Automatically enable auditing on default entities
+
+The quickest way to enable default audit settings for all Dataverse entities is to import the appropriate Power Platform managed solution in your Power Platform environment. This managed solution enables detailed auditing for each of the default entities listed in the following file: [https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5eo4g](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5eo4g). To enable auditing on custom entities, you must manually enable detailed auditing on each of the custom entities.
+
+To automatically enable entity auditing, complete the following steps.
+
+1. Go to [https://make.powerapps.com](https://make.powerapps.com).
+1. In the upper-right corner of the page, choose the environment that you want to monitor.
+1. Go to **Solutions** > **Import solution**.
+1. Import one of the following solutions, depending on whether your Power Platform environment is used for Dynamics 365 CE Apps.
+
+ - For use with Dynamics 365 CE Apps, import [https://aka.ms/AuditSettings/Dynamics](https://aka.ms/AuditSettings/Dynamics).
+ - Otherwise, import [https://aka.ms/AuditSettings/DataverseOnly](https://aka.ms/AuditSettings/DataverseOnly).
+
+#### Manually enable entity auditing
+
+To enable auditing on each Dataverse entity manually, including custom entities, follow the steps in the section **Enable or disable entities and fields for auditing** in [Manage Dataverse auditing](/power-platform/admin/manage-dataverse-auditing#enable-or-disable-entities-and-fields-for-auditing).
+
+To get the full incident detection value of the solution, we recommend that you enable the following options on the **General** tab of the settings page for each Dataverse entity that you want to audit:
+- Under the **Data Services** section, select **Auditing**.
+- Under the **Auditing** section, select **Single record auditing** and **Multiple record auditing**.
+
+Save and publish your customizations.
+
+## Verify that the data connector is ingesting logs to Microsoft Sentinel
+
+To verify that log ingestion is working, complete the following steps.
+
+### Generate activity and inventory logs
+
+1. Run activities like create, update, and delete to generate logs for data that you enabled for monitoring.
+1. Wait up to 60 minutes for Microsoft Sentinel to ingest the activity logs to the logs table in the workspace.
+1. For Power Platform inventory data, wait up to 24 hours for Microsoft Sentinel to ingest the data to the log tables in the workspace.
+
+### View ingested data in Microsoft Sentinel
+
+After you wait for Microsoft Sentinel to ingest the data, complete the following steps to verify you get the data you expect.
+
+1. In Microsoft Sentinel, select **Logs**.
+1. Run KQL queries against the tables that collect the activity logs from the data connectors. For example, run the following query to return 50 rows from the table with the Power Apps activity logs.
+
+ ```kusto
+ PowerAppsActivity
+ | take 50
+ ```
+
+ The following table lists the Log Analytics tables to query.
+
+ |Log Analytics tables |Data collected |
+ |||
+ |PowerAppsActivity |Power Apps activity logs |
+ |PowerAutomateActivity |Power Automate activity logs |
+ |PowerPlatformConnectorActivity |Power Platform connector activity logs |
+ |PowerPlatformDlpActivity |Data loss prevention activity logs |
+ |PowerPlatformAdminActivity|Power Platform administrative logs|
+ |DataverseActivity |Dataverse and model-driven apps activity logging |
+
+    Use the following parsers to return inventory and watchlist data. An example query that calls one of these parsers follows these steps.
+
+ |Parser |Data returned |
+ |||
+    |`InventoryApps` | Power Apps inventory |
+    |`InventoryAppsConnections` | Power Apps connections inventory |
+    |`InventoryEnvironments` | Power Platform environments inventory |
+    |`InventoryFlows` | Power Automate flows inventory |
+    |`MSBizAppsTerminatedEmployees` | Terminated employees watchlist |
+1. Verify that the results for each table show the activities you generated.
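+
+For example, the following query is a minimal sketch that calls the `InventoryApps` parser to spot-check the Power Apps inventory data. The other parsers in the preceding table can be queried the same way.
+
+```kusto
+// Return a sample of Power Apps inventory rows through the InventoryApps parser
+InventoryApps
+| take 50
+```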
+
+## Next steps
+
+In this article, you learned how to deploy the Microsoft Sentinel solution for Power Platform.
+
+- To review the solution content available with this solution, see [Microsoft Sentinel solution for Microsoft Power Platform: security content reference](power-platform-solution-security-content.md).
+- To manage the solution components and enable security content, see [Discover and deploy out-of-the-box content](/azure/sentinel/sentinel-solutions-deploy).
sentinel Power Platform Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/power-platform-solution-overview.md
+
+ Title: Microsoft Sentinel solution for Microsoft Power Platform overview
+description: Learn about the Microsoft Sentinel Solution for Power Platform.
+++ Last updated : 02/28/2024++
+# Microsoft Sentinel solution for Microsoft Power Platform overview
+
+The Microsoft Sentinel solution for Power Platform allows you to monitor and detect suspicious or malicious activities in your Power Platform environment. The solution collects activity logs from different Power Platform components and inventory data. It analyzes those activity logs to detect threats and suspicious activities such as:
+
+- Power Apps execution from unauthorized geographies
+- Suspicious data destruction by Power Apps
+- Mass deletion of Power Apps
+- Phishing attacks made possible through Power Apps
+- Power Automate flows activity by departing employees
+- Microsoft Power Platform connectors added to the environment
+- Update or removal of Microsoft Power Platform data loss prevention policies
+
+> [!IMPORTANT]
+> - The Microsoft Sentinel solution for Power Platform is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - The solution is a premium offering. Pricing information will be available before the solution becomes generally available.
+> - Provide feedback for this solution by completing this survey: [https://aka.ms/SentinelPowerPlatformSolutionSurvey](https://aka.ms/SentinelPowerPlatformSolutionSurvey).
+
+## Why you should install the solution
+
+ The Microsoft Sentinel solution for Microsoft Power Platform helps organizations to:
+
+- Collect Microsoft Power Platform and Power Apps activity logs, audits, and events into the Microsoft Sentinel workspace.
+- Detect execution of suspicious, malicious, or illegitimate activities within Microsoft Power Platform and Power Apps.
+- Investigate threats detected in Microsoft Power Platform and Power Apps and contextualize them with other user activities across the organization.
+- Respond to Microsoft Power Platform-related and Power Apps-related threats and incidents in a simple, repeatable manner: manually, automatically, or through a predefined workflow.
+
+## What the solution includes
+
+The Microsoft Sentinel solution for Power Platform includes several data connectors and analytic rules.
+
+### Data connectors
+
+The Microsoft Sentinel solution for Power Platform ingests and cross-correlates activity logs and inventory data from multiple sources, so it requires that you enable the following data connectors, which are available as part of the solution. A sample query for verifying that the connectors are sending data follows the table.
+
+|Connector name |Data collected |Log Analytics tables |
+||||
+|Power Platform Inventory (using Azure Functions) | Power Apps and Power Automate inventory data <br><br> For more information, see [Set up Microsoft Power Platform self-service analytics to export Power Platform inventory and usage data](/power-platform/admin/self-service-analytics). | PowerApps_CL,<br>PowerPlatrformEnvironments_CL,<br>PowerAutomateFlows_CL,<br>PowerAppsConnections_CL |
+|Microsoft Power Apps (Preview) | Power Apps activity logs <br><br> For more information, see [Power Apps activity logging](/power-platform/admin/logging-powerapps). | PowerAppsActivity |
+|Microsoft Power Automate (Preview) | Power Automate activity logs <br><br>For more information, see [View Power Automate audit logs](/power-platform/admin/logging-power-automate). | PowerAutomateActivity |
+|Microsoft Power Platform Connectors (Preview) | Power Platform connector activity logs <br><br>For more information, see [View the Power Platform connector activity logs](/power-platform/admin/connector-events-power-platform). | PowerPlatformConnectorActivity |
+|Microsoft Power Platform DLP (Preview) | Data loss prevention activity logs <br><br>For more information, see [Data loss prevention activity logging](/power-platform/admin/dlp-activity-logging). | PowerPlatformDlpActivity |
+|Microsoft Power Platform Admin Activity (Preview)|Power Platform administrator activity logs<br><br> For more information, see [View Power Platform administrative logs using auditing solutions in Microsoft Purview (preview)](/power-platform/admin/admin-activity-logging).| PowerPlatformAdminActivity |
+|Microsoft Dataverse (Preview) | Dataverse and model-driven apps activity logging <br><br>For more information, see [Microsoft Dataverse and model-driven apps activity logging](/power-platform/admin/enable-use-comprehensive-auditing).<br><br>If you use the data connector for Dynamics 365, migrate to the data connector for Microsoft Dataverse. This data connector replaces the legacy data connector for Dynamics 365 and supports data collection rules. | DataverseActivity |
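+
+After you connect the data connectors, you can run a quick sanity check that the Log Analytics tables listed above are receiving data. The following query is a minimal sketch; it relies on the standard Log Analytics `Type` column, which holds the table name for each record, and uses `isfuzzy=true` so tables that haven't been created yet don't cause an error.
+
+```kusto
+// Count recent events per activity table collected by the solution's data connectors
+union isfuzzy=true PowerAppsActivity, PowerAutomateActivity, PowerPlatformConnectorActivity, PowerPlatformDlpActivity, PowerPlatformAdminActivity, DataverseActivity
+| where TimeGenerated > ago(1d)
+| summarize Events = count() by Type
+```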
+
+### Analytic rules
+
+The solution includes analytics rules to detect threats and suspicious activity in your Power Platform environment. These activities include Power Apps being run from unauthorized geographies, suspicious data destruction by Power Apps, mass deletion of Power Apps, and more. For more information, see [Microsoft Sentinel solution for Microsoft Power Platform: security content reference](power-platform-solution-security-content.md).
+
+## Parsers
+
+The solution includes parsers that are used to access data from the raw data tables. Parsers ensure that the correct data is returned with a consistent schema. We recommend that you use the parsers instead of directly querying the inventory tables and watchlists. For more information, see [Microsoft Sentinel solution for Microsoft Power Platform: security content reference](power-platform-solution-security-content.md).
+
+## Next steps
+
+[Deploy the Microsoft Sentinel solution for Microsoft Power Platform](deploy-power-platform-solution.md)
sentinel Power Platform Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/business-applications/power-platform-solution-security-content.md
+
+ Title: Microsoft Sentinel solution for Microsoft Power Platform - security content reference
+description: Learn about the built-in security content provided by the Microsoft Sentinel solution for Power Platform.
+++ Last updated : 02/28/2024++
+# Microsoft Sentinel solution for Microsoft Power Platform: security content reference
+
+This article details the security content available for the Microsoft Sentinel solution for Power Platform. For more information about this solution, see [Microsoft Sentinel solution for Microsoft Power Platform overview](power-platform-solution-overview.md).
+
+> [!IMPORTANT]
+> - The Microsoft Sentinel solution for Power Platform is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - The solution is a premium offering. Pricing information will be available before the solution becomes generally available.
+> - Provide feedback for this solution by completing this survey: [https://aka.ms/SentinelPowerPlatformSolutionSurvey](https://aka.ms/SentinelPowerPlatformSolutionSurvey).
+
+## Built-in analytics rules
+
+The following analytic rules are included when you install the solution for Power Platform. The data sources listed include the data connector name and table in Log Analytics. To avoid missing data in the inventory sources, we recommend that you don't change the default lookback period defined in the analytic rule templates.
+
+| Rule name | Description | Source action | Tactics |
+| | | | |
+|PowerApps - App activity from unauthorized geo|Identifies Power Apps activity from countries in a predefined list of unauthorized countries. <br><br> Get the list of ISO 3166-1 alpha-2 country codes from [ISO Online Browsing Platform (OBP)](https://www.iso.org/obp/ui).<br><br>This detection uses logs ingested from Microsoft Entra ID. So, we recommend that you enable the Microsoft Entra ID data connector. |Run an activity in Power App from a country that's on the unauthorized country code list.<br><br>**Data sources**: <br>- Power Platform Inventory (using Azure Functions) <br>`InventoryApps`<br>`InventoryEnvironments`<br>- Microsoft Power Apps (Preview)<br>`PowerAppsActivity`<br>- Microsoft Entra ID<br>`SigninLogs`<br>|Initial access|
+|PowerApps - Multiple apps deleted|Identifies mass delete activity where multiple Power Apps are deleted, matching a predefined threshold of total apps deleted or app deleted events across multiple Power Platform environments.|Delete many Power Apps from the Power Platform admin center. <br><br>**Data sources**:<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryApps`<br>`InventoryEnvironments`<br>- Microsoft Power Apps (Preview)<br>`PowerAppsActivity`|Impact|
+|PowerApps - Data destruction following publishing of a new app|Identifies a chain of events when a new app is created or published and is followed within 1 hour by mass update or delete events in Dataverse. If the app publisher is on the list of users in the **TerminatedEmployees** watchlist template, the incident severity is raised.|Delete a number of records in Power Apps within 1 hour of the Power App being created or published.<br><br>**Data sources**:<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryApps`<br>`InventoryEnvironments`<br>- Microsoft Power Apps (Preview)<br>`PowerAppsActivity`<br>- Microsoft Dataverse (Preview)<br>`DataverseActivity`|Impact|
+|PowerApps - Multiple users accessing a malicious link after launching new app|Identifies a chain of events when a new Power App is created and is followed by these events:<br>- Multiple users launch the app within the detection window.<br>- Multiple users open the same malicious URL.<br><br>This detection cross correlates Power Apps execution logs with malicious URL click events from either of the following sources:<br>- The Microsoft 365 Defender data connector or <br>- Malicious URL indicators of compromise (IOC) in Microsoft Sentinel Threat Intelligence with the Advanced Security Information Model (ASIM) web session normalization parser.<br><br>Get the distinct number of users who launch or click the malicious link by creating a query.|Multiple users launch a new PowerApp and open a known malicious URL from the app.<br><br>**Data sources**:<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryApps`<br>`InventoryEnvironments`<br>- Microsoft Power Apps (Preview)<br>`PowerAppsActivity`<br>- Threat Intelligence <br>`ThreatIntelligenceIndicator`<br>- Microsoft Defender XDR<br>`UrlClickEvents`<br>|Initial access|
+|PowerAutomate - Departing employee flow activity|Identifies instances where an employee who has been notified or is already terminated, and is on the **Terminated Employees** watchlist, creates or modifies a Power Automate flow.|User defined in the **Terminated Employees** watchlist creates or updates a Power Automate flow.<br><br>**Data sources**:<br>Microsoft Power Automate (Preview)<br>`PowerAutomateActivity`<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryFlows`<br>`InventoryEnvironments`<br>Terminated employees watchlist|Exfiltration, impact|
+|PowerPlatform - Connector added to a sensitive environment|Identifies the creation of new API connectors within Power Platform, specifically targeting a predefined list of sensitive environments.|Add a new Power Platform connector in a sensitive Power Platform environment.<br><br>**Data sources**:<br>- Microsoft Power Platform Connectors (Preview)<br>`PowerPlatformConnectorActivity`<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryApps`<br>`InventoryEnvironments`<br>`InventoryAppsConnections`<br>|Execution, Exfiltration|
+|PowerPlatform - DLP policy updated or removed|Identifies changes to the data loss prevention policy, specifically policies that are updated or removed.|Update or remove a Power Platform data loss prevention policy in Power Platform environment.<br><br>**Data sources**:<br>Microsoft Power Platform DLP (Preview)<br>`PowerPlatformDlpActivity`|Defense Evasion|
+|Dataverse - Guest user exfiltration following Power Platform defense impairment|Identifies a chain of events starting with disablement of Power Platform tenant isolation and removal of an environment's access security group. These events are correlated with Dataverse exfiltration alerts associated with the impacted environment and recently created Microsoft Entra guest users.<br><br>Activate other Dataverse analytics rules with the MITRE tactic 'Exfiltration' before enabling this rule.|As a new guest user, trigger exfiltration alerts after Power Platform security controls are disabled.<br><br>**Data sources:**<br>- PowerPlatformAdmin<br>`PowerPlatformAdminActivity`<br><br>- Dataverse<br>`DataverseActivity`<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryEnvironments`<br>|Defense Evasion|
+|Dataverse - Mass export of records to Excel|Identifies users exporting a large amount of records from Dynamics 365 to Excel. The amount of records exported is significantly more than any other recent activity by that user. Large exports from users with no recent activity are identified using a predefined threshold.|Export many records from Dataverse to Excel.<br><br>**Data sources:**<br>- Dataverse<br>`DataverseActivity`<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryEnvironments`<br>|Exfiltration|
+|Dataverse - User bulk retrieval outside normal activity|Identifies users retrieving significantly more records from Dataverse than they have in the past 2 weeks.|User retrieves many records from Dataverse<br><br>**Data sources:**<br>- Dataverse<br>`DataverseActivity`<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryEnvironments`<br>|Exfiltration|
+|Power Apps - Bulk sharing of Power Apps to newly created guest users|Identifies unusual bulk sharing of Power Apps to newly created Microsoft Entra guest users. Unusual bulk sharing is based on a predefined threshold in the query.|Share an app with multiple external users.<br><br>**Data sources:**<br>- Microsoft Power Apps (Preview)<br>`PowerAppsActivity`<br>- Power Platform Inventory (using Azure Functions)<br>`InventoryApps`<br>`InventoryEnvironments`<br>- Microsoft Entra ID<br>`AuditLogs`|Resource Development,<br>Initial Access,<br>Lateral Movement|
+|Power Automate - Unusual bulk deletion of flow resources|Identifies bulk deletion of Power Automate flows that exceed a predefined threshold defined in the query and deviate from activity patterns observed in the last 14 days.|Bulk deletion of Power Automate flows.<br><br>**Data sources:**<br>- PowerAutomate<br>`PowerAutomateActivity`<br>|Impact, <br>Defense Evasion|
+|Power Platform - Possibly compromised user accesses Power Platform services|Identifies user accounts flagged at risk in Microsoft Entra Identity Protection and correlates these users with sign-in activity in Power Platform, including Power Apps, Power Automate, and Power Platform Admin Center.|User with risk signals accesses Power Platform portals.<br><br>**Data sources:**<br>- Microsoft Entra ID<br>`SigninLogs`|Initial Access, Lateral Movement|
++
+## Built-in parsers
+
+The solution includes parsers that are used to access data from the raw data tables. Parsers ensure that the correct data is returned with a consistent schema. We recommend that you use the parsers instead of directly querying the inventory tables and watchlists. The Power Platform inventory-related parsers return data from the last 7 days. An example query follows the table.
+
+|Parser |Data returned |Table queried |
+||||
+|`InventoryApps` | Power Apps inventory | `PowerApps_CL` |
+|`InventoryAppsConnections` | Power Apps connections inventory | `PowerAppsConnections_CL` |
+|`InventoryEnvironments` | Power Platform environments inventory | `PowerPlatrformEnvironments_CL` |
+|`InventoryFlows` | Power Automate flows inventory | `PowerAutomateFlows_CL` |
+|`MSBizAppsTerminatedEmployees` | Terminated employees watchlist (from watchlist template) | `TerminatedEmployees` |
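+
+As a minimal sketch of how these parsers are called, the following query returns the Power Automate flow inventory through the `InventoryFlows` parser. The other parsers in the table are invoked the same way.
+
+```kusto
+// Return Power Automate flow inventory rows through the InventoryFlows parser
+InventoryFlows
+| take 50
+```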
++
+For more information about analytic rules, see [Detect threats out-of-the-box](../detect-threats-built-in.md).
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
description: This article describes new features in Microsoft Sentinel from the
Previously updated : 01/11/2024 Last updated : 02/28/2024 # What's new in Microsoft Sentinel
The listed features were released in the last three months. For information abou
## February 2024 +
+- [Microsoft Sentinel solution for Microsoft Power Platform preview available](#microsoft-sentinel-solution-for-microsoft-power-platform-preview-available)
- [New Google Pub/Sub-based connector for ingesting Security Command Center findings (Preview)](#new-google-pubsub-based-connector-for-ingesting-security-command-center-findings-preview) - [Incident tasks now generally available (GA)](#incident-tasks-now-generally-available-ga) - [AWS and GCP data connectors now support Azure Government clouds](#aws-and-gcp-data-connectors-now-support-azure-government-clouds) - [Windows DNS Events via AMA connector now generally available (GA)](#windows-dns-events-via-ama-connector-now-generally-available-ga) +
+### Microsoft Sentinel solution for Microsoft Power Platform preview available
+
+The Microsoft Sentinel solution for Power Platform (preview) allows you to monitor and detect suspicious or malicious activities in your Power Platform environment. The solution collects activity logs from different Power Platform components and inventory data. It analyzes those activity logs to detect threats and suspicious activities such as:
+
+- Power Apps execution from unauthorized geographies
+- Suspicious data destruction by Power Apps
+- Mass deletion of Power Apps
+- Phishing attacks made possible through Power Apps
+- Power Automate flows activity by departing employees
+- Microsoft Power Platform connectors added to the environment
+- Update or removal of Microsoft Power Platform data loss prevention policies
+
+Find this solution in the Microsoft Sentinel content hub.
+
+For more information, see:
+- [Microsoft Sentinel solution for Microsoft Power Platform overview](business-applications/power-platform-solution-overview.md)
+- [Microsoft Sentinel solution for Microsoft Power Platform: security content reference](business-applications/power-platform-solution-security-content.md)
+- [Deploy the Microsoft Sentinel solution for Microsoft Power Platform](business-applications/deploy-power-platform-solution.md)
+ ### New Google Pub/Sub-based connector for ingesting Security Command Center findings (Preview) You can now ingest logs from Google Security Command Center, using the new Google Cloud Platform (GCP) Pub/Sub-based connector (now in PREVIEW).
The integration with Microsoft Sentinel allows you to have visibility and contro
- Learn how to [set up the new connector](connect-google-cloud-platform.md) and ingest events from Google Security Command Center. + ### Incident tasks now generally available (GA) Incident tasks, which help you standardize your incident investigation and response practices so you can more effectively manage incident workflow, are now generally available (GA) in Microsoft Sentinel.
service-bus-messaging Service Bus Java How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-queues.md
Title: Get started with Azure Service Bus queues (Java) description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the Java programming language. Previously updated : 04/12/2023 Last updated : 02/28/2024 ms.devlang: java
In this quickstart, you create a Java app to send messages to and receive messag
## Send messages to a queue
-In this section, you create a Java console project, and add code to send messages to the queue that you created earlier.
+In this section, you'll create a Java console project, and add code to send messages to the queue that you created earlier.
### Create a Java console project Create a Java project using Eclipse or a tool of your choice.
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
import com.azure.messaging.servicebus.*; import com.azure.identity.*;
- import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List;
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java import com.azure.messaging.servicebus.*;
- import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List;
In this section, you add code to retrieve messages from the queue.
// handles received messages static void receiveMessages() throws InterruptedException {
- CountDownLatch countdownLatch = new CountDownLatch(1);
- DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
In this section, you add code to retrieve messages from the queue.
.credential(credential) .processor() .queueName(queueName)
- .processMessage(QueueTest::processMessage)
- .processError(context -> processError(context, countdownLatch))
+ .processMessage(context -> processMessage(context))
+ .processError(context -> processError(context))
.buildProcessorClient(); System.out.println("Starting the processor");
In this section, you add code to retrieve messages from the queue.
// handles received messages static void receiveMessages() throws InterruptedException {
- CountDownLatch countdownLatch = new CountDownLatch(1);
- // Create an instance of the processor through the ServiceBusClientBuilder ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder() .connectionString(connectionString) .processor() .queueName(queueName) .processMessage(QueueTest::processMessage)
- .processError(context -> processError(context, countdownLatch))
+ .processError(context -> processError(context))
.buildProcessorClient(); System.out.println("Starting the processor");
In this section, you add code to retrieve messages from the queue.
3. Add the `processError` method to handle error messages. ```java
- private static void processError(ServiceBusErrorContext context, CountDownLatch countdownLatch) {
+ private static void processError(ServiceBusErrorContext context) {
System.out.printf("Error when receiving messages from namespace: '%s'. Entity: '%s'%n", context.getFullyQualifiedNamespace(), context.getEntityPath());
In this section, you add code to retrieve messages from the queue.
|| reason == ServiceBusFailureReason.UNAUTHORIZED) { System.out.printf("An unrecoverable error occurred. Stopping processing with reason %s: %s%n", reason, exception.getMessage());-
- countdownLatch.countDown();
} else if (reason == ServiceBusFailureReason.MESSAGE_LOCK_LOST) { System.out.printf("Message lock lost for message: %s%n", context.getException()); } else if (reason == ServiceBusFailureReason.SERVICE_BUSY) {
In this section, you add code to retrieve messages from the queue.
### [Passwordless (Recommended)](#tab/passwordless) 1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
-1. If you are signed into the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the Jar file in the next step.
+1. If you're signed into the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the Jar file in the next step.
1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine. 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
Stopping and closing the processor
```
-On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
+On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. Wait for a minute or so and then refresh the page to see the latest values.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png" alt-text="Incoming and outgoing message count" lightbox="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png":::
service-bus-messaging Service Bus Java How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions.md
Title: Get started with Azure Service Bus topics (Java) description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the Java programming language. Previously updated : 04/12/2023 Last updated : 02/28/2024 ms.devlang: java
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
import com.azure.messaging.servicebus.*; import com.azure.identity.*;
- import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List;
Update the `pom.xml` file to add a dependency to the Azure Service Bus package.
```java import com.azure.messaging.servicebus.*;
- import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit; import java.util.Arrays; import java.util.List;
In this section, you add code to retrieve messages from a subscription to the to
// handles received messages static void receiveMessages() throws InterruptedException {
- CountDownLatch countdownLatch = new CountDownLatch(1);
- DefaultAzureCredential credential = new DefaultAzureCredentialBuilder() .build();
In this section, you add code to retrieve messages from a subscription to the to
.processor() .topicName(topicName) .subscriptionName(subName)
- .processMessage(ServiceBusTopicTest::processMessage)
- .processError(context -> processError(context, countdownLatch))
+ .processMessage(context -> processMessage(context))
+ .processError(context -> processError(context))
.buildProcessorClient(); System.out.println("Starting the processor");
In this section, you add code to retrieve messages from a subscription to the to
// handles received messages static void receiveMessages() throws InterruptedException {
- CountDownLatch countdownLatch = new CountDownLatch(1);
- // Create an instance of the processor through the ServiceBusClientBuilder ServiceBusProcessorClient processorClient = new ServiceBusClientBuilder() .connectionString(connectionString) .processor() .topicName(topicName) .subscriptionName(subName)
- .processMessage(ServiceBusTopicTest::processMessage)
- .processError(context -> processError(context, countdownLatch))
+ .processMessage(context -> processMessage(context))
+ .processError(context -> processError(context))
.buildProcessorClient(); System.out.println("Starting the processor");
In this section, you add code to retrieve messages from a subscription to the to
3. Add the `processError` method to handle error messages. ```java
- private static void processError(ServiceBusErrorContext context, CountDownLatch countdownLatch) {
+ private static void processError(ServiceBusErrorContext context) {
System.out.printf("Error when receiving messages from namespace: '%s'. Entity: '%s'%n", context.getFullyQualifiedNamespace(), context.getEntityPath());
In this section, you add code to retrieve messages from a subscription to the to
|| reason == ServiceBusFailureReason.UNAUTHORIZED) { System.out.printf("An unrecoverable error occurred. Stopping processing with reason %s: %s%n", reason, exception.getMessage());-
- countdownLatch.countDown();
} else if (reason == ServiceBusFailureReason.MESSAGE_LOCK_LOST) { System.out.printf("Message lock lost for message: %s%n", context.getException()); } else if (reason == ServiceBusFailureReason.SERVICE_BUSY) {
Run the program to see the output similar to the following output:
### [Passwordless (Recommended)](#tab/passwordless) 1. If you're using Eclipse, right-click the project, select **Export**, expand **Java**, select **Runnable JAR file**, and follow the steps to create a runnable JAR file.
-1. If you are signed into the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the Jar file in the next step.
+1. If you're signed into the machine using a user account that's different from the user account added to the **Azure Service Bus Data Owner** role, follow these steps. Otherwise, skip this step and move on to run the Jar file in the next step.
1. [Install Azure CLI](/cli/azure/install-azure-cli-windows) on your machine. 1. Run the following CLI command to sign in to Azure. Use the same user account that you added to the **Azure Service Bus Data Owner** role.
Processing message. Session: 7bd3bd3e966a40ebbc9b29b082da14bb, Sequence #: 4. Co
```
-On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. You may need to wait for a minute or so and then refresh the page to see the latest values.
+On the **Overview** page for the Service Bus namespace in the Azure portal, you can see **incoming** and **outgoing** message count. Wait for a minute or so and then refresh the page to see the latest values.
:::image type="content" source="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png" alt-text="Incoming and outgoing message count" lightbox="./media/service-bus-java-how-to-use-queues/overview-incoming-outgoing-messages.png":::
spring-apps Quickstart Monitor End To End Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/quickstart-monitor-end-to-end-enterprise.md
You must manually provide the Application Insights connection string to the Orde
1. Use the following commands to retrieve the Application Insights connection string and set it in Key Vault: ```azurecli
- export INSTRUMENTATION_KEY=$(az monitor app-insights component show \
+ export CONNECTION_STRING=$(az monitor app-insights component show \
--resource-group <resource-group-name> \ --app <app-insights-name> \ --query "connectionString" \
You must manually provide the Application Insights connection string to the Orde
az keyvault secret set \ --vault-name <key-vault-name> \ --name "ApplicationInsights--ConnectionString" \
- --value ${INSTRUMENTATION_KEY}
+ --value ${CONNECTION_STRING}
``` > [!NOTE]
You must manually provide the Application Insights connection string to the Orde
--builder-name default \ --name default \ --type ApplicationInsights \
- --properties sampling-rate=100 connection_string=${INSTRUMENTATION_KEY}
+ --properties sampling-rate=100 connection_string=${CONNECTION_STRING}
``` 1. Use the following commands to restart applications to reload configuration:
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
# Release notes for Azure File Sync Azure File Sync enables centralizing your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of a Windows file server. While some users may opt to keep a full copy of their data locally, Azure File Sync additionally has the ability to transform Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.
-This article provides the release notes for Azure File Sync. It is important to note that major releases of Azure File Sync include service and agent improvements (for example, 15.0.0.0). Minor releases of Azure File Sync are typically for agent improvements (for example, 15.2.0.0).
+This article provides the release notes for Azure File Sync. It's important to note that major releases of Azure File Sync include service and agent improvements (for example, 15.0.0.0). Minor releases of Azure File Sync are typically for agent improvements (for example, 15.2.0.0).
## Supported versions The following Azure File Sync agent versions are supported: | Milestone | Agent version number | Release date | Status | |-|-|--||
+| V17.2 Release - [KB5023055](https://support.microsoft.com/topic/dfa4c285-a4cb-4561-b0ed-bbd4ae09d91d)| 17.2.0.0 | February 28, 2024 | Supported |
| V17.1 Release - [KB5023054](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)| 17.1.0.0 | February 13, 2024 | Supported - Security Update| | V16.2 Release - [KB5023052](https://support.microsoft.com/topic/azure-file-sync-agent-v16-2-release-february-2024-security-only-update-8247bf99-8f51-4eb6-b378-b86b6d1d45b8)| 16.2.0.0 | February 13, 2024 | Supported - Security Update| | V17.0 Release - [KB5023053](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9)| 17.0.0.0 | December 6, 2023 | Supported - Flighting |
The following Azure File Sync agent versions have expired and are no longer supp
### Azure File Sync agent update policy [!INCLUDE [storage-sync-files-agent-update-policy](../../../includes/storage-sync-files-agent-update-policy.md)]
+## Windows Server 2012 R2 agent support will end on March 4, 2025
+Windows Server 2012 R2 reached [end of support](https://learn.microsoft.com/lifecycle/announcements/windows-server-2012-r2-end-of-support) on October 10, 2023. Azure File Sync will continue to support Windows Server 2012 R2 until the v17.x agent expires on March 4, 2025. Once the v17 agent expires, Windows Server 2012 R2 servers will stop syncing to your Azure file shares.
+
+**Action Required**
+
+Perform one of the following options for your Windows Server 2012 R2 servers before the v17 agent expires on March 4, 2025:
+
+- Option #1: Perform an [in-place upgrade](/windows-server/get-started/perform-in-place-upgrade) to a [supported operating system version](file-sync-planning.md#operating-system-requirements). Once the in-place upgrade completes, uninstall the Azure File Sync agent for Windows Server 2012 R2, restart the server, and then install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
+
+- Option #2: Deploy a new Azure File Sync server that is running a [supported operating system version](file-sync-planning.md#operating-system-requirements) to replace your Windows Server 2012 R2 servers. For guidance, see [Replace an Azure File Sync server](file-sync-replace-server.md).
+
+>[!Note]
+>Azure File Sync agent v17.2 is the last agent release currently planned for Windows Server 2012 R2. To continue to receive product improvements and bug fixes, upgrade your servers to Windows Server 2016 or later.
+
+## Version 17.2.0.0
+The following release notes are for Azure File Sync version 17.2.0.0 (released February 28, 2024). This release contains improvements for the Azure File Sync service and agent.
+
+### Improvements and issues that are fixed
+The Azure File Sync v17.2 release is a rollup update for the v17.0 and v17.1 releases:
+- [Azure File Sync Agent v17 Release - December 2023](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9)
+- [Azure File Sync Agent v17.1 Release - February 2024](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)
+
+**Note**: If your server has the v17.1 agent installed, you don't need to install the v17.2 agent.
+
+### Evaluation tool
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+
+### Agent installation and server configuration
+For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
+
+- The agent installation package must be installed with elevated (admin) permissions.
+- The agent isn't supported on the Nano Server deployment option.
+- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.
+- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, you must uninstall the existing agent, restart the server, and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
+- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.
+- The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.
+
+### Interoperability
+- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json).
+- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.
+- Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.
+
+### Sync limitations
+The following items don't sync, but the rest of the system continues to operate normally:
+- Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters.
+- Files or directories that end with a period.
+- Paths that are longer than 2,048 characters.
+- The system access control list (SACL) portion of a security descriptor that's used for auditing.
+- Extended attributes.
+- Alternate data streams.
+- Reparse points.
+- Hard links.
+- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.
+- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.
+
+ > [!Note]
+ > Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
+
+### Server endpoint
+- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.
+- Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.
+- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).
+- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.
+- Don't store an OS or application paging file within a server endpoint location.
+
+### Cloud endpoint
+- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, you can use the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet to manually initiate the detection of changes in the Azure file share.
+- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).
+
+ > [!Note]
+ > When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
+
+### Cloud tiering
+- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.
+- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This ensures that older files are tiered sooner than recently accessed files.
+ ## Version 17.1.0.0 (Security Update) The following release notes are for Azure File Sync version 17.1.0.0 (released February 13, 2024). This release contains a security update for the Azure File Sync agent. These notes are in addition to the release notes listed for version 17.0.0.0.
The following release notes are for Azure File Sync version 17.0.0.0 (released D
- Miscellaneous reliability and telemetry improvements for cloud tiering and sync ### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
The following release notes are for Azure File Sync version 16.0.0.0 (released J
- Miscellaneous reliability and telemetry improvements for cloud tiering and sync ### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
The following release notes are for Azure File Sync version 15.0.0.0 (released M
- Reliability and telemetry improvements for cloud tiering and sync. ### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
The following release notes are for Azure File Sync version 14.0.0.0 (released O
- Reliability and telemetry improvements for cloud tiering and sync. ### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it is compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
+Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
### Agent installation and server configuration For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
storage File Sync Replace Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-replace-server.md
+
+ Title: Replace an Azure File Sync server
+description: How to replace an Azure File Sync server due to hardware decommissioning or end of support.
+++ Last updated : 02/27/2024+++
+# Replace an Azure File Sync server
+
+This article provides guidance on how to replace an Azure File Sync server due to hardware decommissioning or end of support (for example, Windows Server 2012 R2).
+
+## New Azure File Sync server
+1. Deploy a new on-premises server or Azure virtual machine that is running a [supported Windows Server operating system version](file-sync-planning.md#operating-system-requirements).
+2. [Install the latest Azure File Sync agent](file-sync-deployment-guide.md#install-the-azure-file-sync-agent) on the new server, then [register the server](file-sync-deployment-guide.md#register-windows-server-with-storage-sync-service) to the same Storage Sync Service as the server that is being replaced (referred to as old server in this guide).
+3. Create file shares on the new server and verify the share-level permissions match the permissions configured on the old server.
+4. Optional: To reduce the amount of data that needs to be downloaded to the new server from the Azure file share, use Robocopy to copy the files in the cache from the old server to the new server.
+
+ ```console
+   Robocopy <source> <destination> /COPY:DATSO /MIR /DCOPY:AT /XA:O /B /IT /UNILOG:RobocopyLog.txt
+ ```
+ Once the initial copy is completed, run the Robocopy command again to copy any remaining changes.
+
+5. In the Azure portal, navigate to the Storage Sync Service. Go to the sync group which has a server endpoint for the old server and [create a server endpoint](file-sync-server-endpoint-create.md#create-a-server-endpoint) on the new server. Repeat this step for every sync group that has a server endpoint for the old server.
+
+ For example, if the old server has 4 server endpoints (four sync groups), 4 server endpoints should be created on the new server.
+
+6. Wait for the namespace download to complete to the new server. To monitor progress, see [How do I monitor the progress of a current sync session?](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?tabs=portal1%2Cazure-portal#how-do-i-monitor-the-progress-of-a-current-sync-session).
+
+## User cut-over
+To redirect user access to the new Azure File Sync server, perform one of the following options:
+- Option #1: Rename the old server to a random name, then rename the new server to the same name as the old server.
+- Option #2: Use [Distributed File System Namespaces (DFS-N)](/windows-server/storage/dfs-namespaces/dfs-overview) to redirect users to the new server.
+
+## Old Azure File Sync server
+1. Follow the steps in the [Deprovision or delete your Azure File Sync server endpoint](file-sync-server-endpoint-delete.md#scenario-1-you-intend-to-delete-your-server-endpoint-and-stop-using-your-local-server--vm) documentation to verify that all files have synced to the Azure file share prior to deleting one or more server endpoints on the old server.
+2. Once all server endpoints are deleted on the old server, you can [unregister the server](file-sync-server-registration.md#unregister-the-server).
stream-analytics Blob Storage Azure Data Lake Gen2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-storage-azure-data-lake-gen2-output.md
Title: Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics
-description: This article describes blob storage and Azure Data Lake Gen 2 as output for Azure Stream Analytics.
+ Title: Blob storage and Azure Data Lake Gen2 output
+description: This article describes blob storage and Azure Data Lake Gen 2 as output for an Azure Stream Analytics job.
Previously updated : 10/12/2022 Last updated : 02/27/2024 # Blob storage and Azure Data Lake Gen2 output from Azure Stream Analytics
The following table lists the property names and their descriptions for creating
| Output alias | A friendly name used in queries to direct the query output to this blob storage. | | Storage account | The name of the storage account where you're sending your output. | | Storage account key | The secret key associated with the storage account. |
-| Container | A logical grouping for blobs stored in the Azure Blob service. When you upload a blob to the Blob service, you must specify a container for that blob. <br /><br /> Dynamic container name is optional. It supports one and only one dynamic {field} in the container name. The field must exist in the output data, and follow the [container name policy](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).<br /><br />The field data type must be `string`. To use multiple dynamic fields, or combine static text along with dynamic field, you can define it in the query with built-in string functions, like CONCAT, LTRIM, etc. |
+| Container | A logical grouping for blobs stored in the Azure Blob service. When you upload a blob to the Blob service, you must specify a container for that blob. <br /><br /> A dynamic container name is optional. It supports one and only one dynamic `{field}` in the container name. The field must exist in the output data and follow the [container name policy](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).<br /><br />The field data type must be `string`. To use multiple dynamic fields, or to combine static text with a dynamic field, you can define it in the query with built-in string functions, like CONCAT and LTRIM (see the sketch after this table). |
| Event serialization format | Serialization format for output data. JSON, CSV, Avro, and Parquet are supported. Delta Lake is listed as an option here. The data is in Parquet format if Delta Lake is selected. Learn more about [Delta Lake](write-to-delta-lake.md) | | Delta path name | Required when Event serialization format is Delta Lake. The path that is used to write the delta lake table within the specified container. It includes the table name. [More details and examples.](write-to-delta-lake.md) | |Write Mode | Write mode controls the way Azure Stream Analytics writes to output file. Exactly once delivery only happens when write mode is Once. More information in the next section. |
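To illustrate the dynamic container name described in the **Container** row above, here's a minimal sketch; the input and output aliases, field names, and the `logs-` prefix are hypothetical. The output's **Container** property would then be set to `{containerName}`.

```sql
-- Minimal sketch (hypothetical names): compose a single string field in the query,
-- then reference it as {containerName} in the output's Container property.
SELECT
    CONCAT('logs-', LOWER(LTRIM(Region))) AS containerName,
    DeviceId,
    Temperature
INTO
    [blob-output]
FROM
    [eventhub-input]
```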
stream-analytics Stream Analytics Scale Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-scale-jobs.md
Previously updated : 06/22/2017 Last updated : 02/27/2024 # Scale an Azure Stream Analytics job to increase throughput This article shows you how to tune a Stream Analytics query to increase throughput for Streaming Analytics jobs. You can use the following guide to scale your job to handle higher load and take advantage of more system resources (such as more bandwidth, more CPU resources, more memory).
-As a prerequisite, you may need to read the following articles:
-- [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md)-- [Create parallelizable jobs](stream-analytics-parallelization.md)+
+As a prerequisite, read the following articles:
+
+- [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md)
+- [Create parallelizable jobs](stream-analytics-parallelization.md)
## Case 1 – Your query is inherently fully parallelizable across input partitions If your query is inherently fully parallelizable across input partitions, you can follow these steps:
-1. Author your query to be embarrassingly parallel by using **PARTITION BY** keyword. See more details in the Embarrassingly parallel jobs section [on this page](stream-analytics-parallelization.md).
-2. Depending on output types used in your query, some output may either be not parallelizable, or need further configuration to be embarrassingly parallel. For example, Power BI output isn't parallelizable. Outputs are always merged before sending to the output sink. Blobs, Tables, ADLS, Service Bus, and Azure Function are automatically parallelized. SQL and Azure Synapse Analytics outputs have an option for parallelization. Event Hubs need to have the PartitionKey configuration set to match with the **PARTITION BY** field (usually PartitionId). For Event Hubs, also pay extra attention to match the number of partitions for all inputs and all outputs to avoid cross-over between partitions.
-3. Run your query with **1 SU V2** (which is the full capacity of a single computing node) to measure maximum achievable throughput, and if you're using **GROUP BY**, measure how many groups (cardinality) the job can handle. General symptoms of the job hitting system resource limits are the following.
- - SU % utilization metric is over 80%. This indicates memory usage is high. The factors contributing to the increase of this metric are described [here](stream-analytics-streaming-unit-consumption.md).
- - Output timestamp is falling behind with respect to wall clock time. Depending on your query logic, the output timestamp may have a logic offset from the wall clock time. However, they should progress at roughly the same rate. If the output timestamp is falling further and further behind, it's an indicator that the system is overworking. It can be a result of downstream output sink throttling, or high CPU utilization. We don't provide CPU utilization metric at this time, so it can be difficult to differentiate the two.
- - If the issue is due to sink throttling, you may need to increase the number of output partitions (and also input partitions to keep the job fully parallelizable), or increase the amount of resources of the sink (for example number of Request Units for Cosmos DB).
- - In job diagram, there's a per partition backlog event metric for each input. If the backlog event metric keeps increasing, it's also an indicator that the system resource is constrained (either because of output sink throttling, or high CPU).
-4. Once you have determined the limits of what a 1 SU V2 job can reach, you can extrapolate linearly the processing capacity of the job as you add more SUs, assuming you don't have any data skew that makes certain partition "hot."
+
+- Author your query to be embarrassingly parallel by using the **PARTITION BY** keyword (a minimal sketch follows this list). For more information, see [Use query parallelization in Azure Stream Analytics](stream-analytics-parallelization.md).
+- Depending on the output types used in your query, some outputs either aren't parallelizable or need further configuration to be embarrassingly parallel. For example, Power BI output isn't parallelizable; outputs are always merged before they're sent to the output sink. Blobs, Tables, Azure Data Lake Storage, Service Bus, and Azure Functions are automatically parallelized. SQL and Azure Synapse Analytics outputs have an option for parallelization. An event hub needs to have the PartitionKey configuration set to match the **PARTITION BY** field (usually `PartitionId`). For Event Hubs, also pay extra attention to matching the number of partitions for all inputs and all outputs to avoid cross-over between partitions.
+- Run your query with **1 streaming unit (SU) V2** (which is the full capacity of a single computing node) to measure the maximum achievable throughput, and if you're using **GROUP BY**, measure how many groups (cardinality) the job can handle. General symptoms of the job hitting system resource limits are the following:
+ - The streaming unit (SU) % utilization metric is over 80%, which indicates that memory usage is high. The factors contributing to the increase of this metric are described in [Understand and adjust Stream Analytics streaming units](stream-analytics-streaming-unit-consumption.md).
+ - The output timestamp is falling behind with respect to wall clock time. Depending on your query logic, the output timestamp can have a logic offset from the wall clock time. However, they should progress at roughly the same rate. If the output timestamp is falling further and further behind, it's an indicator that the system is overworking. It can be a result of downstream output sink throttling, or high CPU utilization. We don't provide a CPU utilization metric at this time, so it can be difficult to differentiate the two.
+ - If the issue is due to sink throttling, you might need to increase the number of output partitions (and also input partitions to keep the job fully parallelizable), or increase the amount of resources of the sink (for example, the number of Request Units for Azure Cosmos DB).
+ - In the job diagram, there's a per partition backlog event metric for each input. If the backlog event metric keeps increasing, itΓÇÖs also an indicator that the system resource is constrained (either because of output sink throttling, or high CPU).
+- Once you have determined the limits of what a one SU V2 job can reach, you can extrapolate linearly the processing capacity of the job as you add more SUs, assuming you don't have any data skew that makes certain partitions "hot."
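The following sketch shows what an embarrassingly parallel query can look like; the aliases, field names, and window size are hypothetical, and at compatibility level 1.2 and later input partitioning is native, so the explicit **PARTITION BY** clause may not be needed.

```sql
-- Minimal sketch (hypothetical names): each input partition is processed independently,
-- and PartitionId is carried through GROUP BY so the output stays partition-aligned.
SELECT
    System.Timestamp() AS WindowEnd,
    TollBoothId,
    PartitionId,
    COUNT(*) AS CarCount
INTO
    [eventhub-output]
FROM
    [eventhub-input] TIMESTAMP BY EntryTime
    PARTITION BY PartitionId
GROUP BY
    TumblingWindow(minute, 3), TollBoothId, PartitionId
```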
> [!NOTE]
-> Choose the right number of Streaming Units:
+> Choose the right number of streaming units:
> Because Stream Analytics creates a processing node for each 1 SU V2 added, it's best to make the number of nodes a divisor of the number of input partitions, so the partitions can be evenly distributed across the nodes. For example, suppose you've measured that your 1 SU V2 job can achieve a 4 MB/s processing rate, and your input partition count is 4. You can choose to run your job with 2 SU V2s to achieve roughly an 8 MB/s processing rate, or 4 SU V2s to achieve 16 MB/s. You can then decide when, and by how much, to increase the number of SUs for the job as a function of your input rate. ## Case 2 - If your query isn't embarrassingly parallel.
-If your query isn't embarrassingly parallel, you can follow the following steps.
-1. Start with a query with no **PARTITION BY** first to avoid partitioning complexity, and run your query with 1 SU V2 to measure maximum load as in [Case 1](#case-1--your-query-is-inherently-fully-parallelizable-across-input-partitions).
-2. If you can achieve your anticipated load in term of throughput, you're done. Alternatively, you may choose to measure the same job running with fractional nodes at 2/3 SU V2 and 1/3 SU V2, to find out the minimum number of streaming units that works for your scenario.
-3. If you can't achieve the desired throughput, try to break your query into multiple steps if possible if it doesn't have multiple steps already, and allocate up to 1 SU V2 for each step in the query. For example if you have 3 steps, allocate 3 SU V2s in the "Scale" option.
-4. When running such a job, Stream Analytics puts each step on its own node with dedicated 1 SU V2 resource.
-5. If you still haven't achieved your load target, you can attempt to use **PARTITION BY** starting from steps closer to the input. For **GROUP BY** operator that may not be naturally partitionable, you can use the local/global aggregate pattern to perform a partitioned **GROUP BY** followed by a non-partitioned **GROUP BY**. For example, if you want to count how many cars going through each toll booth every 3 minutes, and the volume of the data is beyond what can be handled by 1 SU V2.
+If your query isn't embarrassingly parallel, you can follow these steps.
+
+- Start with a query with no **PARTITION BY** first to avoid partitioning complexity, and run your query with 1 SU V2 to measure maximum load as in [Case 1](#case-1--your-query-is-inherently-fully-parallelizable-across-input-partitions).
+- If you can achieve your anticipated load in terms of throughput, you're done. Alternatively, you can choose to measure the same job running with fractional nodes at 2/3 SU V2 and 1/3 SU V2, to find out the minimum number of streaming units that works for your scenario.
+- If you can't achieve the desired throughput, try to break your query into multiple steps if it doesn't have multiple steps already, and allocate up to one SU V2 for each step in the query. For example, if you have three steps, allocate three SU V2s in the "Scale" option.
+- When you run such a job, Stream Analytics puts each step on its own node with a dedicated one SU V2 resource.
+- If you still haven't achieved your load target, you can attempt to use **PARTITION BY** starting from steps closer to the input. For a **GROUP BY** operator that isn't naturally partitionable, you can use the local/global aggregate pattern to perform a partitioned **GROUP BY** followed by a nonpartitioned **GROUP BY**. For example, suppose you want to count how many cars go through each toll booth every 3 minutes, and the volume of the data is beyond what one SU V2 can handle.
Query:
Query:
FROM Step1 GROUP BY TumblingWindow(minute, 3), TollBoothId ```
-In the query above, you're counting cars per toll booth per partition, and then adding the count from all partitions together.
+In the query, you're counting cars per toll booth per partition, and then adding the count from all partitions together.
-Once partitioned, for each partition of the step, allocate 1 SU V2 so each partition can be placed on its own processing node.
+Once partitioned, for each partition of the step, allocate one SU V2 so each partition can be placed on its own processing node.
> [!Note]
-> If your query cannot be partitioned, adding additional SU in a multi-steps query may not always improve throughput. One way to gain performance is to reduce volume on the initial steps using local/global aggregate pattern, as described above in step 5.
+> If your query can't be partitioned, adding more SUs in a multi-step query might not always improve throughput. One way to gain performance is to reduce volume on the initial steps by using the local/global aggregate pattern, as described earlier.
## Case 3 - You're running lots of independent queries in a job.
-For certain ISV use cases, where it's more cost-efficient to process data from multiple tenants in a single job, using separate inputs and outputs for each tenant, you may end up running quite a few (for example 20) independent queries in a single job. The assumption is each such subquery's load is relatively small.
-In this case, you can follow the following steps.
-1. In this case, don't use **PARTITION BY** in the query
-2. Reduce the input partition count to the lowest possible value of 2 if you're using Event Hubs.
-3. Run the query with 1 SU V2. With expected load for each subquery, add as many such subqueries as possible, until the job is hitting system resource limits. Refer to [Case 1](#case-1--your-query-is-inherently-fully-parallelizable-across-input-partitions) for the symptoms when this happens.
-4. Once you're hitting the subquery limit measured above, start adding the subquery to a new job. The number of jobs to run as a function of the number of independent queries should be fairly linear, assuming you don't have any load skew. You can then forecast how many 1 SU V2 jobs you need to run as a function of the number of tenants you would like to serve.
-5. When using reference data join with such queries, union the inputs together before joining with the same reference data. Then, split out the events if necessary. Otherwise, each reference data join keeps a copy of reference data in memory, likely blowing up the memory usage unnecessarily.
+For certain ISV use cases, where it's more cost-efficient to process data from multiple tenants in a single job, using separate inputs and outputs for each tenant, you might end up running quite a few (for example, 20) independent queries in a single job. The assumption is that each such subquery's load is relatively small.
+
+In this case, you can follow these steps.
+
+- Don't use **PARTITION BY** in the query.
+- Reduce the input partition count to the lowest possible value of 2 if you're using Event Hubs.
+- Run the query with one SU V2. With the expected load for each subquery, add as many such subqueries as possible, until the job hits system resource limits. Refer to [Case 1](#case-1--your-query-is-inherently-fully-parallelizable-across-input-partitions) for the symptoms when this happens.
+- Once you hit the subquery limit measured above, start adding subqueries to a new job. The number of jobs to run as a function of the number of independent queries should be fairly linear, assuming you don't have any load skew. You can then forecast how many one SU V2 jobs you need to run as a function of the number of tenants you would like to serve.
+- When using reference data join with such queries, union the inputs together before joining with the same reference data. Then, split out the events if necessary. Otherwise, each reference data join keeps a copy of reference data in memory, likely blowing up the memory usage unnecessarily.
> [!Note] > How many tenants to put in each job?
stream-analytics Stream Analytics Solution Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-solution-patterns.md
Previously updated : 06/21/2019 Last updated : 02/27/2024 # Azure Stream Analytics solution patterns
Like many other services in Azure, Stream Analytics is best used with other serv
## Create a Stream Analytics job to power real-time dashboarding experience
-With Azure Stream Analytics, you can quickly stand up real-time dashboards and alerts. A simple solution ingests events from Event Hubs or IoT Hub, and [feeds the Power BI dashboard with a streaming data set](/power-bi/service-real-time-streaming). For more information, see the detailed tutorial [Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard](stream-analytics-real-time-fraud-detection.md).
+With Azure Stream Analytics, you can quickly create real-time dashboards and alerts. A simple solution ingests events from Event Hubs or IoT Hub, and [feeds the Power BI dashboard with a streaming data set](/power-bi/service-real-time-streaming). For more information, see the detailed tutorial [Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard](stream-analytics-real-time-fraud-detection.md).
-![ASA Power BI dashboard](media/stream-analytics-solution-patterns/power-bi-dashboard.png)
-This solution can be built in just a few minutes from Azure portal. There is no extensive coding involved, and SQL language is used to express the business logic.
+
+You can build this solution in just a few minutes using the Azure portal. You don't need to code extensively. Instead, you can use SQL language to express the business logic.
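For example, such a job can be as simple as the following hedged sketch, where the input and output aliases, field names, and window size are hypothetical:

```sql
-- Minimal sketch (hypothetical names): aggregate incoming events every 10 seconds
-- and push the rolling counts to a Power BI streaming dataset output.
SELECT
    System.Timestamp() AS WindowEnd,
    Region,
    COUNT(*) AS EventCount
INTO
    [powerbi-dashboard]
FROM
    [eventhub-input] TIMESTAMP BY EventTime
GROUP BY
    TumblingWindow(second, 10), Region
```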
This solution pattern offers the lowest latency from the event source to the Power BI dashboard in a browser. Azure Stream Analytics is the only Azure service with this built-in capability. ## Use SQL for dashboard
-The Power BI dashboard offers low latency, but it cannot be used to produce full fledged Power BI reports. A common reporting pattern is to output your data to SQL Database first. Then use Power BI's SQL connector to query SQL for the latest data.
+The Power BI dashboard offers low latency, but you can't use it to produce full-fledged Power BI reports. A common reporting pattern is to output your data to SQL Database first. Then use Power BI's SQL connector to query SQL for the latest data.
-![ASA SQL dashboard](media/stream-analytics-solution-patterns/sql-dashboard.png)
-Using SQL Database gives you more flexibility but at the expense of a slightly higher latency. This solution is optimal for jobs with latency requirements greater than one second. With this method, you can maximize Power BI capabilities to further slice and dice the data for reports, and much more visualization options. You also gain the flexibility of using other dashboard solutions, such as Tableau.
+When you use SQL Database, you get more flexibility at the expense of a slightly higher latency. This solution is optimal for jobs with latency requirements greater than one second. With this method, you can maximize Power BI capabilities to further slice and dice the data for reports, and you have many more visualization options. You also gain the flexibility of using other dashboard solutions, such as Tableau.
-SQL is not a high throughput data store. The maximum throughput to SQL Database from Azure Stream Analytics is currently around 24 MB/s. If the event sources in your solution produce data at a higher rate, you need to use processing logic in Stream Analytics to reduce the output rate to SQL. Techniques such as filtering, windowed aggregates, pattern matching with temporal joins, and analytic functions can be used. The output rate to SQL can be further optimized using techniques described in [Azure Stream Analytics output to Azure SQL Database](stream-analytics-sql-output-perf.md).
+SQL isn't a high throughput data store. The maximum throughput to SQL Database from Azure Stream Analytics is currently around 24 MB/s. If the event sources in your solution produce data at a higher rate, you need to use processing logic in Stream Analytics to reduce the output rate to SQL. You can use techniques such as filtering, windowed aggregates, pattern matching with temporal joins, and analytic functions. You can optimize the output rate to SQL using techniques described in [Azure Stream Analytics output to Azure SQL Database](stream-analytics-sql-output-perf.md).
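For example, a windowed aggregate such as the following sketch (the aliases, field names, and window size are hypothetical) emits one row per device per minute instead of every raw event:

```sql
-- Minimal sketch (hypothetical names): reduce the output rate to SQL Database by
-- emitting one aggregated row per device per minute.
SELECT
    System.Timestamp() AS WindowEnd,
    DeviceId,
    AVG(Temperature) AS AvgTemperature,
    COUNT(*) AS EventCount
INTO
    [sql-database-output]
FROM
    [eventhub-input] TIMESTAMP BY EventTime
GROUP BY
    TumblingWindow(minute, 1), DeviceId
```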
## Incorporate real-time insights into your application with event messaging
-The second most popular use of Stream Analytics is to generate real-time alerts. In this solution pattern, business logic in Stream Analytics can be used to detect [temporal and spatial patterns](stream-analytics-geospatial-functions.md) or [anomalies](stream-analytics-machine-learning-anomaly-detection.md), then produce alerting signals. However, unlike the dashboard solution where Stream Analytics uses Power BI as a preferred endpoint, a number of intermediate data sinks can be used. These sinks include Event Hubs, Service Bus, and Azure Functions. You, as the application builder, need to decide which data sink works best for your scenario.
+The second most popular use of Stream Analytics is to generate real-time alerts. In this solution pattern, business logic in Stream Analytics can be used to detect [temporal and spatial patterns](stream-analytics-geospatial-functions.md) or [anomalies](stream-analytics-machine-learning-anomaly-detection.md), then produce alerting signals. However, unlike the dashboard solution where Stream Analytics uses Power BI as a preferred endpoint, you can use other intermediate data sinks. These sinks include Event Hubs, Service Bus, and Azure Functions. You, as the application builder, need to decide which data sink works best for your scenario.
-Downstream event consumer logic must be implemented to generate alerts in your existing business workflow. Because you can implement custom logic in Azure Functions, Azure Functions is the fastest way you can perform this integration. A tutorial for using Azure Function as the output for a Stream Analytics job can be found in [Run Azure Functions from Azure Stream Analytics jobs](stream-analytics-with-azure-functions.md). Azure Functions also supports various types of notifications including text and email. Logic App may also be used for such integration, with Event Hubs between Stream Analytics and Logic App.
+You need to implement the downstream event consumer logic to generate alerts in your existing business workflow. Because you can implement custom logic in Azure Functions, Azure Functions is the fastest way you can perform this integration. For a tutorial on using Azure Functions as the output for a Stream Analytics job, see [Run Azure Functions from Azure Stream Analytics jobs](stream-analytics-with-azure-functions.md). Azure Functions also supports various types of notifications, including text and email. You can also use Logic Apps for such integration, with Event Hubs between Stream Analytics and Logic Apps.
-![ASA event messaging app](media/stream-analytics-solution-patterns/event-messaging-app.png)
-Event Hubs, on the other hand, offers the most flexible integration point. Many other services, like Azure Data Explorer and Time Series Insights can consume events from Event Hubs. Services can be connected directly to the Event Hubs sink from Azure Stream Analytics to complete the solution. Event Hubs is also the highest throughput messaging broker available on Azure for such integration scenarios.
+The Azure Event Hubs service, on the other hand, offers the most flexible integration point. Many other services, like Azure Data Explorer and Time Series Insights, can consume events from Event Hubs. Services can be connected directly to the Event Hubs sink from Azure Stream Analytics to complete the solution. Event Hubs is also the highest throughput messaging broker available on Azure for such integration scenarios.
## Dynamic applications and websites
-You can create custom real-time visualizations, such as dashboard or map visualization, using Azure Stream Analytics and Azure SignalR Service. Using SignalR, web clients can be updated and show dynamic content in real-time.
+You can create custom real-time visualizations, such as dashboard or map visualization, using Azure Stream Analytics and Azure SignalR Service. When you use SignalR, web clients can be updated and show dynamic content in real-time.
-![ASA dynamic app](media/stream-analytics-solution-patterns/dynamic-app.png)
## Incorporate real-time insights into your application through data stores
Most web services and web applications today use a request/response pattern to s
High data volume often creates performance bottlenecks in a CRUD-based system. The [event sourcing solution pattern](/azure/architecture/patterns/event-sourcing) is used to address the performance bottlenecks. Temporal patterns and insights are also difficult and inefficient to extract from a traditional data store. Modern high-volume data driven applications often adopt a dataflow-based architecture. Azure Stream Analytics as the compute engine for data in motion is a linchpin in that architecture.
-![ASA event sourcing app](media/stream-analytics-solution-patterns/event-sourcing-app.png)
In this solution pattern, events are processed and aggregated into data stores by Azure Stream Analytics. The application layer interacts with data stores using the traditional request/response pattern. Because of Stream Analytics' ability to process a large number of events in real-time, the application is highly scalable without the need to bulk up the data store layer. The data store layer is essentially a materialized view in the system. [Azure Stream Analytics output to Azure Cosmos DB](stream-analytics-documentdb-output.md) describes how Azure Cosmos DB is used as a Stream Analytics output.
-In real applications where processing logic is complex and there is the need to upgrade certain parts of the logic independently, multiple Stream Analytics jobs can be composed together with Event Hubs as the intermediary event broker.
+In real applications where processing logic is complex and there's the need to upgrade certain parts of the logic independently, multiple Stream Analytics jobs can be composed together with Event Hubs as the intermediary event broker.
-![ASA complex event sourcing app](media/stream-analytics-solution-patterns/event-sourcing-app-complex.png)
-This pattern improves the resiliency and manageability of the system. However, even though Stream Analytics guarantees exactly once processing, there is a small chance that duplicate events may land in the intermediary Event Hubs. It's important for the downstream Stream Analytics job to dedupe events using logic keys in a lookback window. For more information on event delivery, see [Event Delivery Guarantees](/stream-analytics-query/event-delivery-guarantees-azure-stream-analytics) reference.
+This pattern improves the resiliency and manageability of the system. However, even though Stream Analytics guarantees exactly once processing, there's a small chance that duplicate events land in the intermediary Event Hubs. It's important for the downstream Stream Analytics job to dedupe events using logic keys in a lookback window. For more information on event delivery, see [Event Delivery Guarantees](/stream-analytics-query/event-delivery-guarantees-azure-stream-analytics) reference.
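One way to express such a dedupe step is sketched below; the aliases, field names, and the 10-minute lookback window are hypothetical, and other dedupe patterns work as well.

```sql
-- Minimal sketch (hypothetical names): keep an event only if no event with the same
-- EventId was seen in the preceding 10 minutes.
WITH FirstSeen AS (
    SELECT
        EventId,
        EventTime,
        Payload,
        LAG(EventId) OVER (PARTITION BY EventId LIMIT DURATION(minute, 10)) AS PreviousId
    FROM
        [eventhub-input] TIMESTAMP BY EventTime
)
SELECT
    EventId,
    EventTime,
    Payload
INTO
    [deduped-output]
FROM
    FirstSeen
WHERE
    PreviousId IS NULL
```

The longer the lookback window, the more state the job keeps in memory, so size it to the realistic duplication window rather than making it arbitrarily large.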
## Use reference data for application customization
The Azure Stream Analytics reference data feature is designed specifically for e
This pattern can also be used to implement a rules engine where the thresholds of the rules are defined from reference data. For more information on rules, see [Process configurable threshold-based rules in Azure Stream Analytics](stream-analytics-threshold-based-rules.md).
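As a hedged sketch of that pattern (the inputs, fields, and threshold column are hypothetical), the following query joins the stream with a reference data input and filters on a per-device threshold:

```sql
-- Minimal sketch (hypothetical names): [device-rules] is a reference data input that
-- stores a threshold per device; events above their device's threshold become alerts.
SELECT
    t.DeviceId,
    t.Temperature,
    r.MaxTemperature
INTO
    [alerts-output]
FROM
    [telemetry-input] t TIMESTAMP BY EventTime
JOIN
    [device-rules] r
    ON t.DeviceId = r.DeviceId
WHERE
    t.Temperature > r.MaxTemperature
```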
-![ASA reference data app](media/stream-analytics-solution-patterns/reference-data-app.png)
## Add Machine Learning to your real-time insights
Azure Stream Analytics' built-in [Anomaly Detection model](stream-analytics-mach
For advanced users who want to incorporate online training and scoring into the same Stream Analytics pipeline, see this example of how to do that with [linear regression](stream-analytics-high-frequency-trading.md).
-![ASA Machine Learning app](media/stream-analytics-solution-patterns/machine-learning-app.png)
## Real-time data warehousing
-Another common pattern is real-time data warehousing, also called streaming data warehouse. In addition to events arriving at Event Hubs and IoT Hub from your application, [Azure Stream Analytics running on IoT Edge](stream-analytics-edge.md) can be used to fulfill data cleansing, data reduction, and data store and forward needs. Stream Analytics running on IoT Edge can gracefully handle bandwidth limitation and connectivity issues in the system. Stream Analytics can support throughput rates of upto 200MB/sec while writing to Azure Synapse Analytics.
-
-![ASA Data Warehousing](media/stream-analytics-solution-patterns/data-warehousing.png)
+Another common pattern is real-time data warehousing, also called streaming data warehouse. In addition to events arriving at Event Hubs and IoT Hub from your application, [Azure Stream Analytics running on IoT Edge](stream-analytics-edge.md) can be used to fulfill data cleansing, data reduction, and data store and forward needs. Stream Analytics running on IoT Edge can gracefully handle bandwidth limitation and connectivity issues in the system. Stream Analytics can support throughput rates of up to 200 MB/sec while writing to Azure Synapse Analytics.
## Archiving real-time data for analytics
-Most data science and analytics activities still happen offline. Data can be archived by Azure Stream Analytics through Azure Data Lake Store Gen2 output and Parquet output formats. This capability removes the friction to feed data directly into Azure Data Lake Analytics, Azure Databricks, and Azure HDInsight. Azure Stream Analytics is used as a near real-time ETL engine in this solution. You can explore archived data in Data Lake using various compute engines.
+Most data science and analytics activities still happen offline. You can archive data in Azure Stream Analytics through Azure Data Lake Store Gen2 output and Parquet output formats. This capability removes the friction to feed data directly into Azure Data Lake Analytics, Azure Databricks, and Azure HDInsight. Azure Stream Analytics is used as a near real-time Extract-Transform-Load (ETL) engine in this solution. You can explore archived data in Data Lake using various compute engines.
-> [!div class="mx-imgBorder"]
-> ![ASA offline analytics](media/stream-analytics-solution-patterns/offline-analytics.png)
## Use reference data for enrichment Data enrichment is often a requirement for ETL engines. Azure Stream Analytics supports data enrichment with [reference data](stream-analytics-use-reference-data.md) from both SQL Database and Azure Blob storage. Data enrichment can be done for data landing in both Azure Data Lake and Azure Synapse Analytics. -
-![ASA offline analytics with data enrichment](media/stream-analytics-solution-patterns/offline-analytics-enriched.png)
## Operationalize insights from archived data If you combine the offline analytics pattern with the near real-time application pattern, you can create a feedback loop. The feedback loop lets the application automatically adjust for changing patterns in the data. This feedback loop can be as simple as changing the threshold value for alerting, or as complex as retraining Machine Learning models. The same solution architecture can be applied to both ASA jobs running in the cloud and on IoT Edge.
-![ASA insights operationalization](media/stream-analytics-solution-patterns/insights-operationalization.png)
## How to monitor ASA jobs
-An Azure Stream Analytics job can be run 24/7 to process incoming events continuously in real time. Its uptime guarantee is crucial to the health of the overall application. While Stream Analytics is the only streaming analytics service in the industry that offers a [99.9% availability guarantee](https://azure.microsoft.com/support/legal/sl).
+An Azure Stream Analytics job can run 24/7 to process incoming events continuously in real time. Its uptime guarantee is crucial to the health of the overall application. Stream Analytics is the only streaming analytics service in the industry that offers a [99.9% availability guarantee](https://azure.microsoft.com/support/legal/sl).
-![ASA monitoring](media/stream-analytics-solution-patterns/monitoring.png)
There are two key things to monitor: - [Job failed state](job-states.md)
- First and foremost, you need to make sure the job is running. Without the job in the running state, no new metrics or logs are generated. Jobs can change to a failed state for various reasons, including having a high SU utilization level (i.e., running out of resources).
+ First and foremost, you need to make sure the job is running. Without the job in the running state, no new metrics or logs are generated. Jobs can change to a failed state for various reasons, including having a high SU utilization level (that is, running out of resources).
- [Watermark delay metrics](https://azure.microsoft.com/blog/new-metric-in-azure-stream-analytics-tracks-latency-of-your-streaming-pipeline/)
Upon failure, activity logs and [diagnostics logs](stream-analytics-job-diagnost
Regardless of Azure Stream Analytics' SLA guarantee and how careful you run your end-to-end application, outages happen. If your application is mission critical, you need to be prepared for outages in order to recover gracefully.
-For alerting applications, the most important thing is to detect the next alert. You may choose to restart the job from the current time when recovering, ignoring past alerts. The job start time semantics are by the first output time, not the first input time. The input is rewound backwards an appropriate amount of time to guarantee the first output at the specified time is complete and correct. You won't get partial aggregates and trigger alerts unexpectedly as a result.
+For alerting applications, the most important thing is to detect the next alert. You can choose to restart the job from the current time when recovering, ignoring past alerts. The job start time is defined by the first output time, not the first input time. The input is rewound backwards an appropriate amount of time to guarantee that the first output at the specified time is complete and correct. You won't get partial aggregates and trigger alerts unexpectedly as a result.
-You may also choose to start output from some amount of time in the past. Both Event Hubs and IoT Hub's retention policies hold a reasonable amount of data to allow processing from the past. The tradeoff is how fast you can catch up to the current time and start to generate timely new alerts. Data loses its value rapidly over time, so it's important to catch up to the current time quickly. There are two ways to catch up quickly:
+You can also choose to start output from some amount of time in the past. Both Event Hubs and IoT Hub's retention policies hold a reasonable amount of data to allow processing from the past. The tradeoff is how fast you can catch up to the current time and start to generate timely new alerts. Data loses its value rapidly over time, so it's important to catch up to the current time quickly. There are two ways to catch up quickly:
- Provision more resources (SU) when catching up. - Restart from current time.
Provisioning more resources can speed up the process, but the effect of having a
- Make sure there are enough partitions in the upstream Event Hubs or IoT Hub that you can add more Throughput Units (TUs) to scale the input throughput. Remember, each Event Hubs TU maxes out at an output rate of 2 MB/s. -- Make sure you have provisioned enough resources in the output sinks (i.e., SQL Database, Azure Cosmos DB), so they don't throttle the surge in output, which can sometimes cause the system to lock up.
+- Make sure you have provisioned enough resources in the output sinks (that is, SQL Database, Azure Cosmos DB), so they don't throttle the surge in output, which can sometimes cause the system to lock up.
The most important thing is to anticipate the processing rate change, test these scenarios before going into production, and be ready to scale the processing correctly during failure recovery time.
-In the extreme scenario that incoming events are all delayed, [it's possible all the delayed events are dropped](stream-analytics-time-handling.md) if you have applied a late arriving window to your job. The dropping of the events may appear to be a mysterious behavior at the beginning; however, considering Stream Analytics is a real-time processing engine, it expects incoming events to be close to the wall clock time. It has to drop events that violate these constraints.
+In the extreme scenario that incoming events are all delayed, [it's possible all the delayed events are dropped](stream-analytics-time-handling.md) if you have applied a late arriving window to your job. The dropping of the events might appear to be a mysterious behavior at the beginning; however, considering Stream Analytics is a real-time processing engine, it expects incoming events to be close to the wall clock time. It has to drop events that violate these constraints.
### Lambda Architectures or Backfill process
Fortunately, the previous data archiving pattern can be used to process these la
![ASA backfill](media/stream-analytics-solution-patterns/back-fill.png)
-The backfill process has to be done with an offline batch processing system, which most likely has a different programming model than Azure Stream Analytics. This means you have to re-implement the entire processing logic.
+The backfill process has to be done with an offline batch processing system, which most likely has a different programming model than Azure Stream Analytics. This means you have to reimplement the entire processing logic.
For backfilling, it's still important to at least temporarily provision more resource to the output sinks to handle higher throughput than the steady state processing needs.
For backfilling, it's still important to at least temporarily provision more res
## Putting it all together
-It's not hard to imagine that all the solution patterns mentioned above can be combined together in a complex end-to-end system. The combined system can include dashboards, alerting, event sourcing application, data warehousing, and offline analytics capabilities.
+It's not hard to imagine that all the solution patterns mentioned earlier can be combined together in a complex end-to-end system. The combined system can include dashboards, alerting, event sourcing application, data warehousing, and offline analytics capabilities.
The key is to design your system in composable patterns, so each subsystem can be built, tested, upgraded, and recover independently. ## Next steps
-You now have seen a variety of solution patterns using Azure Stream Analytics. Next, you can dive deep and create your first Stream Analytics job:
+You now have seen various solution patterns using Azure Stream Analytics. Next, you can dive deep and create your first Stream Analytics job:
* [Create a Stream Analytics job by using the Azure portal](stream-analytics-quick-create-portal.md). * [Create a Stream Analytics job by using Azure PowerShell](stream-analytics-quick-create-powershell.md).
synapse-analytics Access Data From Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/access-data-from-aml.md
+
+ Title: Access ADLSg2 data from Azure Machine Learning
+description: This article provides an overview on how you can access data in your Azure Data Lake Storage Gen 2 (ADLSg2) account directly from Azure Machine Learning.
++++ Last updated : 02/27/2024+++
+# Tutorial: Accessing Azure Synapse ADLS Gen2 Data in Azure Machine Learning
+
+In this tutorial, we'll guide you through the process of accessing data stored in Azure Synapse Azure Data Lake Storage Gen2 (ADLS Gen2) from Azure Machine Learning. This capability is especially valuable when you aim to streamline your machine learning workflow by using tools such as Automated ML, integrated model and experiment tracking, or specialized hardware like GPUs available in Azure Machine Learning.
+
+To access ADLS Gen2 data in Azure Machine Learning, we will create an Azure Machine Learning Datastore that points to the Azure Synapse ADLS Gen2 storage account.
+
+## Prerequisites
+- An [Azure Synapse Analytics workspace](../get-started-create-workspace.md). Ensure that it has an Azure Data Lake Storage Gen2 storage account configured as the default storage. For the Data Lake Storage Gen2 file system that you work with, ensure that you're the *Storage Blob Data Contributor*.
+- An [Azure Machine Learning workspace](../../machine-learning/quickstart-create-resources.md).
+
+## Install libraries
+
+First, we will install the ```azure-ai-ml``` package.
+
+```python
+%pip install azure-ai-ml
+
+```
+
+## Create a Datastore
+
+Azure Machine Learning offers a feature known as a Datastore, which acts as a reference to your existing Azure storage account. We will create a Datastore which references our Azure Synapse ADLS Gen2 storage account.
+
+In this example, we'll create a Datastore linking to our Azure Synapse ADLS Gen2 storage. After initializing an ```MLClient``` object, you can provide connection details to your ADLS Gen2 account. Finally, you can execute the code to create or update the Datastore.
+
+```python
+from azure.ai.ml.entities import AzureDataLakeGen2Datastore
+from azure.ai.ml import MLClient
+from azure.identity import DefaultAzureCredential
+
+# MLClient.from_config() reads the workspace details from a config.json file and
+# needs a credential to authenticate against the workspace.
+ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+
+# Provide the connection details to your Azure Synapse ADLS Gen2 storage account
+store = AzureDataLakeGen2Datastore(
+    name="",          # name to give the datastore in Azure Machine Learning
+    description="",   # optional description
+    account_name="",  # ADLS Gen2 storage account name
+    filesystem=""     # ADLS Gen2 filesystem (container) name
+)
+
+ml_client.create_or_update(store)
+```
+
+You can learn more about creating and managing Azure Machine Learning datastores using this [tutorial on Azure Machine Learning data stores](../../machine-learning/concept-data.md).
+
+## Mount your ADLS Gen2 Storage Account
+
+Once you have set up your data store, you can then access this data by creating a **mount** to your ADLSg2 account. In Azure Machine Learning, creating a mount to your ADLS Gen2 account entails establishing a direct link between your workspace and the storage account, enabling seamless access to the data stored within. Essentially, a mount acts as a pathway that allows Azure Machine Learning to interact with the files and folders in your ADLS Gen2 account as if they were part of the local filesystem within your workspace.
+
+Once the storage account is mounted, you can effortlessly read, write, and manipulate data stored in ADLS Gen2 using familiar filesystem operations directly within your Azure Machine Learning environment, simplifying data preprocessing, model training, and experimentation tasks.
+
+To do this:
+
+1. Start your compute engine.
+2. Select **Data Actions** and then select **Mount**.
+
+ ![Screenshot of Azure Machine Learning option to select data actions.](./media/./tutorial-access-data-from-aml/data-actions.png)
+
+3. From here, you should see and select your ADLSg2 storage account name. It might take a few moments for your mount to be created.
+4. Once your mount is ready, select **Data actions** and then **Consume**. Under **Data**, select the mount that you want to consume data from.
+
+Now, you can use your preferred libraries to directly read data from your mounted Azure Data Lake Storage account.
+
+## Read data from your storage account
+
+```python
+import os
+# List the files in the mounted path
+print(os.listdir("/home/azureuser/cloudfiles/data/datastore/{name of mount}"))
+
+# Get the path of your file and load the data using your preferred libraries
+import pandas as pd
+df = pd.read_csv("/home/azureuser/cloudfiles/data/datastore/{name of mount}/{file name}")
+print(df.head(5))
+```
+
+## Next steps
+- [Create and manage GPUs in Azure Machine Learning](../../machine-learning/how-to-train-distributed-gpu.md)
+- [Create Automated ML jobs in Azure Machine Learning](../../machine-learning/concept-automated-ml.md)
synapse-analytics Concept Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/concept-deep-learning.md
Previously updated : 04/19/2022 Last updated : 02/27/2024
Apache Spark in Azure Synapse Analytics enables machine learning with big data, providing the ability to obtain valuable insight from large amounts of structured, unstructured, and fast-moving data. There are several options when training machine learning models using Azure Spark in Azure Synapse Analytics: Apache Spark MLlib, Azure Machine Learning, and various other open-source libraries.
+> [!WARNING]
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+ ## GPU-enabled Apache Spark pools To simplify the process for creating and managing pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes. To learn more about how to create a GPU-accelerated pool, you can visit the quickstart on how to [create a GPU-accelerated pool](../quickstart-create-apache-gpu-pool-portal.md). > [!NOTE] > - GPU-accelerated pools can be created in workspaces located in East US, Australia East, and North Europe.
-> - GPU-accelerated pools are only available with the Apache Spark 3.1 and 3.2 runtime.
+> - GPU-accelerated pools are only available with the Apache Spark 3.1 (unsupported) and 3.2 runtime.
> - You might need to request a [limit increase](../spark/apache-spark-rapids-gpu.md#quotas-and-resource-constraints-in-azure-synapse-gpu-enabled-pools) in order to create GPU-enabled clusters. ## GPU ML Environment
synapse-analytics Tutorial Horovod Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md
description: Tutorial on how to run distributed training with the Horovod Estima
Previously updated : 04/19/2022 Last updated : 02/27/2024
[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
-Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime.For Spark ML pipeline applications using PyTorch, users can use the horovod.spark estimator API. This notebook uses an Apache Spark dataframe to perform distributed training of a distributed neural network (DNN) model on MNIST dataset. This tutorial leverages PyTorch and the Horovod Estimator to run the training process.
+Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime. For Spark ML pipeline applications using PyTorch, users can use the horovod.spark estimator API. This notebook uses an Apache Spark dataframe to perform distributed training of a deep neural network (DNN) model on the MNIST dataset. This tutorial uses PyTorch and the Horovod Estimator to run the training process.
## Prerequisites - [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
+> [!WARNING]
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
++ ## Configure the Apache Spark session
-At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only needs to set the numExecutors and spark.rapids.memory.gpu.reserve. For very large models, users may also need to configure the ```spark.kryoserializer.buffer.max``` setting. For Tensorflow models, users will need to set the ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to be true.
+At the start of the session, we need to configure a few Apache Spark settings. In most cases, we only need to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve``` settings. For large models, users might also need to configure the ```spark.kryoserializer.buffer.max``` setting. For TensorFlow models, users need to set ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to true.
-In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided below are the suggested, best practice values for Azure Synapse GPU-large pools.
+In the example, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided are the suggested, best practice values for Azure Synapse GPU-large pools.
```spark
For this tutorial, we will use the following configurations:
## Import dependencies
-In this tutorial, we will leverage PySpark to read and process the dataset. We will then use PyTorch and Horovod to build the distributed neural network (DNN) model and run the training process. To get started, we will need to import the following dependencies:
+In this tutorial, we use PySpark to read and process the dataset. Then, we use PyTorch and Horovod to build the deep neural network (DNN) model and run the training process. To get started, we need to import the following dependencies:
```python # base libs
from azure.synapse.ml.horovodutils import AdlsStore
## Connect to alternative storage account
-We will need the Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you will need to modify the following properties below: ```remote_url```, ```account_name```, and ```linked_service_name```.
+We need an Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you need to modify the following properties: ```remote_url```, ```account_name```, and ```linked_service_name```.
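For example, these properties might be set as follows; the values are placeholders for illustration only, not working endpoints.

```python
# Placeholder values for illustration; replace with your own storage account and linked service details
remote_url = "abfss://<container_name>@<account_name>.dfs.core.windows.net"
account_name = "<account_name>"
linked_service_name = "<linked_service_name>"
```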
```python num_proc = 3 # equal to numExecutors
train_df.count()
## Define DNN model
-Once we have finished processing our dataset, we can now define our PyTorch model. The same code could also be used to train a single-node PyTorch model.
+Once our dataset is processed, we can define our PyTorch model. The same code could also be used to train a single-node PyTorch model.
```python # Define the PyTorch model without any Horovod-specific parameters
torch_model = torch_estimator.fit(train_df).setOutputCols(['label_prob'])
## Evaluate trained model
-Once the training process has finished, we can then evaluate the model on the test dataset.
+Once the training process completes, we can then evaluate the model on the test dataset.
```python # Evaluate the model on the held-out test DataFrame
synapse-analytics Tutorial Horovod Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-tensorflow.md
[Horovod](https://github.com/horovod/horovod) is a distributed training framework for libraries like TensorFlow and PyTorch. With Horovod, users can scale up an existing training script to run on hundreds of GPUs in just a few lines of code.
-Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime.For Spark ML pipeline applications using TensorFlow, users can use ```HorovodRunner```. This notebook uses an Apache Spark dataframe to perform distributed training of a distributed neural network (DNN) model on MNIST dataset. This tutorial leverages TensorFlow and the ```HorovodRunner``` to run the training process.
+Within Azure Synapse Analytics, users can quickly get started with Horovod using the default Apache Spark 3 runtime. For Spark ML pipeline applications using TensorFlow, users can use ```HorovodRunner```. This notebook uses an Apache Spark dataframe to perform distributed training of a deep neural network (DNN) model on the MNIST dataset. This tutorial uses TensorFlow and ```HorovodRunner``` to run the training process.
## Prerequisites - [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
+> [!WARNING]
+> - The GPU accelerated preview is only available on the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date. Users are strongly advised to transition to a higher runtime version for continued functionality and security.
++ ## Configure the Apache Spark session
-At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only needs to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. For very large models, users may also need to configure the ```spark.kryoserializer.buffer.max``` setting. For TensorFlow models, users will need to set the ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to be true.
+At the start of the session, we need to configure a few Apache Spark settings. In most cases, we only need to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve``` settings. For very large models, users might also need to configure the ```spark.kryoserializer.buffer.max``` setting. For TensorFlow models, users need to set ```spark.executorEnv.TF_FORCE_GPU_ALLOW_GROWTH``` to true.
-In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided below are the suggested, best practice values for Azure Synapse GPU-large pools.
+In the example, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html). The values provided are the suggested, best practice values for Azure Synapse GPU-large pools.
```spark
In the example below, you can see how the Spark configurations can be passed wit
} ```
-For this tutorial, we will use the following configurations:
+For this tutorial, we use the following configurations:
```python
For this tutorial, we will use the following configurations:
## Setup primary storage account
-We will need the Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account.
+We need the Azure Data Lake Storage (ADLS) account for storing intermediate and model data. If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account.
-In this example, we will read from the primary Azure Synapse Analytics storage account. To do this, you will need to modify the following properties below: ```remote_url```.
+In this example, we read data from the primary Azure Synapse Analytics storage account. To do this, you need to modify the following property: ```remote_url```.
```python # Specify training parameters
remote_url = "<<abfss path to storage account>>
## Prepare dataset
-Next, we will prepare the dataset for training. In this tutorial, we will use the MNIST dataset from [Azure Open Datasets](../../open-datasets/dataset-mnist.md?tabs=azureml-opendatasets).
+Next, we prepare the dataset for training. In this tutorial, we use the MNIST dataset from [Azure Open Datasets](../../open-datasets/dataset-mnist.md?tabs=azureml-opendatasets).
```python def get_dataset(rank=0, size=1):
def get_dataset(rank=0, size=1):
## Define DNN model
-Once we have finished processing our dataset, we can now define our TensorFlow model. The same code could also be used to train a single-node TensorFlow model.
+Once our dataset is processed, we can define our TensorFlow model. The same code could also be used to train a single-node TensorFlow model.
```python # Define the TensorFlow model without any Horovod-specific parameters
def get_model():
## Define a training function for a single node
-First, we will train our TensorFlow model on the driver node of the Apache Spark pool. Once we have finished the training process, we will evaluate the model and print the loss and accuracy scores.
+First, we train our TensorFlow model on the driver node of the Apache Spark pool. Once the training process is complete, we evaluate the model and print the loss and accuracy scores.
```python
Next, we will take a look at how the same code could be re-run using ```HorovodR
### Define training function
-To do this, we will first define a training function for ```HorovodRunner```.
+To train a model, we first define a training function for ```HorovodRunner```.
```python # Define training function for Horovod runner
def train_hvd(learning_rate=0.1):
### Run training
-Once we have defined the model, we will run the training process.
+Once the model is defined, we can run the training process.
```python # Run training
best_model_bytes = \
### Save checkpoints to ADLS storage
-The code below shows how to save the checkpoints to the Azure Data Lake Storage (ADLS) account.
+The code shows how to save the checkpoints to the Azure Data Lake Storage (ADLS) account.
```python import tempfile
print(adls_ckpt_file)
### Evaluate Horovod trained model
-Once we have finished training our model, we can then take a look at the loss and accuracy for the final model.
+Once the model training is complete, we can then take a look at the loss and accuracy for the final model.
```python import tensorflow as tf
synapse-analytics Tutorial Load Data Petastorm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-load-data-petastorm.md
Previously updated : 04/19/2022 Last updated : 02/27/2024 # Load data with Petastorm (Preview)
-Petastorm is an open source data access library which enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that have already been loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as Tensorflow and PyTorch.
+Petastorm is an open source data access library that enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that are loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as TensorFlow and PyTorch.
For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/en/latest).
For more information about Petastorm, you can visit the [Petastorm GitHub page](
- [Azure Synapse Analytics workspace](../get-started-create-workspace.md) with an Azure Data Lake Storage Gen2 storage account configured as the default storage. You need to be the *Storage Blob Data Contributor* of the Data Lake Storage Gen2 file system that you work with. - Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes.
+> [!WARNING]
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
++ ## Configure the Apache Spark session
-At the start of the session, we will need to configure a few Apache Spark settings. In most cases, we only needs to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. In the example below, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html).
+At the start of the session, we need to configure a few Apache Spark settings. In most cases, we only need to set the ```numExecutors``` and ```spark.rapids.memory.gpu.reserve```. In the example, you can see how the Spark configurations can be passed with the ```%%configure``` command. The detailed meaning of each parameter is explained in the [Apache Spark configuration documentation](https://spark.apache.org/docs/latest/configuration.html).
```python %%configure -f
At the start of the session, we will need to configure a few Apache Spark settin
A dataset created using Petastorm is stored in an Apache Parquet format. On top of a Parquet schema, Petastorm also stores higher-level schema information that makes multidimensional arrays into a native part of a Petastorm dataset.
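To illustrate that schema layer, here's a minimal sketch of a Petastorm ```Unischema``` that declares a scalar field and a multidimensional array field; the field names, types, and shapes are hypothetical and aren't taken from the sample that follows.

```python
import numpy as np
from pyspark.sql.types import IntegerType
from petastorm.codecs import NdarrayCodec, ScalarCodec
from petastorm.unischema import Unischema, UnischemaField

# Hypothetical schema: a scalar id plus a 28x28x3 array stored as a native ndarray
HelloWorldSchema = Unischema('HelloWorldSchema', [
    UnischemaField('id', np.int32, (), ScalarCodec(IntegerType()), False),
    UnischemaField('image', np.uint8, (28, 28, 3), NdarrayCodec(), False),
])
```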
-In the sample below, we create a dataset using PySpark. We write the dataset to an Azure Data Lake Storage Gen2 account.
+In the sample, we create a dataset using PySpark. We write the dataset to an Azure Data Lake Storage Gen2 account.
```python import numpy as np
generate_petastorm_dataset(output_url)
The ```petastorm.reader.Reader``` class is the main entry point for user code that accesses the data from an ML framework such as Tensorflow or Pytorch. You can read a dataset using the ```petastorm.reader.Reader``` class and the ```petastorm.make_reader``` factory method.
-In the example below, you can see how you can pass an ```abfs``` URL protocol.
+In the example, you can see how you can pass an ```abfs``` URL protocol.
```python from petastorm import make_reader
with make_reader('abfs://<container_name>/<data directory path>/') as reader:
### Read dataset from secondary storage account
-If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you will need to modify the following properties below: ```remote_url```, ```account_name```, and ```linked_service_name```.
+If you are using an alternative storage account, be sure to set up the [linked service](../../data-factory/concepts-linked-services.md) to automatically authenticate and read from the account. In addition, you need to modify the following properties: ```remote_url```, ```account_name```, and ```linked_service_name```.
```python from petastorm import make_reader
with make_reader('{}/data_directory'.format(remote_url), storage_options = {'sas
### Read dataset in batches
-In the example below, you can see how you can pass an ```abfs``` URL protocol to read data in batches. This example uses the ```make_batch_reader``` class.
+In the example, you can see how you can pass an ```abfs``` URL protocol to read data in batches. This example uses the ```make_batch_reader``` class.
```python from petastorm import make_batch_reader
with make_batch_reader('abfs://<container_name>/<data directory path>/', schema_
## PyTorch API
-To read a Petastorm dataset from PyTorch, you can use the adapter ```petastorm.pytorch.DataLoader``` class. This allows for custom PyTorch collating functions and transforms to be supplied.
+To read a Petastorm dataset from PyTorch, you can use the adapter ```petastorm.pytorch.DataLoader``` class. This adapter allows for custom PyTorch collating functions and transforms to be supplied.
In this example, we show how the Petastorm DataLoader can be used to load a Petastorm dataset with the help of the make_reader API. This first section creates the definition of a ```Net``` class and the ```train``` and ```test``` functions.
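As a rough, standalone sketch of the ```DataLoader``` plus ```make_reader``` pattern (the dataset path, batch size, and field names here are placeholders, not values from this tutorial):

```python
from petastorm import make_reader
from petastorm.pytorch import DataLoader

# Placeholder path; replace with your own abfs:// dataset location
dataset_url = 'abfs://<container_name>/<data directory path>/'

with DataLoader(make_reader(dataset_url, num_epochs=1), batch_size=64) as loader:
    for batch in loader:
        # Each batch is a dictionary of tensors keyed by the Unischema field names
        print({name: tensor.shape for name, tensor in batch.items()})
        break
```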
synapse-analytics Quickstart Create Apache Gpu Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-gpu-pool-portal.md
An Apache Spark pool provides open-source big data compute capabilities where da
In this quickstart, you learn how to use the Azure portal to create an Apache Spark GPU-enabled pool in an Azure Synapse Analytics workspace.
+> [!WARNING]
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](./spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](./spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+ > [!NOTE] > Azure Synapse GPU-enabled pools are currently in Public Preview.
Sign in to the [Azure portal](https://portal.azure.com/)
|Setting | Suggested value | Description | | : | :-- | :- | | **Apache Spark pool name** | A valid pool name | This is the name that the Apache Spark pool will have. |
- | **Node size family** | Hardware Accelerated | Choose Hardware Accelerated from the drop down menu |
+ | **Node size family** | Hardware Accelerated | Choose Hardware Accelerated from the drop-down menu |
| **Node size** | Large (16 vCPU / 110 GB / 1 GPU) | Set this to the smallest size to reduce costs for this quickstart | | **Autoscale** | Disabled | We don't need autoscale for this quickstart | | **Number of nodes** | 3 | Use a small size to limit costs for this quickstart |
synapse-analytics Connectivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/connectivity-settings.md
Title: Azure Synapse connectivity settings
-description: An article that teaches you to configure connectivity settings in Azure Synapse Analytics
--- Previously updated : 03/29/2022 --
+description: Learn to configure connectivity settings in Azure Synapse Analytics.
++ Last updated : 02/28/2024+++ # Azure Synapse Analytics connectivity settings
-This article will explain connectivity settings in Azure Synapse Analytics and how to configure them where applicable.
+This article explains connectivity settings in Azure Synapse Analytics and how to configure them where applicable.
-## Public network access
+## Public network access
You can use the public network access feature to allow incoming public network connectivity to your Azure Synapse workspace.
You can use the public network access feature to allow incoming public network c
> > When the public network access is disabled, access to GIT mode in Synapse Studio and commit changes won't be blocked as long as the user has enough permission to access the integrated Git repo or the corresponding Git branch. However, the publish button won't work because the access to Live mode is blocked by the firewall settings.
-Selecting the **Disable** option will not apply any firewall rules that you may configure. Additionally, your firewall rules will appear greyed out in the Network setting in Synapse portal. Your firewall configurations will be reapplied when you enable public network access again.
+Selecting the **Disable** option will not apply any firewall rules that you might configure. Additionally, your firewall rules will appear grayed out in the Network setting in Synapse portal. Your firewall configurations are reapplied when you enable public network access again.
> [!TIP] > When you revert to enable, allow some time before editing the firewall rules. ### Configure public network access when you create your workspace
-1. Select the **Networking** tab when you create your workspace in [Azure portal](https://aka.ms/azureportal).
-2. Under Managed virtual network, select **Enable** to associate your workspace with managed virtual network and permit public network access.
+1. Select the **Networking** tab when you create your workspace in [Azure portal](https://aka.ms/azureportal).
+1. Under Managed virtual network, select **Enable** to associate your workspace with a managed virtual network and permit public network access.
+1. Under **Public network access**, select **Disable** to deny public access to your workspace. Select **Enable** if you want to allow public access to your workspace.
- :::image type="content" source="./media/connectivity-settings/create-synapse-workspace-managed-virtual-network-1.png" alt-text="Create Synapse workspace, networking tab, Managed virtual network setting" lightbox="media/connectivity-settings/create-synapse-workspace-managed-virtual-network-1.png":::
+ :::image type="content" source="media/connectivity-settings/create-synapse-workspace-public-network-access.png" alt-text="Screenshot from the Azure portal. Create Synapse workspace, networking tab, public network access setting." lightbox="media/connectivity-settings/create-synapse-workspace-public-network-access.png":::
-3. Under **Public network access**, select **Disable** to deny public access to your workspace. Select **Enable** if you want to allow public access to your workspace.
-
- :::image type="content" source="./media/connectivity-settings/create-synapse-workspace-public-network-access-2.png" alt-text="Create Synapse workspace, networking tab, public network access setting" lightbox="media/connectivity-settings/create-synapse-workspace-public-network-access-2.png":::
-
-4. Complete the rest of the workspace creation flow.
+1. Complete the rest of the workspace creation flow.
### Configure public network access after you create your workspace
-1. Select your Synapse workspace in [Azure portal](https://aka.ms/azureportal).
-2. Select **Networking** from the left navigation.
-3. Select **Disabled** to deny public access to your workspace. Select **Enabled** if you want to allow public access to your workspace.
-
- :::image type="content" source="./media/connectivity-settings/synapse-workspace-networking-public-network-access-3.png" alt-text="In an existing Synapse workspace, networking tab, public network access setting is enabled" lightbox="media/connectivity-settings/synapse-workspace-networking-public-network-access-3.png":::
+1. Select your Synapse workspace in [Azure portal](https://aka.ms/azureportal).
+1. Select **Networking** from the left navigation.
+1. Select **Disabled** to deny public access to your workspace. Select **Enabled** if you want to allow public access to your workspace.
-4. When disabled, the **Firewall rules** gray out to indicate that firewall rules are not in effect. Firewall rule configurations will be retained.
+ :::image type="content" source="media/connectivity-settings/synapse-workspace-networking-public-network-access.png" alt-text="Screenshot from the Azure portal. In an existing Synapse workspace, networking tab, public network access setting is enabled." lightbox="media/connectivity-settings/synapse-workspace-networking-public-network-access.png":::
- :::image type="content" source="./media/connectivity-settings/synapse-workspace-networking-firewall-rules-4.png" alt-text="In an existing Synapse workspace, networking tab, public network access setting is disabled, attention to the firewall rules" lightbox="media/connectivity-settings/synapse-workspace-networking-firewall-rules-4.png":::
-
-5. Select **Save** to save the change. A notification will confirm that the network setting was successfully saved.
+1. When disabled, the **Firewall rules** section is grayed out to indicate that firewall rules aren't in effect. Firewall rule configurations are retained.
+1. Select **Save** to save the change. A notification will confirm that the network setting was successfully saved.
## Connection policy
-The connection policy for Synapse SQL in Azure Synapse Analytics is set to *Default*. You cannot change this in Azure Synapse Analytics. You can learn more about how that affects connections to Synapse SQL in Azure Synapse Analytics [here](/azure/azure-sql/database/connectivity-architecture#connection-policy).
+The connection policy for Synapse SQL in Azure Synapse Analytics is set to *Default*. You cannot change this in Azure Synapse Analytics. For more information, see [Connectivity architecture](/azure/azure-sql/database/connectivity-architecture#connection-policy).
## Minimal TLS version The serverless SQL endpoint and development endpoint only accept TLS 1.2 and above.
-Starting in December 2021, a requirement for TLS 1.2 has been implemented for workspace-managed dedicated SQL pools in new Synapse workspaces. Login attempts from connections using a TLS version lower than 1.2 will fail. Customers can raise or lower the minimal TLS version using the API, for both new Synapse workspaces or existing workspaces, so users who need to use a lower client version in the workspaces can connect. Customers can also raise the minimum TLS version to meet their security needs. Learn more by reading [minimal TLS REST API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update).
+Since December 2021, a minimum level of TLS 1.2 is required for workspace-managed dedicated SQL pools in new Synapse workspaces. Sign-in attempts from connections using a TLS version lower than 1.2 fail. Customers can raise or lower this requirement using the [minimal TLS REST API](/rest/api/synapse/sqlserver/workspace-managed-sql-server-dedicated-sql-minimal-tls-settings/update) for both new and existing Synapse workspaces, so users who can't use a higher TLS client version in their workspaces can connect. Customers can also raise the minimum TLS version to meet their security needs.
+## Related content
-## Next steps
+ - [Azure Synapse Analytics IP firewall rules](synapse-workspace-ip-firewall.md)
synapse-analytics Apache Spark Gpu Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-gpu-concept.md
Previously updated : 4/10/2022 Last updated : 02/27/2024
Azure Synapse Analytics now supports Apache Spark pools accelerated with graphic
By using NVIDIA GPUs, data scientists and engineers can reduce the time necessary to run data integration pipelines, score machine learning models, and more. This article describes how GPU-accelerated pools can be created and used with Azure Synapse Analytics. This article also details the GPU drivers and libraries that are pre-installed as part of the GPU-accelerated runtime.
+> [!WARNING]
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+ > [!NOTE] > Azure Synapse GPU-enabled pools are currently in Public Preview.
When you select a GPU-accelerated Hardware option in Synapse Spark, you implicit
## Accelerate ETL workloads
-With built-in support for NVIDIAΓÇÖs [RAPIDS Accelerator for Apache Spark](https://nvidia.github.io/spark-rapids/), GPU-accelerated Spark pools in Azure Synapse can provide significant performance improvements compared to standard analytical benchmarks without requiring any code changes. Built on top of NVIDIA CUDA and UCX, NVIDIA RAPIDS enables GPU-accelerated SQL, DataFrame operations, and Spark shuffles. Since there are no code changes required to leverage these accelerations, users can also accelerate their data pipelines that rely on Linux FoundationΓÇÖs Delta Lake or MicrosoftΓÇÖs Hyperspace indexing.
+With built-in support for NVIDIA's [RAPIDS Accelerator for Apache Spark](https://nvidia.github.io/spark-rapids/), GPU-accelerated Spark pools in Azure Synapse can provide significant performance improvements compared to standard analytical benchmarks without requiring any code changes. This package is built on top of NVIDIA CUDA and UCX and enables GPU-accelerated SQL, DataFrame operations, and Spark shuffles. Since there are no code changes required to leverage these accelerations, users can also accelerate their data pipelines that rely on Linux Foundation's Delta Lake or Microsoft's Hyperspace indexing.
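Although no code changes are required, the accelerator can also be toggled for the current session with a single configuration setting, as shown in this brief sketch:

```python
# Toggle the RAPIDS Accelerator for the current Spark session
spark.conf.set('spark.rapids.sql.enabled', 'true')
```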
To learn more about how you can use the NVIDIA RAPIDS Accelerator with your GPU-accelerated pool in Azure Synapse Analytics, visit this guide on how to [improve performance with RAPIDS](apache-spark-rapids-gpu.md).
To learn more about how you can train distributed deep learning models, visit th
## Improve machine learning scoring workloads
-Many organizations rely on large batch scoring jobs to frequently execute during narrow windows of time. To achieve improved batch scoring jobs, you can also use GPU-accelerated Spark pools with MicrosoftΓÇÖs [Hummingbird library](https://github.com/Microsoft/hummingbird). With Hummingbird, users can take their traditional, tree-based ML models and compile them into tensor computations. Hummingbird allows users to then seamlessly leverage native hardware acceleration and neural network frameworks to accelerate their ML model scoring without needing to rewrite their models.
+Many organizations rely on large batch scoring jobs that must execute frequently during narrow windows of time. To improve batch scoring jobs, you can also use GPU-accelerated Spark pools with Microsoft's [Hummingbird library](https://github.com/Microsoft/hummingbird). With Hummingbird, users can take traditional, tree-based ML models and compile them into tensor computations. Hummingbird then lets users seamlessly leverage native hardware acceleration and neural network frameworks to accelerate their ML model scoring without needing to rewrite their models.
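As a hedged illustration of that workflow (the model and data below are synthetic placeholders, not from a specific Azure sample), converting and scoring a scikit-learn model with Hummingbird might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# Train a small tree-based model on synthetic placeholder data
X = np.random.rand(1000, 20).astype(np.float32)
y = np.random.randint(2, size=1000)
skl_model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Compile the model into tensor computations and score it on the GPU
hb_model = convert(skl_model, 'pytorch')
hb_model.to('cuda')
predictions = hb_model.predict(X)
```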
## Next steps
synapse-analytics Apache Spark Rapids Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-rapids-gpu.md
Previously updated : 10/18/2021 Last updated : 02/27/2024
-# Apache Spark GPU-accelerated pools in Azure Synapse Analytics
+# Apache Spark GPU-accelerated pools in Azure Synapse Analytics (preview)
Apache Spark is a parallel processing framework that supports in-memory processing to boost the performance of big-data analytic applications. Apache Spark in Azure Synapse Analytics is one of Microsoft's implementations of Apache Spark in the cloud.
spark.conf.set('spark.rapids.sql.enabled','true/false')
> [!NOTE] > Azure Synapse GPU-enabled pools are currently in Public Preview.
+> [!WARNING]
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+ ## RAPIDS Accelerator for Apache Spark The Spark RAPIDS accelerator is a plugin that works by overriding the physical plan of a Spark job by supported GPU operations, and running those operations on the GPUs, thereby accelerating processing. This library is currently in preview and doesn't support all Spark operations (here is a list of [currently supported operators](https://nvidia.github.io/spark-rapids/docs/supported_ops.html), and more support is being added incrementally through new releases).
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
Title: Customize feed for Azure Virtual Desktop users - Azure
-description: How to customize feed for Azure Virtual Desktop users with PowerShell cmdlets.
+description: How to customize feed for Azure Virtual Desktop users using the Azure portal and PowerShell cmdlets.
Last updated 02/01/2024
You can customize the feed so the RemoteApp and remote desktop resources appear
## Prerequisites
-This article assumes you've already downloaded and installed the Azure Virtual Desktop PowerShell module. If you haven't, follow the instructions in [Set up the PowerShell module](powershell-module.md).
+If you're using either the Azure portal or PowerShell method, you'll need the following things:
-## Customize the display name for a session host
+- An Azure account assigned the [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) role.
+- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
-You can change the display name for a remote desktop for your users by setting its session host friendly name. By default, the session host friendly name is empty, so users only see the app name. You can set the session host friendly name using REST API.
->[!NOTE]
->The following instructions only apply to personal desktops, not pooled desktops. Also, personal host pools only allow and support desktop application groups.
+## Customize the display name for a RemoteApp or desktop
+You can change the display name for a published RemoteApp or desktop to make it easier for users to identify what to connect to.
+#### [Azure portal](#tab/portal)
-## Customize the display name for a RemoteApp
+Here's how to customize the display name for a published RemoteApp or desktop using the Azure portal.
-You can change the display name for a published RemoteApp by setting the friendly name. By default, the friendly name is the same as the name of the RemoteApp program.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-To retrieve a list of published applications for an application group, run the following PowerShell cmdlet:
+2. Search for **Azure Virtual Desktop**.
-```powershell
-Get-AzWvdApplication -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname>
-```
+3. Under Services, select **Azure Virtual Desktop**.
-To assign a friendly name to a RemoteApp, run the following cmdlet with the required parameters:
+4. On the Azure Virtual Desktop page, select **Application groups** on the left side of the screen, then select the name of the application group you want to edit.
-```powershell
-Update-AzWvdApplication -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname> -Name <applicationname> -FriendlyName <newfriendlyname>
-```
+5. Select **Applications** in the menu on the left side of the screen.
-For example, let's say you retrieved the current applications with the following example cmdlet:
+6. Select the application you want to update, then enter a new **Display name**.
-```powershell
-Get-AzWvdApplication -ResourceGroupName 0301RG -ApplicationGroupName 0301RAG | format-list
-```
+7. Select **Save**. The application you edited should now display the updated name. Users see the new name once their client refreshes.
-The output would look like this:
-```powershell
-CommandLineArgument :
-CommandLineSetting : DoNotAllow
-Description :
-FilePath : C:\Program Files\Windows NT\Accessories\wordpad.exe
-FriendlyName : Microsoft Word
-IconContent : {0, 0, 1, 0…}
-IconHash : --iom0PS6XLu-EMMlHWVW3F7LLsNt63Zz2K10RE0_64
-IconIndex : 0
-IconPath : C:\Program Files\Windows NT\Accessories\wordpad.exe
-Id : /subscriptions/<subid>/resourcegroups/0301RG/providers/Microsoft.DesktopVirtualization/applicationgroups/0301RAG/applications/Microsoft Word
-Name : 0301RAG/Microsoft Word
-ShowInPortal : False
-Type : Microsoft.DesktopVirtualization/applicationgroups/applications
-```
-To update the friendly name, run this cmdlet:
+### [Azure PowerShell](#tab/powershell)
-```powershell
-Update-AzWvdApplication -GroupName 0301RAG -Name "Microsoft Word" -FriendlyName "WordUpdate" -ResourceGroupName 0301RG -IconIndex 0 -IconPath "C:\Program Files\Windows NT\Accessories\wordpad.exe" -ShowInPortal:$true -CommandLineSetting DoNotallow -FilePath "C:\Program Files\Windows NT\Accessories\wordpad.exe"
-```
+### Customize the display name for a RemoteApp
-To confirm you've successfully updated the friendly name, run this cmdlet:
+Here's how to customize the display name for a RemoteApp using PowerShell. By default, the display name is the same as the name of the application identifier.
-```powershell
-Get-AzWvdApplication -ResourceGroupName 0301RG -ApplicationGroupName 0301RAG | format-list FriendlyName
-```
-The cmdlet should give you the following output:
+2. To retrieve a list of published applications for an application group, run the following PowerShell cmdlet:
-```powershell
-FriendlyName : WordUpdate
-```
+ ```azurepowershell
+ $parameters = @{
+ ResourceGroupName = "<resourcegroupname>"
+ ApplicationGroupName = "<appgroupname>"
+ }
-## Customize the display name for a Remote Desktop
+ Get-AzWvdApplication @parameters
+ ```
-You can change the display name for a published remote desktop by setting a friendly name. If you manually created a host pool and desktop application group through PowerShell, the default friendly name is "Session Desktop." If you created a host pool and desktop application group through the GitHub Azure Resource Manager template or the Azure Marketplace offering, the default friendly name is the same as the host pool name.
+3. To assign a friendly name to a RemoteApp, run the following cmdlet with the required parameters:
-To retrieve the remote desktop resource, run the following PowerShell cmdlet:
+ ```azurepowershell
+ $parameters = @{
+ ResourceGroupName = "<resourcegroupname>"
+ ApplicationGroupName = "<appgroupname>"
+ Name = "<applicationname>"
+ FriendlyName = "<newfriendlyname>"
+ }
-```powershell
-Get-AzWvdDesktop -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname> -Name <applicationname>
-```
+ Update-AzWvdApplication @parameters
+ ```
-To assign a friendly name to the remote desktop resource, run the following PowerShell cmdlet:
-```powershell
-Update-AzWvdDesktop -ResourceGroupName <resourcegroupname> -ApplicationGroupName <appgroupname> -Name <applicationname> -FriendlyName <newfriendlyname>
-```
+### Customize the display name for a Remote Desktop
-## Customize a display name in the Azure portal
+You can change the display name for a published remote desktop for all users by setting a friendly name. If you manually created a host pool and desktop application group through PowerShell, the default friendly name is **Session Desktop**. If you created a host pool and desktop application group through the GitHub Azure Resource Manager template or the Azure Marketplace offering, the default friendly name is the same as the host pool name. If you have a personal host pool, you can also [set a friendly name for individual session hosts](#set-a-friendly-name-for-an-individual-session-host-in-a-personal-host-pool).
-You can change the display name for a published remote desktop by setting a friendly name using the Azure portal.
-1. Sign in to the [Azure portal](https://portal.azure.com).
+2. To assign a friendly name to the remote desktop resource, run the following PowerShell cmdlet:
-2. Search for **Azure Virtual Desktop**.
+ ```azurepowershell
+ $parameters = @{
+ ResourceGroupName = "<resourcegroupname>"
+ ApplicationGroupName = "<appgroupname>"
+ Name = "<applicationname>"
+ FriendlyName = "<newfriendlyname>"
+ }
-3. Under Services, select **Azure Virtual Desktop**.
+ Update-AzWvdDesktop @parameters
+ ```
-4. On the Azure Virtual Desktop page, select **Application groups** on the left side of the screen, then select the name of the application group you want to edit. (For example, if you want to edit the display name of the desktop application group, select the application group named **Desktop**.)
+3. To retrieve the friendly name for the remote desktop resource, run the following PowerShell cmdlet:
-5. Select **Applications** in the menu on the left side of the screen.
+ ```azurepowershell
+ $parameters = @{
+ ResourceGroupName = "<resourcegroupname>"
+ ApplicationGroupName = "<appgroupname>"
+ Name = "<applicationname>"
+ }
+
+ Get-AzWvdDesktop @parameters | FL ApplicationGroupName, Name, FriendlyName
+ ```
-6. Select the application you want to update, then enter a new **Display name**.
+
+
+## Set a friendly name for an individual session host in a personal host pool
-7. Select **Save**. The application you edited should now display the updated name.
+For session hosts in a personal host pool, you can change the display name for a desktop for each individual session host by setting its friendly name using PowerShell. By default, the session host friendly name is empty, so all users only see the same desktop display name. There isn't currently a way to set the session host friendly name in the Azure portal.
-## Next steps
-Now that you've customized the feed for users, you can sign in to an Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
- * [Connect with Windows](./users/connect-windows.md)
- * [Connect with the web client](./users/connect-web.md)
- * [Connect with the Android client](./users/connect-android-chrome-os.md)
- * [Connect with the iOS client](./users/connect-ios-ipados.md)
- * [Connect with the macOS client](./users/connect-macos.md)
virtual-machines Cost Optimization Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/cost-optimization-best-practices.md
+
+ Title: Best practices for virtual machine cost optimization
+description: Learn the best practices for managing costs for virtual machines.
+++++ Last updated : 02/21/2024++
+# Best practices for virtual machine cost optimization
+
+This article describes the best practices for managing costs for virtual machines.
+
+If you'd like to see how the billing model works for virtual machines and how to plan for costs ahead of resource deployment, see [Plan to manage costs](cost-optimization-plan-to-manage-costs.md). If you'd like to learn how to monitor costs for virtual machines, see [Monitor costs for virtual machines](cost-optimization-monitor-costs.md).
+
+In this article, you'll learn:
+* Best practices for managing and reducing costs for virtual machines
+* How to use Azure policies to manage and reduce costs
+
+## Best practices to manage and reduce costs for virtual machines
+
+The following are some best practices you can use to reduce the cost of your virtual machines:
+
+- Use the [virtual machines selector](https://azure.microsoft.com/pricing/vm-selector/) to identify the best VMs for your needs
+ - For development and test environments:
+ - Use B-Series virtual machines
+ - Use at least B2 for Windows machines
+ - Use HDDs instead of SSDs when you can
+ - Use locally redundant storage (LRS) accounts instead of geo- or zone-redundant storage accounts
+ - Use Logic Apps or Azure Automation to implement an automatic start and stop schedule for your VMs (see the sketch after this list)
+ - For production environments:
+ - Use the dedicated Standard pricing tier or higher
+ - Use a Premium SSD v2 disk and programmatically adjust its performance to account for either higher or lower demand based on your workload patterns
+ - For other disk types, size your disks to achieve your desired performance without the need for over-provisioning. Account for fluctuating workload patterns and minimize unused provisioned capacity
+- Use [role-based-access-control (RBAC)](../role-based-access-control/built-in-roles.md) to control who can create resources
+- Use [Azure Spot virtual machines](spot-vms.md) where you can
+- For Windows virtual machines, consider [Azure Hybrid Benefit for Windows Server](windows/hybrid-use-benefit-licensing.md) to save cost on licensing
+- Use [cost alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md) to monitor usage and spending
+- Minimize idle instances by configuring [autoscaling](../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md)
+- Configure Azure Bastion for operational access
+
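+As one hedged sketch of the automatic start and stop recommendation above, the core of an Azure Automation Python runbook could deallocate a VM with the Azure SDK for Python. The resource names are placeholders, and authentication depends on your environment (for example, a managed identity picked up by ```DefaultAzureCredential```).
+
+```python
+# Minimal sketch; assumes the azure-identity and azure-mgmt-compute packages are installed
+# and that the identity running this code can deallocate the target VM.
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.compute import ComputeManagementClient
+
+subscription_id = "<subscription-id>"      # placeholder
+resource_group = "<resource-group-name>"   # placeholder
+vm_name = "<vm-name>"                      # placeholder
+
+compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Deallocate (stop) the VM so compute charges stop accruing; a paired runbook could
+# call begin_start on the same schedule to bring it back up.
+compute_client.virtual_machines.begin_deallocate(resource_group, vm_name).result()
+```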
+### Use policies to help manage and reduce costs for virtual machines
+
+You can use [Azure Policy](../governance/policy/overview.md) to help govern and optimize the costs of your resources.
+
+There are built-in policies for [virtual machines](policy-reference.md) and [networking services](../networking/policy-reference.md) that can help with cost savings:
+
+- **Allowed virtual machine SKUs** - This policy enables you to specify a set of virtual machine size SKUs that your organization can deploy. You could use this policy to restrict any virtual machine sizes that exceed your desired budget. This policy would require updates to maintain as new virtual machine SKUs are added.
+ - https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMSkusAllowed_Deny.json
+ - You can review the available [VM sizes](sizes.md) and cross-reference their associated costs on the pricing pages for [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/).
+- **Network interfaces should not have public IPs** - This policy restricts the creation of public IP addresses, except in cases where they are explicitly allowed. Restricting unnecessary exposure to the internet can help reduce bandwidth and virtual network data costs.
+
+You can also make custom policies using Azure Policy. Some examples include:
+
+- Implement policies to restrict what resources can be created:
+ - https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/AllowedResourceTypes_Deny.json
+- Implement policies to not allow certain resources to be created:
+ - https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/InvalidResourceTypes_Deny.json
+- Use a resource policy to limit the allowed locations where virtual machines can be deployed.
+- Audit resources that continue to incur costs after virtual machine deletion.
+- Audit resources to enforce the use of the Azure Hybrid Benefit.
+
+## Next steps
+
+In this article, you learned the best practices for managing and reducing costs for virtual machines and how to use Azure policies to manage and reduce costs.
+
+For more information on virtual machine cost optimization, see the following articles:
+
+- Learn how to [plan to manage costs for virtual machines](cost-optimization-plan-to-manage-costs.md).
+- Learn how to [monitor costs for virtual machines](cost-optimization-monitor-costs.md).
+- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Learn how to create [Linux](linux/quick-create-portal.md) and [Windows](windows/quick-create-portal.md) virtual machines.
+- Take the [Microsoft Azure Well-Architected Framework - Cost Optimization training](/training/modules/azure-well-architected-cost-optimization/).
+- Review the [Well-Architected Framework cost optimization design principles](/azure/well-architected/cost-optimization/principles) and how they apply to [virtual machines](/azure/well-architected/service-guides/virtual-machines-review#cost-optimization).
virtual-machines Cost Optimization Monitor Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/cost-optimization-monitor-costs.md
+
+ Title: Monitor costs for virtual machines
+description: Learn how to monitor costs for virtual machines by using cost analysis in the Azure portal.
+++++ Last updated : 02/21/2024++
+# Monitor costs for virtual machines
+
+This article describes how you monitor costs for virtual machines.
+
+If you'd like to see how the billing model works for virtual machines and how to plan for costs ahead of resource deployment, see [Plan to manage costs](cost-optimization-plan-to-manage-costs.md). If you'd like to review the best practices for virtual machine cost optimization, see [Best practices for virtual machine cost optimization](cost-optimization-best-practices.md).
+
+After you start using virtual machine resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to find areas where you might want to act. Costs for virtual machines are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for virtual machines, your bill includes the costs of all Azure services and resources used in your Azure subscription, including third-party services. You can learn more about the billing model in [Plan to manage costs](cost-optimization-plan-to-manage-costs.md).
+
+In this article, you'll learn how to:
+* Monitor virtual machine costs
+* Create budgets for virtual machines
+* Export virtual machine cost data
+
+## Prerequisites
+
+Cost analysis in Cost Management supports most Azure account types but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Microsoft Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Monitor costs
+
+As you use Azure resources with virtual machines, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as virtual machine use starts, costs are incurred, and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+When you use cost analysis, you view virtual machine costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends, and you can see where you might be overspending. If you create budgets, you can also easily see where they're exceeded.
+
+To view virtual machine costs in cost analysis:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Open the scope in the Azure portal and select **Cost analysis** in the menu. For example, go to **Subscriptions**, select a subscription from the list, and then select **Cost analysis** in the menu. Select **Scope** to switch to a different scope in cost analysis.
+
+1. By default, costs for services are shown in the first donut chart. Select the area in the chart labeled **Virtual Machines**.
+
+> [!NOTE]
+> If you just created your virtual machine, cost and usage data typically isn't available for 8-24 hours.
+
+Actual monthly costs are shown when you initially open cost analysis. Here's an example showing all monthly usage costs.
++
+To narrow costs for a single service, like virtual machines, select **Add filter** and then select **Service name**. Then, select **Virtual Machines**.
+
+Here's an example showing costs for just virtual machines.
++
+In the preceding example, you see the current cost for the service. Costs by Azure region (location) and virtual machine costs by resource group are also shown. From here, you can explore costs on your own.
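+
+If you prefer to pull cost data from a command line instead of the portal, the Azure CLI `az consumption usage list` command returns the underlying usage records. The following sketch is illustrative only: the date range is a placeholder, and the `instanceName`, `pretaxCost`, and `currency` property names are assumptions that can vary by account and offer type, so inspect the raw JSON output and adjust the query to match the fields your subscription returns.
+
+```azurecli-interactive
+# List usage records for a date range and project a few fields into a table.
+# Narrow the date range or add JMESPath filters once you know which fields
+# your account type returns.
+az consumption usage list \
+    --start-date 2024-02-01 \
+    --end-date 2024-02-29 \
+    --top 20 \
+    --query "[].{Resource:instanceName, Cost:pretaxCost, Currency:currency}" \
+    --output table
+```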
+
+## Create budgets
+
+You can create [budgets](../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy.
+
+Budgets can be created with filters for specific resources or services in Azure if you want more granularity in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more information about the filter options available when you create a budget, see [Group and filter options](../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
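+
+If you prefer to script budget creation, the Azure CLI `az consumption budget create` command can create a subscription-scoped budget. The following is a minimal sketch, assuming placeholder values for the name, amount, and dates (the start date must be the first day of a month). It doesn't configure alert notifications, which are easier to set up in the portal.
+
+```azurecli-interactive
+# Create a monthly cost budget of 500 (in your billing currency) at the
+# subscription scope. Replace the name, amount, and dates with your own values.
+az consumption budget create \
+    --budget-name vm-monthly-budget \
+    --amount 500 \
+    --category cost \
+    --time-grain monthly \
+    --start-date 2024-03-01 \
+    --end-date 2025-03-01
+```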
+
+## Export cost data
+
+You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
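+
+Exports can also be scripted with the `az costmanagement export` commands, which come from the **costmanagement** CLI extension. Treat the following as a rough sketch rather than a definitive implementation: the subscription ID, storage account resource ID, container, and directory are placeholders, and parameter names can differ between extension versions, so check `az costmanagement export create --help` before running it.
+
+```azurecli-interactive
+# Install the extension that provides the costmanagement commands.
+az extension add --name costmanagement
+
+# Create a monthly recurring export of actual costs to a storage container.
+az costmanagement export create \
+    --name vm-cost-export \
+    --scope "subscriptions/00000000-0000-0000-0000-000000000000" \
+    --type ActualCost \
+    --timeframe MonthToDate \
+    --recurrence Monthly \
+    --recurrence-period from="2024-03-01T00:00:00Z" to="2025-03-01T00:00:00Z" \
+    --schedule-status Active \
+    --storage-account-id "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg/providers/Microsoft.Storage/storageAccounts/costexportstorage" \
+    --storage-container exports \
+    --storage-directory vm-costs
+```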
+
+## Next steps
+
+In this article, you learned how to monitor virtual machine costs, create budgets, and export cost data.
+
+For more information on virtual machine cost optimization, see the following articles:
+
+- Learn how to [plan to manage costs for virtual machines](cost-optimization-plan-to-manage-costs.md).
+- Review the [virtual machine cost optimization best practices](cost-optimization-best-practices.md).
+- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Learn how to create [Linux](linux/quick-create-portal.md) and [Windows](windows/quick-create-portal.md) virtual machines.
+- Take the [Microsoft Azure Well-Architected Framework - Cost Optimization training](/training/modules/azure-well-architected-cost-optimization/).
+- Review the [Well-Architected Framework cost optimization design principles](/azure/well-architected/cost-optimization/principles) and how they apply to [virtual machines](/azure/well-architected/service-guides/virtual-machines-review#cost-optimization).
virtual-machines Cost Optimization Plan To Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/cost-optimization-plan-to-manage-costs.md
+
+ Title: Plan to manage costs for virtual machines
+description: Learn how to plan for and manage costs for virtual machines by using cost analysis in the Azure portal.
+++++ Last updated : 02/21/2024++
+# Plan to manage costs for virtual machines
+
+This article describes how you plan for and manage costs for virtual machines. Before you deploy the service, you can use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs for virtual machines. Later, as you deploy Azure resources, review the estimated costs.
+
+If you'd like to learn how to monitor costs for virtual machines, see [Monitor costs for virtual machines](cost-optimization-monitor-costs.md). If you'd like to review the best practices for virtual machine cost optimization, see [Best practices for virtual machine cost optimization](cost-optimization-best-practices.md).
+
+In this article, you'll:
+* Learn how to estimate costs before using virtual machines
+* Gain an understanding of the billing model for virtual machines
+* Learn how to review costs of virtual machine deployments in the Azure portal
+
+## Prerequisites
+
+Cost analysis in Cost Management supports most Azure account types but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Microsoft Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Estimate costs before using virtual machines
+
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you add virtual machines.
+
+1. On the **Virtual Machines** tile under the **Products** tab, select **Add to estimate** and scroll down to the **Your Estimate** section.
+
+1. Select options from the drop-downs. There are various options available to choose from. The options that have the largest impact on your estimate total are your virtual machine's operating system, the operating system license if applicable, the [VM size](sizes.md) you select under **INSTANCE**, the number of instances you choose, and the amount of time each month that you expect your instances to run.
+
+    Notice that the total estimate changes as you select different options. The estimate appears in the upper corner and at the bottom of the **Your Estimate** section.
+
+ ![Screenshot showing the your estimate section and main options available for virtual machines.](media/plan-to-manage-costs/virtual-machines-pricing-calculator-overview.png)
+
+1. Below the **Savings Options** section, there are choices for optional, additional resources you can deploy with your virtual machine, including **Managed Disks**, **Storage transactions**, and **Bandwidth**. For optimal performance, we recommend pairing your virtual machines with [managed disks](https://azure.microsoft.com/pricing/details/managed-disks/), but make sure to review the additional cost incurred by these resources.
+
+ ![Screenshot of choices for optional, additional resources.](media/plan-to-manage-costs/virtual-machines-pricing-additional-resources.png)
+
+As you add new resources to your deployment, return to this calculator and add the same resources here to update your cost estimates.
+
+For more information, see [Azure Virtual Machines pricing for Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) or [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/).
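+
+Before you build an estimate, it can also help to confirm which VM sizes are available in your target region and whether any are restricted for your subscription. Here's a small sketch that uses `az vm list-skus`; the region and size prefix are examples only.
+
+```azurecli-interactive
+# List D-series VM sizes in East US 2, including any restrictions that would
+# prevent you from deploying a size in that region.
+az vm list-skus \
+    --location eastus2 \
+    --size Standard_D \
+    --resource-type virtualMachines \
+    --output table
+```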
+
+## Understand the full billing model for virtual machines
+
+Virtual machines run on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that other infrastructure costs might also accrue.
+
+### How you're charged for virtual machines
+
+When you create or use virtual machine resources, you might get charged for the following meters:
+
+- **Virtual machines** - You're charged based on the number of hours each VM runs.
+ - The price also changes based on your [VM size](sizes.md).
+ - The price also changes based on the region where your virtual machine is located.
+ - As virtual machine instances go through different states, they're [billed differently](states-billing.md).
+- **Storage** - You're charged for it based on the disk size in GB and the transactions per hour.
+ - For more information about transactions, see the [transactions section of the Understanding billing page for storage](../storage/files/understanding-billing.md#what-are-transactions).
+- **Virtual network** - You're charged for it based on the number of GBs of data transferred.
+- **Bandwidth** - You're charged for it based on the number of GBs of data transferred.
+- **Azure Monitor** - You're charged for it based on the number of GBs of data ingested.
+- **Azure Bastion** - You're charged for it based on the number of GBs of data transferred.
+- **Azure DNS** - You're charged for it based on the number of DNS zones hosted in Azure and the number of DNS queries received.
+- **Load balancer, if used** - You're charged for it based on the number of rulesets, hours used, and GB of data processed.
+
+Any premium software from the Azure Marketplace comes with its own billing meters.
+
+At the end of your billing cycle, the charges for each meter are summed. Your bill or invoice shows a section for all virtual machine costs. There's a separate line item for each meter.
+
+### Other costs that might accrue with virtual machines
+
+When you create resources for virtual machines, resources for other Azure services are also created. They include:
+
+- [Virtual Network](https://azure.microsoft.com/pricing/details/virtual-network/)
+- Virtual Network Interface Card (NIC)
+  - NICs don't incur any costs by themselves. However, your [VM size](sizes.md) limits how many NICs you can deploy, so plan accordingly.
+- [A private IP and sometimes a public IP](https://azure.microsoft.com/pricing/details/ip-addresses/)
+- Network Security Group (NSG)
+ - NSGs don't incur any costs.
+- [OS disk and, optionally, additional disks](https://azure.microsoft.com/pricing/details/managed-disks/)
+- In some cases, a [load balancer](https://azure.microsoft.com/pricing/details/load-balancer/)
+
+For more information, see the [Parts of a VM and how they're billed section of the virtual machines documentation overview](overview.md#parts-of-a-vm-and-how-theyre-billed).
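+
+A practical way to see every billed component that a VM deployment created is to list the contents of its resource group. In the following sketch, **test-rg** is a placeholder for your own resource group name.
+
+```azurecli-interactive
+# Show the VM plus its companion resources (NIC, disks, public IP, NSG, and so on)
+# so you can see which of them carry their own meters.
+az resource list \
+    --resource-group test-rg \
+    --query "[].{Name:name, Type:type, Location:location}" \
+    --output table
+```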
+
+### Costs might accrue after resource deletion
+
+After you delete virtual machine resources, the following resources might continue to exist. They continue to accrue costs until you delete them.
+
+- Any disks deployed other than the OS and local disks
+  - By default, the OS disk is deleted with the VM, but you can [configure the disk to be retained during the VM's creation](delete.md)
+- Virtual network
+ - Your virtual NIC and public IP, if applicable, can be set to delete along with your virtual machine
+- Bandwidth
+- Load balancer
+
+If your OS disk isn't deleted with your VM, it likely incurs [P10 disk costs](https://azure.microsoft.com/pricing/details/managed-disks/) even in a stopped state. The OS disk size is smaller by default for some images and incurs lower costs accordingly.
+
+For virtual networks, one virtual network is billed per subscription and per region. Virtual networks can't span regions or subscriptions. Setting up private endpoints in virtual network configurations might also incur charges.
+
+Bandwidth is charged by usage; the more data transferred, the more you're charged.
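+
+To catch disks that keep accruing charges after their VM is gone, you can look for managed disks in the `Unattached` state. This sketch assumes the `diskState` property returned by `az disk list`; if the filter returns nothing, inspect the JSON output to confirm the property names in your environment.
+
+```azurecli-interactive
+# Find managed disks that aren't attached to any VM; these still incur storage costs.
+az disk list \
+    --query "[?diskState=='Unattached'].{Name:name, ResourceGroup:resourceGroup, SizeGb:diskSizeGb, Sku:sku.name}" \
+    --output table
+```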
+
+### Using a savings plan with virtual machines
+
+You can choose to spend a fixed hourly amount for your virtual machines, unlocking lower prices until you reach your hourly commitment. These savings plans are available in one- and three-year options.
+
+![Screenshot of virtual machine savings plans on pricing calculator.](media/plan-to-manage-costs/virtual-machines-pricing-savings-plan.png)
+
+### Using Azure Prepayment with virtual machines
+
+You can pay for virtual machine charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services, including those from the Azure Marketplace.
+
+When you prepay for virtual machines, you're purchasing [reserved instances](../cost-management-billing/reservations/save-compute-costs-reservations.md?toc=%2Fazure%2Fvirtual-machines%2Ftoc.json). Committing to a reserved VM instance can save you money. The reservation discount is applied automatically to the number of running virtual machines that match the reservation scope and attributes. Reserved instances are available in one- and three-year plans.
+
+![Screenshot of virtual machine prepayment options on pricing calculator.](media/plan-to-manage-costs/virtual-machines-pricing-prepayment-reserved-instances.png)
+
+For more information, see [Save costs with Azure Reserved VM Instances](prepay-reserved-vm-instances.md).
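+
+Because the reservation discount applies to running VMs that match the reservation's size and scope, it helps to check which sizes you run consistently before you commit. Here's a quick sketch using the Azure CLI; it simply lists your VMs with their sizes and power states.
+
+```azurecli-interactive
+# List VMs with their size and power state to see which sizes run continuously
+# and are therefore good candidates for a reserved instance.
+az vm list \
+    --show-details \
+    --query "[].{Name:name, Size:hardwareProfile.vmSize, PowerState:powerState}" \
+    --output table
+```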
+
+## Review estimated costs in the Azure portal
+
+As you create resources for virtual machines, you see estimated costs.
+
+To create a virtual machine and view the estimated price:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Enter *virtual machines* in the search box.
+
+1. Under **Services**, select **Virtual machines**.
+
+1. In the **Virtual machines** page, select **Create** and then **Azure virtual machine**. The **Create a virtual machine** page opens.
+
+1. On the right side, you see a summary of the estimated costs. Adjust the options in the creation settings to see how the price changes and review the estimated costs.
+
+ ![Screenshot of virtual machines estimated costs on creation page in the Azure portal.](media/plan-to-manage-costs/virtual-machines-pricing-portal-estimate.png)
+
+1. Finish creating the resource.
+
+If your Azure subscription has a spending limit, Azure prevents you from spending over your credit amount. As you create and use Azure resources, your credits are used. If you reach your credit limit, the resources that you deployed are disabled for the rest of that billing period. You can't change your credit limit, but you can remove it. For more information about spending limits, see [Azure spending limit](../cost-management-billing/manage/spending-limit.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Next steps
+
+In this article, you learned about estimating costs of virtual machines, the virtual machine billing model, and reviewing virtual machine costs in the Azure portal.
+
+For more information on virtual machine cost optimization, see the following articles:
+
+- Learn how to [monitor costs for virtual machines](cost-optimization-monitor-costs.md).
+- Review the [virtual machine cost optimization best practices](cost-optimization-best-practices.md).
+- Learn [how to optimize your cloud investment with Microsoft Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+- Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Learn how to create [Linux](linux/quick-create-portal.md) and [Windows](windows/quick-create-portal.md) virtual machines.
+- Take the [Microsoft Azure Well-Architected Framework - Cost Optimization training](/training/modules/azure-well-architected-cost-optimization/).
+- Review the [Well-Architected Framework cost optimization design principles](/azure/well-architected/cost-optimization/principles) and how they apply to [virtual machines](/azure/well-architected/service-guides/virtual-machines-review#cost-optimization).
virtual-machines Quick Cluster Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-cluster-create-terraform.md
This article shows you how to create a Windows VM cluster (containing three Wind
[!INCLUDE [terraform-apply-plan.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-apply-plan.md)]
-Cost information isn't presented during the virtual machine creation process for Terraform like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+Cost information isn't presented during the virtual machine creation process for Terraform like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
## Verify the results
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-bicep.md
Several resources are defined in the Bicep file:
When the deployment finishes, you should see a message indicating the deployment succeeded.
-Cost information isn't presented during the virtual machine creation process for Bicep like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+Cost information isn't presented during the virtual machine creation process for Bicep like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
## Review deployed resources
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-cli.md
It takes a few minutes to create the VM and supporting resources. The following
Take note of your own `publicIpAddress` in the output when you create your VM. This IP address is used to access the VM later in this article.
-Cost information isn't presented during the virtual machine creation process for CLI like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+Cost information isn't presented during the virtual machine creation process for CLI like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
## Install web server
virtual-machines Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
![Screenshot of Windows virtual machine estimated cost on creation page in the Azure portal.](./media/quick-create-portal/windows-estimated-monthly-cost.png)
- If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+ If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
1. Under **Administrator account**, provide a username, such as *azureuser* and a password. The password must be at least 12 characters long and meet the [defined complexity requirements](faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
virtual-machines Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-powershell.md
New-AzVm `
-OpenPorts 80,3389 ```
-Cost information isn't presented during the virtual machine creation process for PowerShell like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+Cost information isn't presented during the virtual machine creation process for PowerShell like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
## Install web server
virtual-machines Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-template.md
Several resources are defined in the template:
- **Location**: the default is the same location as the resource group, if it already exists. 1. Select **Review + create**. After validation completes, select **Create** to create and deploy the VM.
-Cost information isn't presented during the virtual machine creation process for ARM templates like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+Cost information isn't presented during the virtual machine creation process for ARM templates like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
The Azure portal is used to deploy the template. In addition to the Azure portal, you can also use the Azure PowerShell, Azure CLI, and REST API. To learn other deployment methods, see [Deploy templates](../../azure-resource-manager/templates/deploy-powershell.md).
virtual-machines Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-terraform.md
In this article, you learn how to:
[!INCLUDE [terraform-apply-plan.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-apply-plan.md)]
-Cost information isn't presented during the virtual machine creation process for Terraform like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../plan-to-manage-costs.md).
+Cost information isn't presented during the virtual machine creation process for Terraform like it is for the [Azure portal](quick-create-portal.md). If you want to learn more about how cost works for virtual machines, see the [Cost optimization Overview page](../cost-optimization-plan-to-manage-costs.md).
## Verify the results
virtual-network-manager Create Virtual Network Manager Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-bicep.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
:::image type="content" source="media/create-virtual-network-manager-portal/virtual-network-manager-resources-diagram.png" alt-text="Diagram of resources deployed for a mesh virtual network topology with Azure virtual network manager."::: ## Bicep Template Modules
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
Title: 'Quickstart: Use Bicep to create a virtual network'
+ Title: 'Quickstart: Use Bicep templates to create a virtual network'
description: Use Bicep templates to create a virtual network and virtual machines, and deploy Azure Bastion to securely connect from the internet.
# Quickstart: Use Bicep templates to create a virtual network
-This quickstart shows you how to create a virtual network with two virtual machines (VMs), and then deploy Azure Bastion on the virtual network, by using Bicep templates. You then securely connect to the VMs from the internet by using Azure Bastion, and communicate privately between the VMs.
+This quickstart shows you how to create a virtual network with two virtual machines (VMs), and then deploy Azure Bastion on the virtual network, by using Bicep templates. You then securely connect to the VMs from the internet by using Bastion and start private communication between the VMs.
A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet.
A virtual network is the fundamental building block for private networks in Azur
- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- To deploy the Bicep files, either Azure CLI or PowerShell installed.
+- To deploy the Bicep files, either the Azure CLI or Azure PowerShell installed:
# [CLI](#tab/azure-cli)
- 1. [Install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. You need Azure CLI version 2.0.28 or later. Run [az version](/cli/azure/reference-index?#az-version) to find your installed version and dependent libraries, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade.
+ 1. [Install the Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. You need Azure CLI version 2.0.28 or later. Run [az version](/cli/azure/reference-index?#az-version) to find your installed version and dependent libraries, and run [az upgrade](/cli/azure/reference-index?#az-upgrade) to upgrade.
1. Sign in to Azure by using the [az login](/cli/azure/reference-index#az-login) command.
A virtual network is the fundamental building block for private networks in Azur
## Create the virtual network and VMs
-This quickstart uses the [Two VMs in VNET](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/2-vms-internal-load-balancer/main.bicep) Bicep template from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) to create the virtual network, resource subnet, and VMs. The Bicep template defines the following Azure resources:
+This quickstart uses the [Two VMs in VNET](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/2-vms-internal-load-balancer/main.bicep) Bicep template from [Azure Resource Manager Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) to create the virtual network, resource subnet, and VMs. The Bicep template defines the following Azure resources:
- [Microsoft.Network virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Creates an Azure virtual network. - [Microsoft.Network virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates a subnet for the VMs.
Review the Bicep file:
### Deploy the Bicep template 1. Save the Bicep file to your local computer as *main.bicep*.
-1. Deploy the Bicep file by using either Azure CLI or Azure PowerShell.
+1. Deploy the Bicep file by using either the Azure CLI or Azure PowerShell:
# [CLI](#tab/azure-cli)
When the deployment finishes, a message indicates that the deployment succeeded.
## Deploy Azure Bastion
-Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Azure Bastion, see [Azure Bastion](~/articles/bastion/bastion-overview.md).
+Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Bastion, see [What is Azure Bastion?](~/articles/bastion/bastion-overview.md).
->[!NOTE]
->[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> [!NOTE]
+> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-Use the [Azure Bastion as a service](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/azure-bastion/main.bicep) Bicep template from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) to deploy and configure Azure Bastion in your virtual network. This Bicep template defines the following Azure resources:
+Use the [Azure Bastion as a Service](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/azure-bastion/main.bicep) Bicep template from [Azure Resource Manager Quickstart Templates](https://github.com/Azure/azure-quickstart-templates) to deploy and configure Bastion in your virtual network. This Bicep template defines the following Azure resources:
-- [Microsoft.Network virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates an AzureBastionSubnet subnet.
+- [Microsoft.Network virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Creates an **AzureBastionSubnet** subnet.
- [Microsoft.Network bastionHosts](/azure/templates/microsoft.network/bastionhosts): Creates the Bastion host.-- [Microsoft.Network publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses): Creates a public IP address for the Azure Bastion host.-- [Microsoft Network networkSecurityGroups](/azure/templates/microsoft.network/networksecuritygroups): Controls the network security group (NSG) settings.
+- [Microsoft.Network publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses): Creates a public IP address for the Bastion host.
+- [Microsoft Network networkSecurityGroups](/azure/templates/microsoft.network/networksecuritygroups): Controls the settings for network security groups.
Review the Bicep file:
Review the Bicep file:
- Line 12: Change `param vnetNewOrExisting string` from `'new'` to `'existing'`. - Line 15: Change `param bastionSubnetIpPrefix string` from `'10.1.1.0/26'` to `'10.0.1.0/26'`. - Line 18: Change `param bastionHostName string` to `param bastionHostName = 'VNet-bastion'`.
-
- The first 18 lines of your Bicep file should now look like this:
-
+
+ The first 18 lines of your Bicep file should now look like this example:
+ ```bicep @description('Name of new or existing vnet to which Azure Bastion should be deployed') param vnetName string = 'VNet'
Review the Bicep file:
1. Save the *bastion.bicep* file.
-1. Deploy the Bicep file by using either Azure CLI or Azure PowerShell.
+1. Deploy the Bicep file by using either the Azure CLI or Azure PowerShell:
# [CLI](#tab/azure-cli)
Review the Bicep file:
When the deployment finishes, a message indicates that the deployment succeeded.
->[!NOTE]
->VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md).
+> [!NOTE]
+> VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md).
## Review deployed resources
-Use Azure CLI, Azure PowerShell, or the Azure portal to review the deployed resources.
+Use the Azure CLI, Azure PowerShell, or the Azure portal to review the deployed resources:
# [CLI](#tab/azure-cli)
Get-AzResource -ResourceGroupName TestRG
# [Portal](#tab/azure-portal)
-1. In the [Azure portal](https://portal.azure.com), search for and select *resource groups*, and on the **Resource groups** page, select **TestRG** from the list of resource groups.
-1. On the **Overview** page for **TestRG**, review all the resources that you created, including the virtual network, the two VMs, and the Azure Bastion host.
-1. Select the **VNet** virtual network, and on the **Overview** page for **VNet**, note the defined address space of **10.0.0.0/16**.
-1. Select **Subnets** from the left menu, and on the **Subnets** page, note the deployed subnets of **backendSubnet** and **AzureBastionSubnet** with the assigned values from the Bicep files.
+1. In the [Azure portal](https://portal.azure.com), search for and select **resource groups**. On the **Resource groups** page, select **TestRG** from the list of resource groups.
+1. On the **Overview** page for **TestRG**, review all the resources that you created, including the virtual network, the two VMs, and the Bastion host.
+1. Select the **VNet** virtual network. On the **Overview** page for **VNet**, note the defined address space of **10.0.0.0/16**.
+1. On the left menu, select **Subnets**. On the **Subnets** page, note the deployed subnets of **backendSubnet** and **AzureBastionSubnet** with the assigned values from the Bicep files.
Get-AzResource -ResourceGroupName TestRG
1. At the top of the **BackendVM1** page, select the dropdown arrow next to **Connect**, and then select **Bastion**.
- :::image type="content" source="./media/quick-create-bicep/connect-to-virtual-machine.png" alt-text="Screenshot of connecting to VM1 with Azure Bastion." border="true":::
+ :::image type="content" source="./media/quick-create-bicep/connect-to-virtual-machine.png" alt-text="Screenshot of connecting to the first virtual machine with Azure Bastion." border="true":::
-1. On the **Bastion** page, enter the username and password you created for the VM, and then select **Connect**.
+1. On the **Bastion** page, enter the username and password that you created for the VM, and then select **Connect**.
## Communicate between VMs
-1. From the desktop of BackendVM1, open PowerShell.
+1. From the desktop of **BackendVM1**, open PowerShell.
1. Enter `ping BackendVM0`. You get a reply similar to the following message:
Get-AzResource -ResourceGroupName TestRG
Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), ```
- The ping fails because it uses the Internet Control Message Protocol (ICMP). By default, ICMP isn't allowed through Windows firewall.
+ The ping fails because it uses the Internet Control Message Protocol (ICMP). By default, ICMP isn't allowed through Windows Firewall.
-1. To allow ICMP to inbound through Windows firewall on this VM, enter the following command:
+1. To allow ICMP inbound through Windows Firewall on this VM, enter the following command:
```powershell New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 ```
-1. Close the Bastion connection to BackendVM1.
+1. Close the Bastion connection to **BackendVM1**.
-1. Repeat the steps in [Connect to a VM](#connect-to-a-vm) to connect to BackendVM0.
+1. Repeat the steps in [Connect to a VM](#connect-to-a-vm) to connect to **BackendVM0**.
-1. From PowerShell on BackendVM0, enter `ping BackendVM1`.
+1. From PowerShell on **BackendVM0**, enter `ping BackendVM1`.
- This time you get a success reply similar to the following message, because you allowed ICMP through the firewall on VM1.
+ This time you get a success reply similar to the following message, because you allowed ICMP through the firewall on **BackendVM1**.
```cmd PS C:\Users\BackendVM0> ping BackendVM1
Get-AzResource -ResourceGroupName TestRG
Minimum = 0ms, Maximum = 2ms, Average = 0ms ```
-1. Close the Bastion connection to BackendVM0.
+1. Close the Bastion connection to **BackendVM0**.
## Clean up resources
-When you're done with the virtual network, use Azure CLI, Azure PowerShell, or the Azure portal to delete the resource group and all its resources.
+When you finish with the virtual network, use the Azure CLI, Azure PowerShell, or the Azure portal to delete the resource group and all its resources:
# [CLI](#tab/azure-cli)
Remove-AzResourceGroup -Name TestRG
1. In the Azure portal, on the **Resource groups** page, select the **TestRG** resource group. 1. At the top of the **TestRG** page, select **Delete resource group**.
-1. On the **Delete a resource group** page, under **Enter resource group name to confirm deletion**, enter *TestRG*, and then select **Delete**.
+1. On the **Delete a resource group** page, under **Enter resource group name to confirm deletion**, enter **TestRG**, and then select **Delete**.
1. Select **Delete** again. ## Next steps
-In this quickstart, you created a virtual network with two subnets, one containing two VMs and the other for Azure Bastion. You deployed Azure Bastion and used it to connect to the VMs, and securely communicated between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+In this quickstart, you created a virtual network that has two subnets: one that contains two VMs and the other for Bastion. You deployed Bastion, and you used it to connect to the VMs and start communication between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+
+Private communication between VMs is unrestricted in a virtual network. To learn more about configuring various types of VM communications in a virtual network, continue to the next article:
-Private communication between VMs is unrestricted in a virtual network. Continue to the next article to learn more about configuring different types of VM network communications.
> [!div class="nextstepaction"] > [Filter network traffic](tutorial-filter-network-traffic.md)
virtual-network Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-cli.md
Title: 'Quickstart: Use Azure CLI to create a virtual network'
+ Title: 'Quickstart: Use the Azure CLI to create a virtual network'
-description: Learn how to use Azure CLI to create and connect through an Azure virtual network and virtual machines.
+description: Learn how to use the Azure CLI to create and connect through an Azure virtual network and virtual machines.
Last updated 03/15/2023
-#Customer intent: I want to use Azure CLI to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
+#Customer intent: As a network administrator, I want to use the Azure CLI to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
-# Quickstart: Use Azure CLI to create a virtual network
+# Quickstart: Use the Azure CLI to create a virtual network
-This quickstart shows you how to create a virtual network by using Azure CLI, the Azure command-line interface. You then create two virtual machines (VMs) in the network, securely connect to the VMs from the internet, and communicate privately between the VMs.
+This quickstart shows you how to create a virtual network by using the Azure CLI, the Azure command-line interface. You then create two virtual machines (VMs) in the network, securely connect to the VMs from the internet, and start private communication between the VMs.
A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
A virtual network is the fundamental building block for private networks in Azur
## Create a resource group
-1. Use [az group create](/cli/azure/group#az-group-create) to create a resource group to host the virtual network. Use the following code to create a resource group named **test-rg** in the **eastus2** Azure region.
+Use [az group create](/cli/azure/group#az-group-create) to create a resource group to host the virtual network. Use the following code to create a resource group named **test-rg** in the **eastus2** Azure region:
- ```azurecli-interactive
- az group create \
- --name test-rg \
- --location eastus2
- ```
+```azurecli-interactive
+az group create \
+ --name test-rg \
+ --location eastus2
+```
## Create a virtual network and subnet
-1. Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network named **vnet-1** with a subnet named **subnet-1** in the **test-rg** resource group.
+Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network named **vnet-1** with a subnet named **subnet-1** in the **test-rg** resource group:
- ```azurecli-interactive
- az network vnet create \
- --name vnet-1 \
- --resource-group test-rg \
- --address-prefix 10.0.0.0/16 \
- --subnet-name subnet-1 \
- --subnet-prefixes 10.0.0.0/24
- ```
+```azurecli-interactive
+az network vnet create \
+ --name vnet-1 \
+ --resource-group test-rg \
+ --address-prefix 10.0.0.0/16 \
+ --subnet-name subnet-1 \
+ --subnet-prefixes 10.0.0.0/24
+```
## Deploy Azure Bastion
-Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration.
+Azure Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration.
-1. Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create an Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources and must be named **AzureBastionSubnet**.
+1. Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create a Bastion subnet for your virtual network. This subnet is reserved exclusively for Bastion resources and must be named **AzureBastionSubnet**.
```azurecli-interactive az network vnet subnet create \
Azure Bastion uses your browser to connect to VMs in your virtual network over s
--address-prefix 10.0.1.0/26 ```
-1. Create a public IP address for Azure Bastion. This IP address is used to connect to the Bastion host from the internet. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address named **public-ip** in the **test-rg** resource group.
+1. Create a public IP address for Bastion. This IP address is used to connect to the Bastion host from the internet. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address named **public-ip** in the **test-rg** resource group:
```azurecli-interactive az network public-ip create \
Azure Bastion uses your browser to connect to VMs in your virtual network over s
--zone 1 2 3 ```
-1. Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create an Azure Bastion host in the AzureBastionSubnet of your virtual network.
+1. Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create a Bastion host in **AzureBastionSubnet** for your virtual network:
```azurecli-interactive az network bastion create \
Azure Bastion uses your browser to connect to VMs in your virtual network over s
--location eastus2 ```
-It takes about 10 minutes for the Bastion resources to deploy. You can create VMs in the next section while Bastion deploys to your virtual network.
+It takes about 10 minutes to deploy the Bastion resources. You can create VMs in the next section while Bastion deploys to your virtual network.
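+
+If you're scripting the deployment and want to know when Bastion is ready, you can poll its provisioning state. This sketch assumes a Bastion host named **bastion** in the **test-rg** resource group; substitute the names you used, and note that the `az network bastion` commands might require the **bastion** CLI extension.
+
+```azurecli-interactive
+# Check whether the Bastion host has finished deploying. The value changes
+# to "Succeeded" when the host is ready to use.
+az network bastion show \
+    --name bastion \
+    --resource-group test-rg \
+    --query provisioningState \
+    --output tsv
+```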
## Create virtual machines
-Use [az vm create](/cli/azure/vm#az-vm-create) to create two VMs named **VM1** and **VM2** in the **subnet-1** subnet of the virtual network. When you're prompted for credentials, enter user names and passwords for the VMs.
+Use [az vm create](/cli/azure/vm#az-vm-create) to create two VMs named **vm-1** and **vm-2** in the **subnet-1** subnet of the virtual network. When you're prompted for credentials, enter user names and passwords for the VMs.
1. To create the first VM, use the following command:
Use [az vm create](/cli/azure/vm#az-vm-create) to create two VMs named **VM1** a
--public-ip-address "" ```
->[!TIP]
->You can also use the `--no-wait` option to create a VM in the background while you continue with other tasks.
+> [!TIP]
+> You can also use the `--no-wait` option to create a VM in the background while you continue with other tasks.
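+
+For example, the background pattern can look like the following sketch; the image and other parameters are illustrative and might not match the exact command used earlier in this quickstart.
+
+```azurecli-interactive
+# Start the VM deployment without blocking the shell.
+az vm create \
+    --resource-group test-rg \
+    --name vm-2 \
+    --image Ubuntu2204 \
+    --admin-username azureuser \
+    --generate-ssh-keys \
+    --public-ip-address "" \
+    --no-wait
+
+# Do other work, then block until Azure reports the VM as created.
+az vm wait \
+    --resource-group test-rg \
+    --name vm-2 \
+    --created
+```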
-The VMs take a few minutes to create. After Azure creates each VM, Azure CLI returns output similar to the following message:
+The VMs take a few minutes to create. After Azure creates each VM, the Azure CLI returns output similar to the following message:
```output {
The VMs take a few minutes to create. After Azure creates each VM, Azure CLI ret
} ```
->[!NOTE]
->VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md).
+> [!NOTE]
+> VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md).
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
The VMs take a few minutes to create. After Azure creates each VM, Azure CLI ret
1. On the **Virtual machines** page, select **vm-1**.
-1. In the **Overview** of **vm-1**, select **Connect**.
+1. In the **Overview** information for **vm-1**, select **Connect**.
-1. In the **Connect to virtual machine** page, select the **Bastion** tab.
+1. On the **Connect to virtual machine** page, select the **Bastion** tab.
1. Select **Use Bastion**.
-1. Enter the username and password you created when you created the VM, and then select **Connect**.
+1. Enter the username and password that you created when you created the VM, and then select **Connect**.
-## Communicate between VMs
+## Start communication between VMs
1. At the bash prompt for **vm-1**, enter `ping -c 4 vm-2`.
The VMs take a few minutes to create. After Azure creates each VM, Azure CLI ret
64 bytes from vm-2.internal.cloudapp.net (10.0.0.5): icmp_seq=4 ttl=64 time=0.890 ms ```
-1. Close the Bastion connection to VM1.
+1. Close the Bastion connection to **vm-1**.
-1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to VM2.
+1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**.
1. At the bash prompt for **vm-2**, enter `ping -c 4 vm-1`.
The VMs take a few minutes to create. After Azure creates each VM, Azure CLI ret
64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=0.780 ms ```
-1. Close the Bastion connection to VM2.
+1. Close the Bastion connection to **vm-2**.
## Clean up resources
-When you're done with the virtual network and the VMs, use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all its resources.
+When you finish with the virtual network and the VMs, use [az group delete](/cli/azure/group#az-group-delete) to remove the resource group and all its resources:
```azurecli-interactive az group delete \
az group delete \
## Next steps
-In this quickstart, you created a virtual network with a default subnet that contains two VMs. You deployed Azure Bastion and used it to connect to the VMs, and securely communicated between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+In this quickstart, you created a virtual network with a default subnet that contains two VMs. You deployed Bastion, and you used it to connect to the VMs and establish communication between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+
+Private communication between VMs in a virtual network is unrestricted by default. To learn more about configuring various types of VM network communications, continue to the next article:
-Private communication between VMs in a virtual network is unrestricted by default. Continue to the next article to learn more about configuring different types of VM network communications.
> [!div class="nextstepaction"] > [Filter network traffic](tutorial-filter-network-traffic.md)-
virtual-network Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-portal.md
Last updated 06/06/2023
-#Customer intent: I want to use the Azure portal to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
+#Customer intent: As a network administrator, I want to use the Azure portal to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
# Quickstart: Use the Azure portal to create a virtual network
-This quickstart shows you how to create a virtual network by using the Azure portal. You then create two virtual machines (VMs) in the network, deploy Azure Bastion to securely connect to the VMs from the internet, and communicate privately between the VMs.
+This quickstart shows you how to create a virtual network by using the Azure portal. You then create two virtual machines (VMs) in the network, deploy Azure Bastion to securely connect to the VMs from the internet, and start private communication between the VMs.
A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet. >[!VIDEO https://learn-video.azurefd.net/vod/player?id=6b5b138e-8406-406e-8b34-40bdadf9fc6d] -- ## Prerequisites - An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
1. On the **Virtual machines** page, select **vm-1**.
-1. In the **Overview** of **vm-1**, select **Connect**.
+1. In the **Overview** information for **vm-1**, select **Connect**.
-1. In the **Connect to virtual machine** page, select the **Bastion** tab.
+1. On the **Connect to virtual machine** page, select the **Bastion** tab.
1. Select **Use Bastion**.
-1. Enter the username and password you created when you created the VM, and then select **Connect**.
+1. Enter the username and password that you created when you created the VM, and then select **Connect**.
-## Communicate between VMs
+## Start communication between VMs
1. At the bash prompt for **vm-1**, enter `ping -c 4 vm-2`.
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
64 bytes from vm-2.internal.cloudapp.net (10.0.0.5): icmp_seq=4 ttl=64 time=0.890 ms ```
-1. Close the Bastion connection to VM1.
+1. Close the Bastion connection to **vm-1**.
-1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to VM2.
+1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**.
1. At the bash prompt for **vm-2**, enter `ping -c 4 vm-1`.
Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=0.780 ms ```
-1. Close the Bastion connection to VM2.
+1. Close the Bastion connection to **vm-2**.
[!INCLUDE [portal-clean-up.md](../../includes/portal-clean-up.md)] ## Next steps
-In this quickstart, you created a virtual network with two subnets, one containing two VMs and the other for Azure Bastion. You deployed Azure Bastion and used it to connect to the VMs, and securely communicated between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+In this quickstart, you created a virtual network with two subnets: one that contains two VMs and the other for Bastion. You deployed Bastion, and you used it to connect to the VMs and establish communication between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
-Private communication between VMs is unrestricted in a virtual network. Continue to the next article to learn more about configuring different types of VM network communications.
+Private communication between VMs is unrestricted in a virtual network. To learn more about configuring various types of VM network communications, continue to the next article:
> [!div class="nextstepaction"] > [Filter network traffic](tutorial-filter-network-traffic.md)
virtual-network Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-powershell.md
Last updated 06/09/2023
-#Customer intent: I want to use PowerShell to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
+#Customer intent: As a network administrator, I want to use PowerShell to create a virtual network so that virtual machines can communicate privately with each other and with the internet.
# Quickstart: Use Azure PowerShell to create a virtual network
-This quickstart shows you how to create a virtual network by using Azure PowerShell. You then create two virtual machines (VMs) in the network, securely connect to the VMs from the internet, and communicate privately between the VMs.
+This quickstart shows you how to create a virtual network by using Azure PowerShell. You then create two virtual machines (VMs) in the network, securely connect to the VMs from the internet, and start private communication between the VMs.
A virtual network is the fundamental building block for private networks in Azure. Azure Virtual Network enables Azure resources like VMs to securely communicate with each other and the internet. ## Prerequisites
A virtual network is the fundamental building block for private networks in Azur
- Azure Cloud Shell or Azure PowerShell.
- The steps in this quickstart run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+ The steps in this quickstart run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloudshell** at the upper-right corner of a code block. Select **Copy** to copy the code, and then paste it into Cloud Shell to run it. You can also run Cloud Shell from within the Azure portal.
You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. The steps in this article require Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find your installed version. If you need to upgrade, see [Update the Azure PowerShell module](/powershell/azure/install-Az-ps#update-the-azure-powershell-module).
A virtual network is the fundamental building block for private networks in Azur
## Create a resource group
-1. Use [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) to create a resource group to host the virtual network. Run the following code to create a resource group named **test-rg** in the **eastus2** Azure region.
+Use [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) to create a resource group to host the virtual network. Run the following code to create a resource group named **test-rg** in the **eastus2** Azure region:
- ```azurepowershell-interactive
- $rg = @{
- Name = 'test-rg'
- Location = 'eastus2'
- }
- New-AzResourceGroup @rg
- ```
+```azurepowershell-interactive
+$rg = @{
+ Name = 'test-rg'
+ Location = 'eastus2'
+}
+New-AzResourceGroup @rg
+```
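If you want to confirm the resource group before continuing, you can query it with [Get-AzResourceGroup](/powershell/module/az.resources/get-azresourcegroup):

```azurepowershell-interactive
## Confirm that the resource group was created.
Get-AzResourceGroup -Name 'test-rg'
```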
## Create a virtual network
-
-1. Use [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) to create a virtual network named **vnet-1** with IP address prefix **10.0.0.0/16** in the **test-rg** resource group and **eastus2** location.
+1. Use [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) to create a virtual network named **vnet-1** with IP address prefix **10.0.0.0/16** in the **test-rg** resource group and **eastus2** location:
```azurepowershell-interactive $vnet = @{
A virtual network is the fundamental building block for private networks in Azur
$virtualNetwork = New-AzVirtualNetwork @vnet ```
-1. Azure deploys resources to a subnet within a virtual network. Use [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig) to create a subnet configuration named **subnet-1** with address prefix **10.0.0.0/24**.
+1. Azure deploys resources to a subnet within a virtual network. Use [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig) to create a subnet configuration named **subnet-1** with address prefix **10.0.0.0/24**:
```azurepowershell-interactive $subnet = @{
A virtual network is the fundamental building block for private networks in Azur
$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet ```
-1. Then associate the subnet configuration to the virtual network with [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork).
+1. Associate the subnet configuration to the virtual network by using [Set-AzVirtualNetwork](/powershell/module/az.network/Set-azVirtualNetwork):
```azurepowershell-interactive
$virtualNetwork | Set-AzVirtualNetwork
```
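Taken together, the steps in this section amount to the following consolidated sketch. The names and address prefixes match the values used earlier in this quickstart (**test-rg**, **eastus2**, **vnet-1**, **subnet-1**):

```azurepowershell-interactive
## Create the virtual network.
$vnet = @{
    Name              = 'vnet-1'
    ResourceGroupName = 'test-rg'
    Location          = 'eastus2'
    AddressPrefix     = '10.0.0.0/16'
}
$virtualNetwork = New-AzVirtualNetwork @vnet

## Add a subnet configuration to the in-memory virtual network object.
$subnet = @{
    Name           = 'subnet-1'
    VirtualNetwork = $virtualNetwork
    AddressPrefix  = '10.0.0.0/24'
}
$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet

## Write the updated configuration back to Azure.
$virtualNetwork | Set-AzVirtualNetwork
```

Until the final pipeline runs, the subnet exists only on the in-memory `$virtualNetwork` object; `Set-AzVirtualNetwork` is what persists the change in Azure.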
A virtual network is the fundamental building block for private networks in Azur
## Deploy Azure Bastion
-Azure Bastion uses your browser to connect to VMs in your virtual network over secure shell (SSH) or remote desktop protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Azure Bastion, see [Azure Bastion](/azure/bastion/bastion-overview).
+Azure Bastion uses your browser to connect to VMs in your virtual network over Secure Shell (SSH) or Remote Desktop Protocol (RDP) by using their private IP addresses. The VMs don't need public IP addresses, client software, or special configuration. For more information about Bastion, see [What is Azure Bastion?](/azure/bastion/bastion-overview).
[!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-1. Configure an Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources and must be named **AzureBastionSubnet**.
+1. Configure a Bastion subnet for your virtual network. This subnet is reserved exclusively for Bastion resources and must be named **AzureBastionSubnet**.
```azurepowershell-interactive $subnet = @{
Azure Bastion uses your browser to connect to VMs in your virtual network over s
$subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet ```
-1. Set the configuration.
+1. Set the configuration:
```azurepowershell-interactive $virtualNetwork | Set-AzVirtualNetwork ```
-1. Create a public IP address for Azure Bastion. The bastion host uses the public IP to access secure shell (SSH) and remote desktop protocol (RDP) over port 443.
+1. Create a public IP address for Bastion. The Bastion host uses the public IP to access SSH and RDP over port 443.
```azurepowershell-interactive $ip = @{
Azure Bastion uses your browser to connect to VMs in your virtual network over s
New-AzPublicIpAddress @ip ```
-1. Use the [New-AzBastion](/powershell/module/az.network/new-azbastion) command to create a new Standard SKU Azure Bastion host in the AzureBastionSubnet.
+1. Use the [New-AzBastion](/powershell/module/az.network/new-azbastion) command to create a new Standard SKU Bastion host in **AzureBastionSubnet**:
```azurepowershell-interactive $bastion = @{
Azure Bastion uses your browser to connect to VMs in your virtual network over s
New-AzBastion @bastion ```
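Taken together, the Bastion deployment in this section might look like the following consolidated sketch. The resource names **public-ip-bastion** and **bastion** and the **10.0.1.0/26** prefix are illustrative assumptions (Bastion needs a dedicated subnet of /26 or larger); the other values match this quickstart:

```azurepowershell-interactive
## Get the virtual network created earlier.
$virtualNetwork = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'

## Add the dedicated Bastion subnet. The /26 prefix is an assumption.
$subnet = @{
    Name           = 'AzureBastionSubnet'
    VirtualNetwork = $virtualNetwork
    AddressPrefix  = '10.0.1.0/26'
}
Add-AzVirtualNetworkSubnetConfig @subnet
$virtualNetwork | Set-AzVirtualNetwork

## Create a Standard SKU, statically allocated public IP for Bastion.
## The name 'public-ip-bastion' is an assumption.
$ip = @{
    Name              = 'public-ip-bastion'
    ResourceGroupName = 'test-rg'
    Location          = 'eastus2'
    AllocationMethod  = 'Static'
    Sku               = 'Standard'
}
New-AzPublicIpAddress @ip

## Create the Bastion host in AzureBastionSubnet. The name 'bastion' is an assumption.
$bastion = @{
    Name                  = 'bastion'
    ResourceGroupName     = 'test-rg'
    PublicIpAddressRgName = 'test-rg'
    PublicIpAddressName   = 'public-ip-bastion'
    VirtualNetworkRgName  = 'test-rg'
    VirtualNetworkName    = 'vnet-1'
    Sku                   = 'Standard'
}
New-AzBastion @bastion
```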
-It takes about 10 minutes for the Bastion resources to deploy. You can create VMs in the next section while Bastion deploys to your virtual network.
+It takes about 10 minutes to deploy the Bastion resources. You can create VMs in the next section while Bastion deploys to your virtual network.
## Create virtual machines
-Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named **vm-1** and **vm-2** in the **subnet-1** subnet of the virtual network. When you're prompted for credentials, enter user names and passwords for the VMs.
+Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named **vm-1** and **vm-2** in the **subnet-1** subnet of the virtual network. When you're prompted for credentials, enter usernames and passwords for the VMs.
1. To create the first VM, use the following code: ```azurepowershell-interactive
- # Set the administrator and password for the VMs. ##
+ ## Set the administrator username and password for the VM. ##
$cred = Get-Credential ## Place the virtual network into a variable. ## $vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'
- ## Create network interface for virtual machine. ##
+ ## Create a network interface for the VM. ##
$nic = @{ Name = "nic-1" ResourceGroupName = 'test-rg'
Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named *
} $nicVM = New-AzNetworkInterface @nic
- ## Create a virtual machine configuration for VMs ##
+ ## Create a virtual machine configuration. ##
$vmsz = @{ VMName = "vm-1" VMSize = 'Standard_DS1_v2'
Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named *
| Set-AzVMSourceImage @vmimage ` | Add-AzVMNetworkInterface -Id $nicVM.Id
- ## Create the virtual machine for VMs ##
+ ## Create the VM. ##
$vm = @{ ResourceGroupName = 'test-rg' Location = 'eastus2'
Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named *
1. To create the second VM, use the following code: ```azurepowershell-interactive
- # Set the administrator and password for the VMs. ##
+ ## Set the administrator username and password for the VM. ##
$cred = Get-Credential ## Place the virtual network into a variable. ## $vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'
- ## Create network interface for virtual machine. ##
+ ## Create a network interface for the VM. ##
$nic = @{ Name = "nic-2" ResourceGroupName = 'test-rg'
Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named *
} $nicVM = New-AzNetworkInterface @nic
- ## Create a virtual machine configuration for VMs ##
+ ## Create a virtual machine configuration. ##
$vmsz = @{ VMName = "vm-2" VMSize = 'Standard_DS1_v2'
Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named *
| Set-AzVMSourceImage @vmimage ` | Add-AzVMNetworkInterface -Id $nicVM.Id
- ## Create the virtual machine for VMs ##
+ ## Create the VM. ##
$vm = @{ ResourceGroupName = 'test-rg' Location = 'eastus2'
Use [New-AzVM](/powershell/module/az.compute/new-azvm) to create two VMs named *
} New-AzVM @vm ```
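For reference, either VM's creation, end to end, might look like the following sketch (shown for **vm-1**; **vm-2** differs only in the **nic-2** and **vm-2** names). The **Location**, **SubnetId**, and image values are illustrative assumptions; substitute any marketplace image you prefer:

```azurepowershell-interactive
## Prompt for the administrator username and password.
$cred = Get-Credential

## Look up the virtual network so the NIC can join subnet-1.
$vnet = Get-AzVirtualNetwork -Name 'vnet-1' -ResourceGroupName 'test-rg'

## Create a network interface in subnet-1. Location and SubnetId are assumptions.
$nic = @{
    Name              = 'nic-1'
    ResourceGroupName = 'test-rg'
    Location          = 'eastus2'
    SubnetId          = $vnet.Subnets[0].Id
}
$nicVM = New-AzNetworkInterface @nic

## Build the VM configuration. The image values below are placeholders.
$vmsz = @{
    VMName = 'vm-1'
    VMSize = 'Standard_DS1_v2'
}
$vmos = @{
    ComputerName = 'vm-1'
    Credential   = $cred
}
$vmimage = @{
    PublisherName = 'Canonical'                    # assumption
    Offer         = '0001-com-ubuntu-server-jammy' # assumption
    Skus          = '22_04-lts-gen2'               # assumption
    Version       = 'latest'
}
$vmConfig = New-AzVMConfig @vmsz `
    | Set-AzVMOperatingSystem @vmos -Linux `
    | Set-AzVMSourceImage @vmimage `
    | Add-AzVMNetworkInterface -Id $nicVM.Id

## Create the VM from the configuration object.
$vm = @{
    ResourceGroupName = 'test-rg'
    Location          = 'eastus2'
    VM                = $vmConfig
}
New-AzVM @vm
```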
-
->[!TIP]
->You can use the `-AsJob` option to create a VM in the background while you continue with other tasks. For example, run `New-AzVM @vm1 -AsJob`. When Azure starts creating the VM in the background, you get something like the following output:
+
+> [!TIP]
+> You can use the `-AsJob` option to create a VM in the background while you continue with other tasks. For example, run `New-AzVM @vm -AsJob`. When Azure starts creating the VM in the background, you get something like the following output:
>
->```powershell
->Id     Name            PSJobTypeName   State         HasMoreData     Location    Command
->--     ----            -------------   -----         -----------     --------    -------
->1      Long Running... AzureLongRun... Running       True            localhost   New-AzVM
->```
+> ```powershell
+> Id     Name            PSJobTypeName   State         HasMoreData     Location    Command
+> --     ----            -------------   -----         -----------     --------    -------
+> 1      Long Running... AzureLongRun... Running       True            localhost   New-AzVM
+> ```
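If you start VM creation with `-AsJob`, you can track the background work with the standard PowerShell job cmdlets. For example:

```azurepowershell-interactive
## List background jobs, including any VM deployments started with -AsJob.
Get-Job

## Wait for a specific job to finish and return its output.
Receive-Job -Id 1 -Wait
```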
Azure takes a few minutes to create the VMs. When Azure finishes creating the VMs, it returns output to PowerShell.
->[!NOTE]
->VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md).
+> [!NOTE]
+> VMs in a virtual network with a Bastion host don't need public IP addresses. Bastion provides the public IP, and the VMs use private IPs to communicate within the network. You can remove the public IPs from any VMs in Bastion-hosted virtual networks. For more information, see [Dissociate a public IP address from an Azure VM](ip-services/remove-public-ip-address-vm.md).
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
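As a sketch of the pattern that the linked article describes, removing a public IP association from a NIC such as **nic-1** might look like the following (applicable only if the NIC actually has a public IP attached):

```azurepowershell-interactive
## Get the network interface attached to the VM.
$nic = Get-AzNetworkInterface -Name 'nic-1' -ResourceGroupName 'test-rg'

## Clear the public IP association on the NIC's first IP configuration.
$nic.IpConfigurations[0].PublicIpAddress = $null

## Apply the updated NIC configuration.
Set-AzNetworkInterface -NetworkInterface $nic
```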
Azure takes a few minutes to create the VMs. When Azure finishes creating the VM
1. On the **Virtual machines** page, select **vm-1**.
-1. In the **Overview** of **vm-1**, select **Connect**.
+1. In the **Overview** information for **vm-1**, select **Connect**.
-1. In the **Connect to virtual machine** page, select the **Bastion** tab.
+1. On the **Connect to virtual machine** page, select the **Bastion** tab.
1. Select **Use Bastion**.
-1. Enter the username and password you created when you created the VM, and then select **Connect**.
+1. Enter the username and password that you created when you created the VM, and then select **Connect**.
-## Communicate between VMs
+## Start communication between VMs
1. At the bash prompt for **vm-1**, enter `ping -c 4 vm-2`.
Azure takes a few minutes to create the VMs. When Azure finishes creating the VM
64 bytes from vm-2.internal.cloudapp.net (10.0.0.5): icmp_seq=4 ttl=64 time=0.890 ms ```
-1. Close the Bastion connection to VM1.
+1. Close the Bastion connection to **vm-1**.
-1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to VM2.
+1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**.
1. At the bash prompt for **vm-2**, enter `ping -c 4 vm-1`.
Azure takes a few minutes to create the VMs. When Azure finishes creating the VM
64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=0.780 ms ```
-1. Close the Bastion connection to VM2.
+1. Close the Bastion connection to **vm-2**.
## Clean up resources
-When you're done with the virtual network and the VMs, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all its resources.
+When you finish with the virtual network and the VMs, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to remove the resource group and all its resources:
```azurepowershell-interactive Remove-AzResourceGroup -Name 'test-rg' -Force
Remove-AzResourceGroup -Name 'test-rg' -Force
## Next steps
-In this quickstart, you created a virtual network with a default subnet that contains two VMs. You deployed Azure Bastion and used it to connect to the VMs, and securely communicated between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+In this quickstart, you created a virtual network with a default subnet that contains two VMs. You deployed Bastion, and you used it to connect to the VMs and establish communication between the VMs. To learn more about virtual network settings, see [Create, change, or delete a virtual network](manage-virtual-network.md).
+
+Private communication between VMs in a virtual network is unrestricted. To learn more about configuring various types of VM network communications, continue to the next article:
-Private communication between VMs in a virtual network is unrestricted. Continue to the next article to learn more about configuring different types of VM network communications.
> [!div class="nextstepaction"] > [Filter network traffic](tutorial-filter-network-traffic.md)
virtual-network Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-template.md
Title: 'Quickstart: Create a virtual network - Resource Manager template'
+ Title: 'Quickstart: Use a Resource Manager template to create a virtual network'
-description: Learn how to use a Resource Manager template to create an Azure virtual network.
+description: Learn how to use an Azure Resource Manager template to create an Azure virtual network.
-# Quickstart: Create a virtual network - Resource Manager template
-
-In this quickstart, you learn how to create a virtual network with two subnets using the Azure Resource Manager template. A virtual network is the fundamental building block for your private network in Azure. It enables Azure resources, like VMs, to securely communicate with each other and with the internet.
+# Quickstart: Use a Resource Manager template to create a virtual network
+In this quickstart, you learn how to create a virtual network with two subnets by using an Azure Resource Manager template. A virtual network is the fundamental building block for your private network in Azure. It enables Azure resources, like virtual machines (VMs), to securely communicate with each other and with the internet.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-You can also complete this quickstart using the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or [Azure CLI](quick-create-cli.md).
+You can also complete this quickstart by using the [Azure portal](quick-create-portal.md), [Azure PowerShell](quick-create-powershell.md), or the [Azure CLI](quick-create-cli.md).
## Prerequisites
If you don't have an Azure subscription, create a [free account](https://azure.m
## Review the template
-The template used in this quickstart is from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json)
+The template that you use in this quickstart is from [Azure Quickstart Templates](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json).
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json" :::
-The following Azure resources have been defined in the template:
-- [**Microsoft.Network/virtualNetworks**](/azure/templates/microsoft.network/virtualnetworks): create an Azure virtual network.-- [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/microsoft.network/virtualnetworks/subnets) - create a subnet.
+The template defines the following Azure resources:
+
+- [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks): Create a virtual network.
+- [Microsoft.Network/virtualNetworks/subnets](/azure/templates/microsoft.network/virtualnetworks/subnets): Create a subnet.
## Deploy the template
-Deploy Resource Manager template to Azure:
+Deploy the Resource Manager template to Azure:
1. Select **Deploy to Azure** to sign in to Azure and open the template. The template creates a virtual network with two subnets. [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.network%2Fvnet-two-subnets%2Fazuredeploy.json)
-2. In the portal, on the **Create a Virtual Network with two Subnets** page, type or select the following values:
- - **Resource group**: Select **Create new**, type **CreateVNetQS-rg** for the resource group name, and select **OK**.
- - **Virtual Network Name**: Type a name for the new virtual network.
-3. Select **Review + create**, and then select **Create**.
-1. When deployment completes, select on **Go to resource** button to review the resources deployed.
+1. In the portal, on the **Create a Virtual Network with two Subnets** page, enter or select the following values:
+ - **Resource group**: Select **Create new**, enter **CreateVNetQS-rg** for the resource group name, and then select **OK**.
+ - **Virtual Network Name**: Enter a name for the new virtual network.
+1. Select **Review + create**, and then select **Create**.
+1. When deployment finishes, select the **Go to resource** button to review the resources that you deployed.
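If you prefer to deploy the same template from Azure PowerShell instead of the portal, a minimal sketch might look like the following. The **eastus2** location is an assumption (use any region), and the template's defaults supply the virtual network and subnet names:

```azurepowershell
## Create a resource group for the deployment.
New-AzResourceGroup -Name 'CreateVNetQS-rg' -Location 'eastus2'

## Deploy the quickstart template directly from GitHub.
$deployment = @{
    ResourceGroupName = 'CreateVNetQS-rg'
    TemplateUri       = 'https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.network/vnet-two-subnets/azuredeploy.json'
}
New-AzResourceGroupDeployment @deployment
```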
## Review deployed resources
-Explore the resources that were created with the virtual network by browsing the settings blades for **VNet1**.
+Explore the resources that you created with the virtual network by browsing through the settings panes for **VNet1**:
-1. On the **Overview** tab, you'll see the defined address space of **10.0.0.0/16**.
+- The **Overview** tab shows the defined address space of **10.0.0.0/16**.
-2. On the **Subnets** tab, you'll see the deployed subnets of **Subnet1** and **Subnet2** with the appropriate values from the template.
+- The **Subnets** tab shows the deployed subnets of **Subnet1** and **Subnet2** with the appropriate values from the template.
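You can also review the same details from Azure PowerShell. This sketch assumes that you kept **VNet1** as the virtual network name and **CreateVNetQS-rg** as the resource group:

```azurepowershell
## Show the virtual network, including its address space.
$vnet = Get-AzVirtualNetwork -Name 'VNet1' -ResourceGroupName 'CreateVNetQS-rg'
$vnet

## List the subnets and their address prefixes.
$vnet.Subnets | Format-Table Name, AddressPrefix
```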
To learn about the JSON syntax and properties for a virtual network in a template, see [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks). ## Clean up resources
-When you no longer need the resources that you created with the virtual network, delete the resource group. This removes the virtual network and all the related resources.
+When you no longer need the resources that you created with the virtual network, delete the resource group. This action removes the virtual network and all the related resources.
To delete the resource group, call the `Remove-AzResourceGroup` cmdlet:
Remove-AzResourceGroup -Name <your resource group name>
``` ## Next steps
-In this quickstart, you deployed an Azure virtual network with two subnets. To learn more about Azure virtual networks, continue to the tutorial for virtual networks.
+
+In this quickstart, you deployed an Azure virtual network with two subnets. To learn more about Azure virtual networks, continue to the tutorial for virtual networks:
> [!div class="nextstepaction"] > [Filter network traffic](tutorial-filter-network-traffic.md)
virtual-network Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-terraform.md
Title: 'Quickstart: Create virtual network and subnets using Terraform'
+ Title: 'Quickstart: Use Terraform to create a virtual network'
-description: In this quickstart, you create an Azure Virtual Network and Subnets using Terraform. You use Azure CLI to verify the resources.
+description: In this quickstart, you create an Azure virtual network and subnets by using Terraform. You use the Azure CLI to verify the resources.
Last updated 1/19/2024
content_well_notification: - AI-contribution
-# Customer intent: As a Network Administrator, I want to create a virtual network and subnets using Terraform.
+# Customer intent: As a network administrator, I want to create a virtual network and subnets by using Terraform.
ai-usage: ai-assisted
-# Quickstart: Create an Azure Virtual Network and subnets using Terraform
+# Quickstart: Use Terraform to create a virtual network
-In this quickstart, you learn about a Terraform script that creates an Azure resource group and a virtual network with two subnets. The names of the resource group and the virtual network are generated using a random pet name with a prefix. The script also outputs the names of the created resources.
+In this quickstart, you learn about a Terraform script that creates an Azure resource group and a virtual network with two subnets. The script generates the names of the resource group and the virtual network by using a random pet name with a prefix. The script also shows the names of the created resources in its output.
-The script uses the Azure Resource Manager (azurerm) and Random (random) providers. The azurerm provider is used to interact with Azure resources, while the random provider is used to generate random pet names for the resources.
+The script uses the Azure Resource Manager (`azurerm`) provider to interact with Azure resources. It uses the Random (`random`) provider to generate random pet names for the resources.
The script creates the following resources: -- A resource group: A container that holds related resources for an Azure solution.
+- A resource group: A container that holds related resources for an Azure solution.
-- A virtual network: A fundamental building block for your private network in Azure.
+- A virtual network: A fundamental building block for your private network in Azure.
- Two subnets: Segments of a virtual network's IP address range where you can place groups of isolated resources.
The script creates the following resources:
- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+- [Installation and configuration of Terraform](/azure/developer/terraform/quickstart-configure).
## Implement the Terraform code > [!NOTE]
-> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-create-two-subnets). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-create-two-subnets/TestRecord.md).
->
-> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+> The sample code for this article is in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-create-two-subnets). You can view the log file that contains the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-virtual-network-create-two-subnets/TestRecord.md).
+>
+> For more articles and sample code that show how to use Terraform to manage Azure resources, see the [documentation page for Terraform on Azure](/azure/terraform).
-1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+1. Create a directory in which to test and run the sample Terraform code, and make it the current directory.
-1. Create a file named `main.tf` and insert the following code:
+1. Create a file named *main.tf* and insert the following code:
:::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/main.tf":::
-1. Create a file named `outputs.tf` and insert the following code:
+1. Create a file named *outputs.tf* and insert the following code:
:::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/outputs.tf":::
-1. Create a file named `providers.tf` and insert the following code:
+1. Create a file named *providers.tf* and insert the following code:
:::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/providers.tf":::
-1. Create a file named `variables.tf` and insert the following code:
+1. Create a file named *variables.tf* and insert the following code:
:::code language="Terraform" source="~/terraform_samples/quickstart/101-virtual-network-create-two-subnets/variables.tf"::: - ## Initialize Terraform [!INCLUDE [terraform-init.md](~/azure-dev-docs-pr/articles/terraform/includes/terraform-init.md)]
The script creates the following resources:
#### [Azure CLI](#tab/azure-cli)
-1. Get the Azure resource group name.
+1. Get the Azure resource group name:
```console resource_group_name=$(terraform output -raw resource_group_name) ```
-1. Get the virtual network name.
+1. Get the virtual network name:
```console virtual_network_name=$(terraform output -raw virtual_network_name) ```
-1. Use [`az network vnet show`](/cli/azure/network/vnet#az-network-vnet-show) to display the details of your newly created virtual network.
+1. Use [`az network vnet show`](/cli/azure/network/vnet#az-network-vnet-show) to display the details of your newly created virtual network:
```azurecli az network vnet show \
The script creates the following resources:
## Troubleshoot Terraform on Azure
- For more information about troubleshooting Terraform, see [Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot).
+For information about troubleshooting Terraform, see [Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot).
## Next steps
-> [!div class="nextstepaction"]
-> [Learn more about using Terraform in Azure](/azure/terraform)
+> [!div class="nextstepaction"]
+> [Learn more about using Terraform on Azure](/azure/terraform)