Updates from: 07/13/2024 01:12:42
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/monitoring.md
Previously updated : 04/16/2024 Last updated : 07/12/2024 # Monitoring Azure OpenAI Service
The following table summarizes the current subset of metrics available in Azure
|Metric|Category|Aggregation|Description|Dimensions|
|---|---|---|---|---|
|`Azure OpenAI Requests`|HTTP|Count|Total number of calls made to the Azure OpenAI API over a period of time. Applies to PayGo, PTU, and PTU-managed SKUs.| `ApiName`, `ModelDeploymentName`,`ModelName`,`ModelVersion`, `OperationName`, `Region`, `StatusCode`, `StreamType`|
+| `Active Tokens` | Usage | Total tokens minus cached tokens over a period of time. Applies to PTU and PTU-managed deployments. Use this metric to understand your TPS or TPM based utilization for PTUs and compare to your benchmarks for target TPS or TPM for your scenarios. | `ModelDeploymentName`,`ModelName`,`ModelVersion` |
| `Generated Completion Tokens` | Usage | Sum | Number of generated tokens (output) from an Azure OpenAI model. Applies to PayGo, PTU, and PTU-managed SKUs. | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
| `Processed FineTuned Training Hours` | Usage | Sum | Number of training hours processed on an Azure OpenAI fine-tuned model. | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
| `Processed Inference Tokens` | Usage | Sum | Number of inference tokens processed by an Azure OpenAI model. Calculated as prompt tokens (input) + generated tokens. Applies to PayGo, PTU, and PTU-managed SKUs. | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
|`Prompt Token Cache Match Rate` | HTTP | Average | **Provisioned-managed only**. The prompt token cache hit ratio expressed as a percentage. | `ModelDeploymentName`, `ModelVersion`, `ModelName`, `Region`|
|`Time to Response` | HTTP | Average | Recommended latency (responsiveness) measure for streaming requests. **Applies to PTU and PTU-managed deployments**. This metric does not apply to standard pay-go deployments. Calculated as the time taken for the first response to appear after a user sends a prompt, as measured by the API gateway. This number increases as the prompt size increases and/or the cache hit size reduces. Note: this metric is an approximation, as measured latency is heavily dependent on multiple factors, including concurrent calls and overall workload pattern. In addition, it does not account for any client-side latency that may exist between your client and the API endpoint. Refer to your own logging for optimal latency tracking.| `ModelDeploymentName`, `ModelName`, and `ModelVersion` |
+
+## Configure diagnostic settings
+
All of the metrics are exportable with [diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings). To analyze logs and metrics data with Azure Monitor Log Analytics queries, you need to configure diagnostic settings for your Azure OpenAI resource and your Log Analytics workspace.
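A minimal Azure CLI sketch of that configuration is shown below. The setting name, the `<...>` resource IDs, and the `AllMetrics`/`allLogs` category groups are placeholders and assumptions for illustration, not values from this article; adjust them for your environment.

```azurecli-interactive
# Route all metrics and resource logs from an Azure OpenAI resource to a Log Analytics workspace.
# Both resource IDs below are placeholders.
az monitor diagnostic-settings create \
    --name "openai-diagnostics" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices/accounts/<azure-openai-resource>" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<log-analytics-workspace>" \
    --metrics '[{"category": "AllMetrics", "enabled": true}]' \
    --logs '[{"categoryGroup": "allLogs", "enabled": true}]'
```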
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
Previously updated : 1/18/2024 Last updated : 5/21/2024 ms.devlang: csharp
You can specify one or multiple audio files when creating a transcription. We re
## Supported audio formats and codecs
-The batch transcription API supports many different formats and codecs, such as:
+The batch transcription API (and [fast transcription API](./fast-transcription-create.md)) supports many different formats and codecs, such as:
- WAV
- MP3
ai-services Batch Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md
Previously updated : 4/15/2024 Last updated : 5/21/2024 zone_pivot_groups: speech-cli-rest # Customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
ai-services Batch Transcription Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-get.md
Previously updated : 1/18/2024 Last updated : 5/21/2024 zone_pivot_groups: speech-cli-rest
ai-services Batch Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription.md
Previously updated : 1/18/2024 Last updated : 5/21/2024 ms.devlang: csharp
ai-services Fast Transcription Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/fast-transcription-create.md
+
+ Title: Use the fast transcription API - Speech service
+
+description: Learn how to use Azure AI Speech for fast transcriptions, where you submit audio and get transcription results much faster than real-time audio.
+++++ Last updated : 7/12/2024
+# Customer intent: As a user who implements audio transcription, I want to create transcriptions as quickly as possible.
++
+# Use the fast transcription API (preview) with Azure AI Speech
+
+> [!NOTE]
+> This feature is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+>
+> Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview. This preview version is subject to change and is not recommended for production use. It will be retired without notice 90 days after a successor preview version or the general availability (GA) of the API.
+
+Fast transcription API is used to transcribe audio files and return results synchronously, much faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as:
+
+- Quick audio or video transcription, subtitles, and editing.
+- Video dubbing
+
+> [!TIP]
+> Try out fast transcription in [Azure AI Studio](https://aka.ms/fasttranscription/studio).
+
+## Prerequisites
+
+- An Azure AI Speech resource in one of the regions where the fast transcription API is available. The supported regions are: Central India, East US, Southeast Asia, and West Europe. For more information about regions supported for other Speech service features, see [Speech service regions](./regions.md).
+- An audio file (less than 2 hours long and less than 200 MB in size) in one of the formats and codecs supported by the batch transcription API. For more information about supported audio formats, see [supported audio formats](./batch-transcription-audio-data.md#supported-audio-formats-and-codecs).
+
+## Use the fast transcription API
+
+The fast transcription API is a REST API that uses multipart/form-data to submit audio files for transcription. The API returns the transcription results synchronously.
+
+Construct the request body according to the following instructions:
+
+- Set the required `locales` property. This value should match the expected locale of the audio data to transcribe. The supported locales are: en-US, es-ES, es-MX, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, pt-BR, and zh-CN. You can only specify one locale per transcription request.
+- Optionally, set the `profanityFilterMode` property to specify how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. The `profanityFilterMode` property works the same way as via the [batch transcription API](./batch-transcription.md).
+- Optionally, set the `channels` property to specify the zero-based indices of the channels to be transcribed separately. If not specified, multiple channels are merged and transcribed jointly. Only up to two channels are supported. If you want to transcribe the channels from a stereo audio file separately, you need to specify `[0,1]` here. Otherwise, stereo audio will be merged to mono, mono audio will be left as is, and only a single channel will be transcribed. In either of the latter cases, the output has no channel indices for the transcribed text, since only a single audio stream is transcribed.
+
+Make a multipart/form-data POST request to the `transcriptions` endpoint with the audio file and the request body properties. The following example shows how to create a transcription using the fast transcription API.
+
+- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourServiceRegion` with your Speech resource region.
+- Replace `YourAudioFile` with the path to your audio file.
+- Set the form definition properties as previously described.
+
+```azurecli-interactive
+curl --location 'https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-05-15-preview' \
+--header 'Content-Type: multipart/form-data' \
+--header 'Accept: application/json' \
+--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey' \
+--form 'audio=@"YourAudioFile"' \
+--form 'definition="{
+ \"locales\":[\"en-US\"],
+ \"profanityFilterMode\": \"Masked\",
+ \"channels\": [0,1]}"'
+```
+
+The response will include `duration`, `channel`, and more. The `combinedPhrases` property contains the full transcriptions for each channel separately. For example, everything the first speaker said is in the first element of the `combinedPhrases` array, and everything the second speaker said is in the second element of the array.
+
+```json
+{
+ "duration": 185079,
+ "combinedPhrases": [
+ {
+ "channel": 0,
+ "text": "Hello. Thank you for calling Contoso. Who am I speaking with today? Hi, Mary. Are you calling because you need health insurance? Great. If you can answer a few questions, we can get you signed up in the Jiffy. So what's your full name? Got it. And what's the best callback number in case we get disconnected? Yep, that'll be fine. Got it. So to confirm, it's 234-554-9312. Excellent. Let's get some additional information for your application. Do you have a job? OK, so then you have a Social Security number as well. OK, and what is your Social Security number please? Sorry, what was that, a 25 or a 225? You cut out for a bit. Alright, thank you so much. And could I have your e-mail address please? Great. Uh That is the last question. So let me take your information and I'll be able to get you signed up right away. Thank you for calling Contoso and I'll be able to get you signed up immediately. One of our agents will call you back in about 24 hours or so to confirm your application. Absolutely. If you need anything else, please give us a call at 1-800-555-5564, extension 123. Thank you very much for calling Contoso. Uh Yes, of course. So the default is a digital membership card, but we can send you a physical card if you prefer. Uh, yeah. Absolutely. I've made a note on your file. You're very welcome. Thank you for calling Contoso and have a great day."
+ },
+ {
+ "channel": 1,
+ "text": "Hi, my name is Mary Rondo. I'm trying to enroll myself with Contuso. Yes, yeah, I'm calling to sign up for insurance. Okay. So Mary Beth Rondo, last name is R like Romeo, O like Ocean, N like Nancy D, D like Dog, and O like Ocean again. Rondo. I only have a cell phone so I can give you that. Sure, so it's 234-554 and then 9312. Yep, that's right. Uh Yes, I am self-employed. Yes, I do. Uh Sure, so it's 412256789. It's double two, so 412, then another two, then five. Yeah, it's maryrondo@gmail.com. So my first and last name at gmail.com. No periods, no dashes. That was quick. Thank you. Actually, so I have one more question. I'm curious, will I be getting a physical card as proof of coverage? uh Yes. Could you please mail it to me when it's ready? I'd like to have it shipped to, are you ready for my address? So it's 2660 Unit A on Maple Avenue SE, Lansing, and then zip code is 48823. Awesome. Thanks so much."
+ }
+ ],
+ "phrases": [
+ {
+ "channel": 0,
+ "offset": 720,
+ "duration": 480,
+ "text": "Hello.",
+ "words": [
+ {
+ "text": "Hello.",
+ "offset": 720,
+ "duration": 480
+ }
+ ],
+ "locale": "en-US",
+ "confidence": 0.9177142
+ },
+ {
+ "channel": 0,
+ "offset": 1200,
+ "duration": 1120,
+ "text": "Thank you for calling Contoso.",
+ "words": [
+ {
+ "text": "Thank",
+ "offset": 1200,
+ "duration": 200
+ },
+ {
+ "text": "you",
+ "offset": 1400,
+ "duration": 80
+ },
+ {
+ "text": "for",
+ "offset": 1480,
+ "duration": 120
+ },
+ {
+ "text": "calling",
+ "offset": 1600,
+ "duration": 240
+ },
+ {
+ "text": "Contoso.",
+ "offset": 1840,
+ "duration": 480
+ }
+ ],
+ "locale": "en-US",
+ "confidence": 0.9177142
+ },
+ {
+ "channel": 0,
+ "offset": 2320,
+ "duration": 1120,
+ "text": "Who am I speaking with today?",
+ "words": [
+ {
+ "text": "Who",
+ "offset": 2320,
+ "duration": 160
+ },
+ {
+ "text": "am",
+ "offset": 2480,
+ "duration": 80
+ },
+ {
+ "text": "I",
+ "offset": 2560,
+ "duration": 80
+ },
+ {
+ "text": "speaking",
+ "offset": 2640,
+ "duration": 320
+ },
+ {
+ "text": "with",
+ "offset": 2960,
+ "duration": 160
+ },
+ {
+ "text": "today?",
+ "offset": 3120,
+ "duration": 320
+ }
+ ],
+ "locale": "en-US",
+ "confidence": 0.9177142
+ },
+ // More transcription results removed for brevity
+ // {...},
+ {
+ "channel": 1,
+ "offset": 4480,
+ "duration": 1600,
+ "text": "Hi, my name is Mary Rondo.",
+ "words": [
+ {
+ "text": "Hi,",
+ "offset": 4480,
+ "duration": 400
+ },
+ {
+ "text": "my",
+ "offset": 4880,
+ "duration": 120
+ },
+ {
+ "text": "name",
+ "offset": 5000,
+ "duration": 120
+ },
+ {
+ "text": "is",
+ "offset": 5120,
+ "duration": 160
+ },
+ {
+ "text": "Mary",
+ "offset": 5280,
+ "duration": 240
+ },
+ {
+ "text": "Rondo.",
+ "offset": 5520,
+ "duration": 560
+ }
+ ],
+ "locale": "en-US",
+ "confidence": 0.8989456
+ },
+ {
+ "channel": 1,
+ "offset": 6080,
+ "duration": 1920,
+ "text": "I'm trying to enroll myself with Contuso.",
+ "words": [
+ {
+ "text": "I'm",
+ "offset": 6080,
+ "duration": 160
+ },
+ {
+ "text": "trying",
+ "offset": 6240,
+ "duration": 200
+ },
+ {
+ "text": "to",
+ "offset": 6440,
+ "duration": 80
+ },
+ {
+ "text": "enroll",
+ "offset": 6520,
+ "duration": 200
+ },
+ {
+ "text": "myself",
+ "offset": 6720,
+ "duration": 360
+ },
+ {
+ "text": "with",
+ "offset": 7080,
+ "duration": 120
+ },
+ {
+ "text": "Contuso.",
+ "offset": 7200,
+ "duration": 800
+ }
+ ],
+ "locale": "en-US",
+ "confidence": 0.8989456
+ },
+ // More transcription results removed for brevity
+ // {...},
+ ]
+}
+```
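For example, here's a small sketch for pulling each channel's full transcript out of the response. It assumes the JSON response was saved to a local file named `response.json` and that the `jq` tool is installed; neither is part of this article.

```bash
# Print "<channel>: <full transcript>" for every entry in combinedPhrases.
# response.json is a placeholder file holding the API response shown above.
jq -r '.combinedPhrases[] | "\(.channel): \(.text)"' response.json
```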
+
+## Related content
+
+- [Speech to text quickstart](./get-started-speech-to-text.md)
+- [Batch transcription API](./batch-transcription.md)
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md
With [real-time speech to text](get-started-speech-to-text.md), the audio is tra
- Dictation
- Voice agents
+## Fast transcription API (Preview)
+
+Fast transcription API is used to transcribe audio files and return results synchronously, much faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as:
+
+- Quick audio or video transcription, subtitles, and editing.
+- Video dubbing
+
+> [!NOTE]
+> Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview.
+
+To get started with fast transcription, see [use the fast transcription API (preview)](fast-transcription-create.md).
+
### Batch transcription

[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
Title: Speech service quotas and limits description: Quick reference, detailed description, and best practices on the quotas and limits for the Speech service in Azure AI services.-++ Previously updated : 1/22/2024- Last updated : 5/21/2024+ # Speech service quotas and limits
You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
| Concurrent request limit - custom endpoint | 1 <br/><br/>This limit isn't adjustable. | 100 (default value)<br/><br/>The rate is adjustable for Standard (S0) resources. See [more explanations](#detailed-description-quota-adjustment-and-best-practices), [best practices](#general-best-practices-to-mitigate-throttling-during-autoscaling), and [adjustment instructions](#speech-to-text-increase-real-time-speech-to-text-concurrent-request-limit). |
| Max audio length for [real-time diarization](./get-started-stt-diarization.md) | N/A | 240 minutes per file |
+#### Fast transcription
+
+| Quota | Free (F0) | Standard (S0) |
+|--|--|--|
+| Maximum audio input file size | N/A | 200 MB |
+| Maximum audio length | N/A | 120 minutes per file |
+| Maximum requests per minute | N/A | 300 |
+
#### Batch transcription

| Quota | Free (F0) | Standard (S0) |
ai-services Speech To Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-to-text.md
Previously updated : 1/22/2024 Last updated : 5/21/2024 # What is speech to text?
-In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure AI services. Speech to text can be used for [real-time](#real-time-speech-to-text) or [batch transcription](#batch-transcription) of audio streams into text.
+In this overview, you learn about the benefits and capabilities of the speech to text feature of the Speech service, which is part of Azure AI services. Speech to text can be used for [real-time](#real-time-speech-to-text), [batch transcription](#batch-transcription-api), or [fast transcription](./fast-transcription-create.md) of audio streams into text.
> [!NOTE]
-> To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+> To compare pricing of [real-time](#real-time-speech-to-text) to [batch transcription](#batch-transcription-api), see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
For a full list of available speech to text languages, see [Language and voice support](language-support.md?tabs=stt).
With real-time speech to text, the audio is transcribed as speech is recognized
Real-time speech to text is available via the [Speech SDK](speech-sdk.md) and the [Speech CLI](spx-overview.md).
-## Batch transcription
+## Fast transcription (Preview)
+
+Fast transcription API is used to transcribe audio files and return results synchronously, much faster than real-time audio. Use fast transcription in scenarios where you need the transcript of an audio recording as quickly as possible with predictable latency, such as:
+
+- Quick audio or video transcription, subtitles, and editing.
+- Video dubbing
+
+> [!NOTE]
+> Fast transcription API is only available via the speech to text REST API version 2024-05-15-preview and later.
+
+To get started with fast transcription, see [use the fast transcription API (preview)](fast-transcription-create.md).
+
+## Batch transcription API
[Batch transcription](batch-transcription.md) is used to transcribe a large amount of audio in storage. You can point to audio files with a shared access signature (SAS) URI and asynchronously receive transcription results. Use batch transcription for applications that need to transcribe audio in bulk such as:

- Transcriptions, captions, or subtitles for prerecorded audio
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Phi-3-mini-128k-instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-in
[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]
-Azure AI Studio implements a default configuration of [Azure AI Content Safety](../../ai-services/content-safety/overview.md) text moderation filters for harmful content (hate, self-harm, sexual, and violence) in language models deployed with MaaS. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../../ai-services/content-safety/concepts/harm-categories.md). Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you may be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering for individual serverless endpoints when you first deploy a language model or in the deployment details page by clicking the content filtering toggle. You may be at higher risk of exposing users to harmful content if you turn off content filters.
+ ### Network isolation for models deployed via Serverless APIs
ai-studio Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-api.md
model = ChatCompletionsClient(
) ```
+If you are using an endpoint with support for Microsoft Entra ID, you can create your client as follows:
+
+```python
+import os
+from azure.ai.inference import ChatCompletionsClient
+from azure.identity import DefaultAzureCredential
+
+model = ChatCompletionsClient(
+ endpoint=os.environ["AZUREAI_ENDPOINT_URL"],
+ credential=DefaultAzureCredential(),
+)
+```
+
# [JavaScript](#tab/javascript)

Install the package `@azure-rest/ai-inference` using npm:
const client = new ModelClient(
); ```
+For endpoints with support for Microsoft Entra ID, you can create your client as follows:
+
+```javascript
+import ModelClient from "@azure-rest/ai-inference";
+import { isUnexpected } from "@azure-rest/ai-inference";
+import { DefaultAzureCredential } from "@azure/identity";
+
+const client = new ModelClient(
+ process.env.AZUREAI_ENDPOINT_URL,
+ new DefaultAzureCredential()
+);
+```
+
# [REST](#tab/rest)

Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
The Azure AI Model Inference API specifies a set of modalities and parameters th
By setting a header `extra-parameters: pass-through`, the API will attempt to pass any unknown parameter directly to the underlying model. If the model can handle that parameter, the request completes.
-The following example shows a request passing the parameter `safe_prompt` supported by Mistral-Large, which isn't specified in the Azure AI Model Inference API:
+The following example shows a request passing the parameter `safe_prompt` supported by Mistral-Large, which isn't specified in the Azure AI Model Inference API.
# [Python](#tab/python)

```python
+from azure.ai.inference.models import SystemMessage, UserMessage
+ response = model.complete( messages=[ SystemMessage(content="You are a helpful assistant."),
response = model.complete(
"safe_mode": True } )+
+print(response.choices[0].message.content)
```
+> [!TIP]
+> When using the Azure AI Inference SDK, passing extra parameters using `model_extras` configures the request with `extra-parameters: pass-through` automatically for you.
+
# [JavaScript](#tab/javascript)

```javascript
var response = await client.path("/chat/completions").post({
safe_mode: true } });+
+console.log(response.choices[0].message.content)
```

# [REST](#tab/rest)
extra-parameters: pass-through
-> [!TIP]
-> The default value for `extra-parameters` is `error` which returns an error if an extra parameter is indicated in the payload. Alternatively, you can set `extra-parameters: ignore` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to completes anyway. A typical example of this is indicating `seed` parameter.
+> [!NOTE]
+> The default value for `extra-parameters` is `error`, which returns an error if an extra parameter is indicated in the payload. Alternatively, you can set `extra-parameters: drop` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to complete anyway. A typical example of this is indicating the `seed` parameter.
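As a rough illustration of the `drop` behavior, the following curl sketch sends a `seed` value while asking the service to drop parameters the model doesn't support. The endpoint URL and key environment variables, the bearer authorization header, and the omission of any `api-version` query parameter are assumptions for illustration, not details from this reference; follow the REST examples in this article for the exact request shape.

```bash
# Ask the service to silently drop unknown parameters (here, "seed") instead of returning an error.
# AZUREAI_ENDPOINT_URL and AZUREAI_ENDPOINT_KEY are placeholder environment variables.
curl --location "$AZUREAI_ENDPOINT_URL/chat/completions" \
  --header "Authorization: Bearer $AZUREAI_ENDPOINT_KEY" \
  --header "Content-Type: application/json" \
  --header "extra-parameters: drop" \
  --data '{
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "How many languages are in the world?" }
    ],
    "seed": 42
  }'
```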
### Models with disparate set of capabilities
The following example shows the response for a chat completion request indicatin
# [Python](#tab/python)

```python
-from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.ai.inference.models import SystemMessage, UserMessage, ChatCompletionsResponseFormat
from azure.core.exceptions import HttpResponseError
import json
aks Create Postgresql Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-postgresql-ha.md
+
+ Title: 'Create infrastructure for deploying a highly available PostgreSQL database on AKS'
+description: Create the infrastructure needed to deploy a highly available PostgreSQL database on AKS using the CloudNativePG operator.
+ Last updated : 06/07/2024+++++
+# Create infrastructure for deploying a highly available PostgreSQL database on AKS
+
+In this article, you create the infrastructure needed to deploy a highly available PostgreSQL database on AKS using the [CloudNativePG (CNPG)](https://cloudnative-pg.io/) operator.
+
+## Before you begin
+
+* Review the deployment overview and make sure you meet all the prerequisites in [How to deploy a highly available PostgreSQL database on AKS with Azure CLI][postgresql-ha-deployment-overview].
+* [Set environment variables](#set-environment-variables) for use throughout this guide.
+* [Install the required extensions](#install-required-extensions).
+
+## Set environment variables
+
+Set the following environment variables for use throughout this guide:
+
+```bash
+export SUFFIX=$(cat /dev/urandom | LC_ALL=C tr -dc 'a-z0-9' | fold -w 8 | head -n 1)
+export LOCAL_NAME="cnpg"
+export TAGS="owner=user"
+export RESOURCE_GROUP_NAME="rg-${LOCAL_NAME}-${SUFFIX}"
+export PRIMARY_CLUSTER_REGION="westus3"
+export AKS_PRIMARY_CLUSTER_NAME="aks-primary-${LOCAL_NAME}-${SUFFIX}"
+export AKS_PRIMARY_MANAGED_RG_NAME="rg-${LOCAL_NAME}-primary-aksmanaged-${SUFFIX}"
+export AKS_PRIMARY_CLUSTER_FED_CREDENTIAL_NAME="pg-primary-fedcred1-${LOCAL_NAME}-${SUFFIX}"
+export AKS_PRIMARY_CLUSTER_PG_DNSPREFIX=$(echo $(echo "a$(openssl rand -hex 5 | cut -c1-11)"))
+export AKS_UAMI_CLUSTER_IDENTITY_NAME="mi-aks-${LOCAL_NAME}-${SUFFIX}"
+export AKS_CLUSTER_VERSION="1.29"
+export PG_NAMESPACE="cnpg-database"
+export PG_SYSTEM_NAMESPACE="cnpg-system"
+export PG_PRIMARY_CLUSTER_NAME="pg-primary-${LOCAL_NAME}-${SUFFIX}"
+export PG_PRIMARY_STORAGE_ACCOUNT_NAME="hacnpgpsa${SUFFIX}"
+export PG_STORAGE_BACKUP_CONTAINER_NAME="backups"
+export ENABLE_AZURE_PVC_UPDATES="true"
+export MY_PUBLIC_CLIENT_IP=$(dig +short myip.opendns.com @resolver3.opendns.com)
+```
+
+## Install required extensions
+
+The `aks-preview`, `k8s-extension` and `amg` extensions provide more functionality for managing Kubernetes clusters and querying Azure resources. Install these extensions using the following [`az extension add`][az-extension-add] commands:
+
+```bash
+az extension add --upgrade --name aks-preview --yes --allow-preview true
+az extension add --upgrade --name k8s-extension --yes --allow-preview false
+az extension add --upgrade --name amg --yes --allow-preview false
+```
+
+As a prerequisite for using kubectl, first install [Krew][install-krew], and then install the [CNPG plugin][cnpg-plugin]. This enables you to manage the PostgreSQL operator using the subsequent commands.
+
+```bash
+(
+ set -x; cd "$(mktemp -d)" &&
+ OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
+ ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
+ KREW="krew-${OS}_${ARCH}" &&
+ curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
+ tar zxvf "${KREW}.tar.gz" &&
+ ./"${KREW}" install krew
+)
+
+export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
+
+kubectl krew install cnpg
+```
+
+## Create a resource group
+
+Create a resource group to hold the resources you create in this guide using the [`az group create`][az-group-create] command.
+
+```bash
+az group create \
+ --name $RESOURCE_GROUP_NAME \
+ --location $PRIMARY_CLUSTER_REGION \
+ --tags $TAGS \
+ --query 'properties.provisioningState' \
+ --output tsv
+```
+
+## Create a user-assigned managed identity
+
+In this section, you create a user-assigned managed identity (UAMI) to allow the CNPG PostgreSQL to use an AKS workload identity to access Azure Blob Storage. This configuration allows the PostgreSQL cluster on AKS to connect to Azure Blob Storage without a secret.
+
+1. Create a user-assigned managed identity using the [`az identity create`][az-identity-create] command.
+
+ ```bash
+ AKS_UAMI_WI_IDENTITY=$(az identity create \
+ --name $AKS_UAMI_CLUSTER_IDENTITY_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --location $PRIMARY_CLUSTER_REGION \
+ --output json)
+ ```
+
+1. Enable AKS workload identity and generate a service account to use later in this guide using the following commands:
+
+ ```bash
+ export AKS_UAMI_WORKLOAD_OBJECTID=$( \
+ echo "${AKS_UAMI_WI_IDENTITY}" | jq -r '.principalId')
+ export AKS_UAMI_WORKLOAD_RESOURCEID=$( \
+ echo "${AKS_UAMI_WI_IDENTITY}" | jq -r '.id')
+ export AKS_UAMI_WORKLOAD_CLIENTID=$( \
+ echo "${AKS_UAMI_WI_IDENTITY}" | jq -r '.clientId')
+
+ echo "ObjectId: $AKS_UAMI_WORKLOAD_OBJECTID"
+ echo "ResourceId: $AKS_UAMI_WORKLOAD_RESOURCEID"
+ echo "ClientId: $AKS_UAMI_WORKLOAD_CLIENTID"
+ ```
+
+The object ID is a unique identifier for the client ID (also known as the application ID) that uniquely identifies a security principal of type *Application* within the Microsoft Entra ID tenant. The resource ID is a unique identifier to manage and locate a resource in Azure. These values are required to enable AKS workload identity.
+
+The CNPG operator automatically generates a service account called *postgres* that you use later in the guide to create a federated credential that enables OAuth access from PostgreSQL to Azure Storage.
+
+## Create a storage account in the primary region
+
+1. Create an object storage account to store PostgreSQL backups in the primary region using the [`az storage account create`][az-storage-account-create] command.
+
+ ```bash
+ az storage account create \
+ --name $PG_PRIMARY_STORAGE_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --location $PRIMARY_CLUSTER_REGION \
+ --sku Standard_ZRS \
+ --kind StorageV2 \
+ --query 'provisioningState' \
+ --output tsv
+ ```
+
+1. Create the storage container to store the Write Ahead Logs (WAL) and regular PostgreSQL on-demand and scheduled backups using the [`az storage container create`][az-storage-container-create] command.
+
+ ```bash
+ az storage container create \
+ --name $PG_STORAGE_BACKUP_CONTAINER_NAME \
+ --account-name $PG_PRIMARY_STORAGE_ACCOUNT_NAME \
+ --auth-mode login
+ ```
+
+ Example output:
+
+ ```output
+ {
+ "created": true
+ }
+ ```
+
+ > [!NOTE]
+ > If you encounter the error message: `The request may be blocked by network rules of storage account. Please check network rule set using 'az storage account show -n accountname --query networkRuleSet'. If you want to change the default action to apply when no rule matches, please use 'az storage account update'`, verify your user permissions for Azure Blob Storage and, if necessary, elevate your role to `Storage Blob Data Owner` using the commands provided below, and then retry the [`az storage container create`][az-storage-container-create] command.
+
+ ```bash
+ export USER_ID=$(az ad signed-in-user show --query id --output tsv)
+
+ export STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID=$(az storage account show \
+ --name $PG_PRIMARY_STORAGE_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --query "id" \
+ --output tsv)
+
+ az role assignment list --scope $STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID --output table
+
+ az role assignment create \
+ --assignee-object-id $USER_ID \
+ --assignee-principal-type User \
+ --scope $STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID \
+ --role "Storage Blob Data Owner" \
+ --output tsv
+ ```
+
+## Assign RBAC to storage accounts
+
+To enable backups, the PostgreSQL cluster needs to read and write to an object store. The PostgreSQL cluster running on AKS uses a workload identity to access the storage account via the CNPG operator configuration parameter [`inheritFromAzureAD`][inherit-from-azuread].
+
+1. Get the primary resource ID for the storage account using the [`az storage account show`][az-storage-account-show] command.
+
+ ```bash
+ export STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID=$(az storage account show \
+ --name $PG_PRIMARY_STORAGE_ACCOUNT_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --query "id" \
+ --output tsv)
+
+ echo $STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID
+ ```
+
+1. Assign the "Storage Blob Data Contributor" Azure built-in role to the object ID of the UAMI associated with the managed identity for each AKS cluster, scoped to the storage account resource ID, using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```bash
+ az role assignment create \
+ --role "Storage Blob Data Contributor" \
+ --assignee-object-id $AKS_UAMI_WORKLOAD_OBJECTID \
+ --assignee-principal-type ServicePrincipal \
+ --scope $STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID \
+ --query "id" \
+ --output tsv
+ ```
+
+## Set up monitoring infrastructure
+
+In this section, you deploy an instance of Azure Managed Grafana, an Azure Monitor workspace, and an Azure Monitor Log Analytics workspace to enable monitoring of the PostgreSQL cluster. You also store references to the created monitoring infrastructure to use as input during the AKS cluster creation process later in the guide. This section might take some time to complete.
+
+> [!NOTE]
+> Azure Managed Grafana instances and AKS clusters are billed independently. For more pricing information, see [Azure Managed Grafana pricing][azure-managed-grafana-pricing].
+
+1. Create an Azure Managed Grafana instance using the [`az grafana create`][az-grafana-create] command.
+
+ ```bash
+ export GRAFANA_PRIMARY="grafana-${LOCAL_NAME}-${SUFFIX}"
+
+ export GRAFANA_RESOURCE_ID=$(az grafana create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $GRAFANA_PRIMARY \
+ --location $PRIMARY_CLUSTER_REGION \
+ --zone-redundancy Enabled \
+ --tags $TAGS \
+ --query "id" \
+ --output tsv)
+
+ echo $GRAFANA_RESOURCE_ID
+ ```
+
+1. Create an Azure Monitor workspace using the [`az monitor account create`][az-monitor-account-create] command.
+
+ ```bash
+ export AMW_PRIMARY="amw-${LOCAL_NAME}-${SUFFIX}"
+
+ export AMW_RESOURCE_ID=$(az monitor account create \
+ --name $AMW_PRIMARY \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --location $PRIMARY_CLUSTER_REGION \
+ --tags $TAGS \
+ --query "id" \
+ --output tsv)
+
+ echo $AMW_RESOURCE_ID
+ ```
+
+1. Create an Azure Monitor Log Analytics workspace using the [`az monitor log-analytics workspace create`][az-monitor-log-analytics-workspace-create] command.
+
+ ```bash
+ export ALA_PRIMARY="ala-${LOCAL_NAME}-${SUFFIX}"
+
+ export ALA_RESOURCE_ID=$(az monitor log-analytics workspace create \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --workspace-name $ALA_PRIMARY \
+ --location $PRIMARY_CLUSTER_REGION \
+ --query "id" \
+ --output tsv)
+
+ echo $ALA_RESOURCE_ID
+ ```
+
+## Create the AKS cluster to host the PostgreSQL cluster
+
+In this section, you create a multizone AKS cluster with a system node pool. The AKS cluster hosts the PostgreSQL cluster primary replica and two standby replicas, each aligned to a different availability zone to enable zonal redundancy.
+
+You also add a user node pool to the AKS cluster to host the PostgreSQL cluster. Using a separate node pool allows for control over the Azure VM SKUs used for PostgreSQL and enables the AKS system pool to optimize performance and costs. You apply a label to the user node pool that you can reference for node selection when deploying the CNPG operator later in this guide. This section might take some time to complete.
+
+1. Create an AKS cluster using the [`az aks create`][az-aks-create] command.
+
+ ```bash
+ export SYSTEM_NODE_POOL_VMSKU="standard_d2s_v3"
+ export USER_NODE_POOL_NAME="postgres"
+ export USER_NODE_POOL_VMSKU="standard_d4s_v3"
+
+ az aks create \
+ --name $AKS_PRIMARY_CLUSTER_NAME \
+ --tags $TAGS \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --location $PRIMARY_CLUSTER_REGION \
+ --generate-ssh-keys \
+ --node-resource-group $AKS_PRIMARY_MANAGED_RG_NAME \
+ --enable-managed-identity \
+ --assign-identity $AKS_UAMI_WORKLOAD_RESOURCEID \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --network-dataplane cilium \
+ --nodepool-name systempool \
+ --enable-oidc-issuer \
+ --enable-workload-identity \
+ --enable-cluster-autoscaler \
+ --min-count 2 \
+ --max-count 3 \
+ --node-vm-size $SYSTEM_NODE_POOL_VMSKU \
+ --enable-azure-monitor-metrics \
+ --azure-monitor-workspace-resource-id $AMW_RESOURCE_ID \
+ --grafana-resource-id $GRAFANA_RESOURCE_ID \
+ --api-server-authorized-ip-ranges $MY_PUBLIC_CLIENT_IP \
+ --tier standard \
+ --kubernetes-version $AKS_CLUSTER_VERSION \
+ --zones 1 2 3 \
+ --output table
+ ```
+
+2. Add a user node pool to the AKS cluster using the [`az aks nodepool add`][az-aks-node-pool-add] command.
+
+ ```bash
+ az aks nodepool add \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --cluster-name $AKS_PRIMARY_CLUSTER_NAME \
+ --name $USER_NODE_POOL_NAME \
+ --enable-cluster-autoscaler \
+ --min-count 3 \
+ --max-count 6 \
+ --node-vm-size $USER_NODE_POOL_VMSKU \
+ --zones 1 2 3 \
+ --labels workload=postgres \
+ --output table
+ ```
+
+> [!NOTE]
+> If you receive the error message `"(OperationNotAllowed) Operation is not allowed: Another operation (Updating) is in progress, please wait for it to finish before starting a new operation."` when adding the AKS node pool, please wait a few minutes for the AKS cluster operations to complete and then run the `az aks nodepool add` command.
+
+## Connect to the AKS cluster and create namespaces
+
+In this section, you get the AKS cluster credentials, which serve as the keys that allow you to authenticate and interact with the cluster. Once connected, you create two namespaces: one for the CNPG controller manager services and one for the PostgreSQL cluster and its related services.
+
+1. Get the AKS cluster credentials using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```bash
+ az aks get-credentials \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $AKS_PRIMARY_CLUSTER_NAME \
+ --output none
+ ```
+
+2. Create the namespaces for the CNPG controller manager services and for the PostgreSQL cluster and its related services by using the [`kubectl create namespace`][kubectl-create-namespace] command.
+
+ ```bash
+ kubectl create namespace $PG_NAMESPACE --context $AKS_PRIMARY_CLUSTER_NAME
+ kubectl create namespace $PG_SYSTEM_NAMESPACE --context $AKS_PRIMARY_CLUSTER_NAME
+ ```
+
+## Update the monitoring infrastructure
+
+The Azure Monitor workspace for Managed Prometheus and Azure Managed Grafana are automatically linked to the AKS cluster for metrics and visualization during the cluster creation process. In this section, you enable log collection with AKS Container insights and validate that Managed Prometheus is scraping metrics and Container insights is ingesting logs.
+
+1. Enable Container insights monitoring on the AKS cluster using the [`az aks enable-addons`][az-aks-enable-addons] command.
+
+ ```bash
+ az aks enable-addons \
+ --addon monitoring \
+ --name $AKS_PRIMARY_CLUSTER_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --workspace-resource-id $ALA_RESOURCE_ID \
+ --output table
+ ```
+
+2. Validate that Managed Prometheus is scraping metrics and Container insights is ingesting logs from the AKS cluster by inspecting the DaemonSet using the [`kubectl get`][kubectl-get] command and the [`az aks show`][az-aks-show] command.
+
+ ```bash
+ kubectl get ds ama-metrics-node \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace=kube-system
+
+ kubectl get ds ama-logs \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace=kube-system
+
+ az aks show \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --name $AKS_PRIMARY_CLUSTER_NAME \
+ --query addonProfiles
+ ```
+
+ Your output should resemble the following example output, with *six* nodes total (three for the system node pool and three for the PostgreSQL node pool) and the Container insights showing `"enabled": true`:
+
+ ```output
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
+ ama-metrics-node 6 6 6 6 6 <none>
+
+ NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
+ ama-logs 6 6 6 6 6 <none>
+
+ {
+ "omsagent": {
+ "config": {
+ "logAnalyticsWorkspaceResourceID": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-cnpg-9vbin3p8/providers/Microsoft.OperationalInsights/workspaces/ala-cnpg-9vbin3p8",
+ "useAADAuth": "true"
+ },
+ "enabled": true,
+ "identity": null
+ }
+ }
+ ```
+
+## Create a public static IP for PostgreSQL cluster ingress
+
+To validate deployment of the PostgreSQL cluster and use client PostgreSQL tooling, such as *psql* and *PgAdmin*, you need to expose the primary and read-only replicas to ingress. In this section, you create an Azure public IP resource that you later supply to an Azure load balancer to expose PostgreSQL endpoints for query.
+
+1. Get the name of the AKS cluster node resource group using the [`az aks show`][az-aks-show] command.
+
+ ```bash
+ export AKS_PRIMARY_CLUSTER_NODERG_NAME=$(az aks show \
+ --name $AKS_PRIMARY_CLUSTER_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --query nodeResourceGroup \
+ --output tsv)
+
+ echo $AKS_PRIMARY_CLUSTER_NODERG_NAME
+ ```
+
+2. Create the public IP address using the [`az network public-ip create`][az-network-public-ip-create] command.
+
+ ```bash
+ export AKS_PRIMARY_CLUSTER_PUBLICIP_NAME="$AKS_PRIMARY_CLUSTER_NAME-pip"
+
+ az network public-ip create \
+ --resource-group $AKS_PRIMARY_CLUSTER_NODERG_NAME \
+ --name $AKS_PRIMARY_CLUSTER_PUBLICIP_NAME \
+ --location $PRIMARY_CLUSTER_REGION \
+ --sku Standard \
+ --zone 1 2 3 \
+ --allocation-method static \
+ --output table
+ ```
+
+3. Get the newly created public IP address using the [`az network public-ip show`][az-network-public-ip-show] command.
+
+ ```bash
+ export AKS_PRIMARY_CLUSTER_PUBLICIP_ADDRESS=$(az network public-ip show \
+ --resource-group $AKS_PRIMARY_CLUSTER_NODERG_NAME \
+ --name $AKS_PRIMARY_CLUSTER_PUBLICIP_NAME \
+ --query ipAddress \
+ --output tsv)
+
+ echo $AKS_PRIMARY_CLUSTER_PUBLICIP_ADDRESS
+ ```
+
+4. Get the resource ID of the node resource group using the [`az group show`][az-group-show] command.
+
+ ```bash
+ export AKS_PRIMARY_CLUSTER_NODERG_NAME_SCOPE=$(az group show --name \
+ $AKS_PRIMARY_CLUSTER_NODERG_NAME \
+ --query id \
+ --output tsv)
+
+ echo $AKS_PRIMARY_CLUSTER_NODERG_NAME_SCOPE
+ ```
+
+5. Assign the "Network Contributor" role to the UAMI object ID using the node resource group scope using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```bash
+ az role assignment create \
+ --assignee-object-id ${AKS_UAMI_WORKLOAD_OBJECTID} \
+ --assignee-principal-type ServicePrincipal \
+ --role "Network Contributor" \
+ --scope ${AKS_PRIMARY_CLUSTER_NODERG_NAME_SCOPE}
+ ```
+
+## Install the CNPG operator in the AKS cluster
+
+In this section, you install the CNPG operator in the AKS cluster using Helm or a YAML manifest.
+
+### [Helm](#tab/helm)
+
+1. Add the CNPG Helm repo using the [`helm repo add`][helm-repo-add] command.
+
+ ```bash
+ helm repo add cnpg https://cloudnative-pg.github.io/charts
+ ```
+
+2. Upgrade the CNPG Helm repo and install it on the AKS cluster using the [`helm upgrade`][helm-upgrade] command with the `--install` flag.
+
+ ```bash
+ helm upgrade --install cnpg \
+ --namespace $PG_SYSTEM_NAMESPACE \
+ --create-namespace \
+ --kube-context=$AKS_PRIMARY_CLUSTER_NAME \
+ cnpg/cloudnative-pg
+ ```
+
+3. Verify the operator installation on the AKS cluster using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get deployment \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_SYSTEM_NAMESPACE cnpg-cloudnative-pg
+ ```
+
+### [YAML](#tab/yaml)
+
+1. Install the CNPG operator on the AKS cluster using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_SYSTEM_NAMESPACE \
+ --server-side -f \
+ https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.23/releases/cnpg-1.23.1.yaml
+ ```
+
+2. Verify the operator installation on the AKS cluster using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get deployment \
+ --namespace $PG_SYSTEM_NAMESPACE cnpg-controller-manager \
+ --context $AKS_PRIMARY_CLUSTER_NAME
+ ```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy a highly available PostgreSQL database on the AKS cluster][deploy-postgresql]
+
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+* Ken Kilty | Principal TPM
+* Russell de Pina | Principal TPM
+* Adrian Joian | Senior Customer Engineer
+* Jenny Hayes | Senior Content Developer
+* Carol Smith | Senior Content Developer
+* Erin Schaffer | Content Developer 2
+
+<!-- LINKS -->
+[az-identity-create]: /cli/azure/identity#az-identity-create
+[az-grafana-create]: /cli/azure/grafana#az-grafana-create
+[postgresql-ha-deployment-overview]: ./postgresql-ha-overview.md
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-group-create]: /cli/azure/group#az_group_create
+[az-storage-account-create]: /cli/azure/storage/account#az_storage_account_create
+[az-storage-container-create]: /cli/azure/storage/container#az_storage_container_create
+[inherit-from-azuread]: https://cloudnative-pg.io/documentation/1.23/appendixes/object_stores/#azure-blob-storage
+[az-storage-account-show]: /cli/azure/storage/account#az_storage_account_show
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-monitor-account-create]: /cli/azure/monitor/account#az_monitor_account_create
+[az-monitor-log-analytics-workspace-create]: /cli/azure/monitor/log-analytics/workspace#az_monitor_log_analytics_workspace_create
+[azure-managed-grafana-pricing]: https://azure.microsoft.com/pricing/details/managed-grafana/
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-node-pool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[kubectl-create-namespace]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_namespace/
+[az-aks-enable-addons]: /cli/azure/aks#az_aks_enable_addons
+[kubectl-get]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-network-public-ip-create]: /cli/azure/network/public-ip#az_network_public_ip_create
+[az-network-public-ip-show]: /cli/azure/network/public-ip#az_network_public_ip_show
+[az-group-show]: /cli/azure/group#az_group_show
+[helm-repo-add]: https://helm.sh/docs/helm/helm_repo_add/
+[helm-upgrade]: https://helm.sh/docs/helm/helm_upgrade/
+[kubectl-apply]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/
+[deploy-postgresql]: ./deploy-postgresql-ha.md
+[install-krew]: https://krew.sigs.k8s.io/
+[cnpg-plugin]: https://cloudnative-pg.io/documentation/current/kubectl-plugin/#using-krew
aks Deploy Postgresql Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-postgresql-ha.md
+
+ Title: 'Deploy a highly available PostgreSQL database on AKS with Azure CLI'
+description: In this article, you deploy a highly available PostgreSQL database on AKS using the CloudNativePG operator.
+ Last updated : 06/07/2024+++++
+# Deploy a highly available PostgreSQL database on AKS
+
+In this article, you deploy a highly available PostgreSQL database on AKS.
+
+* If you haven't already created the required infrastructure for this deployment, follow the steps in [Create infrastructure for deploying a highly available PostgreSQL database on AKS][create-infrastructure] to get set up, and then you can return to this article.
+
+## Create secret for bootstrap app user
+
+1. Generate a secret to validate the PostgreSQL deployment by interactive login for a bootstrap app user using the [`kubectl create secret`][kubectl-create-secret] command.
+
+ ```bash
+ PG_DATABASE_APPUSER_SECRET=$(echo -n | openssl rand -base64 16)
+
+ kubectl create secret generic db-user-pass \
+ --from-literal=username=app \
+ --from-literal=password="${PG_DATABASE_APPUSER_SECRET}" \
+ --namespace $PG_NAMESPACE \
+ --context $AKS_PRIMARY_CLUSTER_NAME
+ ```
+
+1. Validate that the secret was successfully created using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get secret db-user-pass --namespace $PG_NAMESPACE --context $AKS_PRIMARY_CLUSTER_NAME
+ ```
+
+## Set environment variables for the PostgreSQL cluster
+
+* Deploy a ConfigMap to set environment variables for the PostgreSQL cluster using the following [`kubectl apply`][kubectl-apply] command:
+
+ ```bash
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME -n $PG_NAMESPACE -f -
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: cnpg-controller-manager-config
+ data:
+ ENABLE_AZURE_PVC_UPDATES: 'true'
+ EOF
+ ```
+
+## Install the Prometheus PodMonitors
+
+Prometheus creates PodMonitors for the CNPG instances using a set of default recording rules stored on the CNPG GitHub samples repo. In a production environment, these rules would be modified as needed.
+
+1. Add the Prometheus Community Helm repo using the [`helm repo add`][helm-repo-add] command.
+
+ ```bash
+ helm repo add prometheus-community \
+ https://prometheus-community.github.io/helm-charts
+ ```
+
+2. Upgrade the Prometheus Community Helm repo and install it on the primary cluster using the [`helm upgrade`][helm-upgrade] command with the `--install` flag.
+
+ ```bash
+ helm upgrade --install \
+ --namespace $PG_NAMESPACE \
+ -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/main/docs/src/samples/monitoring/kube-stack-config.yaml \
+ prometheus-community \
+ prometheus-community/kube-prometheus-stack \
+ --kube-context=$AKS_PRIMARY_CLUSTER_NAME
+ ```
+
+Verify that the pod monitor is created.
+
+```bash
+kubectl --namespace $PG_NAMESPACE \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ get podmonitors.monitoring.coreos.com \
+ $PG_PRIMARY_CLUSTER_NAME \
+ -o yaml
+```
+
+## Create a federated credential
+
+In this section, you create a federated identity credential for PostgreSQL backup to allow CNPG to use AKS workload identity to authenticate to the storage account destination for backups. The CNPG operator creates a Kubernetes service account with the same name as the cluster name used in the CNPG Cluster deployment manifest.
+
+1. Get the OIDC issuer URL of the cluster using the [`az aks show`][az-aks-show] command.
+
+ ```bash
+ export AKS_PRIMARY_CLUSTER_OIDC_ISSUER="$(az aks show \
+ --name $AKS_PRIMARY_CLUSTER_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --query "oidcIssuerProfile.issuerUrl" \
+ --output tsv)"
+ ```
+
+2. Create a federated identity credential using the [`az identity federated-credential create`][az-identity-federated-credential-create] command.
+
+ ```bash
+ az identity federated-credential create \
+ --name $AKS_PRIMARY_CLUSTER_FED_CREDENTIAL_NAME \
+ --identity-name $AKS_UAMI_CLUSTER_IDENTITY_NAME \
+ --resource-group $RESOURCE_GROUP_NAME --issuer "${AKS_PRIMARY_CLUSTER_OIDC_ISSUER}" \
+ --subject system:serviceaccount:"${PG_NAMESPACE}":"${PG_PRIMARY_CLUSTER_NAME}" \
+ --audience api://AzureADTokenExchange
+ ```
+
+## Deploy a highly available PostgreSQL cluster
+
+In this section, you deploy a highly available PostgreSQL cluster using the [CNPG Cluster custom resource definition (CRD)][cluster-crd].
+
+The following table outlines the key properties set in the YAML deployment manifest for the Cluster CRD:
+
+| Property | Definition |
+| | |
+| `inheritedMetadata` | Specific to the CNPG operator. Metadata is inherited by all objects related to the cluster. |
+| `annotations: service.beta.kubernetes.io/azure-dns-label-name` | DNS label for use when exposing the read-write and read-only Postgres cluster endpoints. |
+| `labels: azure.workload.identity/use: "true"` | Indicates that AKS should inject workload identity dependencies into the pods hosting the PostgreSQL cluster instances. |
+| `topologySpreadConstraints` | Require different zones and different nodes with label `"workload=postgres"`. |
+| `resources` | Configures a Quality of Service (QoS) class of *Guaranteed*. In a production environment, these values are key for maximizing usage of the underlying node VM and vary based on the Azure VM SKU used. |
+| `bootstrap` | Specific to the CNPG operator. Initializes with an empty app database. |
+| `storage` / `walStorage` | Specific to the CNPG operator. Defines storage templates for the PersistentVolumeClaims (PVCs) for data and log storage. It's also possible to specify storage for tablespaces to shard out for increased IOPs. |
+| `replicationSlots` | Specific to the CNPG operator. Enables replication slots for high availability. |
+| `postgresql` | Specific to the CNPG operator. Maps settings for `postgresql.conf`, `pg_hba.conf`, and `pg_ident.conf`. |
+| `serviceAccountTemplate` | Contains the template needed to generate the service accounts and maps the AKS federated identity credential to the UAMI to enable AKS workload identity authentication from the pods hosting the PostgreSQL instances to external Azure resources. |
+| `barmanObjectStore` | Specific to the CNPG operator. Configures the barman-cloud tool suite using AKS workload identity for authentication to the Azure Blob Storage object store. |
+
+1. Deploy the PostgreSQL cluster with the Cluster CRD using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME -n $PG_NAMESPACE -v 9 -f -
+ apiVersion: postgresql.cnpg.io/v1
+ kind: Cluster
+ metadata:
+ name: $PG_PRIMARY_CLUSTER_NAME
+ spec:
+ inheritedMetadata:
+ annotations:
+ service.beta.kubernetes.io/azure-dns-label-name: $AKS_PRIMARY_CLUSTER_PG_DNSPREFIX
+ labels:
+ azure.workload.identity/use: "true"
+
+ instances: 3
+ startDelay: 30
+ stopDelay: 30
+ minSyncReplicas: 1
+ maxSyncReplicas: 1
+ replicationSlots:
+ highAvailability:
+ enabled: true
+ updateInterval: 30
+
+ topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: topology.kubernetes.io/zone
+ whenUnsatisfiable: DoNotSchedule
+ labelSelector:
+ matchLabels:
+ cnpg.io/cluster: $PG_PRIMARY_CLUSTER_NAME
+
+ affinity:
+ nodeSelector:
+ workload: postgres
+
+ resources:
+ requests:
+ memory: '8Gi'
+ cpu: 2
+ limits:
+ memory: '8Gi'
+ cpu: 2
+
+ bootstrap:
+ initdb:
+ database: appdb
+ owner: app
+ secret:
+ name: db-user-pass
+ dataChecksums: true
+
+ storage:
+ size: 2Gi
+ pvcTemplate:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: managed-csi-premium
+
+ walStorage:
+ size: 2Gi
+ pvcTemplate:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: managed-csi-premium
+
+ monitoring:
+ enablePodMonitor: true
+
+ postgresql:
+ parameters:
+ archive_timeout: '5min'
+ auto_explain.log_min_duration: '10s'
+ checkpoint_completion_target: '0.9'
+ checkpoint_timeout: '15min'
+ shared_buffers: '256MB'
+ effective_cache_size: '512MB'
+ pg_stat_statements.max: '1000'
+ pg_stat_statements.track: 'all'
+ max_connections: '400'
+ max_prepared_transactions: '400'
+ max_parallel_workers: '32'
+ max_parallel_maintenance_workers: '8'
+ max_parallel_workers_per_gather: '8'
+ max_replication_slots: '32'
+ max_worker_processes: '32'
+ wal_keep_size: '512MB'
+ max_wal_size: '1GB'
+ pg_hba:
+ - host all all all scram-sha-256
+
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ azure.workload.identity/client-id: "$AKS_UAMI_WORKLOAD_CLIENTID"
+ labels:
+ azure.workload.identity/use: "true"
+
+ backup:
+ barmanObjectStore:
+ destinationPath: "https://${PG_PRIMARY_STORAGE_ACCOUNT_NAME}.blob.core.windows.net/backups"
+ azureCredentials:
+ inheritFromAzureAD: true
+
+ retentionPolicy: '7d'
+ EOF
+ ```
+
+1. Validate that the primary PostgreSQL cluster was successfully created using the [`kubectl get`][kubectl-get] command. The CNPG Cluster CRD specified three instances, which can be validated by viewing running pods once each instance is brought up and joined for replication. Be patient as it can take some time for all three instances to come online and join the cluster.
+
+ ```bash
+ kubectl get pods --context $AKS_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE -l cnpg.io/cluster=$PG_PRIMARY_CLUSTER_NAME
+ ```
+
+ Example output
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ pg-primary-cnpg-r8c7unrw-1 1/1 Running 0 4m25s
+ pg-primary-cnpg-r8c7unrw-2 1/1 Running 0 3m33s
+ pg-primary-cnpg-r8c7unrw-3 1/1 Running 0 2m49s
+ ```
+
+### Validate the Prometheus PodMonitor is running
+
+The CNPG operator automatically creates a PodMonitor for the primary instance using the recording rules created during the [Prometheus Community installation](#install-the-prometheus-podmonitors).
+
+1. Validate the PodMonitor is running using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl --namespace $PG_NAMESPACE \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ get podmonitors.monitoring.coreos.com \
+ $PG_PRIMARY_CLUSTER_NAME \
+ --output yaml
+ ```
+
+ Example output
+
+ ```output
+ kind: PodMonitor
+ metadata:
+ annotations:
+ cnpg.io/operatorVersion: 1.23.1
+ ...
+ ```
+
+If you're using Azure Monitor for Managed Prometheus, you need to add another pod monitor that uses the custom group name. Managed Prometheus doesn't pick up the custom resource definitions (CRDs) from the Prometheus community. Aside from the group name, the CRDs are the same, which allows pod monitors for Managed Prometheus to exist side by side with those that use the community pod monitor. If you're not using Managed Prometheus, you can skip this step. Create a new pod monitor:
+
+```bash
+cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE -f -
+apiVersion: azmonitoring.coreos.com/v1
+kind: PodMonitor
+metadata:
+ name: cnpg-cluster-metrics-managed-prometheus
+ namespace: ${PG_NAMESPACE}
+ labels:
+ azure.workload.identity/use: "true"
+ cnpg.io/cluster: ${PG_PRIMARY_CLUSTER_NAME}
+spec:
+ selector:
+ matchLabels:
+ azure.workload.identity/use: "true"
+ cnpg.io/cluster: ${PG_PRIMARY_CLUSTER_NAME}
+ podMetricsEndpoints:
+ - port: metrics
+EOF
+```
+
+Verify that the pod monitor is created (note the difference in the group name).
+
+```bash
+kubectl --namespace $PG_NAMESPACE \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ get podmonitors.azmonitoring.coreos.com \
+ -l cnpg.io/cluster=$PG_PRIMARY_CLUSTER_NAME \
+ -o yaml
+```
+
+#### Option A - Azure Monitor Workspace
+
+Once you have deployed the Postgres cluster and the pod monitor, you can view the metrics using the Azure portal in an Azure Monitor workspace.
++
+#### Option B - Managed Grafana
+
+Alternatively, once you have deployed the Postgres cluster and pod monitors, you can create a metrics dashboard on the Managed Grafana instance created by the deployment script to visualize the metrics exported to the Azure Monitor workspace. You can access Managed Grafana via the Azure portal. Navigate to the Managed Grafana instance created by the deployment script and select the Endpoint link as shown here:
++
+Selecting the Endpoint link opens a new browser window where you can create dashboards on the Managed Grafana instance. Follow the instructions to [configure an Azure Monitor data source](../azure-monitor/visualize/grafana-plugin.md#configure-an-azure-monitor-data-source-plug-in), and then add visualizations to create a dashboard of metrics from the Postgres cluster. After setting up the data source connection, from the main menu, select the Data sources option, and you should see a set of data source options for the data source connection as shown here:
++
+For the Managed Prometheus option, select the option to build a dashboard to open the dashboard editor. Once the editor window opens, select the Add visualization option, and then select the Managed Prometheus option to browse the metrics from the Postgres cluster. Once you have selected the metric you want to visualize, select the Run queries button to fetch the data for the visualization as shown here:
++
+Select the Save button to add the panel to your dashboard. You can add other panels by selecting the Add button in the dashboard editor and repeating this process to visualize other metrics. After adding the metrics visualizations, you should have something that looks like this:
++
+Select the Save icon to save your dashboard.
+
+## Inspect the deployed PostgreSQL cluster
+
+Validate that PostgreSQL is spread across multiple availability zones by retrieving the AKS node details using the [`kubectl get`][kubectl-get] command.
+
+```bash
+kubectl get nodes \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --output json | jq '.items[] | {node: .metadata.name, zone: .metadata.labels."failure-domain.beta.kubernetes.io/zone"}'
+```
+
+Your output should resemble the following example output with the availability zone shown for each node:
+
+```output
+{
+ "node": "aks-postgres-15810965-vmss000000",
+ "zone": "westus3-1"
+}
+{
+ "node": "aks-postgres-15810965-vmss000001",
+ "zone": "westus3-2"
+}
+{
+ "node": "aks-postgres-15810965-vmss000002",
+ "zone": "westus3-3"
+}
+{
+ "node": "aks-systempool-26112968-vmss000000",
+ "zone": "westus3-1"
+}
+{
+ "node": "aks-systempool-26112968-vmss000001",
+ "zone": "westus3-2"
+}
+```
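+
+If you also want to confirm the spread at the pod level, a quick follow-up check such as the following (reusing the context, namespace, and cluster label from earlier steps) lists the node that each PostgreSQL instance is scheduled on, which you can cross-reference with the zones shown above:
+
+```bash
+# List the PostgreSQL pods and the nodes they're scheduled on (variables and label from earlier steps).
+kubectl get pods \
+    --context $AKS_PRIMARY_CLUSTER_NAME \
+    --namespace $PG_NAMESPACE \
+    -l cnpg.io/cluster=$PG_PRIMARY_CLUSTER_NAME \
+    --output wide
+```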
+
+## Connect to PostgreSQL and create a sample dataset
+
+In this section, you create a table and insert some data into the app database that was created in the CNPG Cluster CRD you deployed earlier. You use this data to validate the backup and restore operations for the PostgreSQL cluster.
+
+* Create a table and insert data into the app database using the following commands:
+
+ ```bash
+ kubectl cnpg psql $PG_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE
+ ```
+
+ ```sql
+    -- Run the following psql commands at the postgres=# prompt to create a small dataset
+
+ CREATE TABLE datasample (id INTEGER,name VARCHAR(255));
+ INSERT INTO datasample (id, name) VALUES (1, 'John');
+ INSERT INTO datasample (id, name) VALUES (2, 'Jane');
+ INSERT INTO datasample (id, name) VALUES (3, 'Alice');
+ SELECT COUNT(*) FROM datasample;
+
+    -- Type \q to exit psql
+ ```
+
+ Your output should resemble the following example output:
+
+ ```output
+ CREATE TABLE
+ INSERT 0 1
+ INSERT 0 1
+ INSERT 0 1
+ count
+ -
+ 3
+ (1 row)
+ ```
+## Connect to PostgreSQL read-only replicas
+
+* Connect to the PostgreSQL read-only replicas and validate the sample dataset using the following commands:
+
+ ```bash
+ kubectl cnpg psql --replica $PG_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE
+ ```
+
+ ```sql
+    -- Run at the postgres=# prompt:
+ SELECT pg_is_in_recovery();
+ ```
+
+ Example output
+
+ ```output
+ # pg_is_in_recovery
+ #-
+ # t
+ #(1 row)
+ ```
+
+ ```sql
+    -- Run at the postgres=# prompt:
+ SELECT COUNT(*) FROM datasample;
+ ```
+
+ Example output
+
+ ```output
+ # count
+ #-
+ # 3
+ #(1 row)
+
+ # Type \q to exit psql
+ ```
+
+## Set up on-demand and scheduled PostgreSQL backups using Barman
+
+1. Validate that the PostgreSQL cluster can access the Azure storage account specified in the CNPG Cluster CRD and that `Working WAL archiving` reports as `OK` using the following command:
+
+ ```bash
+ kubectl cnpg status $PG_PRIMARY_CLUSTER_NAME 1 \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+ ```
+
+ Example output
+
+ ```output
+ Continuous Backup status
+ First Point of Recoverability: Not Available
+ Working WAL archiving: OK
+ WALs waiting to be archived: 0
+ Last Archived WAL: 00000001000000000000000A @ 2024-07-09T17:18:13.982859Z
+ Last Failed WAL: -
+ ```
+
+1. Deploy an on-demand backup to Azure Storage, which uses the AKS workload identity integration, using the YAML file with the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ export BACKUP_ONDEMAND_NAME="on-demand-backup-1"
+
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE -v 9 -f -
+ apiVersion: postgresql.cnpg.io/v1
+ kind: Backup
+ metadata:
+ name: $BACKUP_ONDEMAND_NAME
+ spec:
+ method: barmanObjectStore
+ cluster:
+ name: $PG_PRIMARY_CLUSTER_NAME
+ EOF
+ ```
+
+1. Validate the status of the on-demand backup using the [`kubectl describe`][kubectl-describe] command.
+
+ ```bash
+ kubectl describe backup $BACKUP_ONDEMAND_NAME \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+ ```
+
+ Example output
+
+ ```output
+ Type Reason Age From Message
+ - - - -
+ Normal Starting 6s cloudnative-pg-backup Starting backup for cluster pg-primary-cnpg-r8c7unrw
+ Normal Starting 5s instance-manager Backup started
+ Normal Completed 1s instance-manager Backup completed
+ ```
+
+1. Validate that the cluster has a first point of recoverability using the following command:
+
+ ```bash
+ kubectl cnpg status $PG_PRIMARY_CLUSTER_NAME 1 \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+ ```
+
+ Example output
+
+ ```output
+ Continuous Backup status
+ First Point of Recoverability: 2024-06-05T13:47:18Z
+ Working WAL archiving: OK
+ ```
+
+1. Configure a scheduled backup for *every hour at 15 minutes past the hour* using the YAML file with the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ export BACKUP_SCHEDULED_NAME="scheduled-backup-1"
+
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE -v 9 -f -
+ apiVersion: postgresql.cnpg.io/v1
+ kind: ScheduledBackup
+ metadata:
+ name: $BACKUP_SCHEDULED_NAME
+ spec:
+      # Back up at minute 15 of every hour (CNPG uses a six-field cron schedule that starts with seconds)
+      schedule: "0 15 * ? * *"
+ backupOwnerReference: self
+ cluster:
+ name: $PG_PRIMARY_CLUSTER_NAME
+ EOF
+ ```
+
+1. Validate the status of the scheduled backup using the [`kubectl describe`][kubectl-describe] command.
+
+ ```bash
+ kubectl describe scheduledbackup $BACKUP_SCHEDULED_NAME \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+ ```
+
+1. View the backup files stored on Azure blob storage for the primary cluster using the [`az storage blob list`][az-storage-blob-list] command.
+
+ ```bash
+ az storage blob list \
+ --account-name $PG_PRIMARY_STORAGE_ACCOUNT_NAME \
+ --container-name backups \
+ --query "[*].name" \
+ --only-show-errors
+ ```
+
+ Your output should resemble the following example output, validating the backup was successful:
+
+ ```output
+ [
+ "pg-primary-cnpg-r8c7unrw/base/20240605T134715/backup.info",
+ "pg-primary-cnpg-r8c7unrw/base/20240605T134715/data.tar",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000001",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000002",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000003",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000003.00000028.backup",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000004",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000005",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000005.00000028.backup",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000006",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000007",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000008",
+ "pg-primary-cnpg-r8c7unrw/wals/0000000100000000/000000010000000000000009"
+ ]
+ ```
+
+## Restore the on-demand backup to a new PostgreSQL cluster
+
+In this section, you restore the on-demand backup you created earlier using the CNPG operator into a new instance using the bootstrap Cluster CRD. A single-instance cluster is used for simplicity. Remember that the AKS workload identity (via CNPG `inheritFromAzureAD`) accesses the backup files, and that the recovery cluster name is used to generate a new Kubernetes service account specific to the recovery cluster.
+
+You also create a second federated credential to map the new recovery cluster service account to the existing UAMI that has "Storage Blob Data Contributor" access to the backup files on blob storage.
+
+1. Create a second federated identity credential using the [`az identity federated-credential create`][az-identity-federated-credential-create] command.
+
+ ```bash
+ export PG_PRIMARY_CLUSTER_NAME_RECOVERED="$PG_PRIMARY_CLUSTER_NAME-recovered-db"
+
+ az identity federated-credential create \
+ --name $PG_PRIMARY_CLUSTER_NAME_RECOVERED \
+ --identity-name $AKS_UAMI_CLUSTER_IDENTITY_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --issuer "${AKS_PRIMARY_CLUSTER_OIDC_ISSUER}" \
+ --subject system:serviceaccount:"${PG_NAMESPACE}":"${PG_PRIMARY_CLUSTER_NAME_RECOVERED}" \
+ --audience api://AzureADTokenExchange
+ ```
+
+1. Restore the on-demand backup using the Cluster CRD with the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE -v 9 -f -
+ apiVersion: postgresql.cnpg.io/v1
+ kind: Cluster
+ metadata:
+ name: $PG_PRIMARY_CLUSTER_NAME_RECOVERED
+ spec:
+
+ inheritedMetadata:
+ annotations:
+ service.beta.kubernetes.io/azure-dns-label-name: $AKS_PRIMARY_CLUSTER_PG_DNSPREFIX
+ labels:
+ azure.workload.identity/use: "true"
+
+ instances: 1
+
+ affinity:
+ nodeSelector:
+ workload: postgres
+
+ # Point to cluster backup created earlier and stored on Azure Blob Storage
+ bootstrap:
+ recovery:
+ source: clusterBackup
+
+ storage:
+ size: 2Gi
+ pvcTemplate:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: managed-csi-premium
+ volumeMode: Filesystem
+
+ walStorage:
+ size: 2Gi
+ pvcTemplate:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: managed-csi-premium
+ volumeMode: Filesystem
+
+ serviceAccountTemplate:
+ metadata:
+ annotations:
+ azure.workload.identity/client-id: "$AKS_UAMI_WORKLOAD_CLIENTID"
+ labels:
+ azure.workload.identity/use: "true"
+
+ externalClusters:
+ - name: clusterBackup
+ barmanObjectStore:
+ destinationPath: https://${PG_PRIMARY_STORAGE_ACCOUNT_NAME}.blob.core.windows.net/backups
+ serverName: $PG_PRIMARY_CLUSTER_NAME
+ azureCredentials:
+ inheritFromAzureAD: true
+ wal:
+ maxParallel: 8
+ EOF
+ ```
+
+1. Connect to the recovered instance, and then validate that the dataset created on the original cluster (where the full backup was taken) is present using the following command:
+
+ ```bash
+ kubectl cnpg psql $PG_PRIMARY_CLUSTER_NAME_RECOVERED --namespace $PG_NAMESPACE
+ ```
+
+ ```sql
+ postgres=# SELECT COUNT(*) FROM datasample;
+ ```
+
+ Example output
+
+ ```output
+ # count
+ #-
+ # 3
+ #(1 row)
+
+ # Type \q to exit psql
+ ```
+
+1. Delete the recovered cluster using the following command:
+
+ ```bash
+ kubectl cnpg destroy $PG_PRIMARY_CLUSTER_NAME_RECOVERED 1 \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+ ```
+
+1. Delete the federated identity credential using the [`az identity federated-credential delete`][az-identity-federated-credential-delete] command.
+
+ ```bash
+ az identity federated-credential delete \
+ --name $PG_PRIMARY_CLUSTER_NAME_RECOVERED \
+ --identity-name $AKS_UAMI_CLUSTER_IDENTITY_NAME \
+ --resource-group $RESOURCE_GROUP_NAME \
+ --yes
+ ```
+
+## Expose the PostgreSQL cluster using a public load balancer
+
+In this section, you configure the necessary infrastructure to publicly expose the PostgreSQL read-write and read-only endpoints with IP source restrictions to the public IP address of your client workstation.
+
+You also retrieve the following endpoints from the Cluster IP service:
+
+* *One* primary read-write endpoint that ends with `*-rw`.
+* *Zero to N* (depending on the number of replicas) read-only endpoints that end with `*-ro`.
+* *One* replication endpoint that ends with `*-r`.
+
+1. Get the Cluster IP service details using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ kubectl get services \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE \
+ -l cnpg.io/cluster=$PG_PRIMARY_CLUSTER_NAME
+ ```
+
+ Example output
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ pg-primary-cnpg-sryti1qf-r ClusterIP 10.0.193.27 <none> 5432/TCP 3h57m
+ pg-primary-cnpg-sryti1qf-ro ClusterIP 10.0.237.19 <none> 5432/TCP 3h57m
+ pg-primary-cnpg-sryti1qf-rw ClusterIP 10.0.244.125 <none> 5432/TCP 3h57m
+ ```
+
+    > [!NOTE]
+    > There are three services: `*-r` for replication, `*-ro` for read-only connections to the replicas, and `*-rw` for read-write connections to the primary.
+
+1. Get the service details using the [`kubectl get`][kubectl-get] command.
+
+ ```bash
+ export PG_PRIMARY_CLUSTER_RW_SERVICE=$(kubectl get services \
+ --namespace $PG_NAMESPACE \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ -l "cnpg.io/cluster" \
+ --output json | jq -r '.items[] | select(.metadata.name | endswith("-rw")) | .metadata.name')
+
+ echo $PG_PRIMARY_CLUSTER_RW_SERVICE
+
+ export PG_PRIMARY_CLUSTER_RO_SERVICE=$(kubectl get services \
+ --namespace $PG_NAMESPACE \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ -l "cnpg.io/cluster" \
+ --output json | jq -r '.items[] | select(.metadata.name | endswith("-ro")) | .metadata.name')
+
+ echo $PG_PRIMARY_CLUSTER_RO_SERVICE
+ ```
+
+1. Configure the load balancer service with the following YAML files using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME -f -
+ apiVersion: v1
+ kind: Service
+ metadata:
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: $AKS_PRIMARY_CLUSTER_NODERG_NAME
+ service.beta.kubernetes.io/azure-pip-name: $AKS_PRIMARY_CLUSTER_PUBLICIP_NAME
+ service.beta.kubernetes.io/azure-dns-label-name: $AKS_PRIMARY_CLUSTER_PG_DNSPREFIX
+ name: cnpg-cluster-load-balancer-rw
+ namespace: "${PG_NAMESPACE}"
+ spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 5432
+ targetPort: 5432
+ selector:
+ cnpg.io/instanceRole: primary
+ cnpg.io/podRole: instance
+ loadBalancerSourceRanges:
+ - "$MY_PUBLIC_CLIENT_IP/32"
+ EOF
+
+ cat <<EOF | kubectl apply --context $AKS_PRIMARY_CLUSTER_NAME -f -
+ apiVersion: v1
+ kind: Service
+ metadata:
+ annotations:
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: $AKS_PRIMARY_CLUSTER_NODERG_NAME
+ service.beta.kubernetes.io/azure-pip-name: $AKS_PRIMARY_CLUSTER_PUBLICIP_NAME
+ service.beta.kubernetes.io/azure-dns-label-name: $AKS_PRIMARY_CLUSTER_PG_DNSPREFIX
+ name: cnpg-cluster-load-balancer-ro
+ namespace: "${PG_NAMESPACE}"
+ spec:
+ type: LoadBalancer
+ ports:
+ - protocol: TCP
+ port: 5433
+ targetPort: 5432
+ selector:
+ cnpg.io/instanceRole: replica
+ cnpg.io/podRole: instance
+ loadBalancerSourceRanges:
+ - "$MY_PUBLIC_CLIENT_IP/32"
+ EOF
+ ```
+
+1. Get the service details using the [`kubectl describe`][kubectl-describe] command.
+
+ ```bash
+ kubectl describe service cnpg-cluster-load-balancer-rw \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+
+ kubectl describe service cnpg-cluster-load-balancer-ro \
+ --context $AKS_PRIMARY_CLUSTER_NAME \
+ --namespace $PG_NAMESPACE
+
+ export AKS_PRIMARY_CLUSTER_ALB_DNSNAME="$(az network public-ip show \
+ --resource-group $AKS_PRIMARY_CLUSTER_NODERG_NAME \
+ --name $AKS_PRIMARY_CLUSTER_PUBLICIP_NAME \
+ --query "dnsSettings.fqdn" --output tsv)"
+
+ echo $AKS_PRIMARY_CLUSTER_ALB_DNSNAME
+ ```
+
+### Validate public PostgreSQL endpoints
+
+In this section, you validate that the Azure Load Balancer is properly set up using the static IP that you created earlier and is routing connections to the primary read-write and read-only replicas. You then use the psql CLI to connect to both endpoints.
+
+Remember that the primary read-write endpoint maps to TCP port 5432 and the read-only replica endpoints map to port 5433 to allow the same PostgreSQL DNS name to be used for readers and writers.
+
+> [!NOTE]
+> You need the value of the app user password for PostgreSQL basic auth that was generated earlier and stored in the `$PG_DATABASE_APPUSER_SECRET` environment variable.
+
+* Validate the public PostgreSQL endpoints using the following `psql` commands:
+
+ ```bash
+ echo "Public endpoint for PostgreSQL cluster: $AKS_PRIMARY_CLUSTER_ALB_DNSNAME"
+
+ # Query the primary, pg_is_in_recovery = false
+
+ psql -h $AKS_PRIMARY_CLUSTER_ALB_DNSNAME \
+ -p 5432 -U app -d appdb -W -c "SELECT pg_is_in_recovery();"
+ ```
+
+ Example output
+
+ ```output
+ pg_is_in_recovery
+ -
+ f
+ (1 row)
+ ```
+
+ ```bash
+ echo "Query a replica, pg_is_in_recovery = true"
+
+ psql -h $AKS_PRIMARY_CLUSTER_ALB_DNSNAME \
+ -p 5433 -U app -d appdb -W -c "SELECT pg_is_in_recovery();"
+ ```
+
+ Example output
+
+ ```output
+ pg_is_in_recovery
+ -
+ t
+ (1 row)
+ ```
+
+ When successfully connected to the primary read-write endpoint, the PostgreSQL function returns `f` for *false*, indicating that the current connection is writable.
+
+ When connected to a replica, the function returns `t` for *true*, indicating the database is in recovery and read-only.
+
+## Simulate an unplanned failover
+
+In this section, you trigger a failure by deleting the pod running the primary, which simulates a sudden crash or loss of network connectivity to the node hosting the PostgreSQL primary.
+
+1. Check the status of the running pod instances using the following command:
+
+ ```bash
+ kubectl cnpg status $PG_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE
+ ```
+
+ Example output
+
+ ```output
+ Name Current LSN Rep role Status Node
+ -- -- - --
+ pg-primary-cnpg-sryti1qf-1 0/9000060 Primary OK aks-postgres-32388626-vmss000000
+ pg-primary-cnpg-sryti1qf-2 0/9000060 Standby (sync) OK aks-postgres-32388626-vmss000001
+ pg-primary-cnpg-sryti1qf-3 0/9000060 Standby (sync) OK aks-postgres-32388626-vmss000002
+ ```
+
+1. Delete the primary pod using the [`kubectl delete`][kubectl-delete] command.
+
+ ```bash
+ PRIMARY_POD=$(kubectl get pod \
+ --namespace $PG_NAMESPACE \
+ --no-headers \
+ -o custom-columns=":metadata.name" \
+ -l role=primary)
+
+ kubectl delete pod $PRIMARY_POD --grace-period=1 --namespace $PG_NAMESPACE
+ ```
+
+1. Validate that the `pg-primary-cnpg-sryti1qf-2` pod instance is now the primary using the following command:
+
+ ```bash
+ kubectl cnpg status $PG_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE
+ ```
+
+ Example output
+
+ ```output
+ pg-primary-cnpg-sryti1qf-2 0/9000060 Primary OK aks-postgres-32388626-vmss000001
+ pg-primary-cnpg-sryti1qf-1 0/9000060 Standby (sync) OK aks-postgres-32388626-vmss000000
+ pg-primary-cnpg-sryti1qf-3 0/9000060 Standby (sync) OK aks-postgres-32388626-vmss000002
+ ```
+
+1. Reset the `pg-primary-cnpg-sryti1qf-1` pod instance as the primary using the following command:
+
+ ```bash
+ kubectl cnpg promote $PG_PRIMARY_CLUSTER_NAME 1 --namespace $PG_NAMESPACE
+ ```
+
+1. Validate that the pod instances have returned to their original state before the unplanned failover test using the following command:
+
+ ```bash
+ kubectl cnpg status $PG_PRIMARY_CLUSTER_NAME --namespace $PG_NAMESPACE
+ ```
+
+ Example output
+
+ ```output
+ Name Current LSN Rep role Status Node
+ -- -- - --
+ pg-primary-cnpg-sryti1qf-1 0/9000060 Primary OK aks-postgres-32388626-vmss000000
+ pg-primary-cnpg-sryti1qf-2 0/9000060 Standby (sync) OK aks-postgres-32388626-vmss000001
+ pg-primary-cnpg-sryti1qf-3 0/9000060 Standby (sync) OK aks-postgres-32388626-vmss000002
+ ```
+
+## Clean up resources
+
+* Once you're finished reviewing your deployment, delete all the resources you created in this guide using the [`az group delete`][az-group-delete] command.
+
+ ```bash
+ az group delete --resource-group $RESOURCE_GROUP_NAME --no-wait --yes
+ ```
+
+## Next steps
+
+In this how-to guide, you learned how to:
+
+* Use Azure CLI to create a multi-zone AKS cluster.
+* Deploy a highly available PostgreSQL cluster and database using the CNPG operator.
+* Set up monitoring for PostgreSQL using Prometheus and Grafana.
+* Deploy a sample dataset to the PostgreSQL database.
+* Perform PostgreSQL and AKS cluster upgrades.
+* Simulate a cluster interruption and PostgreSQL replica failover.
+* Perform a backup and restore of the PostgreSQL database.
+
+To learn more about how you can leverage AKS for your workloads, see [What is Azure Kubernetes Service (AKS)?][what-is-aks]
+
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+* Ken Kilty | Principal TPM
+* Russell de Pina | Principal TPM
+* Adrian Joian | Senior Customer Engineer
+* Jenny Hayes | Senior Content Developer
+* Carol Smith | Senior Content Developer
+* Erin Schaffer | Content Developer 2
+* Adam Sharif | Customer Engineer 2
+
+<!-- LINKS -->
+[helm-upgrade]: https://helm.sh/docs/helm/helm_upgrade/
+[create-infrastructure]: ./create-postgresql-ha.md
+[kubectl-create-secret]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret/
+[kubectl-get]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/
+[kubectl-apply]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/
+[helm-repo-add]: https://helm.sh/docs/helm/helm_repo_add/
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create
+[cluster-crd]: https://cloudnative-pg.io/documentation/1.23/cloudnative-pg.v1/#postgresql-cnpg-io-v1-ClusterSpec
+[kubectl-describe]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/
+[az-storage-blob-list]: /cli/azure/storage/blob/#az_storage_blob_list
+[az-identity-federated-credential-delete]: /cli/azure/identity/federated-credential#az_identity_federated_credential_delete
+[kubectl-delete]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_delete/
+[az-group-delete]: /cli/azure/group#az_group_delete
+[what-is-aks]: ./what-is-aks.md
aks Postgresql Ha Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/postgresql-ha-overview.md
+
+ Title: 'Overview of deploying a highly available PostgreSQL database on AKS with Azure CLI'
+description: Learn how to deploy a highly available PostgreSQL database on AKS using the CloudNativePG operator.
+ Last updated : 06/07/2024+++
+#Customer intent: As a developer or cluster operator, I want to deploy a highly available PostgreSQL database on AKS so I can see how to run a stateful database workload using the managed Kubernetes service in Azure.
+
+# Deploy a highly available PostgreSQL database on AKS with Azure CLI
+
+In this guide, you deploy a highly available PostgreSQL cluster that spans multiple Azure availability zones on AKS with Azure CLI.
+
+This article walks through the prerequisites for setting up a PostgreSQL cluster on [Azure Kubernetes Service (AKS)][what-is-aks] and provides an overview of the full deployment process and architecture.
+
+## Prerequisites
+
+* This guide assumes a basic understanding of [core Kubernetes concepts][core-kubernetes-concepts] and [PostgreSQL][postgresql].
+* You need the **Owner** or **User Access Administrator** and the **Contributor** [Azure built-in roles][azure-roles] on a subscription in your Azure account.
++
+* You also need the following resources installed (a quick way to check the installed versions is shown after this list):
+
+ * [Azure CLI](/cli/azure/install-azure-cli) version 2.56 or later.
+ * [Azure Kubernetes Service (AKS) preview extension][aks-preview].
+ * [jq][jq], version 1.5 or later.
+ * [kubectl][install-kubectl] version 1.21.0 or later.
+ * [Helm][install-helm] version 3.0.0 or later.
+ * [openssl][install-openssl] version 3.3.0 or later.
+ * [Visual Studio Code][install-vscode] or equivalent.
+ * [Krew][install-krew] version 0.4.4 or later.
+ * [kubectl CloudNativePG (CNPG) Plugin][cnpg-plugin].
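+
+As a quick sanity check, you can confirm the installed versions of the main command-line tools against the minimums above with commands like these:
+
+```bash
+# Print locally installed tool versions to compare against the prerequisites list.
+az version
+kubectl version --client
+helm version
+jq --version
+openssl version
+kubectl krew version
+```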
+
+## Deployment process
+
+In this guide, you learn how to:
+
+* Use Azure CLI to create a multi-zone AKS cluster.
+* Deploy a highly available PostgreSQL cluster and database using the [CNPG operator][cnpg-plugin].
+* Set up monitoring for PostgreSQL using Prometheus and Grafana.
+* Deploy a sample dataset to a PostgreSQL database.
+* Perform PostgreSQL and AKS cluster upgrades.
+* Simulate a cluster interruption and PostgreSQL replica failover.
+* Perform backup and restore of a PostgreSQL database.
+
+## Deployment architecture
+
+This diagram illustrates a PostgreSQL cluster setup with one primary replica and two read replicas managed by the [CloudNativePG (CNPG)](https://cloudnative-pg.io/) operator. The architecture provides a highly available PostgreSQL cluster running on AKS that can withstand a zone outage by failing over across replicas.
+
+Backups are stored on [Azure Blob Storage](/azure/storage/blobs/), providing another way to restore the database in the event of an issue with streaming replication from the primary replica.
++
+> [!NOTE]
+> The CNPG operator supports only *one database per cluster*. Plan accordingly for applications that require data separation at the database level.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Create the infrastructure to deploy a highly available PostgreSQL database on AKS using the CNPG operator][create-infrastructure]
+
+## Contributors
+
+*This article is maintained by Microsoft. It was originally written by the following contributors*:
+
+* Ken Kilty | Principal TPM
+* Russell de Pina | Principal TPM
+* Adrian Joian | Senior Customer Engineer
+* Jenny Hayes | Senior Content Developer
+* Carol Smith | Senior Content Developer
+* Erin Schaffer | Content Developer 2
+* Adam Sharif | Customer Engineer 2
+
+<!-- LINKS -->
+[what-is-aks]: ./what-is-aks.md
+[postgresql]: https://www.postgresql.org/
+[core-kubernetes-concepts]: ./concepts-clusters-workloads.md
+[azure-roles]: ../role-based-access-control/built-in-roles.md
+[aks-preview]: ./draft.md#install-the-aks-preview-azure-cli-extension
+[jq]: https://jqlang.github.io/jq/
+[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
+[install-helm]: https://helm.sh/docs/intro/install/
+[install-openssl]: https://www.openssl.org/
+[install-vscode]: https://code.visualstudio.com/Download
+[install-krew]: https://krew.sigs.k8s.io/
+[cnpg-plugin]: https://cloudnative-pg.io/documentation/current/kubectl-plugin/#using-krew
+[create-infrastructure]: ./create-postgresql-ha.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
AKS defines a generally available (GA) version as a version available in all reg
* The latest GA minor version released in AKS (which we refer to as *N*). * Two previous minor versions.
- * Each supported minor version also supports a maximum of two stable patches.
+ * Each supported minor version can have any number of patches available at a given time. AKS reserves the right to deprecate patches if a critical CVE or security vulnerability is detected. For awareness of patch availability and any ad hoc deprecations, refer to the version release notes and the [AKS release status webpage][aks-tracker].
AKS might also support preview versions, which are explicitly labeled and subject to [preview terms and conditions][preview-terms].
When a new minor version is introduced, the oldest minor version is deprecated a
When AKS releases 1.30, all the 1.27 versions go out of support 30 days later.
-AKS also supports a maximum of two **patch** releases of a given minor version. For example, given the following supported versions:
-
-```
-Current Supported Version List
-
-1.29.2, 1.29.1, 1.28.7, 1.28.6, 1.27.11, 1.27.10
-```
-
-If AKS releases `1.29.3` and `1.28.8`, the oldest patch versions are deprecated and removed, and the supported version list becomes:
-
-```
-New Supported Version List
--
-1.29.3, 1.29.2, 1.28.8, 1.28.7, 1.27.11, 1.27.10
-```
-
+AKS may support any number of **patches** based on upstream community release availability for a given minor version. AKS reserves the right to deprecate any of these patches at any given time due to a CVE or potential bug concern. You're always encouraged to use the latest patch for a minor version.
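+
+To see which minor versions and patches are currently available in a given region, you can query with the Azure CLI; the region name here is only an example:
+
+```bash
+# List the Kubernetes versions and patches currently available in a region (example region shown).
+az aks get-versions --location eastus --output table
+```
+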
## Platform support policy Platform support policy is a reduced support plan for certain unsupported Kubernetes versions. During platform support, customers only receive support from Microsoft for AKS/Azure platform related issues. Any issues related to Kubernetes functionality and components aren't supported.
api-center Discover Shadow Apis Dev Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/discover-shadow-apis-dev-proxy.md
+
+ Title: Tutorial - Discover shadow APIs using Dev Proxy
+description: In this tutorial, you learn how to discover shadow APIs in your apps using Dev Proxy and onboard them to API Center.
+++ Last updated : 07/12/2024+++
+# Tutorial - Discover shadow APIs using Dev Proxy
+
+Using Azure API Center, you catalog the APIs used in your organization. The catalog helps you tell which APIs you use, where each API is in its lifecycle, and who to contact if there are issues. In short, having an up-to-date catalog of APIs helps you improve your governance, compliance, and security posture.
+
+When building your app, especially if you're integrating new scenarios, you might be using APIs that aren't registered in Azure API Center. These APIs are called shadow APIs. They might be APIs that aren't yet registered, or they might be APIs that aren't meant to be used in your organization.
+
+One way to check for shadow APIs is by using [Dev Proxy](https://aka.ms/devproxy). Dev Proxy is an API simulator that intercepts and analyzes API requests from applications. One feature of Dev Proxy is checking if the intercepted API requests belong to APIs registered in API Center.
++
+## Before you start
+
+To detect shadow APIs, you need to have an [Azure API Center](/azure/api-center/) instance with information about the APIs that you use in your organization.
+
+### Copy API Center information
+
+From the Azure API Center instance Overview page, copy the **name** of the API Center instance, the name of the **resource group** and the **subscription ID**. You need this information to configure the Dev Proxy `ApiCenterOnboardingPlugin` so that it can connect to your Azure API Center instance.
++
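+
+If you prefer scripting over copying values from the portal, you can read the subscription ID with the Azure CLI (assuming you're already signed in); the API Center name and resource group are the ones you chose when you created the instance:
+
+```bash
+# Print the ID of the currently selected subscription.
+az account show --query id --output tsv
+```
+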
+## Configure Dev Proxy
+
+To check if your app is using shadow APIs, you need to enable the `ApiCenterOnboardingPlugin` in the Dev Proxy configuration file. To create a report of APIs that your app uses, add a reporter.
+
+### Enable the `ApiCenterOnboardingPlugin`
+
+In the `devproxyrc.json` file, add the following configuration:
+
+```json
+{
+ "$schema": "https://raw.githubusercontent.com/microsoft/dev-proxy/main/schemas/v0.19.0/rc.schema.json",
+ "plugins": [
+ {
+ "name": "ApiCenterOnboardingPlugin",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll",
+ "configSection": "apiCenterOnboardingPlugin"
+ }
+ ],
+ "urlsToWatch": [
+ "https://jsonplaceholder.typicode.com/*"
+ ],
+ "apiCenterOnboardingPlugin": {
+ "subscriptionId": "00000000-0000-0000-0000-000000000000",
+ "resourceGroupName": "demo",
+ "serviceName": "contoso-api-center",
+ "workspaceName": "default",
+ "createApicEntryForNewApis": false
+ }
+}
+```
+
+In the `subscriptionId`, `resourceGroupName`, and `serviceName` properties, provide the information about your Azure API Center instance.
+
+In the `urlsToWatch` property, specify the URLs that your app uses.
+
+> [!TIP]
+> Use the [Dev Proxy Toolkit](https://aka.ms/devproxy/toolkit) Visual Studio Code extension to easily manage Dev Proxy configuration.
+
+### Add a reporter
+
+The `ApiCenterOnboardingPlugin` produces a report of APIs that your app is using. To view this report, add a reporter to your Dev Proxy configuration file. Dev Proxy offers several [reporters](/microsoft-cloud/dev/dev-proxy/technical-reference/overview#reporters). In this example, you use the [plain-text reporter](/microsoft-cloud/dev/dev-proxy/technical-reference/plaintextreporter).
+
+Update your `devproxyrc.json` file with a reference to the plain-text reporter:
+
+```json
+{
+ "$schema": "https://raw.githubusercontent.com/microsoft/dev-proxy/main/schemas/v0.19.0/rc.schema.json",
+ "plugins": [
+ {
+ "name": "ApiCenterOnboardingPlugin",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll",
+ "configSection": "apiCenterOnboardingPlugin"
+ },
+ {
+ "name": "PlainTextReporter",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
+ }
+ ],
+ "urlsToWatch": [
+ "https://jsonplaceholder.typicode.com/*"
+ ],
+ "apiCenterOnboardingPlugin": {
+ "subscriptionId": "00000000-0000-0000-0000-000000000000",
+ "resourceGroupName": "demo",
+ "serviceName": "contoso-api-center",
+ "workspaceName": "default",
+ "createApicEntryForNewApis": false
+ }
+}
+```
+
+## Check if your app is using shadow APIs
+
+To check if your app is using shadow APIs, connect to your Azure subscription, run Dev Proxy, and let it intercept API requests from your app. Dev Proxy then compares the information about the API requests with the information from Azure API Center and reports on any APIs that aren't registered in API Center.
+
+### Connect to your Azure subscription
+
+Dev Proxy uses information from Azure API Center to determine if your app is using shadow APIs. To get this information, it needs a connection to your Azure subscription. You can connect to your Azure subscription in [several ways](/microsoft-cloud/dev/dev-proxy/technical-reference/apicenterproductionversionplugin#remarks).
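+
+For example, if you normally authenticate with the Azure CLI, signing in and selecting the subscription that contains your API Center instance looks like the following sketch; whether Dev Proxy can reuse these credentials depends on the connection option you choose from the linked guidance:
+
+```bash
+# Sign in and select the subscription that contains the API Center instance (ID is a placeholder).
+az login
+az account set --subscription 00000000-0000-0000-0000-000000000000
+```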
+
+### Run Dev Proxy
+
+After connecting to your Azure subscription, start Dev Proxy. If you start Dev Proxy from the same folder where your `devproxyrc.json` file is located, it automatically loads the configuration. Otherwise, specify the path to the configuration file using the `--config-file` option.
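+
+For example, assuming the Dev Proxy executable is available on your path as `devproxy`, starting it with an explicit configuration file might look like this:
+
+```bash
+# Start Dev Proxy with an explicit configuration file (the path is an example).
+devproxy --config-file ./devproxyrc.json
+```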
+
+When Dev Proxy starts, it checks that it can connect to your Azure subscription. When the connection is successful, you see a message similar to:
+
+```text
+ info Plugin ApiCenterOnboardingPlugin connecting to Azure...
+ info Listening on 127.0.0.1:8000...
+
+Hotkeys: issue (w)eb request, (r)ecord, (s)top recording, (c)lear screen
+Press CTRL+C to stop Dev Proxy
+```
+
+Press <kbd>r</kbd> to start recording API requests from your app.
+
+### Use your app
+
+Use your app as you would normally do. Dev Proxy intercepts the API requests and stores information about them in memory. In the command line where Dev Proxy runs, you should see information about API requests that your app makes.
+
+```text
+ info Plugin ApiCenterOnboardingPlugin connecting to Azure...
+ info Listening on 127.0.0.1:8000...
+
+Hotkeys: issue (w)eb request, (r)ecord, (s)top recording, (c)lear screen
+Press CTRL+C to stop Dev Proxy
+
+Γùë Recording...
+
+ req Γò¡ GET https://jsonplaceholder.typicode.com/posts
+ api Γò░ Passed through
+
+ req Γò¡ DELETE https://jsonplaceholder.typicode.com/posts/1
+ api Γò░ Passed through
+```
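+
+If you don't have the app at hand and just want to generate matching traffic, you can also send requests through the proxy yourself, for example with curl; the proxy address comes from the startup output above, and `-k` skips certificate validation in case your shell doesn't trust the Dev Proxy certificate used for HTTPS interception:
+
+```bash
+# Send sample requests through Dev Proxy to the URLs configured in urlsToWatch.
+curl -kx http://127.0.0.1:8000 https://jsonplaceholder.typicode.com/posts
+curl -kx http://127.0.0.1:8000 -X DELETE https://jsonplaceholder.typicode.com/posts/1
+```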
+
+### Check shadow APIs
+
+Stop the recording by pressing <kbd>s</kbd>. Dev Proxy connects to the API Center instance and compares the information about requests with the information from API Center.
+
+```text
+ info Plugin ApiCenterOnboardingPlugin connecting to Azure...
+ info Listening on 127.0.0.1:8000...
+
+Hotkeys: issue (w)eb request, (r)ecord, (s)top recording, (c)lear screen
+Press CTRL+C to stop Dev Proxy
+
+Γùë Recording...
+
+ req Γò¡ GET https://jsonplaceholder.typicode.com/posts
+ api Γò░ Passed through
+
+ req Γò¡ DELETE https://jsonplaceholder.typicode.com/posts/1
+ api Γò░ Passed through
+Γùï Stopped recording
+ info Checking if recorded API requests belong to APIs in API Center...
+ info Loading APIs from API Center...
+ info Loading API definitions from API Center...
+```
+
+When Dev Proxy finishes its analysis, it creates a report in a file named `ApiCenterOnboardingPlugin_PlainTextReporter.txt` with the following contents:
+
+```text
+New APIs that aren't registered in Azure API Center:
+
+https://jsonplaceholder.typicode.com:
+ DELETE https://jsonplaceholder.typicode.com/posts/1
+
+APIs that are already registered in Azure API Center:
+
+GET https://jsonplaceholder.typicode.com/posts
+```
+
+### Automatically onboard shadow APIs
+
+The `ApiCenterOnboardingPlugin` can not only detect shadow APIs, but also automatically onboard them to API Center. To automatically onboard shadow APIs, set the `createApicEntryForNewApis` property to `true` in the Dev Proxy configuration file.
+
+```json
+{
+ "$schema": "https://raw.githubusercontent.com/microsoft/dev-proxy/main/schemas/v0.19.0/rc.schema.json",
+ "plugins": [
+ {
+ "name": "ApiCenterOnboardingPlugin",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll",
+ "configSection": "apiCenterOnboardingPlugin"
+ },
+ {
+ "name": "PlainTextReporter",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
+ }
+ ],
+ "urlsToWatch": [
+ "https://jsonplaceholder.typicode.com/*"
+ ],
+ "apiCenterOnboardingPlugin": {
+ "subscriptionId": "00000000-0000-0000-0000-000000000000",
+ "resourceGroupName": "demo",
+ "serviceName": "contoso-api-center",
+ "workspaceName": "default",
+ "createApicEntryForNewApis": true
+ }
+}
+```
+
+When you run Dev Proxy with `createApicEntryForNewApis` set to `true`, it automatically creates new API entries in Azure API Center for the shadow APIs that it detects.
++
+### Automatically onboard shadow APIs with OpenAPI spec
+
+When you choose to automatically onboard shadow APIs to API Center, you can have Dev Proxy generate the OpenAPI spec for each API. Onboarding APIs with OpenAPI specs speeds up onboarding of missing endpoints and provides you with the necessary information about the API. When the `ApiCenterOnboardingPlugin` detects that Dev Proxy created a new OpenAPI spec, it associates the spec with the corresponding onboarded API in API Center.
+
+To automatically generate OpenAPI specs for onboarded APIs, update Dev Proxy configuration to include the [`OpenApiSpecGeneratorPlugin`](/microsoft-cloud/dev/dev-proxy/technical-reference/openapispecgeneratorplugin).
+
+```json
+{
+ "$schema": "https://raw.githubusercontent.com/microsoft/dev-proxy/main/schemas/v0.19.0/rc.schema.json",
+ "plugins": [
+ {
+ "name": "OpenApiSpecGeneratorPlugin",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
+ },
+ {
+ "name": "ApiCenterOnboardingPlugin",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll",
+ "configSection": "apiCenterOnboardingPlugin"
+ },
+ {
+ "name": "PlainTextReporter",
+ "enabled": true,
+ "pluginPath": "~appFolder/plugins/dev-proxy-plugins.dll"
+ }
+ ],
+ "urlsToWatch": [
+ "https://jsonplaceholder.typicode.com/*"
+ ],
+ "apiCenterOnboardingPlugin": {
+ "subscriptionId": "00000000-0000-0000-0000-000000000000",
+ "resourceGroupName": "demo",
+ "serviceName": "contoso-api-center",
+ "workspaceName": "default",
+ "createApicEntryForNewApis": true
+ }
+}
+```
+
+> [!IMPORTANT]
+> Dev Proxy executes plugins in the order they're registered in the configuration. You need to register the `OpenApiSpecGeneratorPlugin` first so that it can create OpenAPI specs before the `ApiCenterOnboardingPlugin` onboards new APIs.
+
+When you run Dev Proxy with this configuration, it automatically creates new API entries in Azure API Center for the shadow APIs that it detects. For each new API, Dev Proxy generates an OpenAPI spec and associates it with the corresponding onboarded API in API Center.
+
+```text
+ info Plugin ApiCenterOnboardingPlugin connecting to Azure...
+ info Listening on 127.0.0.1:8000...
+
+Hotkeys: issue (w)eb request, (r)ecord, (s)top recording, (c)lear screen
+Press CTRL+C to stop Dev Proxy
+
+Γùë Recording...
+
+ req Γò¡ GET https://jsonplaceholder.typicode.com/posts
+ api Γò░ Passed through
+
+ req Γò¡ DELETE https://jsonplaceholder.typicode.com/posts/1
+ api Γò░ Passed through
+Γùï Stopped recording
+ info Creating OpenAPI spec from recorded requests...
+ info Created OpenAPI spec file jsonplaceholder.typicode.com-20240614104931.json
+ info Checking if recorded API requests belong to APIs in API Center...
+ info Loading APIs from API Center...
+ info Loading API definitions from API Center...
+ info New APIs that aren't registered in Azure API Center:
+
+https://jsonplaceholder.typicode.com:
+ DELETE https://jsonplaceholder.typicode.com/posts/1
+ info Creating new API entries in API Center...
+ info Creating API new-jsonplaceholder-typicode-com-1718354977 for https://jsonplaceholder.typicode.com...
+ info DONE
+```
++
+## Summary
+
+Using Dev Proxy and its `ApiCenterOnboardingPlugin`, you can check if your app is using shadow APIs. The plugin analyzes API requests from your app and reports on any API requests that aren't registered in Azure API Center. The plugin allows you to easily onboard missing APIs to API Center. By combining the `ApiCenterOnboardingPlugin` with the `OpenApiSpecGeneratorPlugin`, you can automatically generate OpenAPI specs for the newly onboarded APIs. You can run this check manually or integrate it with your CI/CD pipeline to ensure that your app uses registered APIs before releasing it to production.
+
+## More information
+
+- [Learn more about Dev Proxy](/microsoft-cloud/dev/dev-proxy/overview)
+- [Learn more about Azure API Center](./key-concepts.md)
azure-functions Functions Add Openai Text Completion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-openai-text-completion.md
+
+ Title: 'Tutorial: Add Azure OpenAI text completions to your functions in Visual Studio Code'
+description: Learn how to connect Azure Functions to OpenAI by adding an output binding to your Visual Studio Code project.
Last updated : 07/11/2024+++
+zone_pivot_groups: programming-languages-set-functions
+#customer intent: As an Azure developer, I want to learn how to integrate Azure OpenAI capabilities in my function code to leverage AI benefits in my cloud-based code executions.
++
+# Tutorial: Add Azure OpenAI text completion hints to your functions in Visual Studio Code
+
+This article shows you how to use Visual Studio Code to add an HTTP endpoint to the function app you created in the previous quickstart article. When triggered, this new HTTP endpoint uses an [Azure OpenAI text completion input binding](functions-bindings-openai-textcompletion-input.md) to get text completion hints from your data model.
+
+During this tutorial, you learn how to accomplish these tasks:
+
+> [!div class="checklist"]
+> * Create resources in Azure OpenAI.
+> * Deploy a model in the Azure OpenAI resource.
+> * Set access permissions to the model resource.
+> * Enable your function app to connect to OpenAI.
+> * Add OpenAI bindings to your HTTP triggered function.
+
+## 1. Check prerequisites
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-csharp.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-java.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-node.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-typescript.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-python.md).
+* Complete the steps in [part 1 of the Visual Studio Code quickstart](create-first-function-vs-code-powershell.md).
+* Obtain access to Azure OpenAI in your Azure subscription. If you haven't already been granted access, complete [this form](https://aka.ms/oai/access) to request access.
+* Install [.NET Core CLI tools](/dotnet/core/tools/?tabs=netcore2x).
+* The [Azurite storage emulator](../storage/common/storage-use-azurite.md?tabs=npm#install-azurite). While you can also use an actual Azure Storage account, the article assumes you're using this emulator.
+
+## 2. Create your Azure OpenAI resources
+
+The following steps show how to create an Azure OpenAI data model in the Azure portal.
+
+1. Sign in with your Azure subscription in the [Azure portal](https://portal.azure.com).
+
+1. Select **Create a resource** and search for **Azure OpenAI**. When you locate the service, select **Create**.
+
+1. On the **Create Azure OpenAI** page, provide the following information for the fields on the **Basics** tab:
+
+ | Field | Description |
+ |||
+ | **Subscription** | Your subscription, which has been onboarded to use Azure OpenAI. |
+ | **Resource group** | The resource group you created for the function app in the previous article. You can find this resource group name by right-clicking the function app in the Azure Resources browser, selecting properties, and then searching for the `resourceGroup` setting in the returned JSON resource file. |
+ | **Region** | Ideally, the same location as the function app. |
+ | **Name** | A descriptive name for your Azure OpenAI Service resource, such as _mySampleOpenAI_. |
+    | **Pricing Tier** | The pricing tier for the resource. Currently, only the Standard tier is available for the Azure OpenAI Service. For more information on pricing, visit the [Azure OpenAI pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). |
+
+ :::image type="content" source="../ai-services/openai/media/create-resource/create-resource-basic-settings.png" alt-text="Screenshot that shows how to configure an Azure OpenAI resource in the Azure portal.":::
+
+1. Select **Next** twice to accept the default values for both the **Network** and **Tags** tabs. The service you create doesn't have any network restrictions, including from the internet.
+
+1. Select **Next** a final time to move to the final stage in the process: **Review + submit**.
+
+1. Confirm your configuration settings, and select **Create**.
+
+ The Azure portal displays a notification when the new resource is available. Select **Go to resource** in the notification or search for your new Azure OpenAI resource by name.
+
+1. In the Azure OpenAI resource page for your new resource, select **Click here to view endpoints** under **Essentials** > **Endpoints**. Copy the **endpoint** URL and the **keys**. Save these values; you need them later.
+
+Now that you have the credentials to connect to your model in Azure OpenAI, you need to set these access credentials in application settings.
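+
+If you prefer the Azure CLI to the portal, you can retrieve the same values with commands like the following; the resource name matches the example used above, and the resource group is a placeholder:
+
+```bash
+# Retrieve the endpoint and a key for the Azure OpenAI resource (names are examples/placeholders).
+az cognitiveservices account show \
+    --name mySampleOpenAI \
+    --resource-group <your-resource-group> \
+    --query properties.endpoint --output tsv
+
+az cognitiveservices account keys list \
+    --name mySampleOpenAI \
+    --resource-group <your-resource-group> \
+    --query key1 --output tsv
+```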
+
+## 3. Deploy a model
+
+Now you can deploy a model. You can select from one of several available models in Azure OpenAI Studio.
+
+To deploy a model, follow these steps:
+
+1. Sign in to [Azure OpenAI Studio](https://oai.azure.com).
+
+1. Choose the subscription and the Azure OpenAI resource you created, and select **Use resource**.
+
+1. Under **Management** select **Deployments**.
+
+1. Select **Create new deployment** and configure the following fields:
+
+ | Field | Description |
+ |||
+    | **Deployment name** | Choose a name carefully. The deployment name is used in your code to call the model by using the client libraries and the REST APIs, so you must save it for use later. |
+ | **Select a model** | Model availability varies by region. For a list of available models per region, see [Model summary table and region availability](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). |
+
+ > [!IMPORTANT]
+ > When you access the model via the API, you need to refer to the deployment name rather than the underlying model name in API calls, which is one of the key differences between OpenAI and Azure OpenAI. OpenAI only requires the model name. Azure OpenAI always requires deployment name, even when using the model parameter. In our docs, we often have examples where deployment names are represented as identical to model names to help indicate which model works with a particular API endpoint. Ultimately your deployment names can follow whatever naming convention is best for your use case.
+
+1. Accept the default values for the rest of the settings and select **Create**.
+
+ The deployments table shows a new entry that corresponds to your newly created model.
+
+You now have everything you need to add Azure OpenAI-based text completion to your function app.
+
+## 4. Update application settings
+
+1. In Visual Studio Code, open the local code project you created when you completed the [previous article](./create-first-function-vs-code-csharp.md).
+
+1. In the local.settings.json file in the project root folder, update the `AzureWebJobsStorage` setting to `UseDevelopmentStorage=true`. You can skip this step if the `AzureWebJobsStorage` setting in *local.settings.json* is set to the connection string for an existing Azure Storage account instead of `UseDevelopmentStorage=true`.
+
+1. In the local.settings.json file, add these settings values:
+
+ + **`AZURE_OPENAI_ENDPOINT`**: required by the binding extension. Set this value to the endpoint of the Azure OpenAI resource you created earlier.
+ + **`AZURE_OPENAI_KEY`**: required by the binding extension. Set this value to the key for the Azure OpenAI resource.
+ + **`CHAT_MODEL_DEPLOYMENT_NAME`**: used to define the input binding. Set this value to the name you chose for your model deployment.
+
+1. Save the file. When you deploy to Azure, you must also add these settings to your function app.
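+
+    As a sketch of that later step, the same settings can be added to a deployed function app with the Azure CLI; the app name, resource group, and setting values here are placeholders:
+
+    ```bash
+    # Add the Azure OpenAI settings to a deployed function app (all values are placeholders).
+    az functionapp config appsettings set \
+        --name <your-function-app> \
+        --resource-group <your-resource-group> \
+        --settings AZURE_OPENAI_ENDPOINT=<endpoint> AZURE_OPENAI_KEY=<key> CHAT_MODEL_DEPLOYMENT_NAME=<deployment-name>
+    ```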
+
+## 5. Register binding extensions
+
+Because you're using an Azure OpenAI output binding, you must have the corresponding bindings extension installed before you run the project.
+
+Except for HTTP and timer triggers, bindings are implemented as extension packages. To add the Azure OpenAI extension package to your project, run this [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the **Terminal** window:
+
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.OpenAI --prerelease
+```
+<!-- NOTE: Update this after preview to `## Verify the extension bundle` -->
+## 5. Update the extension bundle
+
+To access the preview Azure OpenAI bindings, you must use a preview version of the extension bundle that contains this extension.
+
+Replace the `extensionBundle` setting in your current `host.json` file with this JSON:
+
+```json
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Preview",
+ "version": "[4.*, 5.0.0)"
+ }
+```
+Now, you can use the Azure OpenAI output binding in your project.
+
+## 6. Return text completion from the model
+
+The code you add creates a `whois` HTTP function endpoint in your existing project. In this function, data passed in a URL `name` parameter of a GET request is used to dynamically create a completion prompt. This dynamic prompt is bound to a text completion input binding, which returns a response from the model based on the prompt. The completion from the model is returned in the HTTP response.
+1. In your existing `HttpExample` class file, add this `using` statement:
+
+ :::code language="csharp" source="~/functions-openai-extension/samples/textcompletion/csharp-ooproc/TextCompletions.cs" range="5" :::
+
+1. In the same file, add this code that defines a new HTTP trigger endpoint named `whois`:
+
+ ```csharp
+ [Function(nameof(WhoIs))]
+ public IActionResult WhoIs([HttpTrigger(AuthorizationLevel.Function, Route = "whois/{name}")] HttpRequest req,
+ [TextCompletionInput("Who is {name}?", Model = "%CHAT_MODEL_DEPLOYMENT_NAME%")] TextCompletionResponse response)
+ {
+ if(!String.IsNullOrEmpty(response.Content))
+ {
+ return new OkObjectResult(response.Content);
+ }
+ else
+ {
+ return new NotFoundObjectResult("Something went wrong.");
+ }
+ }
+ ```
+
+1. Update the `pom.xml` project file to add this reference to the `properties` collection:
+
+ :::code language="xml" source="~/functions-openai-extension/samples/textcompletion/java/pom.xml" range="18" :::
+
+1. In the same file, add this dependency to the `dependencies` collection:
+
+ :::code language="xml" source="~/functions-openai-extension/samples/textcompletion/java/pom.xml" range="29-33" :::
+
+1. In the existing `Function.java` project file, add these `import` statements:
+
+ :::code language="java" source="~/functions-openai-extension/samples/textcompletion/java/src/main/java/com/azfs/TextCompletions.java" range="19-20" :::
+
+1. In the same file, add this code that defines a new HTTP trigger endpoint named `whois`:
+
+ :::code language="java" source="~/functions-openai-extension/samples/textcompletion/java/src/main/java/com/azfs/TextCompletions.java" range="31-46" :::
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. Type `Azure Functions: Create Function...`, select **HTTP trigger**, type the function name `whois`, and press <kbd>Enter</kbd>.
+
+1. In the new `whois.js` code file, replace the contents of the file with this code:
+
+ :::code language="javascript" source="~/functions-openai-extension/samples/textcompletion/javascript/src/functions/whois.js" :::
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. Type `Azure Functions: Create Function...`, select **HTTP trigger**, type the function name `whois`, and press <kbd>Enter</kbd>.
+
+1. In the new `whois.ts` code file, replace the contents of the file with this code:
+
+ :::code language="typescript" source="~/functions-openai-extension/samples/textcompletion/typescript/src/functions/whois.ts" :::
+
+1. In the existing `function_app.py` project file, add this `import` statement:
+
+ :::code language="python" source="~/functions-openai-extension/samples/textcompletion/python/function_app.py" range="1" :::
+
+1. In the same file, add this code that defines a new HTTP trigger endpoint named `whois`:
+
+    :::code language="python" source="~/functions-openai-extension/samples/textcompletion/python/function_app.py" range="7-18" :::
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. Type `Azure Functions: Create Function...`, select **HTTP trigger**, type the function name `whois`, select **Anonymous**, and press <kbd>Enter</kbd>.
+
+1. Open the new `whois/function.json` code file and replace its contents with this code, which adds a definition for the `TextCompletionResponse` input binding:
+
+ :::code language="json" source="~/functions-openai-extension/samples/textcompletion/powershell/WhoIs/function.json" :::
+
+1. Replace the content of the `whois/run.ps1` code file with this code, which returns the input binding response:
+
+ :::code language="powershell" source="~/functions-openai-extension/samples/textcompletion/powershell/WhoIs/run.ps1" :::
+
+
+## 7. Run the function
+
+1. In Visual Studio Code, press <kbd>F1</kbd> to open the command palette. Type `Azurite: Start`, and press <kbd>Enter</kbd> to start the Azurite storage emulator.
+
+1. Press <kbd>F5</kbd> to start the function app project and Core Tools in debug mode.
+
+1. With the Core Tools running, send a GET request to the `whois` endpoint, passing a name in the path, as in this URL:
+
+ `http://localhost:7071/api/whois/<NAME>`
+
+ Replace the `<NAME>` string with the value you want passed to the `"Who is {name}?"` prompt. The `<NAME>` must be the URL-encoded name of a public figure, like `Abraham%20Lincoln`.
+
+ The response you see is the text completion response from your Azure OpenAI model.
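+
+   For example, you can send the request with curl (a sketch: `Abraham%20Lincoln` is just an illustrative name, and `7071` is the default port used by Core Tools):
+
+   ```bash
+   curl http://localhost:7071/api/whois/Abraham%20Lincoln
+   ```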
+
+1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools.
+
+<!-- enable managed identities
+## 8. Set access permissions
+{{move this info into the main article}}
+[create Azure OpenAI resources and to deploy models](../ai-services/openai/how-to/role-based-access-control.md).
+
+## 9. Deploy to Azure
+-->
+
+## 8. Clean up resources
+
+In Azure, *resources* refer to function apps, functions, storage accounts, and so forth. They're grouped into *resource groups*, and you can delete everything in a group by deleting the group.
+
+You created resources to complete these quickstarts. You could be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/). If you don't need the resources anymore, here's how to delete them:
++
+## Related content
+
++ [Azure OpenAI extension for Azure Functions](functions-bindings-openai.md)
++ [Azure OpenAI extension samples](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples)
++ [Machine learning and AI](functions-scenarios.md#machine-learning-and-ai)
azure-functions Functions Bindings Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-openai.md
You can add the preview extension by adding or replacing the following code in y
::: zone-end
+## Application settings
+
+To use the Azure OpenAI binding extension, you need to add one or more of these settings, which are used to connect to your OpenAI resource. During local development, you also need to add these settings to your `local.settings.json` file.
+
+| Setting name | Description |
+| - | -- |
+| **`AZURE_OPENAI_ENDPOINT`** | Required. Sets the endpoint of the OpenAI resource used by your bindings. |
+| **`AZURE_OPENAI_KEY`** | Sets the key used to access an Azure OpenAI resource. |
+| **`OPENAI_API_KEY`** | Sets the key used to access a non-Azure OpenAI resource. |
+| **`AZURE_CLIENT_ID`** | Sets a user-assigned managed identity used to access the Azure OpenAI resource. |
+
+For more information, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
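+
+When your code runs in Azure, add the same settings to your function app. For example, you could set the endpoint and key with the Azure CLI, as in this sketch; the app name, resource group, and setting values are placeholders for your own:
+
+```azurecli
+az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings AZURE_OPENAI_ENDPOINT=<ENDPOINT_URL> AZURE_OPENAI_KEY=<API_KEY>
+```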
+
<!-- Include this section if there are any host.json settings defined by the extension: ## host.json settings -->
azure-functions Functions Container Apps Hosting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md
az functionapp config container set --name <APP_NAME> --resource-group <MY_RESOU
## Managed resource groups
-Azure Functions on Container Apps runs your containerized function app resources in specially managed resource groups. These managed resource groups help protect the consistency of your apps by preventing unintended or unauthorized modification or deletion of resources in the managed group, even by service principles.
+Azure Functions on Container Apps runs your containerized function app resources in specially managed resource groups. These managed resource groups help protect the consistency of your apps by preventing unintended or unauthorized modification or deletion of resources in the managed group, even by service principals.
A managed resource group is created for you the first time you create function app resources in a Container Apps environment. Container Apps resources required by your containerized function app run in this managed resource group. Any other function apps that you create in the same environment use this existing group.
azure-functions Functions Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenarios.md
public static async Task Run(
## Machine learning and AI
-Besides data processing, Azure Functions can be used to infer on models.
+Besides data processing, Azure Functions can be used to run inference on models. The [Azure OpenAI binding extension](./functions-bindings-openai.md) lets you easily integrate features and behaviors of the [Azure OpenAI service](../ai-services/openai/overview.md) into your function code executions.
-For example, a function that calls a TensorFlow model or submits it to Azure AI services can process and classify a stream of images.
+Functions can connect to OpenAI resources to enable text and chat completions, use assistants, and work with embeddings and semantic search.
-Functions can also connect to other services to help process data and perform other AI-related tasks, like [text summarization](https://github.com/Azure-Samples/function-csharp-ai-textsummarize).
+A function might also call a TensorFlow model or Azure AI services to process and classify a stream of images.
[ ![Diagram of a machine learning and AI process using Azure Functions.](./media/functions-scenarios/machine-learning-and-ai.png) ](./media/functions-scenarios/machine-learning-and-ai-expanded.png#lightbox)
-
++ Tutorial: [Text completion using Azure OpenAI](functions-add-openai-text-completion.md?pivots=programming-language-csharp)
+ Sample: [Text summarization using AI Cognitive Language Service](https://github.com/Azure-Samples/function-csharp-ai-textsummarize)
-
++ Sample: [Text completion using Azure OpenAI](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/textcompletion/csharp-ooproc)
++ Sample: [Provide assistant skills to your model](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/assistant/csharp-ooproc)
++ Sample: [Generate embeddings](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/embeddings/csharp-ooproc/Embeddings)
++ Sample: [Leverage semantic search](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/rag-aisearch/csharp-ooproc)
++ Tutorial: [Text completion using Azure OpenAI](functions-add-openai-text-completion.md?pivots=programming-language-java)
++ Sample: [Text completion using Azure OpenAI](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/textcompletion/java)
++ Sample: [Provide assistant skills to your model](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/assistant/java)
++ Sample: [Generate embeddings](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/embeddings/java)
++ Sample: [Leverage semantic search](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/rag-aisearch/java)
++ Tutorial: [Text completion using Azure OpenAI](functions-add-openai-text-completion.md?pivots=programming-language-javascript)
+ Training: [Create a custom skill for Azure AI Search](/training/modules/create-enrichment-pipeline-azure-cognitive-search)
+ Sample: [Chat using ChatGPT](https://github.com/Azure-Samples/function-javascript-ai-openai-chatgpt)
-
++ Tutorial: [Text completion using Azure OpenAI](functions-add-openai-text-completion.md?pivots=programming-language-python)
+ Tutorial: [Apply machine learning models in Azure Functions with Python and TensorFlow](./functions-machine-learning-tensorflow.md)
+ Tutorial: [Deploy a pretrained image classification model to Azure Functions with PyTorch](./machine-learning-pytorch.md)
++ Sample: [Text completion using Azure OpenAI](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/textcompletion/python)
++ Sample: [Provide assistant skills to your model](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/assistant/python)
++ Sample: [Generate embeddings](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/embeddings/python)
++ Sample: [Leverage semantic search](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/rag-aisearch/python)
+ Sample: [Chat using ChatGPT](https://github.com/Azure-Samples/function-python-ai-openai-chatgpt)
+ Sample: [LangChain with Azure OpenAI and ChatGPT](https://github.com/Azure-Samples/function-python-ai-langchain)
++ Tutorial: [Text completion using Azure OpenAI](functions-add-openai-text-completion.md?pivots=programming-language-powershell)
++ Sample: [Text completion using Azure OpenAI](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/textcompletion/powershell)
++ Sample: [Provide assistant skills to your model](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/assistant/powershell)
++ Sample: [Generate embeddings](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/embeddings/powershell)
++ Sample: [Leverage semantic search](https://github.com/Azure/azure-functions-openai-extension/tree/main/samples/rag-aisearch/powershell)
::: zone-end
## Run scheduled tasks
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
When the Azure Monitor agent for Linux is installed, it configures the local Sys
The following facilities are supported with the Syslog collector:
-* None
-* Kern
-* user
-* mail
-* daemon
-* auth
-* syslog
-* lpr
-* news
-* uucp
-* ftp
-* ntp
-* audit
-* alert
-* mark
-* local0
-* local1
-* local2
-* local3
-* local4
-* local5
-* local6
-* local7
+| Pri index | Pri Name |
+| --- | --- |
+| 0 | None |
+| 1 | Kern |
+| 2 | user |
+| 3 | mail |
+| 4 | daemon |
+| 4 | auth |
+| 5 | syslog |
+| 6 | lpr |
+| 7 | news |
+| 8 | uucp |
+| 9 | ftp |
+| 10 | ntp |
+| 11 | audit |
+| 12 | alert |
+| 13 | mark |
+| 14 | local0 |
+| 15 | local1 |
+| 16 | local2 |
+| 17 | local3 |
+| 18 | local4 |
+| 19 | local5 |
+| 20 | local6 |
+| 21 | local7 |
The following are the severity levels of the events:
* info
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section lists all supported platforms and frameworks.
#### Logging frameworks * [`ILogger`](./ilogger.md) * [Log4Net, NLog, or System.Diagnostics.Trace](./asp-net-trace-logs.md)
-* [`Log4J`, Logback, or java.util.logging](./opentelemetry-add-modify.md?tabs=java#logs)
+* [`Log4J`, Logback, or java.util.logging](./opentelemetry-add-modify.md?tabs=java#send-custom-telemetry-using-the-application-insights-classic-api)
* [LogStash plug-in](https://github.com/Azure/azure-diagnostics-tools/tree/master/Logstash/logstash-output-applicationinsights)
* [Azure Monitor](/archive/blogs/msoms/application-insights-connector-in-oms)
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
- Title: Set up availability alerts with Application Insights
-description: Learn how to set up web tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly.
- Previously updated : 04/28/2024---
-# Availability alerts
-
-[Application Insights](app-insights-overview.md) availability tests send web requests to your application at regular intervals from points around the world. You can receive alerts if your application isn't responding or if it responds too slowly.
-
-## Enable alerts
-
-Alerts are now automatically enabled by default, but to fully configure an alert, you must initially create your availability test.
--
-> [!NOTE]
-> With the [new unified alerts](../alerts/alerts-overview.md), the alert rule severity and notification preferences with [action groups](../alerts/action-groups.md) *must be* configured in the alerts experience. Without the following steps, you'll only receive in-portal notifications.
-
-1. After you save the availability test, on the **Details** tab, select the ellipsis by the test you made. Select **Open Rules (Alerts) page**.
-
- :::image type="content" source="./media/availability-alerts/edit-alert.png" alt-text="Screenshot that shows the Availability pane for an Application Insights resource in the Azure portal and the Open Rules (Alerts) page menu option." lightbox="./media/availability-alerts/edit-alert.png":::
-
-1. Set the severity level, rule description, and action group that have the notification preferences you want to use for this alert rule.
-
-### Alert criteria
-
-Automatically enabled availability alerts trigger an email when the endpoint you've defined is unavailable and when it's available again. Availability alerts that are created through this experience are state based. When the alert criteria are met, a single alert gets generated when the website is detected as unavailable. If the website is still down the next time the alert criteria is evaluated, it won't generate a new alert.
-
-For example, suppose that your website is down for an hour and you've set up an email alert with an evaluation frequency of 15 minutes. You'll only receive an email when the website goes down and another email when it's back online. You won't receive continuous alerts every 15 minutes to remind you that the website is still unavailable.
-
-You might not want to receive notifications when your website is down for only a short period of time, for example, during maintenance. You can change the evaluation frequency to a higher value than the expected downtime, up to 15 minutes. You can also increase the alert location threshold so that it only triggers an alert if the website is down for a specific number of regions. For longer scheduled downtimes, temporarily deactivate the alert rule or create a custom rule. It gives you more options to account for the downtime.
-
-#### Change the alert criteria
-
-To make changes to the location threshold, aggregation period, and test frequency, select the condition on the edit page of the alert rule to open the "**Configure signal logic**" window.
-
-### Create a custom alert rule
-
-If you need advanced capabilities, you can create a custom alert rule on the **Alerts** tab. Select **Create** > **Alert rule**. Choose **Metrics** for **Signal type** to show all available signals and select **Availability**.
-
-A custom alert rule offers higher values for the aggregation period (up to 24 hours instead of 6 hours) and the test frequency (up to 1 hour instead of 15 minutes). It also adds options to further define the logic by selecting different operators, aggregation types, and threshold values.
--- **Alert on X out of Y locations reporting failures**: The X out of Y locations alert rule is enabled by default in the [new unified alerts experience](../alerts/alerts-overview.md) when you create a new availability test. You can opt out by selecting the "classic" option or by choosing to disable the alert rule. Configure the action groups to receive notifications when the alert triggers by following the preceding steps. Without this step, you'll only receive in-portal notifications when the rule triggers.--- **Alert on availability metrics**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on segmented aggregate availability and test duration metrics too:-
- 1. Select an Application Insights resource in the **Metrics** experience, and select an **Availability** metric.
-
- 1. The **Configure alerts** option from the menu takes you to the new experience where you can select specific tests or locations on which to set up alert rules. You can also configure the action groups for this alert rule here.
--- **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-types.md#log-alerts). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK.-
- The metrics on availability data include any custom availability results you might be submitting by calling the TrackAvailability SDK. You can use the alerting on metrics support to alert on custom availability results.
-
-## Automate alerts
-
-To automate this process with Azure Resource Manager templates, see [Create a metric alert with an Azure Resource Manager template](../alerts/alerts-metric-create-templates.md#template-for-an-availability-test-along-with-a-metric-alert).
-
-## Troubleshooting
-
-See the dedicated [Troubleshooting article](troubleshoot-availability.md).
-
-## Next steps
--- [Multi-step web tests](availability-multistep.md)-- [Availability](availability-overview.md)
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
- Title: Review TrackAvailability() test results
-description: This article explains how to review data logged by TrackAvailability() tests
- Previously updated : 04/28/2024---
-# Review TrackAvailability() test results
-
-This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). [Standard tests](availability-standard-tests.md) **should always be used if possible** as they require little investment, no maintenance, and have few prerequisites.
-
-## Prerequisites
-
-> [!div class="checklist"]
-> - [Workspace-based Application Insights resource](create-workspace-resource.md)
-> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions
-> - Developer expertise capable of authoring [custom code](#basic-code-sample) for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs
-
-> [!IMPORTANT]
-> [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) requires making a developer investment in writing and maintaining potentially complex custom code.
-
-## Check availability
-
-Start by reviewing the graph on the **Availability** tab of your Application Insights resource.
-
-> [!NOTE]
-> Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
-> Similar to standard web tests, we recommend a minimum of five test locations for TrackAvailability() to ensure you can distinguish problems in your website from network issues.
-
- :::image type="content" source="media/availability-azure-functions/availability-custom.png" alt-text="Screenshot that shows the Availability tab with successful results." lightbox="media/availability-azure-functions/availability-custom.png":::
-
-To see the end-to-end transaction details, under **Drill into**, select **Successful** or **Failed**. Then select a sample. You can also get to the end-to-end transaction details by selecting a data point on the graph.
---
-## Query in Log Analytics
-
-You can use Log Analytics to view your availability results, dependencies, and more. To learn more about Log Analytics, see [Log query overview](../logs/log-query-overview.md).
---
-## Basic code sample
-
-> [!NOTE]
-> This example is designed solely to show you the mechanics of how the `TrackAvailability()` API call works within an Azure function. It doesn't show you how to write the underlying HTTP test code or business logic that's required to turn this example into a fully functional availability test. By default, if you walk through this example, you'll be creating a basic availability HTTP GET test.
->
-> To follow these instructions, you must use the [dedicated plan](../../azure-functions/dedicated-plan.md) to allow editing code in App Service Editor.
-### Create a timer trigger function
-
-1. Create an Azure Functions resource.
- - If you already have an Application Insights resource:
-
- - By default, Azure Functions creates an Application Insights resource. But if you want to use a resource you created previously, you must specify that during creation.
- - Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
-
- On the **Monitoring** tab, select the **Application Insights** dropdown box and then enter or select the name of your resource.
-
- :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="Screenshot that shows selecting your existing Application Insights resource on the Monitoring tab.":::
-
- - If you don't have an Application Insights resource created yet for your timer-triggered function:
- - By default, when you're creating your Azure Functions application, it creates an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
-
- > [!NOTE]
- > You can host your functions on a Consumption, Premium, or App Service plan. If you're testing behind a virtual network or testing nonpublic endpoints, you'll need to use the Premium plan in place of the Consumption plan. Select your plan on the **Hosting** tab. Ensure the latest .NET version is selected when you create the function app.
-1. Create a timer trigger function.
- 1. In your function app, select the **Functions** tab.
- 1. Select **Add**. On the **Add function** pane, select the following configurations:
- 1. **Development environment**: **Develop in portal**
- 1. **Select a template**: **Timer trigger**
- 1. Select **Add** to create the timer trigger function.
-
- :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot that shows how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
-
-### Add and edit code in the App Service Editor
-
-Go to your deployed function app, and under **Development Tools**, select the **App Service Editor** tab.
-
-To create a new file, right-click under your timer trigger function (for example, **TimerTrigger1**) and select **New File**. Then enter the name of the file and select **Enter**.
-
-1. Create a new file called **function.proj** and paste the following code:
-
- ```xml
- <Project Sdk="Microsoft.NET.Sdk">
- <PropertyGroup>
- <TargetFramework>netstandard2.0</TargetFramework>
- </PropertyGroup>
- <ItemGroup>
- <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure youΓÇÖre using the latest version -->
- </ItemGroup>
- </Project>
- ```
-
- :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot that shows function.proj in the App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
-
-1. Create a new file called **runAvailabilityTest.csx** and paste the following code:
-
- ```csharp
- using System.Net.Http;
-
- public async static Task RunAvailabilityTestAsync(ILogger log)
- {
- using (var httpClient = new HttpClient())
- {
- // TODO: Replace with your business logic
- await httpClient.GetStringAsync("https://www.bing.com/");
- }
- }
- ```
-
-1. Define the `REGION_NAME` environment variable as a valid Azure availability location.
-
- Run the following command in the [Azure CLI](/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions.
-
- ```azurecli
- az account list-locations -o table
- ```
-
-1. Copy the following code into the **run.csx** file. (You replace the pre-existing code.)
-
- ```csharp
- #load "runAvailabilityTest.csx"
-
- using System;
-
- using System.Diagnostics;
-
- using Microsoft.ApplicationInsights;
-
- using Microsoft.ApplicationInsights.Channel;
-
- using Microsoft.ApplicationInsights.DataContracts;
-
- using Microsoft.ApplicationInsights.Extensibility;
-
- private static TelemetryClient telemetryClient;
-
- // =============================================================
-
- // ****************** DO NOT MODIFY THIS FILE ******************
-
- // Business logic must be implemented in RunAvailabilityTestAsync function in runAvailabilityTest.csx
-
- // If this file does not exist, please add it first
-
- // =============================================================
-
- public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext)
-
- {
- if (telemetryClient == null)
- {
- // Initializing a telemetry configuration for Application Insights based on connection string
-
- var telemetryConfiguration = new TelemetryConfiguration();
- telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
- telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
- telemetryClient = new TelemetryClient(telemetryConfiguration);
- }
-
- string testName = executionContext.FunctionName;
- string location = Environment.GetEnvironmentVariable("REGION_NAME");
- var availability = new AvailabilityTelemetry
- {
- Name = testName,
-
- RunLocation = location,
-
- Success = false,
- };
-
- availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
- availability.Context.Operation.Id = Activity.Current.RootId;
- var stopwatch = new Stopwatch();
- stopwatch.Start();
-
- try
- {
- using (var activity = new Activity("AvailabilityContext"))
- {
- activity.Start();
- availability.Id = Activity.Current.SpanId.ToString();
- // Run business logic
- await RunAvailabilityTestAsync(log);
- }
- availability.Success = true;
- }
-
- catch (Exception ex)
- {
- availability.Message = ex.Message;
- throw;
- }
-
- finally
- {
- stopwatch.Stop();
- availability.Duration = stopwatch.Elapsed;
- availability.Timestamp = DateTimeOffset.UtcNow;
- telemetryClient.TrackAvailability(availability);
- telemetryClient.Flush();
- }
- }
-
- ```
-
-### Multi-Step Web Test Code Sample
-Follow the same instructions above and instead paste the following code into the **runAvailabilityTest.csx** file:
-
-```csharp
-using System.Net.Http;
-
-public async static Task RunAvailabilityTestAsync(ILogger log)
-{
- using (var httpClient = new HttpClient())
- {
- // TODO: Replace with your business logic
- await httpClient.GetStringAsync("https://www.bing.com/");
-
- // TODO: Replace with your business logic for an additional monitored endpoint, and logic for additional steps as needed
- await httpClient.GetStringAsync("https://www.learn.microsoft.com/");
- }
-}
-```
-
-## Next steps
-
-* [Standard tests](availability-standard-tests.md)
-* [Availability alerts](availability-alerts.md)
-* [Application Map](./app-map.md)
-* [Transaction diagnostics](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics)
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
- Title: Application Insights availability tests
-description: Set up recurring web tests to monitor availability and responsiveness of your app or website.
- Previously updated : 06/18/2024---
-# Application Insights availability tests
-
-After you deploy your web app or website, you can set up recurring tests to monitor availability and responsiveness. [Application Insights](./app-insights-overview.md) sends web requests to your application at regular intervals from points around the world. It can alert you if your application isn't responding or responds too slowly.
-
-You can set up availability tests for any HTTP or HTTPS endpoint that's accessible from the public internet. You don't have to make any changes to the website you're testing. In fact, it doesn't even have to be a site that you own. You can test the availability of a REST API that your service depends on.
-
-## Types of tests
-
-> [!IMPORTANT]
-> There are two upcoming availability tests retirements. On August 31, 2024 multi-step web tests in Application Insights will be retired. We advise users of these tests to transition to alternative availability tests before the retirement date. Following this date, we will be taking down the underlying infrastructure which will break remaining multi-step tests.
-> On September 30, 2026, URL ping tests in Application Insights will be retired. Existing URL ping tests will be removed from your resources. Review the [pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing) for standard tests and [transition](https://aka.ms/availabilitytestmigration) to using them before September 30, 2026 to ensure you can continue to run single-step availability tests in your Application Insights resources.
-
-There are four types of availability tests:
-
-* [Standard test](availability-standard-tests.md): This single request test is similar to the URL ping test. It includes TLS/SSL certificate validity, proactive lifetime check, HTTP request verb (for example, `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
-* [Custom TrackAvailability test](availability-azure-functions.md): If you decide to create a custom application to run availability tests, you can use the [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) method to send the results to Application Insights.
-* Classic tests (**older versions of availability tests**)
- * [URL ping test (deprecated)](monitor-web-app-availability.md): You can create this test through the Azure portal to validate whether an endpoint is responding and measure performance associated with that response. You can also set custom success criteria coupled with more advanced features, like parsing dependent requests and allowing for retries.
- * [Multi-step web test (deprecated)](availability-multistep.md): You can play back this recording of a sequence of web requests to test more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to the portal, where you can run them.
-
-> [!IMPORTANT]
-> The older classic tests, [URL ping test](monitor-web-app-availability.md) and [multi-step web test](availability-multistep.md), rely on the DNS infrastructure of the public internet to resolve the domain names of the tested endpoints. If you're using private DNS, you must ensure that the public domain name servers can resolve every domain name of your test. When that's not possible, you can use [custom TrackAvailability tests](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) instead.
-
-You can create up to 100 availability tests per Application Insights resource.
-
-> [!NOTE]
-> Availability tests are stored encrypted, according to [Azure data encryption at rest](../../security/fundamentals/encryption-atrest.md#encryption-at-rest-in-microsoft-cloud-services) policies.
-
-## TLS support
-To provide best-in-class encryption, all availability tests use Transport Layer Security (TLS) 1.2 or higher as the encryption mechanism of choice.
-
-> [!WARNING]
-> On 31 October 2024, in alignment with the [Azure wide legacy TLS deprecation](https://azure.microsoft.com/updates/azure-support-tls-will-end-by-31-october-2024-2/) TLS 1.0/1.1 protocol versions and TLS 1.2/1.3 legacy Cipher suites and Elliptical curves will be retired for Application Insights availability tests.
-
-### Supported TLS configurations
-TLS protocol versions 1.2 and 1.3 are supported encryption mechanisms for availability tests. In addition, the following Cipher suites and Elliptical curves are also supported within each version.
-> [!NOTE]
-> TLS 1.3 is currently only available in these availability test regions: NorthCentralUS, CentralUS, EastUS, SouthCentralUS, WestUS
-
-#### TLS 1.2
-**Cipher suites**
-- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 -- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 -- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 -- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 -- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 -- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 -- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 -- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 -
-**Elliptical curves**
-- NistP384 -- NistP256 -
-#### TLS 1.3
-**Cipher suites**
-- TLS_AES_256_GCM_SHA384 -- TLS_AES_128_GCM_SHA256 -
-**Elliptical curves:**
-- NistP384 -- NistP256 -
-### Deprecating TLS configuration
-> [!WARNING]
-> After 31 October 2024, only the listed Cipher suites and Elliptical curves within the below TLS 1.2 and TLS 1.3 sections will be retired. TLS 1.2/1.3 and the previously mentioned Cipher Suites and Elliptical Curves under section "Supported TLS configurations" will still be supported.
-
-#### TLS 1.0 and TLS 1.1
-Protocol versions will no longer be supported
-
-#### TLS 1.2
-**Cipher suites**
-- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA -- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA -- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA -- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA -- TLS_RSA_WITH_AES_256_GCM_SHA384 -- TLS_RSA_WITH_AES_128_GCM_SHA256 -- TLS_RSA_WITH_AES_256_CBC_SHA256 -- TLS_RSA_WITH_AES_128_CBC_SHA256 -- TLS_RSA_WITH_AES_256_CBC_SHA -- TLS_RSA_WITH_AES_128_CBC_SHA -
-**Elliptical curves:**
-- curve25519 -
-#### TLS 1.3
-**Elliptical curves**
-- curve25519 -
-## Frequently asked questions
-
-This section provides answers to common questions.
-
-### General
-
-#### Can I run availability tests on an intranet server?
-
-Our [web tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) run on points of presence that are distributed around the globe. There are two solutions:
-
-* **Firewall door**: Allow requests to your server from [the long and changeable list of web test agents](../ip-addresses.md).
-* **Custom code**: Write your own code to send periodic requests to your server from inside your intranet. You could run Visual Studio web tests for this purpose. The tester could send the results to Application Insights by using the `TrackAvailability()` API.
-
-#### What is the user agent string for availability tests?
-
-The user agent string is **Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; AppInsights**
-
-### TLS Support
-
-#### How does this deprecation impact my web test behavior?
-Availability tests act as a distributed client in each of the supported web test locations. Every time a web test is executed the availability test service attempts to reach out to the remote endpoint defined in the web test configuration. A TLS Client Hello message is sent which contains all the currently supported TLS configuration. If the remote endpoint shares a common TLS configuration with the availability test client, then the TLS handshake succeeds. Otherwise, the web test fails with a TLS handshake failure.
-
-#### How do I ensure my web test isn't impacted?
-To avoid any impact, each remote endpoint (including dependent requests) your web test interacts with needs to support at least one combination of the same Protocol Version, Cipher Suite, and Elliptical Curve that availability test does. If the remote endpoint doesn't support the needed TLS configuration, it needs to be updated with support for some combination of the above-mentioned post-deprecation TLS configuration. These endpoints can be discovered through viewing the [Transaction Details](/azure/azure-monitor/app/availability-standard-tests#see-your-availability-test-results) of your web test (ideally for a successful web test execution).
-
-#### How do I validate what TLS configuration a remote endpoint supports?
-There are several tools available to test what TLS configuration an endpoint supports. One way would be to follow the example detailed on this [page](/security/engineering/solving-tls1-problem#appendix-a-handshake-simulation). If your remote endpoint isn't available via the Public internet, you need to ensure you validate the TLS configuration supported on the remote endpoint from a machine that has access to call your endpoint.
-
-> [!NOTE]
-> For steps to enable the needed TLS configuration on your web server, it is best to reach out to the team that owns the hosting platform your web server runs on if the process is not known.
-
-#### After October 31, 2024, what will the web test behavior be for impacted tests?
-There's no one exception type that all TLS handshake failures impacted by this deprecation would present themselves with. However, the most common exception your web test would start failing with would be `The request was aborted: Couldn't create SSL/TLS secure channel`. You should also be able to see any TLS related failures in the TLS Transport [Troubleshooting Step](/troubleshoot/azure/azure-monitor/app-insights/availability/diagnose-ping-test-failure) for the web test result that is potentially impacted.
-
-#### Can I view what TLS configuration is currently in use by my web test?
-The TLS configuration negotiated during a web test execution can't be viewed. As long as the remote endpoint supports common TLS configuration with availability tests, no impact should be seen post-deprecation.
-
-#### Which components does the deprecation affect in the availability test service?
-The TLS deprecation detailed in this document should only affect the availability test web test execution behavior after October 31, 2024. For more information about interacting with the availability test service for CRUD operations, see [Azure Resource Manager TLS Support](/azure/azure-resource-manager/management/tls-support). This resource provides more details on TLS support and deprecation timelines.
-
-#### Where can I get TLS support?
-For any general questions around the legacy TLS problem, see [Solving TLS problems](/security/engineering/solving-tls1-problem).
-
-## Troubleshooting
-
-> [!WARNING]
-> We have recently enabled TLS 1.3 in availability tests. If you are seeing new error messages as a result, please ensure that clients running on Windows Server 2022 with TLS 1.3 enabled can connect to your endpoint. If you are unable to do this, you may consider temporarily disabling TLS 1.3 on your endpoint so that availability tests will fall back to older TLS versions.
-> For additional information, please check the [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
-See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
-
-## Next steps
-
-* [Availability alerts](availability-alerts.md)
-* [Standard tests](availability-standard-tests.md)
-* [Availability tests using Azure Functions](availability-azure-functions.md)
-* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
- Title: Availability testing behind firewalls - Azure Monitor Application Insights
-description: Learn how to use availability tests on endpoint that are behind a firewall.
- Previously updated : 05/07/2024---
-# Testing behind a firewall
-
-To ensure endpoint availability behind firewalls, enable public availability tests or run availability tests in disconnected or no ingress scenarios.
-
-## Public availability test enablement
-
-Ensure your internal website has a public Domain Name System (DNS) record. Availability tests fail if DNS can't be resolved. For more information, see [Create a custom domain name for internal application](../../cloud-services/cloud-services-custom-domain-name-portal.md#add-an-a-record-for-your-custom-domain).
-
-> [!WARNING]
-> The IP addresses used by the availability tests service are shared and can expose your firewall-protected service endpoints to other tests. IP address filtering alone doesn't secure your service's traffic, so it's recommended to add extra custom headers to verify the origin of web request. For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md#virtual-network-service-tags).
-
-### Authenticate traffic
-
-Set custom headers in [standard availability tests](availability-standard-tests.md) to validate traffic.
-
-1. Generate a token or GUID to identify traffic from your availability tests.
-2. Add the custom header "X-Customer-InstanceId" with the value `ApplicationInsightsAvailability:<GUID generated in step 1>` under the "Standard test info" section when creating or updating your availability tests.
-3. Ensure your service checks if incoming traffic includes the header and value defined in the previous steps.
-
- :::image type="content" source="media/availability-private-test/custom-validation-header.png" alt-text="Screenshot that shows custom validation header.":::
-
-Alternatively, set the token as a query parameter. For example, `https://yourtestendpoint/?x-customer-instanceid=applicationinsightsavailability:<your guid>`.
-
-### Configure your firewall to permit incoming requests from Availability Tests
-
-> [!NOTE]
-> This example is specific to network security group service tag usage. Many Azure services accept service tags, each requiring different configuration steps.
--- To simplify enabling Azure services without authorizing individual IPs or maintaining an up-to-date IP list, use [Service tags](../../virtual-network/service-tags-overview.md). Apply these tags across Azure Firewall and network security groups, allowing the Availability Test service access to your endpoints. The service tag `ApplicationInsightsAvailability` applies to all Availability Tests.-
- 1. If you're using [Azure network security groups](../../virtual-network/network-security-groups-overview.md), go to your network security group resource and under **Settings**, select **inbound security rules**. Then select **Add**.
-
- :::image type="content" source="media/availability-private-test/add.png" alt-text="Screenshot that shows the inbound security rules tab in the network security group resource.":::
-
- 2. Next, select **Service Tag** as the source and select **ApplicationInsightsAvailability** as the source service tag. Use open ports 80 (http) and 443 (https) for incoming traffic from the service tag.
-
- :::image type="content" source="media/availability-private-test/service-tag.png" alt-text="Screenshot that shows the Add inbound security rules tab with a source of service tag.":::
--- To manage access when your endpoints are outside Azure or when service tags aren't an option, allowlist the [IP addresses of our web test agents](ip-addresses.md). You can query IP ranges using PowerShell, Azure CLI, or a REST call with the [Service Tag API](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api). For a comprehensive list of current service tags and their IP details, download the [JSON file](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
-
- 1. In your network security group resource, under **Settings**, select **inbound security rules**. Then select **Add**.
- 2. Next, select **IP Addresses** as your source. Then add your IP addresses in a comma-delimited list in source IP address/CIRD ranges.
-
- :::image type="content" source="media/availability-private-test/ip-addresses.png" alt-text="Screenshot that shows the Add inbound security rules tab with a source of IP addresses.":::
-
-## Disconnected or no ingress scenarios
-
-1. Connect your Application Insights resource to your internal service endpoint using [Azure Private Link](../logs/private-link-security.md).
-2. Write custom code to periodically test your internal server or endpoints. Send the results to Application Insights using the [TrackAvailability()](availability-azure-functions.md) API in the core SDK package.
-
-## Troubleshooting
-
-For more information, see the [troubleshooting article](troubleshoot-availability.md).
-
-## Next steps
-
-* [Azure Private Link](../logs/private-link-security.md)
-* [Availability alerts](availability-alerts.md)
-* [Availability overview](availability-overview.md)
-* [Custom availability tests using Azure Functions](availability-azure-functions.md)
azure-monitor Availability Standard Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-standard-tests.md
- Title: Availability Standard test - Azure Monitor Application Insights
-description: Set up Standard tests in Application Insights to check for availability of a website with a single request test.
- Previously updated : 09/12/2023---
-# Standard test
-
-A Standard test is a type of availability test that checks the availability of a website by sending a single request. In addition to validating whether an endpoint is responding and measuring the performance, Standard tests also include SSL certificate validity, proactive lifetime check, HTTP request verb (for example, `GET`,`HEAD`, and `POST`), custom headers, and custom data associated with your HTTP request.
-
-To create an availability test, you must use an existing Application Insights resource or [create an Application Insights resource](create-workspace-resource.md).
-
-> [!TIP]
-> If you're currently using other availability tests, like URL ping tests, you might add Standard tests alongside the others. If you want to use Standard tests instead of one of your other tests, add a Standard test and delete your old test.
-
-## Create a Standard test
-
-To create a Standard test:
-
-1. Go to your Application Insights resource and select the **Availability** pane.
-1. Select **Add Standard test**.
-
- :::image type="content" source="./media/availability-standard-test/standard-test.png" alt-text="Screenshot that shows the Availability pane with the Add Standard test tab open." lightbox="./media/availability-standard-test/standard-test.png":::
-
-1. Input your test name, URL, and other settings that are described in the following table. Then select **Create**.
-
- |Setting | Description |
- |--|-|
- |**URL** | The URL can be any webpage you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects.|
- |**Parse dependent requests**| Test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't selected, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases, which might not be noticeable when you manually browse the site. Please note, we parse only up to 15 dependent requests. |
- |**Enable retries**| When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80% of failures disappear on retry.|
- | **SSL certificate validation test** | You can verify the SSL certificate on your website to make sure it's correctly installed, valid, trusted, and doesn't give any errors to any of your users. |
- | **Proactive lifetime check** | This setting enables you to define a set time period before your SSL certificate expires. After it expires, your test will fail. |
- |**Test frequency**| Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute.|
- |**Test locations**| Our servers send web requests to your URL from these locations. *Our minimum number of recommended test locations is five* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations.|
- | **Custom headers** | Key value pairs that define the operating parameters. |
- | **HTTP request verb** | Indicate what action you want to take with your request. |
- | **Request body** | Custom data associated with your HTTP request. You can upload your own files, enter your content, or disable this feature. |
-
-## Success criteria
-
-|Setting| Description|
-|-||
-| **Test timeout** |Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period.|
-| **HTTP response** | The returned status code that's counted as a success. The number 200 is the code that indicates that a normal webpage has been returned.|
-| **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes, you might have to update it. *Only English characters are supported with content match.* |
-
-## Alerts
-
-|Setting| Description|
-|-||
-|**Near real time** | We recommend using near real time alerts. Configuring this type of alert is done after your availability test is created. |
-|**Alert location threshold**|We recommend a minimum of 3/5 locations. The optimal relationship between alert location threshold and the number of test locations is **alert location threshold** = **number of test locations - 2, with a minimum of five test locations.**|
-
-## Location population tags
-
-You can use the following population tags for the geo-location attribute when you deploy an availability URL ping test by using Azure Resource Manager.
-
-### Azure Government
-
-| Display name | Population name |
-|-||
-| USGov Virginia | usgov-va-azr |
-| USGov Arizona | usgov-phx-azr |
-| USGov Texas | usgov-tx-azr |
-| USDoD East | usgov-ddeast-azr |
-| USDoD Central | usgov-ddcentral-azr |
-
-### Microsoft Azure operated by 21Vianet
-
-| Display name | Population name |
-|-||
-| China East | mc-cne-azr |
-| China East 2 | mc-cne2-azr |
-| China North | mc-cnn-azr |
-| China North 2 | mc-cnn2-azr |
-
-#### Azure
-
-| Display name | Population name |
-|-|-|
-| Australia East | emea-au-syd-edge |
-| Brazil South | latam-br-gru-edge |
-| Central US | us-fl-mia-edge |
-| East Asia | apac-hk-hkn-azr |
-| East US | us-va-ash-azr |
-| France South (Formerly France Central) | emea-ch-zrh-edge |
-| France Central | emea-fr-pra-edge |
-| Japan East | apac-jp-kaw-edge |
-| North Europe | emea-gb-db3-azr |
-| North Central US | us-il-ch1-azr |
-| South Central US | us-tx-sn1-azr |
-| Southeast Asia | apac-sg-sin-azr |
-| UK West | emea-se-sto-edge |
-| West Europe | emea-nl-ams-azr |
-| West US | us-ca-sjc-azr |
-| UK South | emea-ru-msa-edge |
-
-## See your availability test results
-
-Availability test results can be visualized with both **Line** and **Scatter Plot** views.
-
-After a few minutes, select **Refresh** to see your test results.
--
-The **Scatter Plot** view shows samples of the test results that have diagnostic test-step detail in them. The test engine stores diagnostic detail for tests that have failures. For successful tests, diagnostic details are stored for a subset of the executions. Hover over any of the green/red dots to see the test, test name, and location.
--
-Select a particular test or location. Or you can reduce the time period to see more results around the time period of interest. Use Search Explorer to see results from all executions. Or you can use Log Analytics queries to run custom reports on this data.
-
-## Inspect and edit tests
-
-To edit, temporarily disable, or delete a test, select the ellipses next to a test name. It might take up to 20 minutes for configuration changes to propagate to all test agents after a change is made.
--
-You might want to disable availability tests or the alert rules associated with them while you're performing maintenance on your service.
-
-## If you see failures
-
-Select a red dot.
--
-From an availability test result, you can see the transaction details across all components. Here you can:
-
-* Review the troubleshooting report to determine what might have caused your test to fail but your application is still available.
-* Inspect the response received from your server.
-* Diagnose failure with correlated server-side telemetry collected while processing the failed availability test.
-* Log an issue or work item in Git or Azure Boards to track the problem. The bug will contain a link to this event.
-* Open the web test result in Visual Studio.
-
-To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics).
-
-Select the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code-level diagnostics.
--
-In addition to the raw results, you can also view two key availability metrics in [metrics explorer](../essentials/metrics-getting-started.md):
-
-* **Availability**: Percentage of the tests that were successful across all test executions.
-* **Test Duration**: Average test duration across all test executions.
-
-## Next steps
-
-* [Availability alerts](availability-alerts.md)
-* [Multi-step web tests](availability-multistep.md)
-* [Troubleshooting](troubleshoot-availability.md)
-* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Availability Test Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md
- Title: Migrate from Azure Monitor Application Insights classic URL ping tests to standard tests
-description: How to migrate from Azure Monitor Application Insights classic availability URL ping tests to standard tests.
-- Previously updated : 11/15/2023---
-# Migrate availability tests
-
-In this article, we guide you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md).
-
-We simplify this process by providing clear step-by-step instructions to ensure a seamless transition and equip your applications with the most up-to-date monitoring capabilities.
-
-## Migrate classic URL ping tests to standard tests
-
-The following steps walk you through the process of creating [standard tests](availability-standard-tests.md) that replicate the functionality of your [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). It allows you to more easily start using the advanced features of [standard tests](availability-standard-tests.md) using your previously created [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability).
-
-> [!IMPORTANT]
-> On September 30th, 2026, **[URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) will be retired**. Transition to **[standard tests](/editor/availability-standard-tests.md)** before then.
-> - A cost is associated with running **[standard tests](/editor/availability-standard-tests.md)**. Once you create a **[standard test](/editor/availability-standard-tests.md)**, you will be charged for test executions.
->
-> - Refer to **[Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing)** before starting this process.
->
-### Prerequisites
-
-- Any [URL ping test](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) within Application Insights
-- [Azure PowerShell](/powershell/azure/get-started-azureps) access
-
-### Steps
-
-1. Connect to your subscription with Azure PowerShell (Connect-AzAccount + Set-AzContext).
-
-2. List all URL ping tests in the current subscription:
-
- ```azurepowershell
- Get-AzApplicationInsightsWebTest | `
- Where-Object { $_.WebTestKind -eq "ping" } | `
- Format-Table -Property ResourceGroupName,Name,WebTestKind,Enabled;
- ```
-
-3. Find the URL Ping Test you want to migrate and record its resource group and name.
-
-4. The following commands create a standard test with the same logic as the URL ping test:
-
- ```azurepowershell
- $resourceGroup = "pingTestResourceGroup";
- $appInsightsComponent = "componentName";
- $pingTestName = "pingTestName";
- $newStandardTestName = "newStandardTestName";
-
- $componentId = (Get-AzApplicationInsights -ResourceGroupName $resourceGroup -Name $appInsightsComponent).Id;
- $pingTest = Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
- $pingTestRequest = ([xml]$pingTest.ConfigurationWebTest).WebTest.Items.Request;
- $pingTestValidationRule = ([xml]$pingTest.ConfigurationWebTest).WebTest.ValidationRules.ValidationRule;
-
- $dynamicParameters = @{};
-
- if ($pingTestRequest.IgnoreHttpStatusCode -eq [bool]::FalseString) {
- $dynamicParameters["RuleExpectedHttpStatusCode"] = [convert]::ToInt32($pingTestRequest.ExpectedHttpStatusCode, 10);
- }
-
- if ($pingTestValidationRule -and $pingTestValidationRule.DisplayName -eq "Find Text" `
- -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Name -eq "FindText" `
- -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Value) {
- $dynamicParameters["ContentMatch"] = $pingTestValidationRule.RuleParameters.RuleParameter[0].Value;
- $dynamicParameters["ContentPassIfTextFound"] = $true;
- }
-
- New-AzApplicationInsightsWebTest @dynamicParameters -ResourceGroupName $resourceGroup -Name $newStandardTestName `
- -Location $pingTest.Location -Kind 'standard' -Tag @{ "hidden-link:$componentId" = "Resource" } -TestName $newStandardTestName `
- -RequestUrl $pingTestRequest.Url -RequestHttpVerb "GET" -GeoLocation $pingTest.PropertiesLocations -Frequency $pingTest.Frequency `
- -Timeout $pingTest.Timeout -RetryEnabled:$pingTest.RetryEnabled -Enabled:$pingTest.Enabled `
- -RequestParseDependent:($pingTestRequest.ParseDependentRequests -eq [bool]::TrueString);
- ```
-
-5. The new standard test doesn't have alert rules by default, so it doesn't create noisy alerts. No changes are made to your URL ping test so you can continue to rely on it for alerts.
-6. Once you have validated the functionality of the new standard test, [update your alert rules](/azure/azure-monitor/alerts/alerts-manage-alert-rules) that reference the URL ping test to reference the standard test instead. Then you disable or delete the URL ping test.
-7. To delete a URL ping test with Azure PowerShell, you can use this command:
-
- ```azurepowershell
- Remove-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
- ```
-
-## FAQ
-
-#### When should I use these commands?
-
-Migrate URL ping tests to standard tests now to take advantage of new capabilities.
-
-#### Do these steps work for both HTTP and HTTPS endpoints?
-
-Yes, these commands work for both HTTP and HTTPS endpoints, which are used in your URL ping Tests.
-
-## More Information
-
-* [Standard tests](availability-standard-tests.md)
-* [Availability alerts](availability-alerts.md)
-* [Troubleshooting](troubleshoot-availability.md)
-* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
-* [Web test REST API](/rest/api/application-insights/web-tests)
azure-monitor Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability.md
+
+ Title: Application Insights availability tests
+description: Set up recurring web tests to monitor availability and responsiveness of your app or website.
+ Last updated : 07/05/2024+++
+# Application Insights availability tests
+
+After you deploy your web app or website, you can set up recurring tests to monitor availability and responsiveness. [Application Insights](./app-insights-overview.md) sends web requests to your application at regular intervals from points around the world. It can alert you if your application isn't responding or responds too slowly. You can create up to 100 availability tests per Application Insights resource.
+
+Availability tests don't require any changes to the website you're testing and work for any HTTP or HTTPS endpoint that's accessible from the public internet. You can also test the availability of a REST API that your service depends on.
+
+> [!NOTE]
+> Availability tests are stored encrypted, according to [Azure data encryption at rest](../../security/fundamentals/encryption-atrest.md#encryption-at-rest-in-microsoft-cloud-services) policies.
+
+## Types of availability tests
+
+There are four types of availability tests:
+
+* Standard test: This type of availability test checks the availability of a website by sending a single request, similar to the deprecated URL ping test. In addition to validating whether an endpoint is responding and measuring performance, Standard tests also include TLS/SSL certificate validity, a proactive lifetime check, an HTTP request verb (for example, `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
+
+* Custom TrackAvailability test: If you decide to create a custom application to run availability tests, you can use the [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) method to send the results to Application Insights.
+
+* [(Deprecated) Multi-step web test](availability-multistep.md): You can play back this recording of a sequence of web requests to test more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to the portal, where you can run them.
+
+* [(Deprecated) URL ping test](monitor-web-app-availability.md): You can create this test through the Azure portal to validate whether an endpoint is responding and measure performance associated with that response. You can also set custom success criteria coupled with more advanced features, like parsing dependent requests and allowing for retries.
+
+> [!IMPORTANT]
+> There are two upcoming availability tests retirements:
+> * **Multi-step web tests:** On August 31, 2024, multi-step web tests in Application Insights will be retired. We advise users of these tests to transition to alternative availability tests before the retirement date. After this date, we will take down the underlying infrastructure, which will break any remaining multi-step tests.
+>
+> * **URL ping tests:** On September 30, 2026, URL ping tests in Application Insights will be retired. Existing URL ping tests will be removed from your resources. Review the [pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing) for standard tests and [transition](https://aka.ms/availabilitytestmigration) to using them before September 30, 2026 to ensure you can continue to run single-step availability tests in your Application Insights resources.
+
+<!-- Move this message to "previous-version" documents for both web tests
+> [!IMPORTANT]
+> [Multi-step web test](availability-multistep.md) and [URL ping test](monitor-web-app-availability.md) rely on the DNS infrastructure of the public internet to resolve the domain names of the tested endpoints. If you're using private DNS, you must ensure that the public domain name servers can resolve every domain name of your test. When that's not possible, you can use [custom TrackAvailability tests](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) instead.
+-->
+
+## Create an availability test
+
+## [Standard test](#tab/standard)
+
+> [!TIP]
+> If you're currently using other availability tests, like URL ping tests, you might add Standard tests alongside the others. If you want to use Standard tests instead of one of your other tests, add a Standard test and delete your old test.
+
+### Prerequisites
+
+> [!div class="checklist"]
+> * [Workspace-based Application Insights resource](create-workspace-resource.md)
+
+### Get started
+
+1. Go to your Application Insights resource and select the **Availability** pane.
+
+1. Select **Add Standard test**.
+
+ :::image type="content" source="./media/availability-standard-test/standard-test.png" alt-text="Screenshot that shows the Availability pane with the Add Standard test tab open." lightbox="./media/availability-standard-test/standard-test.png":::
+
+1. Input your test name, URL, and other settings that are described in the following table. Then, select **Create**.
+
+ | Setting | Description |
+ ||-|
+ | **URL** | The URL can be any webpage you want to test, but it must be visible from the public internet. The URL can include a query string. So, for example, you can exercise your database a little. If the URL resolves to a redirect, we follow it up to 10 redirects. |
+    | **Parse dependent requests** | Test requests images, scripts, style files, and other files that are part of the webpage under test. The recorded response time includes the time taken to get these files. The test fails if any of these resources can't be successfully downloaded within the timeout for the whole test. If the option isn't selected, the test only requests the file at the URL you specified. Enabling this option results in a stricter check. The test could fail for cases that might not be noticeable when you manually browse the site. Only up to 15 dependent requests are parsed. |
+ | **Enable retries** | When the test fails, it's retried after a short interval. A failure is reported only if three successive attempts fail. Subsequent tests are then performed at the usual test frequency. Retry is temporarily suspended until the next success. This rule is applied independently at each test location. *We recommend this option*. On average, about 80% of failures disappear on retry. |
+ | **SSL certificate validation test** | You can verify the SSL certificate on your website to make sure it's correctly installed, valid, trusted, and doesn't give any errors to any of your users. |
+ | **Proactive lifetime check** | This setting enables you to define a set time period before your SSL certificate expires. After it expires, your test will fail. |
+ | **Test frequency** | Sets how often the test is run from each test location. With a default frequency of five minutes and five test locations, your site is tested on average every minute. |
+ | **Test locations** | Our servers send web requests to your URL from these locations. *Our minimum number of recommended test locations is five* to ensure that you can distinguish problems in your website from network issues. You can select up to 16 locations. |
+ | **Custom headers** | Key value pairs that define the operating parameters. |
+ | **HTTP request verb** | Indicate what action you want to take with your request. |
+ | **Request body** | Custom data associated with your HTTP request. You can upload your own files, enter your content, or disable this feature. |
+
+### Success criteria
+
+| Setting | Description |
+|--|--|
+| **Test timeout** | Decrease this value to be alerted about slow responses. The test is counted as a failure if the responses from your site haven't been received within this period. If you selected **Parse dependent requests**, all the images, style files, scripts, and other dependent resources must have been received within this period. |
+| **HTTP response** | The returned status code that's counted as a success. The number 200 is the code that indicates that a normal webpage has been returned. |
+| **Content match** | A string, like "Welcome!" We test that an exact case-sensitive match occurs in every response. It must be a plain string, without wildcards. Don't forget that if your page content changes, you might have to update it. *Only English characters are supported with content match.* |
+
+## [TrackAvailability()](#tab/track)
+
+> [!IMPORTANT]
+> [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) requires a developer investment in writing and maintaining potentially complex custom code.
+>
+> *Standard tests should always be used if possible*, as they require little investment, no maintenance, and have few prerequisites.
+
+### Prerequisites
+
+> [!div class="checklist"]
+> * [Workspace-based Application Insights resource](create-workspace-resource.md)
+> * Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions
+> * Developer expertise capable of authoring [custom code](#basic-code-sample) for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs
+
+### Basic code sample
+
+> [!NOTE]
+> This example is designed solely to show you the mechanics of how the `TrackAvailability()` API call works within an Azure function. It doesn't show you how to write the underlying HTTP test code or business logic that's required to turn this example into a fully functional availability test.
+>
+> By default, if you walk through this example, you'll be creating a basic availability HTTP GET test. To follow these instructions, you must use the [dedicated plan](../../azure-functions/dedicated-plan.md) to allow editing code in App Service Editor.
+
+#### Create a timer trigger function
+
+1. Create an Azure Functions resource.
+
+ * **If you already have an Application Insights resource:**
+
+ By default, Azure Functions creates an Application Insights resource. If you want to use a resource you created previously, you must specify that during creation.
+
+ Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app) with the following modification:
+
+ On the **Monitoring** tab, select the **Application Insights** dropdown box and then enter or select the name of your resource:
+
+ :::image type="content" source="media/availability-azure-functions/app-insights-resource.png" alt-text="Screenshot that shows selecting your existing Application Insights resource on the Monitoring tab.":::
+
+ * **If you don't have an Application Insights resource created yet for your timer-triggered function:**
+
+ By default, when you're creating your Azure Functions application, it creates an Application Insights resource for you. Follow the instructions on how to [create an Azure Functions resource](../../azure-functions/functions-create-scheduled-function.md#create-a-function-app).
+
+ > [!NOTE]
+ > You can host your functions on a Consumption, Premium, or App Service plan. If you're testing behind a virtual network or testing nonpublic endpoints, you'll need to use the Premium plan in place of the Consumption plan. Select your plan on the **Hosting** tab. Ensure the latest .NET version is selected when you create the function app.
+
+1. Create a timer trigger function.
+
+ 1. In your function app, select the **Functions** tab.
+
+ 1. Select **Add**. On the **Add function** pane, select the following configurations:
+ * **Development environment**: Develop in portal
+ * **Select a template**: Timer trigger
+
+ 1. Select **Add** to create the timer trigger function.
+
+ :::image type="content" source="media/availability-azure-functions/add-function.png" alt-text="Screenshot that shows how to add a timer trigger function to your function app." lightbox="media/availability-azure-functions/add-function.png":::
+
+#### Add and edit code in the App Service Editor
+
+Go to your deployed function app, and under **Development Tools**, select the **App Service Editor** tab.
+
+To create a new file, right-click under your timer trigger function (for example, **TimerTrigger1**) and select **New File**. Then enter the name of the file and select **Enter**.
+
+1. Create a new file called **function.proj** and paste the following code:
+
+ ```xml
+ <Project Sdk="Microsoft.NET.Sdk">
+ <PropertyGroup>
+ <TargetFramework>netstandard2.0</TargetFramework>
+ </PropertyGroup>
+ <ItemGroup>
+        <PackageReference Include="Microsoft.ApplicationInsights" Version="2.15.0" /> <!-- Ensure you're using the latest version -->
+ </ItemGroup>
+ </Project>
+ ```
+
+ :::image type="content" source="media/availability-azure-functions/function-proj.png" alt-text=" Screenshot that shows function.proj in the App Service Editor." lightbox="media/availability-azure-functions/function-proj.png":::
+
+1. Create a new file called **runAvailabilityTest.csx** and paste the following code:
+
+ ```csharp
+ using System.Net.Http;
+
+ public async static Task RunAvailabilityTestAsync(ILogger log)
+ {
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+ }
+ }
+ ```
+
+1. Define the `REGION_NAME` environment variable as a valid Azure availability location.
+
+ Run the following command in the [Azure CLI](/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions.
+
+ ```azurecli
+ az account list-locations -o table
+ ```
+
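+    One way to define the variable, sketched here with placeholder resource names and an example region, is to set it as a function app setting with the Az.Functions module:
+
+    ```azurepowershell
+    # Set REGION_NAME on the function app (placeholder resource group and app names).
+    Update-AzFunctionAppSetting -ResourceGroupName "myResourceGroup" -Name "myFunctionApp" -AppSetting @{ REGION_NAME = "eastus" };
+    ```
+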
+1. Copy the following code into the **run.csx** file. (You replace the pre-existing code.)
+
+ ```csharp
+ #load "runAvailabilityTest.csx"
+
+    using System;
+    using System.Diagnostics;
+    using Microsoft.ApplicationInsights;
+    using Microsoft.ApplicationInsights.Channel;
+    using Microsoft.ApplicationInsights.DataContracts;
+    using Microsoft.ApplicationInsights.Extensibility;
+
+    private static TelemetryClient telemetryClient;
+
+    // =============================================================
+    // ****************** DO NOT MODIFY THIS FILE ******************
+    // Business logic must be implemented in RunAvailabilityTestAsync function in runAvailabilityTest.csx
+    // If this file does not exist, please add it first
+    // =============================================================
+
+    public async static Task Run(TimerInfo myTimer, ILogger log, ExecutionContext executionContext)
+    {
+ if (telemetryClient == null)
+ {
+ // Initializing a telemetry configuration for Application Insights based on connection string
+
+ var telemetryConfiguration = new TelemetryConfiguration();
+ telemetryConfiguration.ConnectionString = Environment.GetEnvironmentVariable("APPLICATIONINSIGHTS_CONNECTION_STRING");
+ telemetryConfiguration.TelemetryChannel = new InMemoryChannel();
+ telemetryClient = new TelemetryClient(telemetryConfiguration);
+ }
+
+ string testName = executionContext.FunctionName;
+ string location = Environment.GetEnvironmentVariable("REGION_NAME");
+ var availability = new AvailabilityTelemetry
+ {
+ Name = testName,
+
+ RunLocation = location,
+
+ Success = false,
+ };
+
+ availability.Context.Operation.ParentId = Activity.Current.SpanId.ToString();
+ availability.Context.Operation.Id = Activity.Current.RootId;
+ var stopwatch = new Stopwatch();
+ stopwatch.Start();
+
+ try
+ {
+ using (var activity = new Activity("AvailabilityContext"))
+ {
+ activity.Start();
+ availability.Id = Activity.Current.SpanId.ToString();
+ // Run business logic
+ await RunAvailabilityTestAsync(log);
+ }
+ availability.Success = true;
+ }
+
+ catch (Exception ex)
+ {
+ availability.Message = ex.Message;
+ throw;
+ }
+
+ finally
+ {
+ stopwatch.Stop();
+ availability.Duration = stopwatch.Elapsed;
+ availability.Timestamp = DateTimeOffset.UtcNow;
+ telemetryClient.TrackAvailability(availability);
+ telemetryClient.Flush();
+ }
+ }
+
+ ```
+
+#### Multi-step web test code sample
+
+Follow the same instructions above and instead paste the following code into the **runAvailabilityTest.csx** file:
+
+```csharp
+using System.Net.Http;
+
+public async static Task RunAvailabilityTestAsync(ILogger log)
+{
+ using (var httpClient = new HttpClient())
+ {
+ // TODO: Replace with your business logic
+ await httpClient.GetStringAsync("https://www.bing.com/");
+
+ // TODO: Replace with your business logic for an additional monitored endpoint, and logic for additional steps as needed
+ await httpClient.GetStringAsync("https://www.learn.microsoft.com/");
+ }
+}
+```
+++++
+## Availability alerts
+
+| Setting | Description |
+||-|
+| **Near real time** | We recommend using near real time alerts. Configuring this type of alert is done after your availability test is created. |
+| **Alert location threshold** | We recommend a minimum of 3 out of 5 locations. The optimal relationship between the alert location threshold and the number of test locations is **alert location threshold = number of test locations - 2**, with a minimum of five test locations. |
+
+### Location population tags
+
+You can use the following population tags for the geo-location attribute when you deploy an availability URL ping test by using Azure Resource Manager.
+
+#### Azure
+
+| Display name | Population name |
+|-|-|
+| Australia East | emea-au-syd-edge |
+| Brazil South | latam-br-gru-edge |
+| Central US | us-fl-mia-edge |
+| East Asia | apac-hk-hkn-azr |
+| East US | us-va-ash-azr |
+| France South (Formerly France Central) | emea-ch-zrh-edge |
+| France Central | emea-fr-pra-edge |
+| Japan East | apac-jp-kaw-edge |
+| North Europe | emea-gb-db3-azr |
+| North Central US | us-il-ch1-azr |
+| South Central US | us-tx-sn1-azr |
+| Southeast Asia | apac-sg-sin-azr |
+| UK West | emea-se-sto-edge |
+| West Europe | emea-nl-ams-azr |
+| West US | us-ca-sjc-azr |
+| UK South | emea-ru-msa-edge |
+
+#### Azure Government
+
+| Display name | Population name |
+|-||
+| USGov Virginia | usgov-va-azr |
+| USGov Arizona | usgov-phx-azr |
+| USGov Texas | usgov-tx-azr |
+| USDoD East | usgov-ddeast-azr |
+| USDoD Central | usgov-ddcentral-azr |
+
+#### Microsoft Azure operated by 21Vianet
+
+| Display name | Population name |
+|-||
+| China East | mc-cne-azr |
+| China East 2 | mc-cne2-azr |
+| China North | mc-cnn-azr |
+| China North 2 | mc-cnn2-azr |
+
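+The same population names can also be used from Azure PowerShell instead of a raw Resource Manager template. A hedged sketch, assuming the `New-AzApplicationInsightsWebTestGeolocationObject` helper from the Az.ApplicationInsights module is available; the resulting objects feed the `-GeoLocation` parameter of `New-AzApplicationInsightsWebTest`:
+
+```azurepowershell
+# Build geo-location objects from population names in the preceding tables.
+$geoLocations = @(
+    New-AzApplicationInsightsWebTestGeolocationObject -Location "us-va-ash-azr"
+    New-AzApplicationInsightsWebTestGeolocationObject -Location "emea-nl-ams-azr"
+);
+```
+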
+### Enable alerts
+
+Alerts are automatically enabled by default, but to fully configure an alert, you must first create your availability test.
+
+> [!NOTE]
+> With the [new unified alerts](../alerts/alerts-overview.md), the alert rule severity and notification preferences with [action groups](../alerts/action-groups.md) *must be* configured in the alerts experience. Without the following steps, you'll only receive in-portal notifications.
+
+<!--
+-->
+
+1. After you save the availability test, on the **Details** tab, select the ellipsis by the test you made. Select **Open Rules (Alerts) page**.
+
+ :::image type="content" source="./media/availability-alerts/edit-alert.png" alt-text="Screenshot that shows the Availability pane for an Application Insights resource in the Azure portal and the Open Rules (Alerts) page menu option." lightbox="./media/availability-alerts/edit-alert.png":::
+
+1. Set the severity level, rule description, and action group that have the notification preferences you want to use for this alert rule.
+
+### Alert criteria
+
+Automatically enabled availability alerts trigger an email when the endpoint you've defined is unavailable and when it's available again. Availability alerts that are created through this experience are state based. When the alert criteria are met, a single alert gets generated when the website is detected as unavailable. If the website is still down the next time the alert criteria are evaluated, it won't generate a new alert.
+
+For example, suppose that your website is down for an hour and you've set up an email alert with an evaluation frequency of 15 minutes. You'll only receive an email when the website goes down and another email when it's back online. You won't receive continuous alerts every 15 minutes to remind you that the website is still unavailable.
+
+You might not want to receive notifications when your website is down for only a short period of time, for example, during maintenance. You can change the evaluation frequency to a higher value than the expected downtime, up to 15 minutes. You can also increase the alert location threshold so that an alert triggers only if the website is down in a specific number of regions. For longer scheduled downtimes, temporarily deactivate the alert rule or create a custom rule, which gives you more options to account for the downtime.
+
+#### Change the alert criteria
+
+To make changes to the location threshold, aggregation period, and test frequency, select the condition on the edit page of the alert rule to open the "**Configure signal logic**" window.
+
+### Create a custom alert rule
+
+If you need advanced capabilities, you can create a custom alert rule on the **Alerts** tab. Select **Create** > **Alert rule**. Choose **Metrics** for **Signal type** to show all available signals and select **Availability**.
+
+A custom alert rule offers higher values for the aggregation period (up to 24 hours instead of 6 hours) and the test frequency (up to 1 hour instead of 15 minutes). It also adds options to further define the logic by selecting different operators, aggregation types, and threshold values.
+
+* **Alert on X out of Y locations reporting failures**: The X out of Y locations alert rule is enabled by default in the [new unified alerts experience](../alerts/alerts-overview.md) when you create a new availability test. You can opt out by selecting the "classic" option or by choosing to disable the alert rule. Configure the action groups to receive notifications when the alert triggers by following the preceding steps. Without this step, you'll only receive in-portal notifications when the rule triggers.
+
+* **Alert on availability metrics**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on segmented aggregate availability and test duration metrics too:
+
+ 1. Select an Application Insights resource in the **Metrics** experience, and select an **Availability** metric.
+
+ 1. The **Configure alerts** option from the menu takes you to the new experience where you can select specific tests or locations on which to set up alert rules. You can also configure the action groups for this alert rule here.
+
+* **Alert on custom analytics queries**: By using the [new unified alerts](../alerts/alerts-overview.md), you can alert on [custom log queries](../alerts/alerts-types.md#log-alerts). With custom queries, you can alert on any arbitrary condition that helps you get the most reliable signal of availability issues. It's also applicable if you're sending custom availability results by using the TrackAvailability SDK.
+
+ The metrics on availability data include any custom availability results you might be submitting by calling the TrackAvailability SDK. You can use the alerting on metrics support to alert on custom availability results.
+
+### Automate alerts
+
+To automate this process with Azure Resource Manager templates, see [Create a metric alert with an Azure Resource Manager template](../alerts/alerts-metric-create-templates.md#template-for-an-availability-test-along-with-a-metric-alert).
+
+## See your availability test results
+
+This section explains how to review availability test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). Availability test results can be visualized with both **Line** and **Scatter Plot** views.
+
+### Check availability
+
+Start by reviewing the graph on the **Availability** tab of your Application Insights resource.
+
+### [Standard test](#tab/standard)
++
+### [TrackAvailability](#tab/track)
+
+> [!NOTE]
+> Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
++++
+The **Scatter Plot** view shows samples of the test results that have diagnostic test-step detail in them. The test engine stores diagnostic detail for tests that have failures. For successful tests, diagnostic details are stored for a subset of the executions. Hover over any of the green/red dots to see the test, test name, and location.
++
+Select a particular test or location. Or you can reduce the time period to see more results around the time period of interest. Use Search Explorer to see results from all executions. Or you can use Log Analytics queries to run custom reports on this data.
+
+To see the end-to-end transaction details, under **Drill into**, select **Successful** or **Failed**. Then select a sample. You can also get to the end-to-end transaction details by selecting a data point on the graph.
+++
+### Inspect and edit tests
+
+To edit, temporarily disable, or delete a test, select the ellipses next to a test name. It might take up to 20 minutes for configuration changes to propagate to all test agents after a change is made.
++
+You might want to disable availability tests or the alert rules associated with them while you're performing maintenance on your service.
+
+### If you see failures
+
+Select a red dot.
++
+From an availability test result, you can see the transaction details across all components. Here you can:
+
+* Review the troubleshooting report to determine what might have caused your test to fail even though your application is still available.
+* Inspect the response received from your server.
+* Diagnose failure with correlated server-side telemetry collected while processing the failed availability test.
+* Log an issue or work item in Git or Azure Boards to track the problem. The bug will contain a link to this event.
+* Open the web test result in Visual Studio.
+
+To learn more about the end-to-end transaction diagnostics experience, see the [transaction diagnostics documentation](./transaction-search-and-diagnostics.md?tabs=transaction-diagnostics).
+
+Select the exception row to see the details of the server-side exception that caused the synthetic availability test to fail. You can also get the [debug snapshot](./snapshot-debugger.md) for richer code-level diagnostics.
++
+In addition to the raw results, you can also view two key availability metrics in [metrics explorer](../essentials/metrics-getting-started.md):
+
+* **Availability**: Percentage of the tests that were successful across all test executions.
+* **Test Duration**: Average test duration across all test executions.
+
+### Query in Log Analytics
+
+You can use Log Analytics to view your availability results, dependencies, and more. To learn more about Log Analytics, see [Log query overview](../logs/log-query-overview.md).
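+
+A minimal sketch of running such a query from Azure PowerShell, assuming a workspace-based resource (where results land in the `AppAvailabilityResults` table) and a placeholder workspace ID:
+
+```azurepowershell
+# Success rate per availability test over the last 24 hours (placeholder workspace ID).
+$query = @"
+AppAvailabilityResults
+| where TimeGenerated > ago(1d)
+| summarize SuccessRate = avg(toint(Success)) * 100 by Name
+"@
+(Invoke-AzOperationalInsightsQuery -WorkspaceId "<your-workspace-id>" -Query $query).Results
+```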
+++
+## Migrate availability tests
+
+This section guides you through the process of migrating from [classic URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) to the modern and efficient [standard tests](availability-standard-tests.md).
+
+The step-by-step instructions that follow are designed to make the transition seamless and equip your applications with up-to-date monitoring capabilities.
+
+### Migrate classic URL ping tests to standard tests
+
+The following steps walk you through the process of creating [standard tests](availability-standard-tests.md) that replicate the functionality of your [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability). It allows you to more easily start using the advanced features of [standard tests](availability-standard-tests.md) using your previously created [URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability).
+
+> [!IMPORTANT]
+> A cost is associated with running **[standard tests](availability-standard-tests.md)**. Once you create a **[standard test](availability-standard-tests.md)**, you will be charged for test executions. Refer to **[Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing)** before starting this process.
+
+#### Prerequisites
+
+* Any [URL ping test](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) within Application Insights
+* [Azure PowerShell](/powershell/azure/get-started-azureps) access
+
+#### Get started
+
+1. Connect to your subscription with Azure PowerShell by using `Connect-AzAccount` and `Set-AzContext`.
+
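+    A minimal sketch, assuming a placeholder subscription ID that you replace with your own:
+
+    ```azurepowershell
+    # Sign in and select the subscription that contains your URL ping tests.
+    Connect-AzAccount;
+    Set-AzContext -Subscription "<your-subscription-id>";
+    ```
+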
+1. List all URL ping tests in the current subscription:
+
+ ```azurepowershell
+ Get-AzApplicationInsightsWebTest | `
+ Where-Object { $_.WebTestKind -eq "ping" } | `
+ Format-Table -Property ResourceGroupName,Name,WebTestKind,Enabled;
+ ```
+
+1. Find the URL ping test you want to migrate and record its resource group and name.
+
+1. The following commands create a standard test with the same logic as the URL ping test.
+
+ > [!NOTE]
+    > The following commands work for both HTTP and HTTPS endpoints used in your URL ping tests.
+
+    ```azurepowershell
+ $resourceGroup = "pingTestResourceGroup";
+ $appInsightsComponent = "componentName";
+ $pingTestName = "pingTestName";
+ $newStandardTestName = "newStandardTestName";
+
+ $componentId = (Get-AzApplicationInsights -ResourceGroupName $resourceGroup -Name $appInsightsComponent).Id;
+ $pingTest = Get-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
+ $pingTestRequest = ([xml]$pingTest.ConfigurationWebTest).WebTest.Items.Request;
+ $pingTestValidationRule = ([xml]$pingTest.ConfigurationWebTest).WebTest.ValidationRules.ValidationRule;
+
+ $dynamicParameters = @{};
+
+ if ($pingTestRequest.IgnoreHttpStatusCode -eq [bool]::FalseString) {
+ $dynamicParameters["RuleExpectedHttpStatusCode"] = [convert]::ToInt32($pingTestRequest.ExpectedHttpStatusCode, 10);
+ }
+
+ if ($pingTestValidationRule -and $pingTestValidationRule.DisplayName -eq "Find Text" `
+ -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Name -eq "FindText" `
+ -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Value) {
+ $dynamicParameters["ContentMatch"] = $pingTestValidationRule.RuleParameters.RuleParameter[0].Value;
+ $dynamicParameters["ContentPassIfTextFound"] = $true;
+ }
+
+ New-AzApplicationInsightsWebTest @dynamicParameters -ResourceGroupName $resourceGroup -Name $newStandardTestName `
+ -Location $pingTest.Location -Kind 'standard' -Tag @{ "hidden-link:$componentId" = "Resource" } -TestName $newStandardTestName `
+ -RequestUrl $pingTestRequest.Url -RequestHttpVerb "GET" -GeoLocation $pingTest.PropertiesLocations -Frequency $pingTest.Frequency `
+ -Timeout $pingTest.Timeout -RetryEnabled:$pingTest.RetryEnabled -Enabled:$pingTest.Enabled `
+ -RequestParseDependent:($pingTestRequest.ParseDependentRequests -eq [bool]::TrueString);
+ ```
+
+1. The new standard test doesn't have alert rules by default, so it doesn't create noisy alerts. No changes are made to your URL ping test so you can continue to rely on it for alerts.
+
+1. Once you've validated the functionality of the new standard test, [update your alert rules](/azure/azure-monitor/alerts/alerts-manage-alert-rules) that reference the URL ping test to reference the standard test instead. Then disable or delete the URL ping test.
+
+1. To delete a URL ping test with Azure PowerShell, you can use this command:
+
+ ```azurepowershell
+ Remove-AzApplicationInsightsWebTest -ResourceGroupName $resourceGroup -Name $pingTestName;
+ ```
+
+## Testing behind a firewall
+
+To ensure endpoint availability behind firewalls, enable public availability tests or run availability tests in disconnected or no ingress scenarios.
+
+### Public availability test enablement
+
+Ensure your internal website has a public Domain Name System (DNS) record. Availability tests fail if DNS can't be resolved. For more information, see [Create a custom domain name for internal application](../../cloud-services/cloud-services-custom-domain-name-portal.md#add-an-a-record-for-your-custom-domain).
+
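+A quick check of public resolution, sketched with a placeholder host name and a public resolver (`Resolve-DnsName` requires the Windows DnsClient module):
+
+```azurepowershell
+# Query a public DNS server directly; this fails if the name has no public record.
+Resolve-DnsName -Name "www.yourinternalsite.com" -Type A -Server 8.8.8.8;
+```
+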
+> [!WARNING]
+> The IP addresses used by the availability tests service are shared and can expose your firewall-protected service endpoints to other tests. IP address filtering alone doesn't secure your service's traffic, so we recommend adding extra custom headers to verify the origin of each web request. For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md#virtual-network-service-tags).
+
+#### Authenticate traffic
+
+Set custom headers in [standard availability tests](availability-standard-tests.md) to validate traffic.
+
+1. Generate a token or GUID to identify traffic from your availability tests.
+
+1. Add the custom header "X-Customer-InstanceId" with the value `ApplicationInsightsAvailability:<GUID generated in step 1>` under the "Standard test info" section when creating or updating your availability tests.
+
+1. Ensure your service checks if incoming traffic includes the header and value defined in the previous steps.
+
+ :::image type="content" source="media/availability-private-test/custom-validation-header.png" alt-text="Screenshot that shows custom validation header.":::
+
+Alternatively, set the token as a query parameter. For example, `https://yourtestendpoint/?x-customer-instanceid=applicationinsightsavailability:<your guid>`.
+
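+To confirm the configuration, you can simulate the availability test's request yourself. A sketch with a placeholder endpoint and the GUID from step 1:
+
+```azurepowershell
+# Send a request that carries the same header the availability tests will send.
+$headers = @{ "X-Customer-InstanceId" = "ApplicationInsightsAvailability:<GUID generated in step 1>" };
+Invoke-WebRequest -Uri "https://yourtestendpoint/" -Headers $headers;
+```
+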
+#### Configure your firewall to permit incoming requests from availability tests
+
+> [!NOTE]
+> This example is specific to network security group service tag usage. Many Azure services accept service tags, each requiring different configuration steps.
+
+* To simplify enabling Azure services without authorizing individual IPs or maintaining an up-to-date IP list, use [Service tags](../../virtual-network/service-tags-overview.md). Apply these tags across Azure Firewall and network security groups, allowing the availability test service access to your endpoints. The service tag `ApplicationInsightsAvailability` applies to all availability tests. A PowerShell sketch of the equivalent network security group rule follows this list.
+
+ 1. If you're using [Azure network security groups](../../virtual-network/network-security-groups-overview.md), go to your network security group resource and under **Settings**, select **inbound security rules**. Then select **Add**.
+
+ :::image type="content" source="media/availability-private-test/add.png" alt-text="Screenshot that shows the inbound security rules tab in the network security group resource.":::
+
+    1. Next, select **Service Tag** as the source and select **ApplicationInsightsAvailability** as the source service tag. Open ports 80 (HTTP) and 443 (HTTPS) for incoming traffic from the service tag.
+
+ :::image type="content" source="media/availability-private-test/service-tag.png" alt-text="Screenshot that shows the Add inbound security rules tab with a source of service tag.":::
+
+* To manage access when your endpoints are outside Azure or when service tags aren't an option, allowlist the [IP addresses of our web test agents](ip-addresses.md). You can query IP ranges using PowerShell, Azure CLI, or a REST call with the [Service Tag API](../../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api). For a comprehensive list of current service tags and their IP details, download the [JSON file](../../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
+
+ 1. In your network security group resource, under **Settings**, select **inbound security rules**. Then select **Add**.
+
+    1. Next, select **IP Addresses** as your source. Then add your IP addresses in a comma-delimited list in source IP address/CIDR ranges.
+
+ :::image type="content" source="media/availability-private-test/ip-addresses.png" alt-text="Screenshot that shows the Add inbound security rules tab with a source of IP addresses.":::
+
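+The network security group rule described in the portal steps above can also be created with Azure PowerShell. A hedged sketch with placeholder resource names and an arbitrary priority:
+
+```azurepowershell
+# Allow the ApplicationInsightsAvailability service tag inbound on ports 80 and 443 (placeholder names).
+$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "myResourceGroup" -Name "myNsg";
+
+Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "AllowAvailabilityTests" `
+    -Access Allow -Direction Inbound -Priority 310 -Protocol Tcp `
+    -SourceAddressPrefix "ApplicationInsightsAvailability" -SourcePortRange "*" `
+    -DestinationAddressPrefix "*" -DestinationPortRange 80,443;
+
+Set-AzNetworkSecurityGroup -NetworkSecurityGroup $nsg;
+```
+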
+### Disconnected or no ingress scenarios
+
+1. Connect your Application Insights resource to your internal service endpoint using [Azure Private Link](../logs/private-link-security.md).
+
+1. Write custom code to periodically test your internal server or endpoints. Send the results to Application Insights using the [TrackAvailability()](availability-azure-functions.md) API in the core SDK package.
+
+### Supported TLS configurations
+
+To provide best-in-class encryption, all availability tests use Transport Layer Security (TLS) 1.2 and 1.3 as the encryption mechanisms of choice. In addition, the following Cipher suites and Elliptical curves are also supported within each version.
+
+> [!NOTE]
+> TLS 1.3 is currently only available in the availability test regions NorthCentralUS, CentralUS, EastUS, SouthCentralUS, and WestUS.
+
+#### TLS 1.2
+
+**Cipher suites**
+* TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
+* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+
+**Elliptical curves**
+* NistP384
+* NistP256
+
+#### TLS 1.3
+
+**Cipher suites**
+* TLS_AES_256_GCM_SHA384
+* TLS_AES_128_GCM_SHA256
+
+**Elliptical curves:**
+* NistP384
+* NistP256
+
+### Deprecating TLS configuration
+
+> [!WARNING]
+> On 31 October 2024, in alignment with the [Azure wide legacy TLS deprecation](https://azure.microsoft.com/updates/azure-support-tls-will-end-by-31-october-2024-2/), TLS 1.0/1.1 protocol versions and the below listed TLS 1.2/1.3 legacy Cipher suites and Elliptical curves will be retired for Application Insights availability tests.
+
+#### TLS 1.0 and TLS 1.1
+
+Protocol versions will no longer be supported.
+
+#### TLS 1.2
+
+**Cipher suites**
+* TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
+* TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
+* TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
+* TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
+* TLS_RSA_WITH_AES_256_GCM_SHA384
+* TLS_RSA_WITH_AES_128_GCM_SHA256
+* TLS_RSA_WITH_AES_256_CBC_SHA256
+* TLS_RSA_WITH_AES_128_CBC_SHA256
+* TLS_RSA_WITH_AES_256_CBC_SHA
+* TLS_RSA_WITH_AES_128_CBC_SHA
+
+**Elliptical curves:**
+* curve25519
+
+#### TLS 1.3
+
+**Elliptical curves**
+* curve25519
+
+### Troubleshooting
+
+> [!WARNING]
+> We have recently enabled TLS 1.3 in availability tests. If you are seeing new error messages as a result, please ensure that clients running on Windows Server 2022 with TLS 1.3 enabled can connect to your endpoint. If you are unable to do this, you may consider temporarily disabling TLS 1.3 on your endpoint so that availability tests will fall back to older TLS versions.
+> For more information, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability).
+
+## Downtime, SLA, and outages workbook
+
+This section introduces a simple way to calculate and report the service-level agreement (SLA) for web tests through a single pane of glass across your Application Insights resources and Azure subscriptions. The downtime and outage report provides powerful prebuilt queries and data visualizations to enhance your understanding of your customer's connectivity, typical application response time, and experienced downtime.
+
+The SLA workbook template can be accessed from your Application Insights resource in two ways:
+
+* Open the **Availability** pane, then select **SLA Report** at the top of the screen.
+
+ :::image type="content" source="./media/sla-report/availability.png" alt-text="Screenshot that shows the **Availability** tab with SLA Report highlighted." lightbox="./media/sla-report/availability.png":::
+
+* Open the **Workbooks** pane, then select **Downtime & Outages**.
+
+ :::image type="content" source="./media/sla-report/workbook-gallery.png" alt-text="Screenshot of the workbook gallery with the Downtime & Outages workbook highlighted." lightbox ="./media/sla-report/workbook-gallery.png":::
+
+### Parameter flexibility
+
+The parameters set in the workbook influence the rest of your report.
++
+* `Subscriptions`, `App Insights Resources`, and `Web Test`: These parameters determine your high-level resource options. They're based on Log Analytics queries and are used in every report query.
+* `Failure Threshold` and `Outage Window`: You can use these parameters to determine your own criteria for a service outage. An example is the criteria for an Application Insights availability alert based on a failed location counter over a chosen period. The typical threshold is three locations over a five-minute window.
+* `Maintenance Period`: You can use this parameter to select your typical maintenance frequency. `Maintenance Window` is a datetime selector for an example maintenance period. All data that occurs during the identified period will be ignored in your results.
+* `Availability Target %`: This parameter specifies your target objective and takes custom values.
+
+### Overview page
+
+The overview page contains high-level information about your:
+
+* Total SLA (excluding maintenance periods, if defined)
+* End-to-end outage instances
+* Application downtime
+
+An outage instance is defined as the period from when a test starts to fail until it succeeds again, based on your outage parameters. If a test starts failing at 8:00 AM and succeeds again at 10:00 AM, that entire period of data is considered the same outage.
++
+You can also investigate the longest outage that occurred over your reporting period.
+
+Some tests can be linked back to their Application Insights resource for further investigation, but that's only possible for [workspace-based Application Insights resources](create-workspace-resource.md).
+
+### Downtime, outages, and failures
+
+The **Outages & Downtime** tab has information on total outage instances and total downtime broken down by test.
++
+The **Failures by Location** tab has a geo-map of failed testing locations to help identify potential problem connection areas.
++
+### Edit the report
+
+You can edit the report like any other [Azure Monitor workbook](../visualize/workbooks-overview.md).
++
+You can customize the queries or visualizations based on your team's needs.
++
+#### Log Analytics
+
+The queries can all be run in [Log Analytics](../logs/log-analytics-overview.md) and used in other reports or dashboards.
++
+Remove the parameter restriction and reuse the core query.
++
+### Access and sharing
+
+The report can be shared with your teams and leadership or pinned to a dashboard for further use. The user needs to have read permission/access to the Application Insights resource where the actual workbook is stored.
++
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### General
+
+#### Can I run availability tests on an intranet server?
+
+Our [web tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) run on points of presence that are distributed around the globe. There are two solutions:
+
+* **Firewall door**: Allow requests to your server from [the long and changeable list of web test agents](../ip-addresses.md).
+* **Custom code**: Write your own code to send periodic requests to your server from inside your intranet. You could run Visual Studio web tests for this purpose. The tester could send the results to Application Insights by using the `TrackAvailability()` API.
+
+#### What is the user agent string for availability tests?
+
+The user agent string is **Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; AppInsights)**.
+
+### TLS Support
+
+#### How does this deprecation impact my web test behavior?
+
+Availability tests act as a distributed client in each of the supported web test locations. Every time a web test runs, the availability test service attempts to reach the remote endpoint defined in the web test configuration. A TLS Client Hello message is sent that contains all the currently supported TLS configurations. If the remote endpoint shares a common TLS configuration with the availability test client, the TLS handshake succeeds. Otherwise, the web test fails with a TLS handshake failure.
+
+#### How do I ensure my web test isn't impacted?
+
+To avoid any impact, each remote endpoint (including dependent requests) your web test interacts with needs to support at least one combination of the same protocol version, cipher suite, and elliptical curve that availability tests do. If the remote endpoint doesn't support the needed TLS configuration, it must be updated to support some combination of the post-deprecation TLS configurations listed earlier. These endpoints can be discovered by viewing the [Transaction Details](/azure/azure-monitor/app/availability-standard-tests#see-your-availability-test-results) of your web test (ideally for a successful web test execution).
+
+#### How do I validate what TLS configuration a remote endpoint supports?
+
+There are several tools available to test what TLS configuration an endpoint supports. One way is to follow the example detailed on this [page](/security/engineering/solving-tls1-problem#appendix-a-handshake-simulation). If your remote endpoint isn't available via the public internet, you need to validate the TLS configuration supported on the remote endpoint from a machine that has access to call your endpoint.
+
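+A quick client-side probe, sketched with PowerShell 7 or later (the `-SslProtocol` parameter isn't available in Windows PowerShell 5.1) and a placeholder endpoint:
+
+```azurepowershell
+# Probe the endpoint with specific TLS versions; an unsupported version throws a handshake error.
+Invoke-WebRequest -Uri "https://yourtestendpoint/" -Method Head -SslProtocol Tls12 | Select-Object StatusCode;
+Invoke-WebRequest -Uri "https://yourtestendpoint/" -Method Head -SslProtocol Tls13 | Select-Object StatusCode;
+```
+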
+> [!NOTE]
+> For steps to enable the needed TLS configuration on your web server, it is best to reach out to the team that owns the hosting platform your web server runs on if the process is not known.
+
+#### After October 31, 2024, what will the web test behavior be for impacted tests?
+
+There's no single exception type that covers all TLS handshake failures caused by this deprecation. However, the most common exception your web test would start failing with is `The request was aborted: Couldn't create SSL/TLS secure channel`. You should also be able to see any TLS-related failures in the TLS Transport [Troubleshooting Step](/troubleshoot/azure/azure-monitor/app-insights/availability/diagnose-ping-test-failure) for the web test result that is potentially impacted.
+
+#### Can I view what TLS configuration is currently in use by my web test?
+
+The TLS configuration negotiated during a web test execution can't be viewed. As long as the remote endpoint supports common TLS configuration with availability tests, no impact should be seen post-deprecation.
+
+#### Which components does the deprecation affect in the availability test service?
+
+The TLS deprecation detailed in this document should only affect the availability test web test execution behavior after October 31, 2024. For more information about interacting with the availability test service for CRUD operations, see [Azure Resource Manager TLS Support](/azure/azure-resource-manager/management/tls-support). This resource provides more details on TLS support and deprecation timelines.
+
+#### Where can I get TLS support?
+
+For any general questions around the legacy TLS problem, see [Solving TLS problems](/security/engineering/solving-tls1-problem).
+
+## Next steps
+
+* [Troubleshooting](troubleshoot-availability.md)
+* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
+* [Web test REST API](/rest/api/application-insights/web-tests)
azure-monitor Data Model Complete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-model-complete.md
This can occur if you're using string values. Only numeric values work with cust
## Next steps

Learn how to use the [Application Insights API for custom events and metrics](./api-custom-events-metrics.md), including:
-- [Custom request telemetry](./api-custom-events-metrics.md#trackrequest)
-- [Custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
-- [Custom trace telemetry](./api-custom-events-metrics.md#tracktrace)
-- [Custom event telemetry](./api-custom-events-metrics.md#trackevent)
-- [Custom metric telemetry](./api-custom-events-metrics.md#trackmetric)
+* [Custom request telemetry](./api-custom-events-metrics.md#trackrequest)
+* [Custom dependency telemetry](./api-custom-events-metrics.md#trackdependency)
+* [Custom trace telemetry](./api-custom-events-metrics.md#tracktrace)
+* [Custom event telemetry](./api-custom-events-metrics.md#trackevent)
+* [Custom metric telemetry](./api-custom-events-metrics.md#trackmetric)
Set up dependency tracking for:
-- [.NET](./asp-net-dependencies.md)
-- [Java](./opentelemetry-enable.md?tabs=java)
+* [.NET](./asp-net-dependencies.md)
+* [Java](./opentelemetry-enable.md?tabs=java)
To learn more:
-
-- Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
-- Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet).
-- Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md).
-- Explore [Java trace logs in Application Insights](./opentelemetry-add-modify.md?tabs=java#logs).
-- Learn about the [Azure Functions built-in integration with Application Insights](../../azure-functions/functions-monitoring.md?toc=/azure/azure-monitor/toc.json) to monitor functions executions.
-- Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights.
-- Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md).
-- Learn how to [extend and filter telemetry](./api-filtering-sampling.md).
-- Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.
+* Check out [platforms](./app-insights-overview.md#supported-languages) supported by Application Insights.
+* Check out standard context properties collection [configuration](./configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet).
+* Explore [.NET trace logs in Application Insights](./asp-net-trace-logs.md).
+* Explore [Java trace logs in Application Insights](./opentelemetry-add-modify.md?tabs=java#send-custom-telemetry-using-the-application-insights-classic-api).
+* Learn about the [Azure Functions built-in integration with Application Insights](../../azure-functions/functions-monitoring.md?toc=/azure/azure-monitor/toc.json) to monitor functions executions.
+* Learn how to [configure an ASP.NET Core](./asp-net.md) application with Application Insights.
+* Learn how to [diagnose exceptions in your web apps with Application Insights](./asp-net-exceptions.md).
+* Learn how to [extend and filter telemetry](./api-filtering-sampling.md).
+* Use [sampling](./sampling.md) to minimize the amount of telemetry based on data model.
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
The distros automatically collect data by bundling OpenTelemetry instrumentation
#### [ASP.NET Core](#tab/aspnetcore)
-Requests
-- [ASP.NET
+**Requests**
+
+* [ASP.NET
Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md) ¹²
-Dependencies
-- [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) ┬╣┬▓-- [SqlClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.SqlClient/README.md) ┬╣
+**Dependencies**
+
+* [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md) ¹²
+* [SqlClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.SqlClient/README.md) ¹
-Logging
-- `ILogger`
+**Logging**
+
+* `ILogger`
For more information about `ILogger`, see [Logging in C# and .NET](/dotnet/core/extensions/logging) and [code examples](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/logs).
The Azure Monitor Exporter doesn't include any instrumentation libraries.
#### [Java](#tab/java)
-Requests
+**Requests**
+
* JMS consumers
* Kafka consumers
* Netty
Requests
> [!NOTE]
> Servlet and Netty autoinstrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
-Dependencies (plus downstream distributed trace propagation):
+**Dependencies (plus downstream distributed trace propagation)**
+
* Apache HttpClient
* Apache HttpAsyncClient
* AsyncHttpClient
Dependencies (plus downstream distributed trace propagation):
* OkHttp
* RabbitMQ
-Dependencies (without downstream distributed trace propagation):
+**Dependencies (without downstream distributed trace propagation)**
+
* Cassandra
* JDBC
* MongoDB (async and sync)
* Redis (Lettuce and Jedis)
-Metrics
+**Metrics**
* Micrometer Metrics, including Spring Boot Actuator metrics
* JMX Metrics
-Logs
+**Logs**
+
* Logback (including MDC properties) ¹ ³
* Log4j (including MDC/Thread Context properties) ¹ ³
* JBoss Logging (including MDC properties) ¹ ³
* java.util.logging ¹ ³
-Telemetry emitted by these Azure SDKs is automatically collected by default:
+**Default collection**
+
+Telemetry emitted by the following Azure SDKs is automatically collected by default:
* [Azure App Configuration](/java/api/overview/azure/data-appconfiguration-readme) 1.1.10+
* [Azure AI Search](/java/api/overview/azure/search-documents-readme) 11.3.0+
Telemetry emitted by these Azure SDKs is automatically collected by default:
[//]: # "}" [//]: # "console.log(str)" - #### [Java native](#tab/java-native)
-Requests for Spring Boot native applications
+**Requests for Spring Boot native applications**
+
* Spring Web
* Spring Web MVC
* Spring WebFlux
-Dependencies for Spring Boot native applications
+**Dependencies for Spring Boot native applications**
+
* JDBC
* R2DBC
* MongoDB
* Kafka
-Metrics
+**Metrics**
+
* Micrometer Metrics
-Logs for Spring Boot native applications
+**Logs for Spring Boot native applications**
+
* Logback
-For Quarkus native applications, please look at the [Quarkus documentation](https://quarkus.io/guides/opentelemetry).
+For Quarkus native applications, see the [Quarkus documentation](https://quarkus.io/guides/opentelemetry).
#### [Node.js](#tab/nodejs) The following OpenTelemetry Instrumentation libraries are included as part of the Azure Monitor Application Insights Distro. For more information, see [Azure SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/monitor/monitor-opentelemetry/README.md#instrumentation-libraries).
-Requests
-- [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) ┬▓
+**Requests**
+
+* [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) ²
+
+**Dependencies**
+
+* [MongoDB](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mongodb)
+* [MySQL](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql)
+* [Postgres](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-pg)
+* [Redis](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis)
+* [Redis-4](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis-4)
+* [Azure SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/instrumentation/opentelemetry-instrumentation-azure-sdk)
-Dependencies
-- [MongoDB](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mongodb)-- [MySQL](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-mysql)-- [Postgres](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-pg)-- [Redis](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis)-- [Redis-4](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-redis-4)-- [Azure SDK](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/instrumentation/opentelemetry-instrumentation-azure-sdk)
+**Logs**
-Logs
-- [Bunyan](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-bunyan)
+* [Bunyan](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-bunyan)
<!---- [Winston](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-winston)
+* [Winston](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node/opentelemetry-instrumentation-winston)
-->
-Instrumentations can be configured using AzureMonitorOpenTelemetryOptions
+Instrumentations can be configured using `AzureMonitorOpenTelemetryOptions`:
```typescript
- // Import Azure Monitor OpenTelemetry
- const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
- // Import OpenTelemetry HTTP Instrumentation to get config type
- const { HttpInstrumentationConfig } = require("@azure/monitor-opentelemetry");
- // Import HTTP to get type
- const { IncomingMessage } = require("http");
-
- // Specific Instrumentation configs could be added
- const httpInstrumentationConfig: HttpInstrumentationConfig = {
- ignoreIncomingRequestHook: (request: IncomingMessage) => {
- return false; //Return true if you want to ignore a specific request
- },
- enabled: true
- };
- // Instrumentations configuration
- const options: AzureMonitorOpenTelemetryOptions = {
- instrumentationOptions: {
- http: httpInstrumentationConfig,
- azureSdk: { enabled: true },
- mongoDb: { enabled: true },
- mySql: { enabled: true },
- postgreSql: { enabled: true },
- redis: { enabled: true },
- redis4: { enabled: true },
- }
- };
+// Import Azure Monitor OpenTelemetry
+const { useAzureMonitor, AzureMonitorOpenTelemetryOptions } = require("@azure/monitor-opentelemetry");
+// Import OpenTelemetry HTTP Instrumentation to get config type
+const { HttpInstrumentationConfig } = require("@azure/monitor-opentelemetry");
+// Import HTTP to get type
+const { IncomingMessage } = require("http");
+
+// Specific Instrumentation configs could be added
+const httpInstrumentationConfig: HttpInstrumentationConfig = {
+ ignoreIncomingRequestHook: (request: IncomingMessage) => {
+ return false; //Return true if you want to ignore a specific request
+ },
+ enabled: true
+};
+// Instrumentations configuration
+const options: AzureMonitorOpenTelemetryOptions = {
+  instrumentationOptions: {
+    http: httpInstrumentationConfig,
+    azureSdk: { enabled: true },
+    mongoDb: { enabled: true },
+    mySql: { enabled: true },
+    postgreSql: { enabled: true },
+    redis: { enabled: true },
+    redis4: { enabled: true },
+  }
+};
- // Enable Azure Monitor integration
- useAzureMonitor(options);
+// Enable Azure Monitor integration
+useAzureMonitor(options);
``` #### [Python](#tab/python)
-Requests
-- [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) ┬╣-- [FastApi](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi) ┬╣-- [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) ┬╣
+**Requests**
+
+* [Django](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-django) ¹
+* [FastApi](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-fastapi) ¹
+* [Flask](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-flask) ¹
-Dependencies
-- [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2)-- [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) ┬╣-- [`Urllib`](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib) ┬╣-- [`Urllib3`](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3) ┬╣
+**Dependencies**
-Logs
-- [Python logging library](https://docs.python.org/3/howto/logging.html) ⁴
+* [Psycopg2](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-psycopg2)
+* [Requests](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-requests) ¹
+* [`Urllib`](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib) ¹
+* [`Urllib3`](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-urllib3) ¹
+
+**Logs**
+
+* [Python logging library](https://docs.python.org/3/howto/logging.html) ⁴
Examples of using the Python logging library can be found on [GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples/logging).
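Before diving into the linked samples, the following minimal sketch shows the general shape of sending Python log records through the distro. It assumes the `azure-monitor-opentelemetry` package is installed and that the connection string is supplied through the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable; the logger name `my_app` is arbitrary.

```python
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

# Configure the Azure Monitor distro. The connection string is assumed to come
# from the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
configure_azure_monitor()

# "my_app" is an arbitrary logger name used only for this sketch.
logger = logging.getLogger("my_app")

# Collected by default: WARNING level and above (see the footnotes that follow).
logger.warning("Order queue length is growing: %d items", 42)

# INFO records fall below the default collection threshold unless the level is adjusted.
logger.info("This record may not be exported with the default settings.")
```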
Telemetry emitted by Azure SDKs is automatically [collected](https://github.com/
**Footnotes**-- ¹: Supports automatic reporting of *unhandled/uncaught* exceptions-- ²: Supports OpenTelemetry Metrics-- ³: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).-- ⁴: By default, logging is only collected when that logging is performed at the WARNING level or higher.+
+* ¹: Supports automatic reporting of *unhandled/uncaught* exceptions
+* ²: Supports OpenTelemetry Metrics
+* ³: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).
+* ⁴: By default, logging is only collected when that logging is performed at the WARNING level or higher.
> [!NOTE]
> The Azure Monitor OpenTelemetry Distros include custom mapping and logic to automatically emit [Application Insights standard metrics](standard-metrics.md).
You can collect more data automatically when you include instrumentation librari
[!INCLUDE [azure-monitor-app-insights-opentelemetry-support](../includes/azure-monitor-app-insights-opentelemetry-community-library-warning.md)]
-### [ASP.NET Core](#tab/aspnetcore)
+#### [ASP.NET Core](#tab/aspnetcore)
To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods,
-after adding the nuget package for the library.
+after adding the NuGet package for the library.
-The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
+The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics:
```dotnetcli dotnet add package OpenTelemetry.Instrumentation.Runtime
var app = builder.Build();
app.Run(); ```
-### [.NET](#tab/net)
+#### [.NET](#tab/net)
-The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics.
+The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics:
```csharp // Create a new OpenTelemetry meter provider and add runtime instrumentation and the Azure Monitor metric exporter.
var metricsProvider = Sdk.CreateMeterProviderBuilder()
.AddAzureMonitorMetricExporter(); ```
-### [Java](#tab/java)
+#### [Java](#tab/java)
+ You can't extend the Java Distro with community instrumentation libraries. To request that we include another instrumentation library, open an issue on our GitHub page. You can find a link to our GitHub page in [Next Steps](#next-steps).
-### [Java native](#tab/java-native)
+#### [Java native](#tab/java-native)
-You can't use commmunity instrumentation libraries with GraalVM Java native applications.
+You can't use community instrumentation libraries with GraalVM Java native applications.
-### [Node.js](#tab/nodejs)
+#### [Node.js](#tab/nodejs)
-Other OpenTelemetry Instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and could be added using TraceHandler in ApplicationInsightsClient.
+Other OpenTelemetry instrumentations are available [here](https://github.com/open-telemetry/opentelemetry-js-contrib/tree/main/plugins/node) and can be added using `TraceHandler` in `ApplicationInsightsClient`:
```javascript // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
Other OpenTelemetry Instrumentations are available [here](https://github.com/ope
}); ```
-### [Python](#tab/python)
+#### [Python](#tab/python)
To add a community instrumentation library (not officially supported/included in Azure Monitor distro), you can instrument directly with the instrumentations. The list of community instrumentation libraries can be found [here](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation).
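For example, a minimal sketch of wiring up one community instrumentation next to the distro might look like the following. It assumes the `opentelemetry-instrumentation-sqlalchemy` and `sqlalchemy` packages are installed, the connection string is provided through the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable, and the in-memory SQLite engine is purely illustrative.

```python
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor
from sqlalchemy import create_engine, text

# Configure the Azure Monitor distro first so the community instrumentation
# exports through the same global tracer provider.
configure_azure_monitor()

# Instrument the community library directly. SQLAlchemy is only an example;
# other community instrumentations follow the same instrument() pattern.
engine = create_engine("sqlite:///:memory:")
SQLAlchemyInstrumentor().instrument(engine=engine)

# Queries through the instrumented engine now produce spans that are exported
# to Application Insights as dependencies.
with engine.connect() as connection:
    connection.execute(text("SELECT 1"))
```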
This section explains how to collect custom telemetry from your application.
Depending on your language and signal type, there are different ways to collect custom telemetry, including: -- OpenTelemetry API-- Language-specific logging/metrics libraries-- Application Insights [Classic API](api-custom-events-metrics.md)
+* OpenTelemetry API
+* Language-specific logging/metrics libraries
+* Application Insights [Classic API](api-custom-events-metrics.md)
The following table represents the currently supported custom telemetry types:
The following table represents the currently supported custom telemetry types:
|-||-|--|||-|--| | **ASP.NET Core** | | | | | | | | | &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
-| &nbsp;&nbsp;&nbsp;`ILogger` API | | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;`ILogger` API | | | | | | | Yes |
| &nbsp;&nbsp;&nbsp;AI Classic API | | | | | | | | | | | | | | | | | | **Java** | | | | | | | | | &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | | | &nbsp;&nbsp;&nbsp;Logback, `Log4j`, JUL | | | | Yes | | | Yes | | &nbsp;&nbsp;&nbsp;Micrometer Metrics | | Yes | | | | | |
-| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
+| &nbsp;&nbsp;&nbsp;AI Classic API | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| | | | | | | | | | **Node.js** | | | | | | | | | &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | |
The following table represents the currently supported custom telemetry types:
| **Python** | | | | | | | | | &nbsp;&nbsp;&nbsp;OpenTelemetry API | | Yes | Yes | Yes | | Yes | | | &nbsp;&nbsp;&nbsp;Python Logging Module | | | | | | | Yes |
-| &nbsp;&nbsp;&nbsp;Events Extension | Yes | | | | | | Yes |
+| &nbsp;&nbsp;&nbsp;Events Extension | Yes | | | | | | Yes |
> [!NOTE]
> Application Insights Java 3.x listens for telemetry that's sent to the Application Insights [Classic API](api-custom-events-metrics.md). Similarly, Application Insights Node.js 3.x collects events created with the Application Insights [Classic API](api-custom-events-metrics.md). This makes upgrading easier and fills a gap in our custom telemetry support until all custom telemetry types are supported via the OpenTelemetry API.
The following table shows the recommended [aggregation types](../essentials/metr
||| | Counter | Sum | | Asynchronous Counter | Sum |
-| Histogram | Min, Max, Average, Sum and Count |
+| Histogram | Min, Max, Average, Sum, and Count |
| Asynchronous Gauge | Average | | UpDownCounter | Sum | | Asynchronous UpDownCounter | Sum |
describes the instruments and provides examples of when you might use each one.
#### Histogram example
-#### [ASP.NET Core](#tab/aspnetcore)
+##### [ASP.NET Core](#tab/aspnetcore)
-Application startup must subscribe to a Meter by name.
+Application startup must subscribe to a Meter by name:
```csharp // Create a new ASP.NET Core web application builder.
var app = builder.Build();
app.Run(); ```
-The `Meter` must be initialized using that same name.
+The `Meter` must be initialized using that same name:
```csharp // Create a new meter named "OTel.AzureMonitor.Demo".
myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "apple"), new("color", "
myFruitSalePrice.Record(rand.Next(1, 1000), new("name", "lemon"), new("color", "yellow")); ```
-#### [.NET](#tab/net)
+##### [.NET](#tab/net)
```csharp public class Program
public class Program
} ```
-#### [Java](#tab/java)
+##### [Java](#tab/java)
```java import io.opentelemetry.api.GlobalOpenTelemetry;
public class Program {
} ```
-#### [Java native](#tab/java-native)
+##### [Java native](#tab/java-native)
-1. Inject `OpenTelemetry`
+1. Inject `OpenTelemetry`:
- _Spring_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Autowired
- OpenTelemetry openTelemetry;
- ```
+ * **Spring**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Autowired
+ OpenTelemetry openTelemetry;
+ ```
- _Quarkus_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Inject
- OpenTelemetry openTelemetry;
- ```
+ * **Quarkus**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Inject
+ OpenTelemetry openTelemetry;
+ ```
-1. Create an histogram
-```java
-import io.opentelemetry.api.metrics.DoubleHistogram;
-import io.opentelemetry.api.metrics.Meter;
+1. Create a histogram:
-Meter meter = openTelemetry.getMeter("OTEL.AzureMonitor.Demo");
-DoubleHistogram histogram = meter.histogramBuilder("histogram").build();
-histogram.record(1.0);
-histogram.record(100.0);
-histogram.record(30.0);
-```
+ ```java
+ import io.opentelemetry.api.metrics.DoubleHistogram;
+ import io.opentelemetry.api.metrics.Meter;
+
+ Meter meter = openTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+ DoubleHistogram histogram = meter.histogramBuilder("histogram").build();
+ histogram.record(1.0);
+ histogram.record(100.0);
+ histogram.record(30.0);
+ ```
-#### [Node.js](#tab/nodejs)
+##### [Node.js](#tab/nodejs)
```javascript
- // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const { metrics } = require("@opentelemetry/api");
+// Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const { metrics } = require("@opentelemetry/api");
- // Enable Azure Monitor integration
- useAzureMonitor();
+// Enable Azure Monitor integration
+useAzureMonitor();
- // Get the meter for the "testMeter" namespace
- const meter = metrics.getMeter("testMeter");
+// Get the meter for the "testMeter" namespace
+const meter = metrics.getMeter("testMeter");
- // Create a histogram metric
- let histogram = meter.createHistogram("histogram");
+// Create a histogram metric
+let histogram = meter.createHistogram("histogram");
- // Record values to the histogram metric with different tags
- histogram.record(1, { "testKey": "testValue" });
- histogram.record(30, { "testKey": "testValue2" });
- histogram.record(100, { "testKey2": "testValue" });
+// Record values to the histogram metric with different tags
+histogram.record(1, { "testKey": "testValue" });
+histogram.record(30, { "testKey": "testValue2" });
+histogram.record(100, { "testKey2": "testValue" });
```
-#### [Python](#tab/python)
+##### [Python](#tab/python)
```python # Import the `configure_azure_monitor()` and `metrics` functions from the appropriate packages.
input()
#### Counter example
-#### [ASP.NET Core](#tab/aspnetcore)
+##### [ASP.NET Core](#tab/aspnetcore)
-Application startup must subscribe to a Meter by name.
+Application startup must subscribe to a Meter by name:
```csharp // Create a new ASP.NET Core web application builder.
var app = builder.Build();
app.Run(); ```
-The `Meter` must be initialized using that same name.
+The `Meter` must be initialized using that same name:
```csharp // Create a new meter named "OTel.AzureMonitor.Demo".
myFruitCounter.Add(5, new("name", "apple"), new("color", "red"));
myFruitCounter.Add(4, new("name", "lemon"), new("color", "yellow")); ```
-#### [.NET](#tab/net)
+##### [.NET](#tab/net)
```csharp public class Program
public class Program
} ```
-#### [Java](#tab/java)
+##### [Java](#tab/java)
```Java import io.opentelemetry.api.GlobalOpenTelemetry;
public class Program {
} } ```
-#### [Java native](#tab/java-native)
-
-1. Inject `OpenTelemetry`
-
- _Spring_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Autowired
- OpenTelemetry openTelemetry;
- ```
-
- _Quarkus_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Inject
- OpenTelemetry openTelemetry;
- ```
-
-1. Create the counter
+##### [Java native](#tab/java-native)
-```Java
-import io.opentelemetry.api.common.AttributeKey;
-import io.opentelemetry.api.common.Attributes;
-import io.opentelemetry.api.metrics.LongCounter;
-import io.opentelemetry.api.metrics.Meter;
+1. Inject `OpenTelemetry`:
+ * **Spring**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Autowired
+ OpenTelemetry openTelemetry;
+ ```
-Meter meter = openTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+ * **Quarkus**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Inject
+ OpenTelemetry openTelemetry;
+ ```
-LongCounter myFruitCounter = meter.counterBuilder("MyFruitCounter")
- .build();
+1. Create the counter:
-myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
-myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
-myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
-myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "green"));
-myFruitCounter.add(5, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
-myFruitCounter.add(4, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
-```
+ ```Java
+ import io.opentelemetry.api.common.AttributeKey;
+ import io.opentelemetry.api.common.Attributes;
+ import io.opentelemetry.api.metrics.LongCounter;
+ import io.opentelemetry.api.metrics.Meter;
+
+
+ Meter meter = openTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+
+ LongCounter myFruitCounter = meter.counterBuilder("MyFruitCounter")
+ .build();
+
+ myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
+ myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ myFruitCounter.add(1, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ myFruitCounter.add(2, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "green"));
+ myFruitCounter.add(5, Attributes.of(AttributeKey.stringKey("name"), "apple", AttributeKey.stringKey("color"), "red"));
+ myFruitCounter.add(4, Attributes.of(AttributeKey.stringKey("name"), "lemon", AttributeKey.stringKey("color"), "yellow"));
+ ```
-#### [Node.js](#tab/nodejs)
+##### [Node.js](#tab/nodejs)
```javascript
- // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const { metrics } = require("@opentelemetry/api");
+// Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const { metrics } = require("@opentelemetry/api");
- // Enable Azure Monitor integration
- useAzureMonitor();
+// Enable Azure Monitor integration
+useAzureMonitor();
- // Get the meter for the "testMeter" namespace
- const meter = metrics.getMeter("testMeter");
+// Get the meter for the "testMeter" namespace
+const meter = metrics.getMeter("testMeter");
- // Create a counter metric
- let counter = meter.createCounter("counter");
+// Create a counter metric
+let counter = meter.createCounter("counter");
- // Add values to the counter metric with different tags
- counter.add(1, { "testKey": "testValue" });
- counter.add(5, { "testKey2": "testValue" });
- counter.add(3, { "testKey": "testValue2" });
+// Add values to the counter metric with different tags
+counter.add(1, { "testKey": "testValue" });
+counter.add(5, { "testKey2": "testValue" });
+counter.add(3, { "testKey": "testValue2" });
```
-#### [Python](#tab/python)
+##### [Python](#tab/python)
```python # Import the `configure_azure_monitor()` and `metrics` functions from the appropriate packages.
input()
-#### Gauge Example
+#### Gauge example
-#### [ASP.NET Core](#tab/aspnetcore)
+##### [ASP.NET Core](#tab/aspnetcore)
-Application startup must subscribe to a Meter by name.
+Application startup must subscribe to a Meter by name:
```csharp // Create a new ASP.NET Core web application builder.
var app = builder.Build();
app.Run(); ```
-The `Meter` must be initialized using that same name.
+The `Meter` must be initialized using that same name:
```csharp // Get the current process.
private static IEnumerable<Measurement<int>> GetThreadState(Process process)
} ```
-#### [.NET](#tab/net)
+##### [.NET](#tab/net)
```csharp public class Program
public class Program
} ```
-#### [Java](#tab/java)
+##### [Java](#tab/java)
```Java import io.opentelemetry.api.GlobalOpenTelemetry;
public class Program {
} } ```
-#### [Java native](#tab/java-native)
-
-1. Inject `OpenTelemetry`
+##### [Java native](#tab/java-native)
- _Spring_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Autowired
- OpenTelemetry openTelemetry;
- ```
+1. Inject `OpenTelemetry`:
- _Quarkus_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Inject
- OpenTelemetry openTelemetry;
- ```
+ * **Spring**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Autowired
+ OpenTelemetry openTelemetry;
+ ```
-1. Create a gauge
-```Java
-import io.opentelemetry.api.common.AttributeKey;
-import io.opentelemetry.api.common.Attributes;
-import io.opentelemetry.api.metrics.Meter;
+ * **Quarkus**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Inject
+ OpenTelemetry openTelemetry;
+ ```
-Meter meter = openTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+1. Create a gauge:
-meter.gaugeBuilder("gauge")
- .buildWithCallback(
- observableMeasurement -> {
- double randomNumber = Math.floor(Math.random() * 100);
- observableMeasurement.record(randomNumber, Attributes.of(AttributeKey.stringKey("testKey"), "testValue"));
- });
-```
+ ```Java
+ import io.opentelemetry.api.common.AttributeKey;
+ import io.opentelemetry.api.common.Attributes;
+ import io.opentelemetry.api.metrics.Meter;
+
+ Meter meter = openTelemetry.getMeter("OTEL.AzureMonitor.Demo");
+
+ meter.gaugeBuilder("gauge")
+ .buildWithCallback(
+ observableMeasurement -> {
+ double randomNumber = Math.floor(Math.random() * 100);
+ observableMeasurement.record(randomNumber, Attributes.of(AttributeKey.stringKey("testKey"), "testValue"));
+ });
+ ```
-#### [Node.js](#tab/nodejs)
+##### [Node.js](#tab/nodejs)
```typescript
- // Import the useAzureMonitor function and the metrics module from the @azure/monitor-opentelemetry and @opentelemetry/api packages, respectively.
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const { metrics } = require("@opentelemetry/api");
+// Import the useAzureMonitor function and the metrics module from the @azure/monitor-opentelemetry and @opentelemetry/api packages, respectively.
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const { metrics } = require("@opentelemetry/api");
- // Enable Azure Monitor integration.
- useAzureMonitor();
+// Enable Azure Monitor integration.
+useAzureMonitor();
- // Get the meter for the "testMeter" meter name.
- const meter = metrics.getMeter("testMeter");
+// Get the meter for the "testMeter" meter name.
+const meter = metrics.getMeter("testMeter");
- // Create an observable gauge metric with the name "gauge".
- let gauge = meter.createObservableGauge("gauge");
+// Create an observable gauge metric with the name "gauge".
+let gauge = meter.createObservableGauge("gauge");
- // Add a callback to the gauge metric. The callback will be invoked periodically to generate a new value for the gauge metric.
- gauge.addCallback((observableResult: ObservableResult) => {
- // Generate a random number between 0 and 99.
- let randomNumber = Math.floor(Math.random() * 100);
+// Add a callback to the gauge metric. The callback will be invoked periodically to generate a new value for the gauge metric.
+gauge.addCallback((observableResult: ObservableResult) => {
+ // Generate a random number between 0 and 99.
+ let randomNumber = Math.floor(Math.random() * 100);
- // Set the value of the gauge metric to the random number.
- observableResult.observe(randomNumber, {"testKey": "testValue"});
- });
+ // Set the value of the gauge metric to the random number.
+ observableResult.observe(randomNumber, {"testKey": "testValue"});
+});
```
-#### [Python](#tab/python)
+##### [Python](#tab/python)
```python # Import the necessary packages.
to draw attention in relevant experiences including the failures section and end
#### [ASP.NET Core](#tab/aspnetcore) -- To log an Exception using an Activity:
- ```csharp
- // Start a new activity named "ExceptionExample".
- using (var activity = activitySource.StartActivity("ExceptionExample"))
- {
- // Try to execute some code.
- try
- {
- throw new Exception("Test exception");
- }
- // If an exception is thrown, catch it and set the activity status to "Error".
- catch (Exception ex)
- {
- activity?.SetStatus(ActivityStatusCode.Error);
- activity?.RecordException(ex);
- }
- }
- ```
-- To log an Exception using `ILogger`:
- ```csharp
- // Create a logger using the logger factory. The logger category name is used to filter and route log messages.
- var logger = loggerFactory.CreateLogger(logCategoryName);
-
- // Try to execute some code.
- try
- {
- throw new Exception("Test Exception");
- }
- catch (Exception ex)
- {
- // Log an error message with the exception. The log level is set to "Error" and the event ID is set to 0.
- // The log message includes a template and a parameter. The template will be replaced with the value of the parameter when the log message is written.
- logger.Log(
- logLevel: LogLevel.Error,
- eventId: 0,
- exception: ex,
- message: "Hello {name}.",
- args: new object[] { "World" });
- }
- ```
+* To log an Exception using an Activity:
+
+ ```csharp
+ // Start a new activity named "ExceptionExample".
+ using (var activity = activitySource.StartActivity("ExceptionExample"))
+ {
+ // Try to execute some code.
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ // If an exception is thrown, catch it and set the activity status to "Error".
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+ }
+ ```
+
+* To log an Exception using `ILogger`:
+
+ ```csharp
+ // Create a logger using the logger factory. The logger category name is used to filter and route log messages.
+ var logger = loggerFactory.CreateLogger(logCategoryName);
+
+ // Try to execute some code.
+ try
+ {
+ throw new Exception("Test Exception");
+ }
+ catch (Exception ex)
+ {
+ // Log an error message with the exception. The log level is set to "Error" and the event ID is set to 0.
+ // The log message includes a template and a parameter. The template will be replaced with the value of the parameter when the log message is written.
+ logger.Log(
+ logLevel: LogLevel.Error,
+ eventId: 0,
+ exception: ex,
+ message: "Hello {name}.",
+ args: new object[] { "World" });
+ }
+ ```
#### [.NET](#tab/net) -- To log an Exception using an Activity:
- ```csharp
- // Start a new activity named "ExceptionExample".
- using (var activity = activitySource.StartActivity("ExceptionExample"))
- {
- // Try to execute some code.
- try
- {
- throw new Exception("Test exception");
- }
- // If an exception is thrown, catch it and set the activity status to "Error".
- catch (Exception ex)
- {
- activity?.SetStatus(ActivityStatusCode.Error);
- activity?.RecordException(ex);
- }
- }
- ```
-- To log an Exception using `ILogger`:
- ```csharp
- // Create a logger using the logger factory. The logger category name is used to filter and route log messages.
- var logger = loggerFactory.CreateLogger("ExceptionExample");
-
- try
- {
- // Try to execute some code.
- throw new Exception("Test Exception");
- }
- catch (Exception ex)
- {
- // Log an error message with the exception. The log level is set to "Error" and the event ID is set to 0.
- // The log message includes a template and a parameter. The template will be replaced with the value of the parameter when the log message is written.
- logger.Log(
- logLevel: LogLevel.Error,
- eventId: 0,
- exception: ex,
- message: "Hello {name}.",
- args: new object[] { "World" });
- }
+* To log an Exception using an Activity:
+
+ ```csharp
+ // Start a new activity named "ExceptionExample".
+ using (var activity = activitySource.StartActivity("ExceptionExample"))
+ {
+ // Try to execute some code.
+ try
+ {
+ throw new Exception("Test exception");
+ }
+ // If an exception is thrown, catch it and set the activity status to "Error".
+ catch (Exception ex)
+ {
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.RecordException(ex);
+ }
+ }
```
+* To log an Exception using `ILogger`:
+
+ ```csharp
+ // Create a logger using the logger factory. The logger category name is used to filter and route log messages.
+ var logger = loggerFactory.CreateLogger("ExceptionExample");
+
+ try
+ {
+ // Try to execute some code.
+ throw new Exception("Test Exception");
+ }
+ catch (Exception ex)
+ {
+ // Log an error message with the exception. The log level is set to "Error" and the event ID is set to 0.
+ // The log message includes a template and a parameter. The template will be replaced with the value of the parameter when the log message is written.
+ logger.Log(
+ logLevel: LogLevel.Error,
+ eventId: 0,
+ exception: ex,
+ message: "Hello {name}.",
+ args: new object[] { "World" });
+ }
+ ```
+ #### [Java](#tab/java) You can use `opentelemetry-api` to update the status of a span and record exceptions.
You can use `opentelemetry-api` to update the status of a span and record except
Set status to `error` and record an exception in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
- import io.opentelemetry.api.trace.StatusCode;
+```java
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.StatusCode;
+
+Span span = Span.current();
+span.setStatus(StatusCode.ERROR, "errorMessage");
+span.recordException(e);
+```
- Span span = Span.current();
- span.setStatus(StatusCode.ERROR, "errorMessage");
- span.recordException(e);
- ```
#### [Node.js](#tab/nodejs) ```javascript
- // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const { trace } = require("@opentelemetry/api");
+// Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const { trace } = require("@opentelemetry/api");
- // Enable Azure Monitor integration
- useAzureMonitor();
+// Enable Azure Monitor integration
+useAzureMonitor();
- // Get the tracer for the "testTracer" namespace
- const tracer = trace.getTracer("testTracer");
+// Get the tracer for the "testTracer" namespace
+const tracer = trace.getTracer("testTracer");
- // Start a span with the name "hello"
- let span = tracer.startSpan("hello");
+// Start a span with the name "hello"
+let span = tracer.startSpan("hello");
- // Try to throw an error
- try{
- throw new Error("Test Error");
- }
+// Try to throw an error
+try{
+ throw new Error("Test Error");
+}
- // Catch the error and record it to the span
- catch(error){
- span.recordException(error);
- }
+// Catch the error and record it to the span
+catch(error){
+ span.recordException(error);
+}
``` #### [Python](#tab/python)
-The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown are automatically captured and recorded. See the following code sample for an example of this behavior.
+The OpenTelemetry Python SDK is implemented in such a way that exceptions thrown are automatically captured and recorded. See the following code sample for an example of this behavior:
```python # Import the necessary packages.
using (var activity = activitySource.StartActivity("CustomActivity"))
#### [Java](#tab/java)
-##### Use the OpenTelemetry annotation
-
-The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
-
-Spans populate the `requests` and `dependencies` tables in Application Insights.
+* **Use the OpenTelemetry annotation**
-1. Add `opentelemetry-instrumentation-annotations-1.32.0.jar` (or later) to your application:
+ The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
+
+ Spans populate the `requests` and `dependencies` tables in Application Insights.
+
+ 1. Add `opentelemetry-instrumentation-annotations-1.32.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry.instrumentation</groupId>
+ <artifactId>opentelemetry-instrumentation-annotations</artifactId>
+ <version>1.32.0</version>
+ </dependency>
+ ```
+
+ 1. Use the `@WithSpan` annotation to emit a span each time your method is executed:
+
+ ```java
+ import io.opentelemetry.instrumentation.annotations.WithSpan;
+
+ @WithSpan(value = "your span name")
+ public void yourMethod() {
+ }
+ ```
+
+ By default, the span ends up in the `dependencies` table with dependency type `InProc`.
+
+ For methods representing a background job not captured by autoinstrumentation, we recommend applying the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation to ensure they appear in the Application Insights `requests` table.
- ```xml
- <dependency>
- <groupId>io.opentelemetry.instrumentation</groupId>
- <artifactId>opentelemetry-instrumentation-annotations</artifactId>
- <version>1.32.0</version>
- </dependency>
- ```
+* **Use the OpenTelemetry API**
-1. Use the `@WithSpan` annotation to emit a span each time your method is executed:
+ If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs,
+ you can add your spans by using the OpenTelemetry API.
+
+ 1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
+
+ 1. Use the `GlobalOpenTelemetry` class to create a `Tracer`:
+
+ ```java
+ import io.opentelemetry.api.GlobalOpenTelemetry;
+ import io.opentelemetry.api.trace.Tracer;
+
+ static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example");
+ ```
+
+ 1. Create a span, make it current, and then end it:
+
+ ```java
+ Span span = tracer.spanBuilder("my first span").startSpan();
+ try (Scope ignored = span.makeCurrent()) {
+ // do stuff within the context of this
+ } catch (Throwable t) {
+ span.recordException(t);
+ } finally {
+ span.end();
+ }
+ ```
- ```java
- import io.opentelemetry.instrumentation.annotations.WithSpan;
+#### [Java native](#tab/java-native)
- @WithSpan(value = "your span name")
- public void yourMethod() {
- }
- ```
+1. Inject `OpenTelemetry`:
-By default, the span ends up in the `dependencies` table with dependency type `InProc`.
+ * **Spring**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
+
+ @Autowired
+ OpenTelemetry openTelemetry;
+ ```
+
+ * **Quarkus**
+ ```java
+ import io.opentelemetry.api.OpenTelemetry;
-For methods representing a background job not captured by autoinstrumentation, we recommend applying the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation to ensure they appear in the Application Insights `requests` table.
+ @Inject
+ OpenTelemetry openTelemetry;
+ ```
-##### Use the OpenTelemetry API
+1. Create a `Tracer`:
-If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs,
-you can add your spans by using the OpenTelemetry API.
+ ```java
+ import io.opentelemetry.api.trace.Tracer;
-1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
+ static final Tracer tracer = openTelemetry.getTracer("com.example");
+ ```
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
+1. Create a span, make it current, and then end it:
-1. Use the `GlobalOpenTelemetry` class to create a `Tracer`:
-
- ```java
- import io.opentelemetry.api.GlobalOpenTelemetry;
- import io.opentelemetry.api.trace.Tracer;
-
- static final Tracer tracer = GlobalOpenTelemetry.getTracer("com.example");
- ```
-
-1. Create a span, make it current, and then end it:
-
- ```java
- Span span = tracer.spanBuilder("my first span").startSpan();
- try (Scope ignored = span.makeCurrent()) {
- // do stuff within the context of this
- } catch (Throwable t) {
- span.recordException(t);
- } finally {
- span.end();
- }
- ```
-
-#### [Java native](#tab/java-native)
-
-1. Inject `OpenTelemetry`
-
- _Spring_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
-
- @Autowired
- OpenTelemetry openTelemetry;
- ```
-
- _Quarkus_
- ```java
- import io.opentelemetry.api.OpenTelemetry;
- @Inject
- OpenTelemetry openTelemetry;
- ```
-1. Create a `Tracer`:
-
-```java
- import io.opentelemetry.api.trace.Tracer;
-
- static final Tracer tracer = openTelemetry.getTracer("com.example");
-```
-
-1. Create a span, make it current, and then end it:
-
- ```java
+ ```java
Span span = tracer.spanBuilder("my first span").startSpan(); try (Scope ignored = span.makeCurrent()) { // do stuff within the context of this
you can add your spans by using the OpenTelemetry API.
} finally { span.end(); }
- ```
+ ```
#### [Node.js](#tab/nodejs) ```javascript
- // Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const { trace } = require("@opentelemetry/api");
+// Import the Azure Monitor OpenTelemetry plugin and OpenTelemetry API
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const { trace } = require("@opentelemetry/api");
- // Enable Azure Monitor integration
- useAzureMonitor();
+// Enable Azure Monitor integration
+useAzureMonitor();
- // Get the tracer for the "testTracer" namespace
- const tracer = trace.getTracer("testTracer");
+// Get the tracer for the "testTracer" namespace
+const tracer = trace.getTracer("testTracer");
- // Start a span with the name "hello"
- let span = tracer.startSpan("hello");
+// Start a span with the name "hello"
+let span = tracer.startSpan("hello");
- // End the span
- span.end();
+// End the span
+span.end();
``` #### [Python](#tab/python)
We recommend you use the OpenTelemetry APIs whenever possible, but there might b
#### [ASP.NET Core](#tab/aspnetcore)
-##### Events
+**Events**
1. Add `Microsoft.ApplicationInsights` to your application.
-2. Create a `TelemetryClient` instance.
-
-> [!NOTE]
-> It's important to only create once instance of the TelemetryClient per application.
+1. Create a `TelemetryClient` instance:
-```csharp
-var telemetryConfiguration = new TelemetryConfiguration { ConnectionString = "" };
-var telemetryClient = new TelemetryClient(telemetryConfiguration);
-```
+ > [!NOTE]
+ > It's important to create only one instance of the `TelemetryClient` per application.
+
+ ```csharp
+ var telemetryConfiguration = new TelemetryConfiguration { ConnectionString = "" };
+ var telemetryClient = new TelemetryClient(telemetryConfiguration);
+ ```
-3. Use the client to send custom telemetry.
+1. Use the client to send custom telemetry:
-```csharp
-telemetryClient.TrackEvent("testEvent");
-```
+ ```csharp
+ telemetryClient.TrackEvent("testEvent");
+ ```
#### [.NET](#tab/net)
-##### Events
+**Events**
1. Add `Microsoft.ApplicationInsights` to your application.
-2. Create a `TelemetryClient` instance.
-
-> [!NOTE]
-> It's important to only create once instance of the TelemetryClient per application.
+1. Create a `TelemetryClient` instance:
-```csharp
-var telemetryConfiguration = new TelemetryConfiguration { ConnectionString = "" };
-var telemetryClient = new TelemetryClient(telemetryConfiguration);
-```
+ > [!NOTE]
+ > It's important to create only one instance of the `TelemetryClient` per application.
+
+ ```csharp
+ var telemetryConfiguration = new TelemetryConfiguration { ConnectionString = "" };
+ var telemetryClient = new TelemetryClient(telemetryConfiguration);
+ ```
-3. Use the client to send custom telemetry.
+1. Use the client to send custom telemetry:
-```csharp
-telemetryClient.TrackEvent("testEvent");
-```
+ ```csharp
+ telemetryClient.TrackEvent("testEvent");
+ ```
#### [Java](#tab/java)
telemetryClient.TrackEvent("testEvent");
1. Use the client to send custom telemetry:
- ##### Events
+ **Events**
```java telemetryClient.trackEvent("WinGame"); ```+
+ **Logs**
- ##### Metrics
+ ```java
+ telemetryClient.trackTrace(message, SeverityLevel.Warning, properties);
+ ```
+
+ **Metrics**
```java telemetryClient.trackMetric("queueLength", 42.0); ```
-
- ##### Dependencies
+
+ **Dependencies**
```java boolean success = false;
telemetryClient.TrackEvent("testEvent");
telemetryClient.trackDependency(telemetry); } ```
-
- ##### Logs
-
- ```java
- telemetryClient.trackTrace(message, SeverityLevel.Warning, properties);
- ```
-
- ##### Exceptions
+
+ **Exceptions**
```java try {
telemetryClient.TrackEvent("testEvent");
} catch (Exception e) { telemetryClient.trackException(e); }
-
-
++ #### [Java native](#tab/java-native) It's not possible to send custom telemetry using the Application Insights Classic API in Java native.
If you want to add custom events or access the Application Insights API, replace
You need to use the `applicationinsights` v3 Beta package to send custom telemetry using the Application Insights classic API. (https://www.npmjs.com/package/applicationinsights/v/beta) ```javascript
- // Import the TelemetryClient class from the Application Insights SDK for JavaScript.
- const { TelemetryClient } = require("applicationinsights");
+// Import the TelemetryClient class from the Application Insights SDK for JavaScript.
+const { TelemetryClient } = require("applicationinsights");
- // Create a new TelemetryClient instance.
- const telemetryClient = new TelemetryClient();
+// Create a new TelemetryClient instance.
+const telemetryClient = new TelemetryClient();
``` Then use the `TelemetryClient` to send custom telemetry:
-##### Events
+**Events**
```javascript
- // Create an event telemetry object.
- let eventTelemetry = {
- name: "testEvent"
- };
+// Create an event telemetry object.
+let eventTelemetry = {
+ name: "testEvent"
+};
- // Send the event telemetry object to Azure Monitor Application Insights.
- telemetryClient.trackEvent(eventTelemetry);
+// Send the event telemetry object to Azure Monitor Application Insights.
+telemetryClient.trackEvent(eventTelemetry);
```
-##### Logs
+**Logs**
```javascript
- // Create a trace telemetry object.
- let traceTelemetry = {
- message: "testMessage",
- severity: "Information"
- };
+// Create a trace telemetry object.
+let traceTelemetry = {
+ message: "testMessage",
+ severity: "Information"
+};
- // Send the trace telemetry object to Azure Monitor Application Insights.
- telemetryClient.trackTrace(traceTelemetry);
+// Send the trace telemetry object to Azure Monitor Application Insights.
+telemetryClient.trackTrace(traceTelemetry);
```
-
-##### Exceptions
+
+**Exceptions**
```javascript
- // Try to execute a block of code.
- try {
- ...
- }
+// Try to execute a block of code.
+try {
+ ...
+}
- // If an error occurs, catch it and send it to Azure Monitor Application Insights as an exception telemetry item.
- catch (error) {
- let exceptionTelemetry = {
- exception: error,
- severity: "Critical"
- };
- telemetryClient.trackException(exceptionTelemetry);
+// If an error occurs, catch it and send it to Azure Monitor Application Insights as an exception telemetry item.
+catch (error) {
+ let exceptionTelemetry = {
+ exception: error,
+ severity: "Critical"
+ };
+ telemetryClient.trackException(exceptionTelemetry);
} ```
pip install azure-monitor-opentelemetry
pip install azure-monitor-events-extension ```
-Use the `track_event` API offered in the extension to send customEvents.
+Use the `track_event` API offered in the extension to send customEvents:
```python ...
These attributes might include adding a custom property to your telemetry. You m
#### Add a custom property to a Span
-Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
+Any [attributes](#add-span-attributes) you add to spans are exported as custom properties. They populate the *customDimensions* field in the requests, dependencies, traces, or exceptions table.
##### [ASP.NET Core](#tab/aspnetcore)
To add span attributes, use either of the following two ways:
> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the [HttpRequestMessage](/dotnet/api/system.net.http.httprequestmessage) and the [HttpResponseMessage](/dotnet/api/system.net.http.httpresponsemessage) itself. They can select anything from it and store it as an attribute. 1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
-
-1. Use a custom processor:
-> [!TIP]
-> Add the processor shown here *before* adding Azure Monitor.
-
-```csharp
-// Create an ASP.NET Core application builder.
-var builder = WebApplication.CreateBuilder(args);
-
-// Configure the OpenTelemetry tracer provider to add a new processor named ActivityEnrichingProcessor.
-builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityEnrichingProcessor()));
-
-// Add the Azure Monitor telemetry service to the application. This service will collect and send telemetry data to Azure Monitor.
-builder.Services.AddOpenTelemetry().UseAzureMonitor();
+ * [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ * [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
-// Build the ASP.NET Core application.
-var app = builder.Build();
-
-// Start the ASP.NET Core application.
-app.Run();
-```
-
-Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+1. Use a custom processor:
-```csharp
-public class ActivityEnrichingProcessor : BaseProcessor<Activity>
-{
- public override void OnEnd(Activity activity)
+ > [!TIP]
+ > Add the processor shown here *before* adding Azure Monitor.
+
+ ```csharp
+ // Create an ASP.NET Core application builder.
+ var builder = WebApplication.CreateBuilder(args);
+
+ // Configure the OpenTelemetry tracer provider to add a new processor named ActivityEnrichingProcessor.
+ builder.Services.ConfigureOpenTelemetryTracerProvider((sp, builder) => builder.AddProcessor(new ActivityEnrichingProcessor()));
+
+ // Add the Azure Monitor telemetry service to the application. This service will collect and send telemetry data to Azure Monitor.
+ builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+ // Build the ASP.NET Core application.
+ var app = builder.Build();
+
+ // Start the ASP.NET Core application.
+ app.Run();
+ ```
+
+ Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+
+ ```csharp
+ public class ActivityEnrichingProcessor : BaseProcessor<Activity>
{
- // The updated activity will be available to all processors which are called after this processor.
- activity.DisplayName = "Updated-" + activity.DisplayName;
- activity.SetTag("CustomDimension1", "Value1");
- activity.SetTag("CustomDimension2", "Value2");
+ public override void OnEnd(Activity activity)
+ {
+ // The updated activity will be available to all processors which are called after this processor.
+ activity.DisplayName = "Updated-" + activity.DisplayName;
+ activity.SetTag("CustomDimension1", "Value1");
+ activity.SetTag("CustomDimension2", "Value2");
+ }
}
-}
-```
+ ```
-#### [.NET](#tab/net)
+##### [.NET](#tab/net)
To add span attributes, use either of the following two ways:
To add span attributes, use either of the following two ways:
> The advantage of using options provided by instrumentation libraries, when they're available, is that the entire context is available. As a result, users can select to add or filter more attributes. For example, the enrich option in the HttpClient instrumentation library gives users access to the httpRequestMessage itself. They can select anything from it and store it as an attribute. 1. Many instrumentation libraries provide an enrich option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
-1. Use a custom processor:
-
-> [!TIP]
-> Add the processor shown here *before* the Azure Monitor Exporter.
-
-```csharp
-// Create an OpenTelemetry tracer provider builder.
-// It is important to keep the TracerProvider instance active throughout the process lifetime.
-using var tracerProvider = Sdk.CreateTracerProviderBuilder()
- // Add a source named "OTel.AzureMonitor.Demo".
- .AddSource("OTel.AzureMonitor.Demo") // Add a new processor named ActivityEnrichingProcessor.
- .AddProcessor(new ActivityEnrichingProcessor()) // Add the Azure Monitor trace exporter.
- .AddAzureMonitorTraceExporter() // Add the Azure Monitor trace exporter.
- .Build();
-```
+ * [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#enrich)
+ * [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#enrich)
+ * [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#enrich)
-Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+1. Use a custom processor:
-```csharp
-public class ActivityEnrichingProcessor : BaseProcessor<Activity>
-{
- // The OnEnd method is called when an activity is finished. This is the ideal place to enrich the activity with additional data.
- public override void OnEnd(Activity activity)
+ > [!TIP]
+ > Add the processor shown here *before* the Azure Monitor Exporter.
+
+ ```csharp
+ // Create an OpenTelemetry tracer provider builder.
+ // It is important to keep the TracerProvider instance active throughout the process lifetime.
+ using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ // Add a source named "OTel.AzureMonitor.Demo".
+ .AddSource("OTel.AzureMonitor.Demo") // Add a new processor named ActivityEnrichingProcessor.
+ .AddProcessor(new ActivityEnrichingProcessor()) // Add the Azure Monitor trace exporter.
+ .AddAzureMonitorTraceExporter() // Add the Azure Monitor trace exporter.
+ .Build();
+ ```
+
+ Add `ActivityEnrichingProcessor.cs` to your project with the following code:
+
+ ```csharp
+ public class ActivityEnrichingProcessor : BaseProcessor<Activity>
{
- // Update the activity's display name.
- // The updated activity will be available to all processors which are called after this processor.
- activity.DisplayName = "Updated-" + activity.DisplayName;
- // Set custom tags on the activity.
- activity.SetTag("CustomDimension1", "Value1");
- activity.SetTag("CustomDimension2", "Value2");
+ // The OnEnd method is called when an activity is finished. This is the ideal place to enrich the activity with additional data.
+ public override void OnEnd(Activity activity)
+ {
+ // Update the activity's display name.
+ // The updated activity will be available to all processors which are called after this processor.
+ activity.DisplayName = "Updated-" + activity.DisplayName;
+ // Set custom tags on the activity.
+ activity.SetTag("CustomDimension1", "Value1");
+ activity.SetTag("CustomDimension2", "Value2");
+ }
}
-}
-```
+ ```
##### [Java](#tab/java)
Adding one or more span attributes populates the `customDimensions` field in the
1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
1. Add custom dimensions in your code:
- ```java
+ ```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.common.AttributeKey;
-
+
AttributeKey attributeKey = AttributeKey.stringKey("mycustomdimension");
Span.current().setAttribute(attributeKey, "myvalue1");
- ```
+ ```
##### [Java native](#tab/java-native)

Add custom dimensions in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
- import io.opentelemetry.api.common.AttributeKey;
+```java
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.common.AttributeKey;
- AttributeKey attributeKey = AttributeKey.stringKey("mycustomdimension");
- Span.current().setAttribute(attributeKey, "myvalue1");
- ```
+AttributeKey attributeKey = AttributeKey.stringKey("mycustomdimension");
+Span.current().setAttribute(attributeKey, "myvalue1");
+```
##### [Node.js](#tab/nodejs)
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
activity.SetTag("client.address", "<IP Address>");
```
-#### [.NET](#tab/net)
+##### [.NET](#tab/net)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code in `ActivityEnrichingProcessor.cs`:
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
```typescript
...
- // Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package.
- const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
+// Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package.
+const { SemanticAttributes } = require("@opentelemetry/semantic-conventions");
- // Create a new SpanEnrichingProcessor class.
- class SpanEnrichingProcessor implements SpanProcessor {
+// Create a new SpanEnrichingProcessor class.
+class SpanEnrichingProcessor implements SpanProcessor {
- onEnd(span) {
- // Set the HTTP_CLIENT_IP attribute on the span to the IP address of the client.
- span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
- }
+ onEnd(span) {
+ // Set the HTTP_CLIENT_IP attribute on the span to the IP address of the client.
+ span.attributes[SemanticAttributes.HTTP_CLIENT_IP] = "<IP Address>";
}
+}
```

##### [Python](#tab/python)
You can populate the _user_Id_ or _user_AuthenticatedId_ field for requests by u
##### [ASP.NET Core](#tab/aspnetcore)
-Use the add [custom property example](#add-a-custom-property-to-a-span).
+Use the add [custom property example](#add-a-custom-property-to-a-span):
```csharp
// Add the user ID to the activity as a tag, but only if the activity is not null.
activity?.SetTag("enduser.id", "<User Id>");
##### [.NET](#tab/net)
-Use the add [custom property example](#add-a-custom-property-to-a-span).
+Use the add [custom property example](#add-a-custom-property-to-a-span):
```csharp
// Add the user ID to the activity as a tag, but only if the activity is not null.
Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions`
1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
1. Set `user_Id` in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
-
- Span.current().setAttribute("enduser.id", "myuser");
- ```
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span.current().setAttribute("enduser.id", "myuser");
+ ```
##### [Java native](#tab/java-native)
Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions`
Set `user_Id` in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
+```java
+import io.opentelemetry.api.trace.Span;
- Span.current().setAttribute("enduser.id", "myuser");
- ```
+Span.current().setAttribute("enduser.id", "myuser");
+```
-#### [Node.js](#tab/nodejs)
+##### [Node.js](#tab/nodejs)
Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:

```typescript
...
- // Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package.
- import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
+// Import the SemanticAttributes class from the @opentelemetry/semantic-conventions package.
+import { SemanticAttributes } from "@opentelemetry/semantic-conventions";
- // Create a new SpanEnrichingProcessor class.
- class SpanEnrichingProcessor implements SpanProcessor {
+// Create a new SpanEnrichingProcessor class.
+class SpanEnrichingProcessor implements SpanProcessor {
- onEnd(span: ReadableSpan) {
- // Set the ENDUSER_ID attribute on the span to the ID of the user.
- span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
- }
+ onEnd(span: ReadableSpan) {
+ // Set the ENDUSER_ID attribute on the span to the ID of the user.
+ span.attributes[SemanticAttributes.ENDUSER_ID] = "<User ID>";
}
+}
```

##### [Python](#tab/python)
Attaching custom dimensions to logs can be accomplished using a [message templat
#### [Java](#tab/java)
-Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
+Logback, Log4j, and java.util.logging are [autoinstrumented](#send-custom-telemetry-using-the-application-insights-classic-api). Attaching custom dimensions to your logs can be accomplished in these ways:
* [Log4j 2.0 MapMessage](https://logging.apache.org/log4j/2.0/javadoc/log4j-api/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
* [Log4j 2.0 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html)
* [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html)

#### [Java native](#tab/java-native)

For Spring Boot native applications, Logback is instrumented out of the box.
For Spring Boot native applications, Logback is instrumented out of the box.
#### [Node.js](#tab/nodejs)

```typescript
- const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
- const bunyan = require('bunyan');
-
- // Instrumentations configuration
- const options: AzureMonitorOpenTelemetryOptions = {
- instrumentationOptions: {
- // Instrumentations generating logs
- bunyan: { enabled: true },
- }
- };
+const { useAzureMonitor } = require("@azure/monitor-opentelemetry");
+const bunyan = require('bunyan');
- // Enable Azure Monitor integration
- useAzureMonitor(options);
+// Instrumentations configuration
+const options: AzureMonitorOpenTelemetryOptions = {
+ instrumentationOptions: {
+ // Instrumentations generating logs
+ bunyan: { enabled: true },
+ }
+};
- var log = bunyan.createLogger({ name: 'testApp' });
- log.info({
- "testAttribute1": "testValue1",
- "testAttribute2": "testValue2",
- "testAttribute3": "testValue3"
- }, 'testEvent');
+// Enable Azure Monitor integration
+useAzureMonitor(options);
+var log = bunyan.createLogger({ name: 'testApp' });
+log.info({
+ "testAttribute1": "testValue1",
+ "testAttribute2": "testValue2",
+ "testAttribute3": "testValue3"
+}, 'testEvent');
```

#### [Python](#tab/python)
-The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](.\opentelemetry-add-modify.md?tabs=python#included-instrumentation-libraries). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs.
+The Python [logging](https://docs.python.org/3/howto/logging.html) library is [autoinstrumented](.\opentelemetry-add-modify.md?tabs=python#included-instrumentation-libraries). You can attach custom dimensions to your logs by passing a dictionary into the `extra` argument of your logs:
```python
...
You might use the following ways to filter out telemetry before it leaves your a
### [ASP.NET Core](#tab/aspnetcore)

1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+
+ * [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ * [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
1. Use a custom processor:
You might use the following ways to filter out telemetry before it leaves your a
### [.NET](#tab/net)

1. Many instrumentation libraries provide a filter option. For guidance, see the readme files of individual instrumentation libraries:
- - [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
- - [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
- - [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
+
+ * [ASP.NET](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/Instrumentation.AspNet-1.0.0-rc9.8/src/OpenTelemetry.Instrumentation.AspNet/README.md#filter)
+ * [ASP.NET Core](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.AspNetCore/README.md#filter)
+ * [HttpClient](https://github.com/open-telemetry/opentelemetry-dotnet/blob/1.0.0-rc9.14/src/OpenTelemetry.Instrumentation.Http/README.md#filter)
1. Use a custom processor:
You might use the following ways to filter out telemetry before it leaves your a
1. If a particular source isn't explicitly added by using `AddSource("ActivitySourceName")`, then none of the activities created by using that source are exported.

### [Java](#tab/java)

See [sampling overrides](java-standalone-config.md#sampling-overrides) and [telemetry processors](java-standalone-telemetry-processors.md).
It's not possible to filter telemetry in Java native.
useAzureMonitor(config);
```
-2. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
-Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
+1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
+
+ Use the add [custom property example](#add-a-custom-property-to-a-span), but replace the following lines of code:
```typescript
// Import the necessary packages.
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
### [Python](#tab/python)

1. Exclude the URL with the `OTEL_PYTHON_EXCLUDED_URLS` environment variable:

   ```
   export OTEL_PYTHON_EXCLUDED_URLS="http://localhost:8080/ignore"
   ```
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
...
```
-1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`.
+1. Use a custom processor. You can use a custom span processor to exclude certain spans from being exported. To mark spans to not be exported, set `TraceFlag` to `DEFAULT`:
```python
...
You can use `opentelemetry-api` to get the trace ID or span ID.
1. Add `opentelemetry-api-1.0.0.jar` (or later) to your application:
- ```xml
- <dependency>
- <groupId>io.opentelemetry</groupId>
- <artifactId>opentelemetry-api</artifactId>
- <version>1.0.0</version>
- </dependency>
- ```
+ ```xml
+ <dependency>
+ <groupId>io.opentelemetry</groupId>
+ <artifactId>opentelemetry-api</artifactId>
+ <version>1.0.0</version>
+ </dependency>
+ ```
1. Get the request trace ID and the span ID in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
-
- Span span = Span.current();
- String traceId = span.getSpanContext().getTraceId();
- String spanId = span.getSpanContext().getSpanId();
- ```
+ ```java
+ import io.opentelemetry.api.trace.Span;
+
+ Span span = Span.current();
+ String traceId = span.getSpanContext().getTraceId();
+ String spanId = span.getSpanContext().getSpanId();
+ ```
### [Java native](#tab/java-native)

Get the request trace ID and the span ID in your code:
- ```java
- import io.opentelemetry.api.trace.Span;
+```java
+import io.opentelemetry.api.trace.Span;
- Span span = Span.current();
- String traceId = span.getSpanContext().getTraceId();
- String spanId = span.getSpanContext().getSpanId();
- ```
+Span span = Span.current();
+String traceId = span.getSpanContext().getTraceId();
+String spanId = span.getSpanContext().getSpanId();
+```
### [Node.js](#tab/nodejs)

Get the request trace ID and the span ID in your code:
- ```javascript
- // Import the trace module from the OpenTelemetry API.
- const { trace } = require("@opentelemetry/api");
+```javascript
+// Import the trace module from the OpenTelemetry API.
+const { trace } = require("@opentelemetry/api");
- // Get the span ID and trace ID of the active span.
- let spanId = trace.getActiveSpan().spanContext().spanId;
- let traceId = trace.getActiveSpan().spanContext().traceId;
- ```
+// Get the span ID and trace ID of the active span.
+let spanId = trace.getActiveSpan().spanContext().spanId;
+let traceId = trace.getActiveSpan().spanContext().traceId;
+```
### [Python](#tab/python)

Get the request trace ID and the span ID in your code:

```python
-# Import the necessary libraries.
+# Import the necessary libraries.
from opentelemetry import trace

# Get the trace ID and span ID of the current span.
span_id = trace.get_current_span().get_span_context().span_id
### [ASP.NET Core](#tab/aspnetcore)

-- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md)
-- To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).
-- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page.
-- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo).
-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+* To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md).
+* To review the source code, see the [Azure Monitor AspNetCore GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore).
+* To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor AspNetCore NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore) page.
+* To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo).
+* To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
+* To enable usage experiences, [enable web or browser user monitoring](javascript.md).
#### [.NET](#tab/net)

-- To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md)
-- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).
-- To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page.
-- To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).
-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+* To further configure the OpenTelemetry distro, see [Azure Monitor OpenTelemetry configuration](opentelemetry-configuration.md)
+* To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter).
+* To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) page.
+* To become more familiar with Azure Monitor and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).
+* To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet).
+* To enable usage experiences, [enable web or browser user monitoring](javascript.md).
### [Java](#tab/java)

-- Review [Java autoinstrumentation configuration options](java-standalone-config.md).
-- To review the source code, see the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).
-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
-- To enable usage experiences, see [Enable web or browser user monitoring](javascript.md).
-- See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
+* Review [Java autoinstrumentation configuration options](java-standalone-config.md).
+* To review the source code, see the [Azure Monitor Java autoinstrumentation GitHub repository](https://github.com/Microsoft/ApplicationInsights-Java).
+* To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
+* To enable usage experiences, see [Enable web or browser user monitoring](javascript.md).
+* See the [release notes](https://github.com/microsoft/ApplicationInsights-Java/releases) on GitHub.
### [Java native](#tab/java-native)

-- For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md).
-- To review the source code, see [Azure Monitor OpenTelemetry Distro in Spring Boot native image Java application](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-monitor)
-  and [Quarkus OpenTelemetry Exporter for Azure](https://github.com/quarkiverse/quarkus-opentelemetry-exporter/tree/main/quarkus-opentelemetry-exporter-azure).
-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
-- See the [release notes](https://github.com/Azure/azure-sdk-for-jav) on GitHub.
+* For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md).
+* To review the source code, see [Azure Monitor OpenTelemetry Distro in Spring Boot native image Java application](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/spring/spring-cloud-azure-starter-monitor) and [Quarkus OpenTelemetry Exporter for Azure](https://github.com/quarkiverse/quarkus-opentelemetry-exporter/tree/main/quarkus-opentelemetry-exporter-azure).
+* To learn more about OpenTelemetry and its community, see the [OpenTelemetry Java GitHub repository](https://github.com/open-telemetry/opentelemetry-java-instrumentation).
+* See the [release notes](https://github.com/Azure/azure-sdk-for-jav) on GitHub.
### [Node.js](#tab/nodejs)

-- To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).
-- To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js).
-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js).
-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+* To review the source code, see the [Azure Monitor OpenTelemetry GitHub repository](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry).
+* To install the npm package and check for updates, see the [`@azure/monitor-opentelemetry` npm Package](https://www.npmjs.com/package/@azure/monitor-opentelemetry) page.
+* To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js).
+* To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js).
+* To enable usage experiences, [enable web or browser user monitoring](javascript.md).
### [Python](#tab/python)

-- To review the source code and extra documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/README.md).
-- To see extra samples and use cases, see [Azure Monitor Distro samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples).
-- See the [release notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/CHANGELOG.md) on GitHub.
-- To install the PyPI package, check for updates, or view release notes, see the [Azure Monitor Distro PyPI Package](https://pypi.org/project/azure-monitor-opentelemetry/) page.
-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python).
-- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python).
-- To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).
-- To enable usage experiences, [enable web or browser user monitoring](javascript.md).
+* To review the source code and extra documentation, see the [Azure Monitor Distro GitHub repository](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/README.md).
+* To see extra samples and use cases, see [Azure Monitor Distro samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-opentelemetry/samples).
+* See the [release notes](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-opentelemetry/CHANGELOG.md) on GitHub.
+* To install the PyPI package, check for updates, or view release notes, see the [Azure Monitor Distro PyPI Package](https://pypi.org/project/azure-monitor-opentelemetry/) page.
+* To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-python).
+* To learn more about OpenTelemetry and its community, see the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python).
+* To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib).
+* To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
export OTEL_TRACES_SAMPLER_ARG=0.1
> [!TIP]
-> When using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate as, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance blades. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](opentelemetry-add-modify.md#metrics), which are unaffected by sampling.
+> When using fixed-rate/percentage sampling and you aren't sure what to set the sampling rate as, start at 5% (i.e., 0.05 sampling ratio) and adjust the rate based on the accuracy of the operations shown in the failures and performance panes. A higher rate generally results in higher accuracy. However, ANY sampling will affect accuracy so we recommend alerting on [OpenTelemetry metrics](opentelemetry-add-modify.md#add-custom-metrics), which are unaffected by sampling.
<a name='enable-entra-id-formerly-azure-ad-authentication'></a>
azure-monitor Opentelemetry Dotnet Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-dotnet-migrate.md
+
+ Title: Migrate from Application Insights .NET SDKs to Azure Monitor OpenTelemetry
+description: This article provides guidance on how to migrate .NET applications from the Application Insights Classic API SDKs to Azure Monitor OpenTelemetry.
+ Last updated : 06/07/2024
+ms.devlang: csharp
++++
+# Migrate from .NET Application Insights SDKs to Azure Monitor OpenTelemetry
+
+This guide provides step-by-step instructions to migrate various .NET applications from using Application Insights software development kits (SDKs) to Azure Monitor OpenTelemetry.
+
+Expect a similar experience with Azure Monitor OpenTelemetry instrumentation as with the Application Insights SDKs. For more information and a feature-by-feature comparison, see [release state of features](opentelemetry-enable.md#whats-the-current-release-state-of-features-within-the-azure-monitor-opentelemetry-distro).
+
+> [!div class="checklist"]
+> - ASP.NET Core migration to the [Azure Monitor OpenTelemetry Distro](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.AspNetCore). (`Azure.Monitor.OpenTelemetry.AspNetCore` NuGet package)
+> - ASP.NET, console, and WorkerService migration to the [Azure Monitor OpenTelemetry Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter). (`Azure.Monitor.OpenTelemetry.Exporter` NuGet package)
+
+If you're getting started with Application Insights and don't need to migrate from the Classic API, see [Enable Azure Monitor OpenTelemetry](opentelemetry-enable.md).
+
+## Prerequisites
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+* An ASP.NET Core web application already instrumented with Application Insights without any customizations
+* An actively supported version of [.NET](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+
+### [ASP.NET](#tab/net)
+
+* An ASP.NET web application already instrumented with Application Insights
+* An actively supported version of [.NET Framework](/lifecycle/products/microsoft-net-framework)
+
+### [Console](#tab/console)
+
+* A Console application already instrumented with Application Insights
+* An actively supported version of [.NET Framework](/lifecycle/products/microsoft-net-framework) or [.NET](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+
+### [WorkerService](#tab/workerservice)
+
+* A WorkerService application already instrumented with Application Insights without any customizations
+* An actively supported version of [.NET](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+++
+> [!Tip]
+> Our product group is actively seeking feedback on this documentation. Provide feedback to otel@microsoft.com or see the [Support](#support) section.
+
+## Remove the Application Insights SDK
+
+> [!Note]
+> Before continuing with these steps, you should confirm that you have a current backup of your application.
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+1. Remove NuGet packages
+
+ Remove the `Microsoft.ApplicationInsights.AspNetCore` package from your `csproj`.
+
+ ```terminal
+ dotnet remove package Microsoft.ApplicationInsights.AspNetCore
+ ```
+
+2. Remove Initialization Code and customizations
+
+ Remove any references to Application Insights types in your codebase.
+
+ > [!Tip]
+ > After removing the Application Insights package, you can re-build your application to get a list of references that need to be removed.
+
+ - Remove Application Insights from your `ServiceCollection` by deleting the following line:
+
+ ```csharp
+ builder.Services.AddApplicationInsightsTelemetry();
+ ```
+
+ - Remove the `ApplicationInsights` section from your `appsettings.json`.
+
+ ```json
+ {
+ "ApplicationInsights": {
+ "ConnectionString": "<Your Connection String>"
+ }
+ }
+ ```
+
+3. Clean and Build
+
+ Inspect your bin directory to validate that all references to `Microsoft.ApplicationInsights.*` were removed.
+
+4. Test your application
+
+ Verify that your application has no unexpected consequences.
+
+### [ASP.NET](#tab/net)
+
+1. Remove NuGet packages
+
+ Remove the `Microsoft.AspNet.TelemetryCorrelation` package and any `Microsoft.ApplicationInsights.*` packages from your `csproj` and `packages.config`.
+
+2. Delete the `ApplicationInsights.config` file
+
+3. Delete the following sections from your application's `Web.config` file
+
+ - Two [HttpModules](/troubleshoot/developer/webapps/aspnet/development/http-modules-handlers) were automatically added to your web.config when you first added ApplicationInsights to your project.
+   Remove any references to `TelemetryCorrelationHttpModule` and `ApplicationInsightsWebTracking`.
+   If you added Application Insights to your [Internet Information Server (IIS) Modules](/iis/get-started/introduction-to-iis/iis-modules-overview), remove that entry as well.
+
+ ```xml
+ <configuration>
+ <system.web>
+ <httpModules>
+ <add name="TelemetryCorrelationHttpModule" type="Microsoft.AspNet.TelemetryCorrelation.TelemetryCorrelationHttpModule, Microsoft.AspNet.TelemetryCorrelation" />
+ <add name="ApplicationInsightsWebTracking" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" />
+ </httpModules>
+ </system.web>
+ <system.webServer>
+ <modules>
+ <remove name="TelemetryCorrelationHttpModule" />
+ <add name="TelemetryCorrelationHttpModule" type="Microsoft.AspNet.TelemetryCorrelation.TelemetryCorrelationHttpModule, Microsoft.AspNet.TelemetryCorrelation" preCondition="managedHandler" />
+ <remove name="ApplicationInsightsWebTracking" />
+ <add name="ApplicationInsightsWebTracking" type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web" preCondition="managedHandler" />
+ </modules>
+ </system.webServer>
+ </configuration>
+ ```
+
+ - Also review any [assembly version redirections](/dotnet/framework/configure-apps/redirect-assembly-versions) added to your web.config.
+
+4. Remove Initialization Code and customizations
+
+ Remove any references to Application Insights types in your codebase.
+
+ > [!Tip]
+ > After removing the Application Insights package, you can re-build your application to get a list of references that need to be removed.
+
+   - Remove references to `TelemetryConfiguration` or `TelemetryClient`. They're typically part of the startup code that initializes the Application Insights SDK (see the sketch after these steps).
+
+ The following scenarios are optional and apply to advanced users.
+
+ - If you have any more references to the `TelemetryClient`, which are used to [manually record telemetry](./api-custom-events-metrics.md), they should be removed.
+ - If you added any [custom filtering or enrichment](./api-filtering-sampling.md) in the form of a custom `TelemetryProcessor` or `TelemetryInitializer`, they should be removed. You can find them referenced in your configuration.
+ - If your project has a `FilterConfig.cs` in the `App_Start` directory, check for any custom exception handlers that reference Application Insights and remove.
+
+5. Remove JavaScript Snippet
+
+ If you added the JavaScript SDK to collect client-side telemetry, it can also be removed although it continues to work without the .NET SDK.
+ For full code samples of what to remove, review the [onboarding guide for the JavaScript SDK](./javascript-sdk.md).
+
+6. Remove any Visual Studio Artifacts
+
+ If you used Visual Studio to onboard to Application Insights, you could have more files left over in your project.
+
+ - `ConnectedService.json` might have a reference to your Application Insights resource.
+ - `[Your project's name].csproj` might have a reference to your Application Insights resource:
+
+ ```xml
+ <ApplicationInsightsResourceId>/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Default-ApplicationInsights-EastUS/providers/microsoft.insights/components/WebApplication4</ApplicationInsightsResourceId>
+ ```
+
+7. Clean and Build
+
+ Inspect your bin directory to validate that all references to `Microsoft.ApplicationInsights.` were removed.
+
+8. Test your application
+
+ Verify that your application has no unexpected consequences.
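+
+For reference, the kind of Classic API initialization code to look for and delete resembles the following sketch (illustrative only; your startup code may differ):
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+
+// Typical Application Insights Classic API startup code to remove during migration.
+var configuration = TelemetryConfiguration.CreateDefault();
+configuration.ConnectionString = "<Your Connection String>";
+var telemetryClient = new TelemetryClient(configuration);
+telemetryClient.TrackTrace("An example trace");
+```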
+
+### [Console](#tab/console)
+
+1. Remove NuGet packages
+
+ Remove any `Microsoft.ApplicationInsights.*` packages from your `csproj` and `packages.config`.
+
+ ```terminal
+ dotnet remove package Microsoft.ApplicationInsights
+ ```
+
+ > [!Tip]
+ > If you've used [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService), refer to the WorkerService tabs.
+
+2. Remove Initialization Code and customizations
+
+ Remove any references to Application Insights types in your codebase.
+
+ > [!Tip]
+ > After removing the Application Insights package, you can re-build your application to get a list of references that need to be removed.
+
+   - Remove references to `TelemetryConfiguration` or `TelemetryClient`. They're typically part of the startup code that initializes the Application Insights SDK, for example:
+
+ ```csharp
+ var config = TelemetryConfiguration.CreateDefault();
+ var client = new TelemetryClient(config);
+ ```
+
+ > [!Tip]
+ > If you've used `AddApplicationInsightsTelemetryWorkerService()` to add Application Insights to your `ServiceCollection`, refer to the WorkerService tabs.
+
+3. Clean and Build
+
+ Inspect your bin directory to validate that all references to `Microsoft.ApplicationInsights.` were removed.
+
+4. Test your application
+
+ Verify that your application has no unexpected consequences.
+
+### [WorkerService](#tab/workerservice)
+
+1. Remove NuGet packages
+
+ Remove the `Microsoft.ApplicationInsights.WorkerService` package from your `csproj`.
+
+ ```terminal
+   dotnet remove package Microsoft.ApplicationInsights.WorkerService
+ ```
+
+2. Remove Initialization Code and customizations
+
+ Remove any references to Application Insights types in your codebase.
+
+ > [!Tip]
+ > After removing the Application Insights package, you can re-build your application to get a list of references that need to be removed.
+
+ - Remove Application Insights from your `ServiceCollection` by deleting the following line:
+
+ ```csharp
+ builder.Services.AddApplicationInsightsTelemetryWorkerService();
+ ```
+
+ - Remove the `ApplicationInsights` section from your `appsettings.json`.
+
+ ```json
+ {
+ "ApplicationInsights": {
+ "ConnectionString": "<Your Connection String>"
+ }
+ }
+ ```
+
+3. Clean and Build
+
+ Inspect your bin directory to validate that all references to `Microsoft.ApplicationInsights.*` were removed.
+
+4. Test your application
+
+ Verify that your application has no unexpected consequences.
+++
+> [!Tip]
+> Our product group is actively seeking feedback on this documentation. Provide feedback to otel@microsoft.com or see the [Support](#support) section.
+
+## Enable OpenTelemetry
+
+We recommend creating a development [resource](./create-workspace-resource.md) and using its [connection string](./sdk-connection-string.md) when following these instructions.
++
+Plan to update the connection string to send telemetry to the original resource after confirming migration is successful.
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+1. Install the Azure Monitor Distro
+
+ Our Azure Monitor Distro enables automatic telemetry by including OpenTelemetry instrumentation libraries for collecting traces, metrics, logs, and exceptions, and allows collecting custom telemetry.
+
+ Installing the Azure Monitor Distro brings the [OpenTelemetry SDK](https://www.nuget.org/packages/OpenTelemetry) as a dependency.
+
+ ```terminal
+ dotnet add package Azure.Monitor.OpenTelemetry.AspNetCore
+ ```
+
+2. Add and configure both OpenTelemetry and Azure Monitor
+
+   The OpenTelemetry SDK must be configured at application startup as part of your `ServiceCollection`, typically in `Program.cs`.
+
+   OpenTelemetry has a concept of three signals: Traces, Metrics, and Logs.
+ The Azure Monitor Distro configures each of these signals.
+
+##### Program.cs
+
+The following code sample demonstrates the basics.
+
+```csharp
+using Azure.Monitor.OpenTelemetry.AspNetCore;
+using Microsoft.AspNetCore.Builder;
+using Microsoft.Extensions.DependencyInjection;
+
+public class Program
+{
+ public static void Main(string[] args)
+ {
+ var builder = WebApplication.CreateBuilder(args);
+
+ // Call AddOpenTelemetry() to add OpenTelemetry to your ServiceCollection.
+ // Call UseAzureMonitor() to fully configure OpenTelemetry.
+ builder.Services.AddOpenTelemetry().UseAzureMonitor();
+
+ var app = builder.Build();
+ app.MapGet("/", () => "Hello World!");
+ app.Run();
+ }
+}
+```
+
+We recommend setting your Connection String in an environment variable:
+
+`APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String>`
+
+More options to configure the Connection String are detailed here: [Configure the Application Insights Connection String](./opentelemetry-configuration.md?tabs=aspnetcore#connection-string).
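+
+If an environment variable isn't practical, the connection string can also be set in code. The following is a minimal sketch that assumes the `UseAzureMonitor` overload accepting `AzureMonitorOptions`:
+
+```csharp
+// Sketch only: supply the connection string programmatically instead of using
+// the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
+builder.Services.AddOpenTelemetry().UseAzureMonitor(options =>
+{
+    options.ConnectionString = "<Your Connection String>";
+});
+```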
+
+### [ASP.NET](#tab/net)
+
+1. Install the OpenTelemetry SDK via Azure Monitor
+
+ Installing the Azure Monitor Exporter brings the [OpenTelemetry SDK](https://www.nuget.org/packages/OpenTelemetry) as a dependency.
+
+ ```terminal
+ dotnet add package Azure.Monitor.OpenTelemetry.Exporter
+ ```
+
+2. Configure OpenTelemetry as part of your application startup
+
+   The OpenTelemetry SDK must be configured at application startup, typically in `Global.asax.cs`.
+   OpenTelemetry has a concept of three signals: Traces, Metrics, and Logs.
+ Each of these signals needs to be configured as part of your application startup.
+ `TracerProvider`, `MeterProvider`, and `ILoggerFactory` should be created once for your application and disposed when your application shuts down.
+
+##### Global.asax.cs
+
+The following code sample shows a simple example meant only to show the basics.
+No telemetry is collected at this point.
+
+```csharp
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+public class Global : System.Web.HttpApplication
+{
+ private TracerProvider? tracerProvider;
+ private MeterProvider? meterProvider;
+ // The LoggerFactory needs to be accessible from the rest of your application.
+ internal static ILoggerFactory loggerFactory;
+
+ protected void Application_Start()
+ {
+ this.tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .Build();
+
+ this.meterProvider = Sdk.CreateMeterProviderBuilder()
+ .Build();
+
+ loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry();
+ });
+ }
+
+ protected void Application_End()
+ {
+ this.tracerProvider?.Dispose();
+ this.meterProvider?.Dispose();
+ loggerFactory?.Dispose();
+ }
+}
+```
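+
+Because the factory is static, other classes in the project can create loggers from it. The following is a hypothetical usage sketch (the `OrderService` class is illustrative only):
+
+```csharp
+using Microsoft.Extensions.Logging;
+
+// Hypothetical helper class elsewhere in the project.
+public class OrderService
+{
+    // Create an ILogger from the factory initialized in Global.asax.cs.
+    private readonly ILogger logger = Global.loggerFactory.CreateLogger<OrderService>();
+
+    public void PlaceOrder(string orderId)
+    {
+        // Emitted through OpenTelemetry because the factory was built with AddOpenTelemetry().
+        logger.LogInformation("Placing order {OrderId}", orderId);
+    }
+}
+```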
+
+### [Console](#tab/console)
+
+1. Install the OpenTelemetry SDK via Azure Monitor
+
+ Installing the [Azure Monitor Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) brings the [OpenTelemetry SDK](https://www.nuget.org/packages/OpenTelemetry) as a dependency.
+
+ ```terminal
+ dotnet add package Azure.Monitor.OpenTelemetry.Exporter
+ ```
+
+2. Configure OpenTelemetry as part of your application startup
+
+   The OpenTelemetry SDK must be configured at application startup, typically in `Program.cs`.
+   OpenTelemetry has a concept of three signals: Traces, Metrics, and Logs.
+ Each of these signals needs to be configured as part of your application startup.
+ `TracerProvider`, `MeterProvider`, and `ILoggerFactory` should be created once for your application and disposed when your application shuts down.
+
+The following code sample shows a simple example meant only to show the basics.
+No telemetry is collected at this point.
+
+##### Program.cs
+
+```csharp
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+internal class Program
+{
+ static void Main(string[] args)
+ {
+ TracerProvider tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .Build();
+
+ MeterProvider meterProvider = Sdk.CreateMeterProviderBuilder()
+ .Build();
+
+ ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry();
+ });
+
+ Console.WriteLine("Hello, World!");
+
+ // Dispose tracer provider before the application ends.
+ // It will flush the remaining spans and shutdown the tracing pipeline.
+ tracerProvider.Dispose();
+
+ // Dispose meter provider before the application ends.
+ // It will flush the remaining metrics and shutdown the metrics pipeline.
+ meterProvider.Dispose();
+
+ // Dispose logger factory before the application ends.
+ // It will flush the remaining logs and shutdown the logging pipeline.
+ loggerFactory.Dispose();
+ }
+}
+```
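+
+Later steps attach instrumentation and exporters to these builders. As a rough sketch of where this is headed, and assuming the `Azure.Monitor.OpenTelemetry.Exporter` extension methods (`AddAzureMonitorTraceExporter`, `AddAzureMonitorMetricExporter`, and `AddAzureMonitorLogExporter`), the exporter wiring can look like the following:
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Logs;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+// Sketch only: each signal gets an Azure Monitor exporter.
+TracerProvider tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddAzureMonitorTraceExporter(options => options.ConnectionString = "<Your Connection String>")
+    .Build();
+
+MeterProvider meterProvider = Sdk.CreateMeterProviderBuilder()
+    .AddAzureMonitorMetricExporter(options => options.ConnectionString = "<Your Connection String>")
+    .Build();
+
+ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
+{
+    builder.AddOpenTelemetry(logging =>
+        logging.AddAzureMonitorLogExporter(options => options.ConnectionString = "<Your Connection String>"));
+});
+```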
+
+### [WorkerService](#tab/workerservice)
+
+1. Install the OpenTelemetry SDK via Azure Monitor
+
+ Installing the [Azure Monitor Exporter](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter) brings the [OpenTelemetry SDK](https://www.nuget.org/packages/OpenTelemetry) as a dependency.
+
+ ```terminal
+ dotnet add package Azure.Monitor.OpenTelemetry.Exporter
+ ```
+
+ You must also install the [OpenTelemetry Extensions Hosting](https://www.nuget.org/packages/OpenTelemetry.Extensions.Hosting) package.
+
+ ```terminal
+ dotnet add package OpenTelemetry.Extensions.Hosting
+ ```
+
+2. Configure OpenTelemetry as part of your application startup
+
+   The OpenTelemetry SDK must be configured at application startup, typically in `Program.cs`.
+   OpenTelemetry has a concept of three signals: Traces, Metrics, and Logs.
+ Each of these signals needs to be configured as part of your application startup.
+ `TracerProvider`, `MeterProvider`, and `ILoggerFactory` should be created once for your application and disposed when your application shuts down.
+
+The following code sample shows a simple example meant only to show the basics.
+No telemetry is collected at this point.
+
+##### Program.cs
+
+```csharp
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
+
+public class Program
+{
+ public static void Main(string[] args)
+ {
+ var builder = Host.CreateApplicationBuilder(args);
+ builder.Services.AddHostedService<Worker>();
+
+ builder.Services.AddOpenTelemetry()
+ .WithTracing()
+ .WithMetrics();
+
+ builder.Logging.AddOpenTelemetry();
+
+ var host = builder.Build();
+ host.Run();
+ }
+}
+```
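+
+For completeness, `AddHostedService<Worker>()` assumes a background worker class; the worker template normally generates one. A minimal hypothetical version looks like this:
+
+```csharp
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Hosting;
+
+// Hypothetical worker referenced by AddHostedService<Worker>() above.
+public class Worker : BackgroundService
+{
+    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+    {
+        while (!stoppingToken.IsCancellationRequested)
+        {
+            // Do the background work here.
+            await Task.Delay(TimeSpan.FromSeconds(1), stoppingToken);
+        }
+    }
+}
+```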
+++
+> [!Tip]
+> Our product group is actively seeking feedback on this documentation. Provide feedback to otel@microsoft.com or see the [Support](#support) section.
+
+## Install and configure instrumentation libraries
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+[Instrumentation libraries](https://opentelemetry.io/docs/specs/otel/overview/#instrumentation-libraries) can be added to your project to auto collect telemetry about specific components or dependencies.
+
+The following libraries are included in the Distro.
+
+- [HTTP](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http)
+- [ASP.NET Core](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore)
+- [SQL](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.sqlclient)
+
+#### Customizing instrumentation libraries
+
+The Azure Monitor Distro includes .NET OpenTelemetry instrumentation for [ASP.NET Core](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore/), [HttpClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http/), and [SQLClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient).
+You can customize these included instrumentations or manually add extra instrumentation on your own using the OpenTelemetry API.
+
+Here are some examples of how to customize the instrumentation:
+
+##### Customizing AspNetCoreTraceInstrumentationOptions
+
+```C#
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+builder.Services.Configure<AspNetCoreTraceInstrumentationOptions>(options =>
+{
+ options.RecordException = true;
+ options.Filter = (httpContext) =>
+ {
+ // only collect telemetry about HTTP GET requests
+ return HttpMethods.IsGet(httpContext.Request.Method);
+ };
+});
+```
+
+##### Customizing HttpClientTraceInstrumentationOptions
+
+```C#
+builder.Services.AddOpenTelemetry().UseAzureMonitor();
+builder.Services.Configure<HttpClientTraceInstrumentationOptions>(options =>
+{
+ options.RecordException = true;
+ options.FilterHttpRequestMessage = (httpRequestMessage) =>
+ {
+ // only collect telemetry about HTTP GET requests
+ return HttpMethods.IsGet(httpRequestMessage.Method.Method);
+ };
+});
+```
+
+##### Customizing SqlClientInstrumentationOptions
+
+We vendor the [SQLClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient) instrumentation within our package while it's still in beta. When it reaches a stable release, we'll include it as a standard package reference. Until then, to customize the SQLClient instrumentation, add the `OpenTelemetry.Instrumentation.SqlClient` package reference to your project and use its public API.
+
+```
+dotnet add package --prerelease OpenTelemetry.Instrumentation.SqlClient
+```
+
+```C#
+builder.Services.AddOpenTelemetry().UseAzureMonitor().WithTracing(builder =>
+{
+ builder.AddSqlClientInstrumentation(options =>
+ {
+ options.SetDbStatementForStoredProcedure = false;
+ });
+});
+```
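+
+Beyond customizing the bundled instrumentations, extra sources can be registered through the OpenTelemetry API. The following sketch assumes a hypothetical `ActivitySource` named `MyCompany.MyProduct`:
+
+```csharp
+using System.Diagnostics;
+
+// Sketch only: export manually created spans alongside the Distro's defaults.
+var myActivitySource = new ActivitySource("MyCompany.MyProduct");
+
+builder.Services.AddOpenTelemetry()
+    .UseAzureMonitor()
+    .WithTracing(tracing => tracing.AddSource("MyCompany.MyProduct"));
+
+// Anywhere in the application after the host is built:
+using (var activity = myActivitySource.StartActivity("DoWork"))
+{
+    activity?.SetTag("CustomDimension1", "Value1");
+}
+```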
+
+### [ASP.NET](#tab/net)
+
+[Instrumentation libraries](https://opentelemetry.io/docs/specs/otel/overview/#instrumentation-libraries) can be added to your project to auto collect telemetry about specific components or dependencies. We recommend the following libraries:
+
+1. [OpenTelemetry.Instrumentation.AspNet](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNet) can be used to collect telemetry for incoming requests. Azure Monitor maps it to [Request Telemetry](./data-model-complete.md#request).
+
+ ```terminal
+ dotnet add package OpenTelemetry.Instrumentation.AspNet
+ ```
+
+ It requires adding an extra HttpModule to your `Web.config`:
+
+ ```xml
+ <system.webServer>
+ <modules>
+ <add
+ name="TelemetryHttpModule"
+ type="OpenTelemetry.Instrumentation.AspNet.TelemetryHttpModule,
+ OpenTelemetry.Instrumentation.AspNet.TelemetryHttpModule"
+ preCondition="integratedMode,managedHandler" />
+ </modules>
+ </system.webServer>
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.AspNet Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.AspNet)
+
+2. [OpenTelemetry.Instrumentation.Http](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http) can be used to collect telemetry for outbound http dependencies. Azure Monitor maps it to [Dependency Telemetry](./data-model-complete.md#dependency).
+
+ ```terminal
+ dotnet add package OpenTelemetry.Instrumentation.Http
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.Http Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.Http)
+
+3. [OpenTelemetry.Instrumentation.SqlClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient) can be used to collect telemetry for MS SQL dependencies. Azure Monitor maps it to [Dependency Telemetry](./data-model-complete.md#dependency).
+
+ ```terminal
+ dotnet add package --prerelease OpenTelemetry.Instrumentation.SqlClient
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.SqlClient Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.SqlClient)
+
+##### Global.asax.cs
+
+The following code sample expands on the previous example.
+It now collects telemetry, but doesn't yet send to Application Insights.
+
+```csharp
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+public class Global : System.Web.HttpApplication
+{
+ private TracerProvider? tracerProvider;
+ private MeterProvider? meterProvider;
+ internal static ILoggerFactory loggerFactory;
+
+ protected void Application_Start()
+ {
+ this.tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddAspNetInstrumentation()
+ .AddHttpClientInstrumentation()
+ .AddSqlClientInstrumentation()
+ .Build();
+
+ this.meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddAspNetInstrumentation()
+ .AddHttpClientInstrumentation()
+ .Build();
+
+ loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry();
+ });
+ }
+
+ protected void Application_End()
+ {
+ this.tracerProvider?.Dispose();
+ this.meterProvider?.Dispose();
+ loggerFactory?.Dispose();
+ }
+}
+```
+
+### [Console](#tab/console)
+
+[Instrumentation libraries](https://opentelemetry.io/docs/specs/otel/overview/#instrumentation-libraries) can be added to your project to auto collect telemetry about specific components or dependencies. We recommend the following libraries:
+
+1. [OpenTelemetry.Instrumentation.Http](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http) can be used to collect telemetry for outbound http dependencies. Azure Monitor maps it to [Dependency Telemetry](./data-model-complete.md#dependency).
+
+ ```terminal
+ dotnet add package OpenTelemetry.Instrumentation.Http
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.Http Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.Http)
+
+2. [OpenTelemetry.Instrumentation.SqlClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient) can be used to collect telemetry for MS SQL dependencies. Azure Monitor maps it to [Dependency Telemetry](./data-model-complete.md#dependency).
+
+ ```terminal
+ dotnet add package --prerelease OpenTelemetry.Instrumentation.SqlClient
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.SqlClient Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.SqlClient)
+
+The following code sample expands on the previous example.
+It now collects telemetry, but doesn't yet send to Application Insights.
+
+##### Program.cs
+
+```csharp
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+internal class Program
+{
+ static void Main(string[] args)
+ {
+ TracerProvider tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddHttpClientInstrumentation()
+ .AddSqlClientInstrumentation()
+ .Build();
+
+ MeterProvider meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddHttpClientInstrumentation()
+ .Build();
+
+ ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry();
+ });
+
+ Console.WriteLine("Hello, World!");
+
+ tracerProvider.Dispose();
+ meterProvider.Dispose();
+ loggerFactory.Dispose();
+ }
+}
+```
+
+### [WorkerService](#tab/workerservice)
+
+[Instrumentation libraries](https://opentelemetry.io/docs/specs/otel/overview/#instrumentation-libraries) can be added to your project to auto collect telemetry about specific components or dependencies. We recommend the following libraries:
+
+1. [OpenTelemetry.Instrumentation.Http](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http) can be used to collect telemetry for outbound HTTP dependencies. Azure Monitor maps it to [Dependency Telemetry](./data-model-complete.md#dependency).
+
+ ```terminal
+ dotnet add package OpenTelemetry.Instrumentation.Http
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.Http Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.Http)
+
+2. [OpenTelemetry.Instrumentation.SqlClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.SqlClient) can be used to collect telemetry for MS SQL dependencies. Azure Monitor maps it to [Dependency Telemetry](./data-model-complete.md#dependency).
+
+ ```terminal
+ dotnet add package --prerelease OpenTelemetry.Instrumentation.SqlClient
+ ```
+
+ A complete getting started guide is available here: [OpenTelemetry.Instrumentation.SqlClient Readme](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/tree/main/src/OpenTelemetry.Instrumentation.SqlClient)
+
+The following code sample expands on the previous example.
+It now collects telemetry, but doesn't yet send it to Application Insights.
+
+##### Program.cs
+
+```csharp
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
+
+public class Program
+{
+ public static void Main(string[] args)
+ {
+ var builder = Host.CreateApplicationBuilder(args);
+ builder.Services.AddHostedService<Worker>();
+
+ builder.Services.AddOpenTelemetry()
+ .WithTracing(builder =>
+ {
+ builder.AddHttpClientInstrumentation();
+ builder.AddSqlClientInstrumentation();
+ })
+ .WithMetrics(builder =>
+ {
+ builder.AddHttpClientInstrumentation();
+ });
+
+ builder.Logging.AddOpenTelemetry();
+
+ var host = builder.Build();
+ host.Run();
+ }
+}
+```
+++
+## Configure Azure Monitor
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+Application Insights offered many more configuration options via `ApplicationInsightsServiceOptions`. The following table maps them to their OpenTelemetry alternatives, and a sketch of the `service.version` mapping follows the table.
+
+| Application Insights Setting | OpenTelemetry Alternative |
+|--|-|
+| AddAutoCollectedMetricExtractor | N/A |
+| ApplicationVersion | Set "service.version" on Resource |
+| ConnectionString | See [instructions](./opentelemetry-configuration.md?tabs=aspnetcore#connection-string) on configuring the Connection String. |
+| DependencyCollectionOptions | N/A. To customize dependencies, review the available configuration options for applicable Instrumentation libraries. |
+| DeveloperMode | N/A |
+| EnableActiveTelemetryConfigurationSetup | N/A |
+| EnableAdaptiveSampling | N/A. Only fixed-rate sampling is supported. |
+| EnableAppServicesHeartbeatTelemetryModule | N/A |
+| EnableAuthenticationTrackingJavaScript | N/A |
+| EnableAzureInstanceMetadataTelemetryModule | N/A |
+| EnableDependencyTrackingTelemetryModule | See instructions on filtering Traces. |
+| EnableDiagnosticsTelemetryModule | N/A |
+| EnableEventCounterCollectionModule | N/A |
+| EnableHeartbeat | N/A |
+| EnablePerformanceCounterCollectionModule | N/A |
+| EnableQuickPulseMetricStream | AzureMonitorOptions.EnableLiveMetrics |
+| EnableRequestTrackingTelemetryModule | See instructions on filtering Traces. |
+| EndpointAddress | Use ConnectionString. |
+| InstrumentationKey | Use ConnectionString. |
+| RequestCollectionOptions | N/A. See OpenTelemetry.Instrumentation.AspNetCore options. |
+
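+For example, the `ApplicationVersion` setting corresponds to the `service.version` resource attribute. Here's a minimal sketch, assuming the Azure Monitor Distro (`Azure.Monitor.OpenTelemetry.AspNetCore`) and an illustrative version number; the connection string is still supplied through configuration or an environment variable:
+
+```csharp
+// Program.cs in an ASP.NET Core app (implicit usings assumed).
+using System.Collections.Generic;
+using Azure.Monitor.OpenTelemetry.AspNetCore;
+using OpenTelemetry.Resources;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// Equivalent of ApplicationInsightsServiceOptions.ApplicationVersion:
+// set "service.version" as an OpenTelemetry resource attribute.
+builder.Services.AddOpenTelemetry()
+    .UseAzureMonitor()
+    .ConfigureResource(resourceBuilder => resourceBuilder.AddAttributes(
+        new Dictionary<string, object> { ["service.version"] = "1.0.0" }));
+
+var app = builder.Build();
+app.Run();
+```
+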
+### Remove custom configurations
+
+The following scenarios are optional and only apply to advanced users.
+
+* If you have any remaining references to the `TelemetryClient`, which is used to [manually record telemetry](./api-custom-events-metrics.md), remove them.
+* If you added any [custom filtering or enrichment](./api-filtering-sampling.md) in the form of a custom `TelemetryProcessor` or `TelemetryInitializer`, remove it. You can find the registrations in your `ServiceCollection`:
+
+ ```csharp
+ builder.Services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();
+ ```
+
+ ```csharp
+ builder.Services.AddApplicationInsightsTelemetryProcessor<MyCustomTelemetryProcessor>();
+ ```
+
+* Remove JavaScript Snippet
+
+ If you used the Snippet provided by the Application Insights .NET SDK, it must also be removed.
+ For full code samples of what to remove, review the guide [enable client-side telemetry for web applications](./asp-net-core.md?tabs=netcorenew#enable-client-side-telemetry-for-web-applications).
+
+ If you added the JavaScript SDK to collect client-side telemetry, it can also be removed although it continues to work without the .NET SDK.
+ For full code samples of what to remove, review the [onboarding guide for the JavaScript SDK](./javascript-sdk.md).
+
+* Remove any Visual Studio artifacts
+
+ If you used Visual Studio to onboard to Application Insights, you might have files left over in your project.
+
+ - `Properties/ServiceDependencies` directory might have a reference to your Application Insights resource.
+
+### [ASP.NET](#tab/net)
+
+To send your telemetry to Application Insights, the Azure Monitor Exporter must be added to the configuration of all three signals.
+
+##### Global.asax.cs
+
+The following code sample expands on the previous example.
+It now collects telemetry and sends it to Application Insights.
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+public class Global : System.Web.HttpApplication
+{
+ private TracerProvider? tracerProvider;
+ private MeterProvider? meterProvider;
+ internal static ILoggerFactory loggerFactory;
+
+ protected void Application_Start()
+ {
+ this.tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddAspNetInstrumentation()
+ .AddHttpClientInstrumentation()
+ .AddSqlClientInstrumentation()
+ .AddAzureMonitorTraceExporter()
+ .Build();
+
+ this.meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddAspNetInstrumentation()
+ .AddHttpClientInstrumentation()
+ .AddAzureMonitorMetricExporter()
+ .Build();
+
+ loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry(o => o.AddAzureMonitorLogExporter());
+ });
+ }
+
+ protected void Application_End()
+ {
+ this.tracerProvider?.Dispose();
+ this.meterProvider?.Dispose();
+ loggerFactory?.Dispose();
+ }
+}
+```
+
+We recommend setting your Connection String in an environment variable:
+
+`APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String>`
+
+More options to configure the Connection String are detailed here: [Configure the Application Insights Connection String](./opentelemetry-configuration.md?tabs=net#connection-string).
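+
+If an environment variable isn't practical, the connection string can also be set in code through the exporter options (`Azure.Monitor.OpenTelemetry.Exporter`). A minimal sketch with a placeholder value:
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+// Placeholder connection string; replace with your own.
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddAzureMonitorTraceExporter(options =>
+    {
+        options.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000";
+    })
+    .Build();
+```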
+
+### [Console](#tab/console)
+
+To send your telemetry to Application Insights, the Azure Monitor Exporter must be added to the configuration of all three signals.
+
+##### Program.cs
+
+The following code sample expands on the previous example.
+It now collects telemetry and sends it to Application Insights.
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using Microsoft.Extensions.Logging;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+using OpenTelemetry.Trace;
+
+internal class Program
+{
+ static void Main(string[] args)
+ {
+ TracerProvider tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddHttpClientInstrumentation()
+ .AddSqlClientInstrumentation()
+ .AddAzureMonitorTraceExporter()
+ .Build();
+
+ MeterProvider meterProvider = Sdk.CreateMeterProviderBuilder()
+ .AddHttpClientInstrumentation()
+ .AddAzureMonitorMetricExporter()
+ .Build();
+
+ ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
+ {
+ builder.AddOpenTelemetry(o => o.AddAzureMonitorLogExporter());
+ });
+
+ Console.WriteLine("Hello, World!");
+
+ tracerProvider.Dispose();
+ meterProvider.Dispose();
+ loggerFactory.Dispose();
+ }
+}
+```
+
+We recommend setting your Connection String in an environment variable:
+
+`APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String>`
+
+More options to configure the Connection String are detailed here: [Configure the Application Insights Connection String](./opentelemetry-configuration.md?tabs=net#connection-string).
+
+### Remove custom configurations
+
+The following scenarios are optional and apply to advanced users.
+
+* If you have any remaining references to the `TelemetryClient`, which is used to [manually record telemetry](./api-custom-events-metrics.md), remove them.
+
+* Remove any [custom filtering or enrichment](./api-filtering-sampling.md) added as a custom `TelemetryProcessor` or `TelemetryInitializer`. You can find them referenced in your `TelemetryConfiguration`.
+
+* Remove any Visual Studio artifacts
+
+ If you used Visual Studio to onboard to Application Insights, you might have files left over in your project.
+
+ - `ConnectedService.json` might have a reference to your Application Insights resource.
+ - `[Your project's name].csproj` might have a reference to your Application Insights resource:
+
+ ```xml
+ <ApplicationInsightsResourceId>/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/Default-ApplicationInsights-EastUS/providers/microsoft.insights/components/WebApplication4</ApplicationInsightsResourceId>
+ ```
+
+### [WorkerService](#tab/workerservice)
+
+To send your telemetry to Application Insights, the Azure Monitor Exporter must be added to the configuration of all three signals.
+
+##### Program.cs
+
+The following code sample expands on the previous example.
+It now collects telemetry and sends it to Application Insights.
+
+```csharp
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
+
+public class Program
+{
+ public static void Main(string[] args)
+ {
+ var builder = Host.CreateApplicationBuilder(args);
+ builder.Services.AddHostedService<Worker>();
+
+ builder.Services.AddOpenTelemetry()
+ .WithTracing(builder =>
+ {
+ builder.AddHttpClientInstrumentation();
+ builder.AddSqlClientInstrumentation();
+ builder.AddAzureMonitorTraceExporter();
+ })
+ .WithMetrics(builder =>
+ {
+ builder.AddHttpClientInstrumentation();
+ builder.AddAzureMonitorMetricExporter();
+ });
+
+ builder.Logging.AddOpenTelemetry(builder => builder.AddAzureMonitorLogExporter());
+
+ var host = builder.Build();
+ host.Run();
+ }
+}
+```
+
+We recommend setting your Connection String in an environment variable:
+
+`APPLICATIONINSIGHTS_CONNECTION_STRING=<Your Connection String>`
+
+More options to configure the Connection String are detailed here: [Configure the Application Insights Connection String](./opentelemetry-configuration.md?tabs=net#connection-string).
+
+#### More configurations
+
+Application Insights offered many more configuration options via `ApplicationInsightsServiceOptions`. The following table maps them to their OpenTelemetry alternatives.
+
+| Application Insights Setting | OpenTelemetry Alternative |
+|--|-|
+| AddAutoCollectedMetricExtractor | N/A |
+| ApplicationVersion | Set "service.version" on Resource |
+| ConnectionString | See [instructions](./opentelemetry-configuration.md?tabs=aspnetcore#connection-string) on configuring the Connection String. |
+| DependencyCollectionOptions | N/A. To customize dependencies, review the available configuration options for applicable Instrumentation libraries. |
+| DeveloperMode | N/A |
+| EnableAdaptiveSampling | N/A. Only fixed-rate sampling is supported. |
+| EnableAppServicesHeartbeatTelemetryModule | N/A |
+| EnableAzureInstanceMetadataTelemetryModule | N/A |
+| EnableDependencyTrackingTelemetryModule | See instructions on filtering Traces. |
+| EnableDiagnosticsTelemetryModule | N/A |
+| EnableEventCounterCollectionModule | N/A |
+| EnableHeartbeat | N/A |
+| EnablePerformanceCounterCollectionModule | N/A |
+| EnableQuickPulseMetricStream | AzureMonitorOptions.EnableLiveMetrics |
+| EndpointAddress | Use ConnectionString. |
+| InstrumentationKey | Use ConnectionString. |
+
+### Remove custom configurations
+
+The following scenarios are optional and apply to advanced users.
+
+* If you have any remaining references to the `TelemetryClient`, which is used to [manually record telemetry](./api-custom-events-metrics.md), remove them.
+
+* If you added any [custom filtering or enrichment](./api-filtering-sampling.md) in the form of a custom `TelemetryProcessor` or `TelemetryInitializer`, remove it. You can find the registrations in your `ServiceCollection`:
+
+ ```csharp
+ builder.Services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();
+ ```
+
+ ```csharp
+ builder.Services.AddApplicationInsightsTelemetryProcessor<MyCustomTelemetryProcessor>();
+ ```
+
+* Remove any Visual Studio artifacts
+
+ If you used Visual Studio to onboard to Application Insights, you might have files left over in your project.
+
+ - `Properties/ServiceDependencies` directory might have a reference to your Application Insights resource.
+++
+> [!Tip]
+> Our product group is actively seeking feedback on this documentation. Provide feedback to otel@microsoft.com or see the [Support](#support) section.
+
+## Frequently asked questions
+
+This section is for customers who use telemetry initializers or processors, or write custom code against the classic Application Insights API to create custom telemetry.
+
+### How do the SDK APIs map to OpenTelemetry concepts?
+
+[OpenTelemetry](https://opentelemetry.io/) is a vendor-neutral observability framework. There are no Application Insights APIs in the OpenTelemetry SDK or libraries. Before migrating, it's important to understand some of OpenTelemetry's concepts.
+
+* In Application Insights, all telemetry was managed through a single `TelemetryClient` and `TelemetryConfiguration`. In OpenTelemetry, each of the three telemetry signals (Traces, Metrics, and Logs) has its own configuration. You can manually create telemetry via the .NET runtime without external libraries; a minimal sketch of the runtime tracing and metrics APIs follows this list. For more information, see the .NET guides on [distributed tracing](/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs), [metrics](/dotnet/core/diagnostics/metrics), and [logging](/dotnet/core/extensions/logging).
+
+* Application Insights used `TelemetryModules` to automatically collect telemetry for your application.
+Instead, OpenTelemetry uses [Instrumentation libraries](https://opentelemetry.io/docs/specs/otel/overview/#instrumentation-libraries) to collect telemetry from specific components (such as [AspNetCore](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.AspNetCore) for Requests and [HttpClient](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Http) for Dependencies).
+
+* Application Insights used `TelemetryInitializers` to enrich telemetry with additional information or to override properties.
+With OpenTelemetry, you can write a [Processor](https://opentelemetry.io/docs/collector/configuration/#processors) to customize a specific signal. Additionally, many OpenTelemetry Instrumentation libraries offer an `Enrich` method to customize the telemetry generated by that specific component.
+
+* Application Insights used `TelemetryProcessors` to filter telemetry. An OpenTelemetry [Processor](https://opentelemetry.io/docs/collector/configuration/#processors) can also be used to apply filtering rules on a specific signal.
+
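+As mentioned in the first bullet, the .NET runtime can emit traces and metrics without any external packages. A minimal sketch, using illustrative names:
+
+```csharp
+using System.Diagnostics;
+using System.Diagnostics.Metrics;
+
+// Built-in .NET APIs; an SDK or exporter is still needed to send the data anywhere.
+var activitySource = new ActivitySource("Company.Product.Name");
+var meter = new Meter("Company.Product.Name");
+var itemsSold = meter.CreateCounter<long>("items.sold");
+
+// StartActivity returns null unless a listener (for example, an OpenTelemetry
+// TracerProvider subscribed to this source) is active, hence the null checks.
+using (var activity = activitySource.StartActivity("DoWork"))
+{
+    activity?.SetTag("custom.tag", "value");
+    itemsSold.Add(1);
+}
+```
+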
+### How do Application Insights telemetry types map to OpenTelemetry?
+
+This table maps Application Insights data types to OpenTelemetry concepts and their .NET implementations.
+
+| Azure Monitor Table | Application Insights DataType | OpenTelemetry DataType | .NET Implementation |
+||-||--|
+| customEvents | EventTelemetry | N/A | N/A |
+| customMetrics | MetricTelemetry | Metrics | System.Diagnostics.Metrics.Meter |
+| dependencies | DependencyTelemetry | Spans (Client, Internal, Consumer) | System.Diagnostics.Activity |
+| exceptions | ExceptionTelemetry | Exceptions | System.Exception |
+| requests | RequestTelemetry | Spans (Server, Producer) | System.Diagnostics.Activity |
+| traces | TraceTelemetry | Logs | Microsoft.Extensions.Logging.ILogger |
+
+The following documents provide more information.
+
+- [Data Collection Basics of Azure Monitor Application Insights](./opentelemetry-overview.md)
+- [Application Insights telemetry data model](./data-model-complete.md)
+- [OpenTelemetry Concepts](https://opentelemetry.io/docs/concepts/)
+
+### How do Application Insights sampling concepts map to OpenTelemetry?
+
+While Application Insights offered multiple options to configure sampling, the Azure Monitor Exporter and Azure Monitor Distro offer only fixed-rate sampling. Only *Requests* and *Dependencies* (*OpenTelemetry Traces*) can be sampled.
+
+For code samples detailing how to configure sampling, see our guide [Enable Sampling](./opentelemetry-configuration.md#enable-sampling).
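+
+As a minimal sketch (assuming the `Azure.Monitor.OpenTelemetry.Exporter` package), fixed-rate sampling can be configured through the exporter's `SamplingRatio` option:
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+// Export approximately 10% of traces (Requests and Dependencies) to Azure Monitor.
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+    .AddAzureMonitorTraceExporter(options => options.SamplingRatio = 0.1F)
+    .Build();
+```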
+
+### How do Telemetry Processors and Initializers map to OpenTelemetry?
+
+In the Application Insights .NET SDK, telemetry processors filter, modify, or discard telemetry, and telemetry initializers add or modify custom properties. For more information, see the [Azure Monitor documentation](./api-filtering-sampling.md). OpenTelemetry replaces both concepts with activity or log processors, which enrich and filter telemetry.
+
+#### Filtering Traces
+
+To filter telemetry data in OpenTelemetry, you can implement an activity processor. This example is equivalent to the Application Insights example for filtering telemetry data described in the [Azure Monitor documentation](./api-filtering-sampling.md): it filters out unsuccessful dependency calls.
+
+```csharp
+using System.Diagnostics;
+using OpenTelemetry;
+
+internal sealed class SuccessfulDependencyFilterProcessor : BaseProcessor<Activity>
+{
+ public override void OnEnd(Activity activity)
+ {
+ if (!OKtoSend(activity))
+ {
+ activity.ActivityTraceFlags &= ~ActivityTraceFlags.Recorded;
+ }
+ }
+
+ private bool OKtoSend(Activity activity)
+ {
+ return activity.Kind == ActivityKind.Client && activity.Status == ActivityStatusCode.Ok;
+ }
+}
+```
+
+To use this processor, you need to create a `TracerProvider` and add the processor before `AddAzureMonitorTraceExporter`.
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+public static void Main()
+{
+ var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddProcessor(new SuccessfulDependencyFilterProcessor())
+ .AddAzureMonitorTraceExporter()
+ .Build();
+}
+```
+
+#### Filtering Logs
+
+[`ILogger`](/dotnet/core/extensions/logging)
+implementations have a built-in mechanism to apply [log
+filtering](/dotnet/core/extensions/logging?tabs=command-line#how-filtering-rules-are-applied).
+This filtering lets you control the logs that are sent to each registered
+provider, including the `OpenTelemetryLoggerProvider`. "OpenTelemetry" is the
+[alias](/dotnet/api/microsoft.extensions.logging.provideraliasattribute)
+for `OpenTelemetryLoggerProvider`, used in configuring filtering rules.
+
+The following example sets "Error" as the default `LogLevel`
+and "Warning" as the minimum `LogLevel` for a user-defined category.
+As written, these rules apply only to the `OpenTelemetryLoggerProvider`.
+
+```csharp
+builder.AddFilter<OpenTelemetryLoggerProvider>("*", LogLevel.Error);
+builder.AddFilter<OpenTelemetryLoggerProvider>("MyProduct.MyLibrary.MyClass", LogLevel.Warning);
+```
+
+For more information, see the [OpenTelemetry .NET documentation on logs](https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/docs/logs/README.md).
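+
+For context, here's a minimal sketch of where such filter rules sit relative to the Azure Monitor log exporter, assuming the `Azure.Monitor.OpenTelemetry.Exporter` package; the category name is illustrative:
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using Microsoft.Extensions.Logging;
+using OpenTelemetry.Logs;
+
+using var loggerFactory = LoggerFactory.Create(builder =>
+{
+    builder.AddOpenTelemetry(o => o.AddAzureMonitorLogExporter());
+
+    // Filter rules scoped to the OpenTelemetry provider only.
+    builder.AddFilter<OpenTelemetryLoggerProvider>("*", LogLevel.Error);
+    builder.AddFilter<OpenTelemetryLoggerProvider>("MyProduct.MyLibrary.MyClass", LogLevel.Warning);
+});
+```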
+
+#### Adding Custom Properties to Traces
+
+In OpenTelemetry, you can use activity processors to enrich telemetry data with more properties. It's similar to using telemetry initializers in Application Insights, where you can modify telemetry properties.
+
+By default, the Azure Monitor Exporter flags any HTTP request with a response code of 400 or greater as failed. However, if you want to treat 400-level response codes as successful, you can add an enriching activity processor that sets the status on the activity and adds a tag to include more telemetry properties. It's similar to adding or modifying properties using an initializer in Application Insights as described in the [Azure Monitor documentation](./api-filtering-sampling.md?tabs=javascriptwebsdkloaderscript#addmodify-properties-itelemetryinitializer).
+
+Here's an example of how to add custom properties and override the default behavior for certain response codes:
+
+```csharp
+using System.Diagnostics;
+using OpenTelemetry;
+
+/// <summary>
+/// Custom Processor that overrides the default behavior of treating response codes >= 400 as failed requests.
+/// </summary>
+internal class MyEnrichingProcessor : BaseProcessor<Activity>
+{
+ public override void OnEnd(Activity activity)
+ {
+ if (activity.Kind == ActivityKind.Server)
+ {
+ int responseCode = GetResponseCode(activity);
+
+ if (responseCode >= 400 && responseCode < 500)
+ {
+ // If we set the Success property, the SDK won't change it
+ activity.SetStatus(ActivityStatusCode.Ok);
+
+ // Allow these requests to be filtered in the portal
+ activity.SetTag("Overridden400s", "true");
+ }
+
+ // else leave the SDK to set the Success property
+ }
+ }
+
+ private int GetResponseCode(Activity activity)
+ {
+ foreach (ref readonly var tag in activity.EnumerateTagObjects())
+ {
+ if (tag.Key == "http.response.status_code" && tag.Value is int value)
+ {
+ return value;
+ }
+ }
+
+ return 0;
+ }
+}
+```
+
+To use this processor, you need to create a `TracerProvider` and add the processor before `AddAzureMonitorTraceExporter`.
+
+```csharp
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Trace;
+
+public static void Main()
+{
+ var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource("Company.Product.Name")
+ .AddProcessor(new MyEnrichingProcessor())
+ .AddAzureMonitorTraceExporter()
+ .Build();
+}
+```
+
+### How do I manually track telemetry using OpenTelemetry?
+
+#### Sending Traces - Manual
+
+Traces in Application Insights are stored as `RequestTelemetry` and `DependencyTelemetry`. In OpenTelemetry, traces are modeled as `Span` using the `Activity` class.
+
+OpenTelemetry .NET uses the `ActivitySource` and `Activity` classes for tracing, which are part of the .NET runtime. This approach is distinctive because the .NET implementation integrates the tracing API directly into the runtime itself. The `System.Diagnostics.DiagnosticSource` package allows developers to use `ActivitySource` to create and manage `Activity` instances. This method provides a seamless way to add tracing to .NET applications without relying on external libraries, using the built-in capabilities of the .NET ecosystem. For more detailed information, see the [distributed tracing instrumentation walkthroughs](/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs).
+
+Here's how to migrate manual tracing:
+
+ > [!Note]
+ > In Application Insights, the role name and role instance could be set on each telemetry item. However, with the Azure Monitor Exporter, they can't be customized per item. The role name and role instance are extracted from the OpenTelemetry resource and applied to all telemetry. For more information, see [Set the cloud role name and the cloud role instance](./opentelemetry-configuration.md?tabs=aspnetcore#set-the-cloud-role-name-and-the-cloud-role-instance).
+
+#### DependencyTelemetry
+
+Application Insights `DependencyTelemetry` is used to model outgoing requests. Here's how to convert it to OpenTelemetry:
+
+**Application Insights Example:**
+
+```csharp
+DependencyTelemetry dep = new DependencyTelemetry
+{
+ Name = "DependencyName",
+ Data = "https://www.example.com/",
+ Type = "Http",
+ Target = "www.example.com",
+ Duration = TimeSpan.FromSeconds(10),
+ ResultCode = "500",
+ Success = false
+};
+
+dep.Context.Cloud.RoleName = "MyRole";
+dep.Context.Cloud.RoleInstance = "MyRoleInstance";
+dep.Properties["customprop1"] = "custom value1";
+client.TrackDependency(dep);
+```
+
+**OpenTelemetry Example:**
+
+```csharp
+var activitySource = new ActivitySource("Company.Product.Name");
+var resourceAttributes = new Dictionary<string, object>
+{
+ { "service.name", "MyRole" },
+ { "service.instance.id", "MyRoleInstance" }
+};
+
+var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
+
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .SetResourceBuilder(resourceBuilder)
+ .AddSource(activitySource.Name)
+ .AddAzureMonitorTraceExporter()
+ .Build();
+
+// Emit traces
+using (var activity = activitySource.StartActivity("DependencyName", ActivityKind.Client))
+{
+ activity?.SetTag("url.full", "https://www.example.com/");
+ activity?.SetTag("server.address", "www.example.com");
+ activity?.SetTag("http.request.method", "GET");
+ activity?.SetTag("http.response.status_code", "500");
+ activity?.SetTag("customprop1", "custom value1");
+ activity?.SetStatus(ActivityStatusCode.Error);
+ activity?.SetEndTime(activity.StartTimeUtc.AddSeconds(10));
+}
+```
+
+#### RequestTelemetry
+
+Application Insights `RequestTelemetry` models incoming requests. Here's how to migrate it to OpenTelemetry:
+
+**Application Insights Example:**
+
+```csharp
+RequestTelemetry req = new RequestTelemetry
+{
+ Name = "RequestName",
+ Url = new Uri("http://example.com"),
+ Duration = TimeSpan.FromSeconds(10),
+ ResponseCode = "200",
+ Success = true,
+ Properties = { ["customprop1"] = "custom value1" }
+};
+
+req.Context.Cloud.RoleName = "MyRole";
+req.Context.Cloud.RoleInstance = "MyRoleInstance";
+client.TrackRequest(req);
+```
+
+**OpenTelemetry Example:**
+
+```csharp
+var activitySource = new ActivitySource("Company.Product.Name");
+var resourceAttributes = new Dictionary<string, object>
+{
+ { "service.name", "MyRole" },
+ { "service.instance.id", "MyRoleInstance" }
+};
+
+var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
+
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .SetResourceBuilder(resourceBuilder)
+ .AddSource(activitySource.Name)
+ .AddAzureMonitorTraceExporter()
+ .Build();
+
+// Emit traces
+using (var activity = activitySource.StartActivity("RequestName", ActivityKind.Server))
+{
+ activity?.SetTag("url.scheme", "https");
+ activity?.SetTag("server.address", "www.example.com");
+ activity?.SetTag("url.path", "/");
+ activity?.SetTag("http.response.status_code", "200");
+ activity?.SetTag("customprop1", "custom value1");
+ activity?.SetStatus(ActivityStatusCode.Ok);
+}
+```
+
+#### Custom Operations Tracking
+
+In Application Insights, custom operations are tracked with the `StartOperation` and `StopOperation` methods. In OpenTelemetry .NET, achieve the same with `ActivitySource` and `Activity`. For operations with `ActivityKind.Server` and `ActivityKind.Consumer`, Azure Monitor Exporter generates `RequestTelemetry`. For `ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal`, it generates `DependencyTelemetry`. For more information on custom operations tracking, see the [Azure Monitor documentation](./custom-operations-tracking.md). For more on using `ActivitySource` and `Activity` in .NET, see the [.NET distributed tracing instrumentation walkthroughs](/dotnet/core/diagnostics/distributed-tracing-instrumentation-walkthroughs#activity).
+
+Here's an example of how to start and stop an activity for custom operations:
+
+```csharp
+using System.Diagnostics;
+using OpenTelemetry;
+
+var activitySource = new ActivitySource("Company.Product.Name");
+
+using var tracerProvider = Sdk.CreateTracerProviderBuilder()
+ .AddSource(activitySource.Name)
+ .AddAzureMonitorTraceExporter()
+ .Build();
+
+// Start a new activity
+using (var activity = activitySource.StartActivity("CustomOperation", ActivityKind.Server))
+{
+ activity?.SetTag("customTag", "customValue");
+
+ // Perform your custom operation logic here
+
+ // No need to explicitly call Activity.Stop() because the using block automatically disposes the Activity object, which stops it.
+}
+```
+
+#### Sending Logs
+
+Logs in Application Insights are stored as `TraceTelemetry` and `ExceptionTelemetry`.
+
+##### TraceTelemetry
+
+In OpenTelemetry, logging is integrated via the `ILogger` interface. Here's how to migrate `TraceTelemetry`:
+
+**Application Insights Example:**
+
+```csharp
+TraceTelemetry traceTelemetry = new TraceTelemetry
+{
+ Message = "hello from tomato 2.99",
+ SeverityLevel = SeverityLevel.Warning,
+};
+
+traceTelemetry.Context.Cloud.RoleName = "MyRole";
+traceTelemetry.Context.Cloud.RoleInstance = "MyRoleInstance";
+client.TrackTrace(traceTelemetry);
+```
+
+**OpenTelemetry Example:**
+
+```csharp
+var resourceAttributes = new Dictionary<string, object>
+{
+ { "service.name", "MyRole" },
+ { "service.instance.id", "MyRoleInstance" }
+};
+
+var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
+
+using var loggerFactory = LoggerFactory.Create(builder => builder
+ .AddOpenTelemetry(loggerOptions =>
+ {
+ loggerOptions.SetResourceBuilder(resourceBuilder);
+ loggerOptions.AddAzureMonitorLogExporter();
+ }));
+
+// Create a new ILogger instance from the LoggerFactory above
+var logger = loggerFactory.CreateLogger<Program>();
+
+// Use the logger instance to write a new log
+logger.FoodPrice("tomato", 2.99);
+
+internal static partial class LoggerExtensions
+{
+ [LoggerMessage(LogLevel.Warning, "Hello from `{name}` `{price}`.")]
+ public static partial void FoodPrice(this ILogger logger, string name, double price);
+}
+```
+
+##### ExceptionTelemetry
+
+Application Insights uses `ExceptionTelemetry` to log exceptions. Here's how to migrate to OpenTelemetry:
+
+**Application Insights Example:**
+
+```csharp
+ExceptionTelemetry exceptionTelemetry = new ExceptionTelemetry(new Exception("Test exception"))
+{
+ SeverityLevel = SeverityLevel.Error
+};
+
+exceptionTelemetry.Context.Cloud.RoleName = "MyRole";
+exceptionTelemetry.Context.Cloud.RoleInstance = "MyRoleInstance";
+exceptionTelemetry.Properties["customprop1"] = "custom value1";
+client.TrackException(exceptionTelemetry);
+```
+
+**OpenTelemetry Example:**
+
+```csharp
+var resourceAttributes = new Dictionary<string, object>
+{
+ { "service.name", "MyRole" },
+ { "service.instance.id", "MyRoleInstance" }
+};
+
+var resourceBuilder = ResourceBuilder.CreateDefault().AddAttributes(resourceAttributes);
+
+using var loggerFactory = LoggerFactory.Create(builder => builder
+ .AddOpenTelemetry(loggerOptions =>
+ {
+ loggerOptions.SetResourceBuilder(resourceBuilder);
+ loggerOptions.AddAzureMonitorLogExporter();
+ }));
+
+// Create a new ILogger instance from the LoggerFactory above.
+var logger = loggerFactory.CreateLogger<Program>();
+
+try
+{
+ // Simulate exception
+ throw new Exception("Test exception");
+}
+catch (Exception ex)
+{
+ logger?.LogError(ex, "An error occurred");
+}
+```
+
+#### Sending Metrics
+
+Metrics in Application Insights are stored as `MetricTelemetry`. In OpenTelemetry, metrics are modeled with the `Meter` class and its instruments from the `System.Diagnostics.DiagnosticSource` package.
+
+Application Insights has both non-preaggregating (`TrackMetric()`) and preaggregating (`GetMetric().TrackValue()`) metric APIs. Unlike OpenTelemetry, Application Insights has no notion of [instruments](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/supplementary-guidelines.md#instrument-selection); it uses the same API for all metric scenarios.
+
+OpenTelemetry, on the other hand, requires users to first [pick the right metric instrument](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/supplementary-guidelines.md#instrument-selection) based on the actual semantics of the metric. For example, to count something (like the total number of server requests received), use an [OpenTelemetry Counter](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#counter). To calculate various percentiles (like the P99 value of server latency), use an [OpenTelemetry Histogram](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/api.md#histogram). Because of this fundamental difference between Application Insights and OpenTelemetry, no direct comparison is made between them.
+
+Unlike Application Insights, OpenTelemetry doesn't provide built-in mechanisms to enrich or filter metrics. In Application Insights, telemetry processors and initializers could be used to modify or discard metrics, but this capability isn't available in OpenTelemetry.
+
+Additionally, OpenTelemetry doesn't support sending raw metrics directly, as there's no equivalent to the `TrackMetric()` functionality found in Application Insights.
+
+Migrating from Application Insights to OpenTelemetry involves replacing all Application Insights Metric API usages with the OpenTelemetry API. It requires understanding the various OpenTelemetry Instruments and their semantics.
+
+> [!Tip]
+> The histogram is the most versatile and the closest equivalent to the Application Insights `GetMetric().TrackValue()` API. You can replace Application Insights Metric APIs with Histogram to achieve the same purpose.
+
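+Here's a minimal sketch of both instruments wired to the Azure Monitor metric exporter; the meter and instrument names are illustrative, and the `Azure.Monitor.OpenTelemetry.Exporter` package is assumed:
+
+```csharp
+using System.Diagnostics.Metrics;
+using Azure.Monitor.OpenTelemetry.Exporter;
+using OpenTelemetry;
+using OpenTelemetry.Metrics;
+
+var meter = new Meter("Company.Product.Name");
+
+// Counter: count discrete occurrences, such as requests received.
+Counter<long> requestCounter = meter.CreateCounter<long>("server.requests");
+
+// Histogram: record measured values; the closest equivalent to GetMetric().TrackValue().
+Histogram<double> requestDuration = meter.CreateHistogram<double>("server.request.duration");
+
+using var meterProvider = Sdk.CreateMeterProviderBuilder()
+    .AddMeter(meter.Name)
+    .AddAzureMonitorMetricExporter()
+    .Build();
+
+requestCounter.Add(1);
+requestDuration.Record(123.45);
+```
+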
+#### Other Telemetry Types
+
+##### CustomEvents
+
+Not supported in OpenTelemetry.
+
+**Application Insights Example:**
+
+```csharp
+TelemetryClient.TrackEvent()
+```
+
+##### AvailabilityTelemetry
+
+Not supported in OpenTelemetry.
+
+**Application Insights Example:**
+
+```csharp
+TelemetryClient.TrackAvailability()
+```
+
+##### PageViewTelemetry
+
+Not supported in OpenTelemetry.
+
+**Application Insights Example:**
+
+```csharp
+TelemetryClient.TrackPageView()
+```
+
+## Next steps
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+* [Azure Monitor Distro Demo project](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.AspNetCore/tests/Azure.Monitor.OpenTelemetry.AspNetCore.Demo)
+* [OpenTelemetry SDK's getting started guide](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/src/OpenTelemetry)
+* [OpenTelemetry's example ASP.NET Core project](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/examples/AspNetCore)
+* [C# and .NET Logging](/dotnet/core/extensions/logging)
+* [Azure Monitor OpenTelemetry getting started with ASP.NET Core](./opentelemetry-enable.md?tabs=aspnetcore)
+
+### [ASP.NET](#tab/net)
+
+* [OpenTelemetry SDK's getting started guide](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/src/OpenTelemetry)
+* [OpenTelemetry's example ASP.NET project](https://github.com/open-telemetry/opentelemetry-dotnet-contrib/blob/main/examples/AspNet/Global.asax.cs)
+* [C# and .NET Logging](/dotnet/core/extensions/logging)
+* [Azure Monitor OpenTelemetry getting started with .NET](./opentelemetry-enable.md?tabs=net)
+
+### [Console](#tab/console)
+
+* [OpenTelemetry SDK's getting started guide](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/src/OpenTelemetry)
+* OpenTelemetry's example projects:
+ * [Getting Started with Traces](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/trace/getting-started-console)
+ * [Getting Started with Metrics](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/metrics/getting-started-console)
+ * [Getting Started with Logs](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/docs/logs/getting-started-console)
+* [C# and .NET Logging](/dotnet/core/extensions/logging)
+* [Azure Monitor OpenTelemetry getting started with .NET](./opentelemetry-enable.md?tabs=net)
+
+### [WorkerService](#tab/workerservice)
+
+* [OpenTelemetry SDK's getting started guide](https://github.com/open-telemetry/opentelemetry-dotnet/tree/main/src/OpenTelemetry)
+* [Logging in C# and .NET](/dotnet/core/extensions/logging)
+* [Azure Monitor OpenTelemetry getting started with .NET](./opentelemetry-enable.md?tabs=net)
+++
+> [!Tip]
+> Our product group is actively seeking feedback on this documentation. Provide feedback to otel@microsoft.com or see the [Support](#support) section.
+
+## Support
+
+### [ASP.NET Core](#tab/aspnetcore)
+
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
+
+#### [.NET](#tab/net)
+
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
+
+### [Console](#tab/console)
+
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
+
+### [WorkerService](#tab/workerservice)
+
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.
+- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).
++
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
- [ASP.NET Core Application](/aspnet/core/introduction-to-aspnet-core) using an officially supported version of [.NET](https://dotnet.microsoft.com/download/dotnet)
-#### [.NET](#tab/net)
+> [!Tip]
+> If you're migrating from the Application Insights Classic API, see our [migration documentation](./opentelemetry-dotnet-migrate.md).
+
+### [.NET](#tab/net)
- Application using an officially supported version of [.NET Core](https://dotnet.microsoft.com/download/dotnet) or [.NET Framework](https://dotnet.microsoft.com/download/dotnet-framework) that's at least .NET Framework 4.6.2
-#### [Java](#tab/java)
+> [!Tip]
+> If you're migrating from the Application Insights Classic API, see our [migration documentation](./opentelemetry-dotnet-migrate.md).
+
+### [Java](#tab/java)
- A Java application using Java 8+
Follow the steps in this section to instrument your application with OpenTelemet
- [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) - [Azure Monitor OpenTelemetry Exporter supported runtimes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/monitor/monitor-opentelemetry-exporter#currently-supported-environments)
-#### [Python](#tab/python)
+> [!Tip]
+> If you're migrating from the Application Insights Classic API, see our [migration documentation](./opentelemetry-nodejs-migrate.md).
+
+### [Python](#tab/python)
- Python Application using Python 3.8+
+> [!Tip]
+> If you're migrating from OpenCensus, see our [migration documentation](./opentelemetry-python-opencensus-migrate.md).
+ ### Install the client library
azure-monitor Opentelemetry Nodejs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-migrate.md
This guide provides two options to upgrade from the Azure Monitor Application In
The following changes and limitations apply to both upgrade paths.
-##### Node < 14 support
+##### Node.js version support
-OpenTelemetry JavaScript's monitoring solutions officially support only Node version 14+. Check the [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) for the latest updates. Users on older versions like Node 8, previously supported by the ApplicationInsights SDK, can still use OpenTelemetry solutions but can experience unexpected or breaking behavior.
+For a version of Node.js to be supported by the ApplicationInsights 3.X SDK, it must have overlapping support from both the Azure SDK and OpenTelemetry. Check the [OpenTelemetry supported runtimes](https://github.com/open-telemetry/opentelemetry-js#supported-runtimes) for the latest updates. Users on older versions like Node 8, previously supported by the ApplicationInsights SDK, can still use OpenTelemetry solutions but might experience unexpected or breaking behavior. The ApplicationInsights SDK also depends on the Azure SDK for JS, which doesn't guarantee support for any Node.js versions that have reached end-of-life. See [the Azure SDK for JS support policy](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md).
##### Configuration options
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
The collection endpoint preaggregates events before ingestion sampling. For this
|-|--|-|--| | ASP.NET | Supported <sup>1<sup> | Not supported | Not supported | | ASP.NET Core | Supported <sup>2<sup> | Not supported | Not supported |
-| Java | Not supported | Not supported | [Supported](opentelemetry-add-modify.md?tabs=java#metrics) |
+| Java | Not supported | Not supported | [Supported](opentelemetry-add-modify.md?tabs=java#send-custom-telemetry-using-the-application-insights-classic-api) |
| Node.js | Not supported | Not supported | Not supported | 1. [ASP.NET autoinstrumentation on virtual machines/virtual machine scale sets](./azure-vm-vmss-apps.md) and [on-premises](./application-insights-asp-net-agent.md) emits standard metrics without dimensions. The same is true for Azure App Service, but the collection level must be set to recommended. The SDK is required for all dimensions.
azure-monitor Sla Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sla-report.md
- Title: Downtime, SLA, and outages workbook - Application Insights
-description: Calculate and report SLA for web test through a single pane of glass across your Application Insights resources and Azure subscriptions.
- Previously updated : 04/28/2024---
-# Downtime, SLA, and outages workbook
-
-This article introduces a simple way to calculate and report service-level agreement (SLA) for web tests through a single pane of glass across your Application Insights resources and Azure subscriptions. The downtime and outage report provides powerful prebuilt queries and data visualizations to enhance your understanding of your customer's connectivity, typical application response time, and experienced downtime.
-
-The SLA workbook template is accessible through the workbook gallery in your Application Insights resource. Or, in the left pane, select **Availability** and then select **SLA Report** at the top of the screen.
--
-## Parameter flexibility
-
-The parameters set in the workbook influence the rest of your report.
--
-* `Subscriptions`, `App Insights Resources`, and `Web Test`: These parameters determine your high-level resource options. They're based on Log Analytics queries and are used in every report query.
-* `Failure Threshold` and `Outage Window`: You can use these parameters to determine your own criteria for a service outage. An example is the criteria for an App Insights Availability alert based on a failed location counter over a chosen period. The typical threshold is three locations over a five-minute window.
-* `Maintenance Period`: You can use this parameter to select your typical maintenance frequency. `Maintenance Window` is a datetime selector for an example maintenance period. All data that occurs during the identified period will be ignored in your results.
-* `Availability Target %`: This parameter specifies your target objective and takes custom values.
-
-## Overview page
-
-The overview page contains high-level information about your:
--- Total SLA (excluding maintenance periods, if defined).-- End-to-end outage instances.-- Application downtime.-
-Outage instances are defined by when a test starts to fail until it's successful, based on your outage parameters. If a test starts failing at 8:00 AM and succeeds again at 10:00 AM, that entire period of data is considered the same outage.
--
-You can also investigate the longest outage that occurred over your reporting period.
-
-Some tests are linkable back to their Application Insights resource for further investigation. But that's only possible in the [workspace-based Application Insights resource](create-workspace-resource.md).
-
-## Downtime, outages, and failures
-
-The **Outages & Downtime** tab has information on total outage instances and total downtime broken down by test.
--
-The **Failures by Location** tab has a geo-map of failed testing locations to help identify potential problem connection areas.
--
-## Edit the report
-
-You can edit the report like any other [Azure Monitor workbook](../visualize/workbooks-overview.md).
--
-You can customize the queries or visualizations based on your team's needs.
--
-### Log Analytics
-
-The queries can all be run in [Log Analytics](../logs/log-analytics-overview.md) and used in other reports or dashboards.
--
-Remove the parameter restriction and reuse the core query.
--
-## Access and sharing
-
-The report can be shared with your teams and leadership or pinned to a dashboard for further use. The user needs to have read permission/access to the Application Insights resource where the actual workbook is stored.
--
-## Next steps
--- Learn some [Log Analytics query optimization tips](../logs/query-optimization.md).-- Learn how to [create a chart in workbooks](../visualize/workbooks-chart-visualizations.md).-- Learn how to monitor your website with [availability tests](availability-overview.md).
azure-monitor Transaction Search And Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-search-and-diagnostics.md
The first time you do this step, you're asked to configure a link to your Azure
In addition to the out-of-the-box telemetry sent by Application Insights SDK, you can:
-* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#logs). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
+* Capture log traces from your favorite logging framework in [.NET](./asp-net-trace-logs.md) or [Java](./opentelemetry-add-modify.md?tabs=java#send-custom-telemetry-using-the-application-insights-classic-api). This means you can search through your log traces and correlate them with page views, exceptions, and other events.
* [Write code](./api-custom-events-metrics.md) to send custom events, page views, and exceptions.
azure-monitor Prometheus Remote Write Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-remote-write-virtual-machines.md
Onboarding to Azure Arc-enabled services allows you to manage and configure non-
- Prometheus versions greater than v2.48 are required for Microsoft Entra ID application authentication. ### Azure Monitor workspace+ This article covers sending Prometheus metrics to an Azure Monitor workspace. To create an Azure monitor workspace, see [Manage an Azure Monitor workspace](./azure-monitor-workspace-manage.md#create-an-azure-monitor-workspace). ## Permissions
-Administrator permissions for the cluster or resource are required to complete the steps in this article.
+Administrator permissions for the cluster or resource are required to complete the steps in this article.
## Set up authentication for remote-write
The output contains the `appId` and `password` values. Save these values to use
For more information, see [az ad app create](/cli/azure/ad/app#az-ad-app-create) and [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac). + ## Configure remote-write
-Remote-write is configured in the Prometheus configuration file `prometheus.yml`.
+Remote-write is configured in the Prometheus configuration file `prometheus.yml`, or in the Prometheus Operator.
For more information on configuring remote-write, see the Prometheus.io article: [Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). For more on tuning the remote write configuration, see [Remote write tuning](https://prometheus.io/docs/practices/remote_write/#remote-write-tuning).
-To send data to your Azure Monitor Workspace, add the following section to the configuration file of your self-managed Prometheus instance.
+### [Configure remote-write for Prometheus Operator](#tab/prom-operator)
+
+If you're running the Prometheus Operator on a Kubernetes cluster, follow these steps to send data to your Azure Monitor workspace.
+
+1. If you're using Microsoft Entra ID authentication, base64-encode the client secret and then apply it as a secret to your Kubernetes cluster. Save the following as a YAML file. Skip this step if you're using managed identity authentication.
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: remote-write-secret
+ namespace: monitoring # Replace with namespace where Prometheus Operator is deployed.
+type: Opaque
+data:
+ password: <base64-encoded-secret>
+
+```
+
+Apply the secret.
+
+```azurecli
+# set context to your cluster
+az aks get-credentials -g <aks-rg-name> -n <aks-cluster-name>
+
+kubectl apply -f <remote-write-secret.yaml>
+```
+
+2. Update the values in the remote write section of the Prometheus Operator. Copy the following and save it as a YAML file. For the values to use in the YAML file, see the following section. For more details on the Azure Monitor workspace remote write specification in the Prometheus Operator, see the [Prometheus Operator documentation](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#azuread).
+
+```yaml
+prometheus:
+  prometheusSpec:
+    remoteWrite:
+      - url: "<metrics ingestion endpoint for your Azure Monitor workspace>"
+        azureAd:
+          # AzureAD configuration.
+          # The Azure Cloud. Options are 'AzurePublic', 'AzureChina', or 'AzureGovernment'.
+          cloud: 'AzurePublic'
+          managedIdentity:
+            clientId: "<clientId of the managed identity>"
+          oauth:
+            clientId: "<clientId of the Entra app>"
+            clientSecret:
+              name: remote-write-secret
+              key: password
+            tenantId: "<Azure subscription tenant Id>"
+```
+
+3. Use Helm to update your remote write configuration with the YAML file from the previous step.
+
+```azurecli
+helm upgrade -f <YAML-FILENAME>.yml prometheus prometheus-community/kube-prometheus-stack --namespace <namespace where Prometheus Operator is deployed>
+```
+
+### [Configure remote-write for Prometheus running in VMs or other environments](#tab/prom-vm)
+
+To send data to your Azure Monitor Workspace, add the following section to the configuration file (prometheus.yml) of your self-managed Prometheus instance.
```yaml remote_write:
remote_write:
tenant_id: "<Azure subscription tenant Id>" ``` ++ The `url` parameter specifies the metrics ingestion endpoint of the Azure Monitor workspace. It can be found on the Overview page of your Azure Monitor workspace in the Azure portal. :::image type="content" source="media/prometheus-remote-write-virtual-machines/metrics-ingestion-endpoint.png" lightbox="media/prometheus-remote-write-virtual-machines/metrics-ingestion-endpoint.png" alt-text="A screenshot showing the metrics ingestion endpoint for an Azure Monitor workspace.":::
azure-monitor Profiler Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-data.md
Title: Generate load and view Application Insights Profiler data
description: Generate load to your Azure service to view the Profiler data ms.contributor: charles.weininger Previously updated : 09/22/2023 Last updated : 07/11/2024
azure-monitor Profiler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-overview.md
Title: Analyze application performance traces with Application Insights Profiler
description: Identify the hot path in your web server code with a low-footprint profiler. ms.contributor: charles.weininger Previously updated : 12/11/2023 Last updated : 07/11/2024
Enable the Profiler on all your Azure applications to gather data with the follo
Each of these triggers can be [configured, enabled, or disabled](./profiler-settings.md#trigger-settings).
-## Overhead and sampling algorithm
+## Sampling rate and overhead
-Profiler randomly runs two minutes per hour on each virtual machine hosting applications with Profiler enabled. When Profiler is running, it adds from 5 percent to 15 percent CPU overhead to the server.
+Profiler randomly runs two minutes per hour on each virtual machine hosting applications with Profiler enabled.
+ ## Supported in Profiler
azure-monitor Profiler Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-settings.md
Title: Configure Application Insights Profiler | Microsoft Docs
description: Use the Application Insights Profiler settings pane to see Profiler status and start profiling sessions ms.contributor: Charles.Weininger Previously updated : 09/22/2023 Last updated : 07/11/2024 # Configure Application Insights Profiler
Memory % | Percentage of memory used while Profiler was running.
## Next steps
-[Enable Profiler and view traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json)
+[Enable Profiler and view traces](profiler.md?toc=/azure/azure-monitor/toc.json)
[profiler-on-demand]: ./media/profiler-settings/profiler-on-demand.png [performance-blade]: ./media/profiler-settings/performance-blade.png
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
Profiler isn't currently supported on free or shared app service plans. Upgrade
If the data you're trying to view is older than two weeks, try limiting your time filter and try again. Traces are deleted after seven days.
+## Are you aware of the Profiler sampling rate and overhead?
+
+Profiler randomly runs two minutes per hour on each virtual machine hosting applications with Profiler enabled.
++ ## Can you access the gateway? Check that a firewall or proxies aren't blocking your access to [this webpage](https://gateway.azureserviceprofiler.net).
azure-monitor Profiler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler.md
# Enable Profiler for Azure App Service apps
-Application Insights Profiler is preinstalled as part of the Azure App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on App Service by using the Basic service tier or higher. Follow these steps, even if you included the Application Insights SDK in your application at build time.
+[Application Insights Profiler](./profiler-overview.md) is preinstalled as part of the Azure App Service runtime. You can run Profiler on ASP.NET and ASP.NET Core apps running on App Service by using the Basic service tier or higher. Follow these steps, even if you included the Application Insights SDK in your application at build time.
To enable Profiler on Linux, walk through the [ASP.NET Core Azure Linux web apps instructions](profiler-aspnetcore-linux.md).
azure-monitor Monitor Arc Enabled Vm With Scom Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/scom-manage-instance/monitor-arc-enabled-vm-with-scom-managed-instance.md
- Title: Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance (preview)
-description: Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Operations Manager users providing monitoring continuity for cloud and on-premises environments across the cloud adoption journey.
--- Previously updated : 05/22/2024------
-# Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance (preview)
-
->[!NOTE]
->This feature is currently in preview.
-
-Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Operations Manager users providing monitoring continuity for cloud and on-premises environments across the cloud adoption journey.
-
-## SCOM Managed Instance Agent
-
-In Azure Monitor SCOM Managed Instance, an agent is a service that is installed on a computer that looks for configuration data and proactively collects information for analysis and reporting, measures the health state of monitored objects like an SQL database or logical disk, and executes tasks on demand by an operator or in response to a condition. It allows SCOM Managed Instance to monitor Windows operating systems and the components installed on them, such as a website or an Active Directory domain controller.
-
-## Support for Azure and Off-Azure workloads
-
-One of the most important monitoring scenarios is that of on-premises (off-Azure) workloads that unlock SCOM Managed Instance as a true **Hybrid monitoring solution**.
-
-The following are the supported monitoring scenarios:
-
-|Type of endpoint|Trust|Experience|
-||||
-|Azure VM |Any type|Azure portal|
-|Arc VM |Any type|Azure portal|
-|Line of sight on-premises agent|Trusted|OpsConsole|
-|Line of sight on-premises agent|Untrusted|Managed Gateway and OpsConsole|
-|No Line of sight on-premises agent|Trusted/Untrusted|Managed Gateway and OpsConsole|
-
-SCOM Managed Instance users will be able to:
-- Monitor VMs and applications which are in untrusted domain/workgroup.
-- Onboard endpoints (including Agent installation and setup) seamlessly from SCOM Managed Instance portal.
-- Set up and manage Gateways seamlessly from SCOM Managed Instance portal on Arc-enabled servers for off-Azure monitoring.
-- Set high availability at Gateway plane for agent failover as described in [Designing for High Availability and Disaster Recovery](/system-center/scom/plan-hadr-design).
-
-## Linux monitoring with SCOM Managed Instance
-
-With SCOM Managed Instance, you can monitor Linux workloads that are on-premises and behind a gateway server. At this stage, we don't support monitoring Linux VMs hosted in Azure. For more information, see [How to monitor on-premises Linux VMs](/system-center/scom/manage-deploy-crossplat-agent-console).
-
-For more information, see [Azure Monitor SCOM Managed Instance frequently asked questions](scom-managed-instance-faq.yml).
-
-## Use Arc channel for Agent configuration and monitoring data
-
-Azure Arc can unlock connectivity and monitor on-premises workloads. Azure based manageability of monitoring agents for SCOM Managed Instance helps you to reduce operations cost and simplify agent configuration. The following are the key capabilities of SCOM Managed Instance monitoring over Arc channel:
-
-- Discover and Install SCOM Managed Instance agent as a VM extension for Arc connected servers.
-- Monitor Arc connected servers and hosted applications by reusing existing System Center Operations Manager management packs.
-- Azure based SCOM Managed Instance agent management (such as patch, push management pack rules and monitors) via Arc connectivity.
-- SCOM Managed Instance agents to relay monitoring data back to SCOM Managed Instance via Arc connectivity.
-
-## Prerequisites
-
-Following are the prerequisites required on desired monitoring endpoints that are Virtual machines:
-
-1. Ensure to Allowlist the following Azure URL on the desired monitoring endpoints:
- `*.workloadnexus.azure.com`
-2. Confirm the Line of sight between SCOM Managed Instance and desired monitoring endpoints by running the following command. Obtain LB DNS information by navigating to SCOM Managed Instance **Overview** > **DNS Name**.
-
- ```
- Test-NetConnection -ComputerName <LB DNS> -Port 5723
- ```
-3. Ensure to install [.NET Framework 4.7.2](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2) or higher on desired monitoring endpoints.
-4. Ensure TLS 1.2 or higher is enabled.
-
-To Troubleshooting connectivity problems, see [Troubleshoot issues with Azure Monitor SCOM Managed Instance](/system-center/scom/troubleshoot-scom-managed-instance?view=sc-om-2022#scenario-agent-connectivity-failing&preserve-view=true).
-
-## Install an agent to monitor Azure and Arc-enabled servers
-
->[!NOTE]
->Agent doesn't support multi-homing to multiple SCOM Managed Instances.
-
-To install SCOM Managed Instance agent, follow these steps:
-
-1. On the desired SCOM Managed Instance **Overview** page, under Manage, select **Monitored Resources**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/manage-monitored-resources-inline.png" alt-text="Screenshot that shows the Monitored Resource option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/manage-monitored-resources-expanded.png":::
-
-2. On the **Monitored Resources** page, select **New Monitored Resource**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/monitored-resources-inline.png" alt-text="Screenshot that shows the Monitored Resource page." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/monitored-resources-expanded.png":::
-
- **Add a Monitored Resource** page opens listing all the unmonitored virtual machines.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-monitored-resource-inline.png" alt-text="Screenshot that shows add a monitored resource page." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-monitored-resource-expanded.png":::
-
-3. Select the desired resource and then select **Add**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-resource-inline.png" alt-text="Screenshot that shows the Add a resource option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-resource-expanded.png":::
-
-4. On the **Install SCOM MI Agent** window, review the selections and select **Install**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/install-inline.png" alt-text="Screenshot that shows Install option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/install-expanded.png":::
-
-## Manage agent configuration installed on Azure and Arc-enabled servers
-
-### Upgrade an agent
-
-To upgrade the agent version, follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
-2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
-3. On the **SCOM managed instances** page, select the desired SCOM managed instance.
-4. On the desired SCOM Managed Instance **Overview** page, under **Manage**, select **Monitored Resources**.
-5. On the **Monitored Resources** page, select Ellipsis button **(…)**, which is next to your desired monitored resource, and select **Configure**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/resource-inline.png" alt-text="Screenshot that shows monitored resources." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/resource-expanded.png":::
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/configure-agent-inline.png" alt-text="Screenshot that shows agent configuration option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/configure-agent-expanded.png":::
-
-6. On the **Configure Monitored Resource** page, enable **Auto upgrade** and then select **Configure**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/configure-monitored-resource-inline.png" alt-text="Screenshot that shows configuration option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/configure-monitored-resource-expanded.png":::
-
-### Delete an agent
-
-To delete the agent version, follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
-2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
-3. On the **SCOM managed instances** page, select the desired SCOM Managed Instance.
-4. On the desired SCOM Managed Instance **Overview** page, under **Manage**, select **Monitored Resources**.
-5. On the **Monitored Resources** page, select Ellipsis button **(…)**, which is next to your desired monitored resource, and select **Delete**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-agent-inline.png" alt-text="Screenshot that shows delete agent option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-agent-expanded.png":::
-
-6. On the **Delete SCOM MI Agent** page, check **Are you sure that you want to delete Monitored Resource?** and then select **Delete**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-monitored-resource-inline.png" alt-text="Screenshot that shows delete option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-monitored-resource-expanded.png":::
-
-## Install SCOM Managed Instance Gateway
-
-To install SCOM Managed Instance gateway, follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
-2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
-3. On the **SCOM managed instances** page, select the desired SCOM managed instance.
-4. On the desired SCOM managed instance **Overview** page, under **Manage**, select **Managed Gateway**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/managed-gateway-inline.png" alt-text="Screenshot that shows managed gateway." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/managed-gateway-expanded.png":::
-
-5. On the **Managed Gateways** page, select **New Managed Gateway**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/new-managed-gateway-inline.png" alt-text="Screenshot that shows new managed gateway." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/new-managed-gateway-expanded.png":::
-
- **Add a Managed Gateway** page opens listing all the Azure arc virtual machines.
-
- >[!NOTE]
- >SCOM Managed Instance Managed Gateway can be configured on Arc-enabled machines only.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-managed-gateway-inline.png" alt-text="Screenshot that shows add a managed gateway option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-managed-gateway-expanded.png":::
-
-6. Select the desired virtual machine and then select **Add**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-inline.png" alt-text="Screenshot that shows Add managed gateway." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/add-expanded.png":::
-
-7. On the **Install SCOM MI Gateway** window, review the selections and select **Install**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/install-gateway-inline.png" alt-text="Screenshot that shows Install managed gateway page." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/install-gateway-expanded.png":::
-
-## Manage Gateway configuration
-
-### Configure monitoring of servers via SCOM Managed Instance Gateway
-
-To configure monitoring of servers via SCOM Managed Instance Gateway, follow the steps mentioned in [Install an agent on a computer running Windows by using the Discovery Wizard](/system-center/scom/manage-deploy-windows-agent-console#install-an-agent-on-a-computer-running-windows-by-using-the-discovery-wizard) section.
-
->[!NOTE]
->Operations Manager Console is required for this action. For more information, see [Connect the Azure Monitor SCOM Managed Instance to Ops console](/system-center/scom/connect-managed-instance-ops-console?view=sc-om-2022&preserve-view=true)
-
-### Delete a Gateway
-
-To delete a Gateway, follow these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
-2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
-3. On the **SCOM managed instances** page, select the desired SCOM managed instance.
-4. On the desired SCOM managed instance **Overview** page, under **Manage**, select **Managed Gateways**.
-5. On the **Managed Gateways** page, select Ellipsis button **(…)**, which is next to your desired gateway, and select **Delete**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-gateway-inline.png" alt-text="Screenshot that shows delete gateway option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-gateway-expanded.png":::
-
-6. On the **Delete SCOM MI Gateway** page, check **Are you sure that you want to delete Managed Gateway?** and then select **Delete**.
-
- :::image type="content" source="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-managed-gateway-inline.png" alt-text="Screenshot that shows delete managed gateway option." lightbox="media/monitor-on-premises-arc-enabled-vm-with-scom-managed-instance/delete-managed-gateway-expanded.png":::
--
-## Configure monitoring of on-premises servers
-
-To configure monitoring of on-premises servers that have direct connectivity (VPN/ER) with Azure, follow the steps mentioned in [Install an agent on a computer running Windows by using the Discovery Wizard](/system-center/scom/manage-deploy-windows-agent-console#install-an-agent-on-a-computer-running-windows-by-using-the-discovery-wizard) section.
-
->[!NOTE]
->Operations Manager Console is required for this action. For more information, see [Connect the Azure Monitor SCOM Managed Instance to Ops console](/system-center/scom/connect-managed-instance-ops-console?view=sc-om-2022&preserve-view=true)
azure-monitor Monitor Azure Off Azure Vm With Scom Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/scom-manage-instance/monitor-azure-off-azure-vm-with-scom-managed-instance.md
+
+ms.assetid:
+ Title: Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance extensions.
+description: Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Operations Manager users providing monitoring continuity for cloud and on-premises environments across the cloud adoption journey.
+++ Last updated : 07/05/2024+++++
+# Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance extensions
+
+Azure Monitor SCOM Managed Instance provides a cloud-based alternative for Operations Manager users providing monitoring continuity for cloud and on-premises environments across the cloud adoption journey.
+
+## SCOM Managed Instance Agent
+
+In Azure Monitor SCOM Managed Instance, an agent is a service installed on a computer. The agent looks for configuration data, proactively collects information for analysis and reporting, measures the health state of monitored objects such as a SQL database or logical disk, and runs tasks on demand by an operator or in response to a condition. The agent allows SCOM Managed Instance to monitor Windows operating systems and the components installed on them, such as a website or an Active Directory domain controller.
+
+In Azure Monitor SCOM Managed Instance, the monitoring agent is installed and managed by an Azure virtual machine extension named **SCOMMI-Agent-Windows**. For more information on VM extensions, see [Azure Virtual Machine extensions and features](/azure/virtual-machines/extensions/overview).
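As a hedged illustration only, the following sketch lists the extensions installed on an Arc-enabled server so you can confirm the **SCOMMI-Agent-Windows** extension is present. The resource group and machine names are hypothetical, and it assumes the `connectedmachine` Azure CLI extension is available (run here from PowerShell).

```powershell
# Hypothetical names; replace with your own resource group and Arc-enabled server.
$resourceGroup = 'rg-scommi-demo'
$machineName   = 'arc-server-01'

# List the VM extensions installed on the Arc-enabled server and look for SCOMMI-Agent-Windows.
az connectedmachine extension list `
    --resource-group $resourceGroup `
    --machine-name $machineName `
    --query "[].name" `
    --output table
```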
+
+## Supported Windows versions for monitoring
+
+The following Windows versions can be monitored by using SCOM Managed Instance:
+
+- Windows 2022
+- Windows 2019
+- Windows 2016
+- Windows 2012 R2
+- Windows 2012
+
+For more information, see [Operations Manager System requirements](/system-center/scom/system-requirements).
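As a rough sanity check, a minimal PowerShell sketch like the following can be run on a candidate endpoint to see whether its operating system caption matches one of the versions listed above; the mapping of the list entries to Windows Server product names is an assumption.

```powershell
# Read the operating system caption on the endpoint (for example, "Microsoft Windows Server 2019 Datacenter").
$os = (Get-CimInstance -ClassName Win32_OperatingSystem).Caption

# Versions listed as supported above (assumed to map to Windows Server product names).
$supported = '2012 R2', '2012', '2016', '2019', '2022'

if ($supported | Where-Object { $os -match "Windows Server $_" }) {
    Write-Output "Listed as supported for SCOM Managed Instance monitoring: $os"
} else {
    Write-Output "Not listed as supported: $os"
}
```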
+
+## Use Arc channel for Agent configuration and monitoring data
+
+Azure Arc unlocks connectivity to and monitoring of on-premises workloads. Azure-based manageability of monitoring agents for SCOM Managed Instance helps you reduce operations cost and simplify agent configuration. The following are the key capabilities of SCOM Managed Instance monitoring over the Arc channel:
+
+- Discover and install the SCOM Managed Instance agent as a VM extension for Arc-connected servers.
+- Monitor Arc-connected servers and hosted applications by reusing existing System Center Operations Manager management packs.
+- Monitor Arc-connected servers and applications that are in an untrusted domain or workgroup.
+- Azure-based SCOM Managed Instance agent management (such as patching and pushing management pack rules and monitors) via Arc connectivity.
+
+## Prerequisites
+
+The following prerequisites are required on the virtual machines that you want to monitor (a verification sketch follows this list):
+
+1. Line of sight to the Nexus endpoint.
+   For example, `Test-NetConnection -ComputerName westus.workloadnexus.azure.com -Port 443`
+2. Line of sight to the SCOM Managed Instance load balancer (LB).
+   For example, `Test-NetConnection -ComputerName <LBDNS> -Port 5723`
+3. Install [.NET Framework 4.7.2](https://support.microsoft.com/topic/microsoft-net-framework-4-7-2-offline-installer-for-windows-05a72734-2127-a15d-50cf-daf56d5faec2) or higher on the desired monitoring endpoints.
+
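The prerequisites above can be verified from a candidate endpoint with a short PowerShell sketch such as the following. The load balancer DNS name is a placeholder you replace with the value from your SCOM Managed Instance **Overview** page, and the .NET Framework check uses the documented minimum release key value for 4.7.2 (461808).

```powershell
# Placeholder: replace with the DNS name shown for your SCOM Managed Instance load balancer.
$lbDns = '<LBDNS>'

# 1. Line of sight to the Nexus endpoint (regional endpoint shown as an example).
Test-NetConnection -ComputerName 'westus.workloadnexus.azure.com' -Port 443

# 2. Line of sight to the SCOM Managed Instance load balancer.
Test-NetConnection -ComputerName $lbDns -Port 5723

# 3. .NET Framework 4.7.2 or later (registry Release value 461808 or higher).
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
if ($release -ge 461808) { 'OK: .NET Framework 4.7.2 or later is installed.' }
else { 'Install .NET Framework 4.7.2 or later.' }
```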
+To troubleshoot connectivity problems, see [Troubleshoot issues with Azure Monitor SCOM Managed Instance](/system-center/scom/troubleshoot-scom-managed-instance?view=sc-om-2022#scenario-agent-connectivity-failing&preserve-view=true).
+
+## Install an agent to monitor Azure and Arc-enabled servers
+
+>[!NOTE]
+>The agent doesn't support multihoming to multiple SCOM Managed Instances.
+
+To install the SCOM Managed Instance agent, follow these steps:
+
+1. On the desired SCOM Managed Instance **Overview** page, under **Manage**, select **Monitored Resources**.
+
+2. On the **Monitored Resources** page, select **New Monitored Resource**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/monitored-resources-inline.png" alt-text="Screenshot that shows the Monitored Resource page." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/monitored-resources-expanded.png":::
+
+ **Add a Monitored Resource** page opens listing all the unmonitored virtual machines.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/add-monitored-resource-inline.png" alt-text="Screenshot that shows add a monitored resource page." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/add-monitored-resource-expanded.png":::
+
+3. Select the desired resource and then select **Add**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/add-resource-inline.png" alt-text="Screenshot that shows the Add a resource option." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/add-resource-expanded.png":::
+
+4. On the **Add Monitored Resources** window, enable **Auto upgrade**, review the selections and select **Add**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/install-inline.png" alt-text="Screenshot that shows Install option." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/install-expanded.png":::
+
+## Manage agent configuration installed on Azure and Arc-enabled servers
+
+### Upgrade an agent
+
+>[!NOTE]
+>Upgrading an agent is a one-time effort for existing monitored resources. Further updates are applied automatically to these resources. For new resources, you can choose this option when you add them to the SCOM Managed Instance.
+
+To upgrade the agent version, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
+2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
+3. On the **SCOM managed instances** page, select the desired SCOM managed instance.
+4. On the desired SCOM Managed Instance **Overview** page, under **Manage**, select **Monitored Resources**.
+5. On the **Monitored Resources** page, select the ellipsis button **(…)** next to your desired monitored resource, and then select **Configure**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/resource-inline.png" alt-text="Screenshot that shows monitored resources." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/resource-expanded.png":::
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/configure-agent-inline.png" alt-text="Screenshot that shows agent configuration option." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/configure-agent-expanded.png":::
+
+6. On the **Configure Monitored Resource** page, enable **Auto upgrade** and then select **Configure**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/configure-monitored-resource-inline.png" alt-text="Screenshot that shows configuration option." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/configure-monitored-resource-expanded.png":::
+
+### Remove an agent
+
+To remove the agent, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/). Search and select **SCOM Managed Instance**.
+2. On the **Overview** page, under **Manage**, select **SCOM managed instances**.
+3. On the **SCOM managed instances** page, select the desired SCOM Managed Instance.
+4. On the desired SCOM Managed Instance **Overview** page, under **Manage**, select **Monitored Resources**.
+5. On the **Monitored Resources** page, select the ellipsis button **(…)** next to your desired monitored resource, and then select **Remove**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/delete-agent-inline.png" alt-text="Screenshot that shows delete agent option." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/delete-agent-expanded.png":::
+
+6. On the **Remove Monitored Resources** page, enter *remove* under **Enter "remove" to confirm "removal"** and then select **Remove**.
+
+ :::image type="content" source="media/monitor-azure-off-azure-vm-with-scom-managed-instance/delete-monitored-resource-inline.png" alt-text="Screenshot that shows delete option." lightbox="media/monitor-azure-off-azure-vm-with-scom-managed-instance/delete-monitored-resource-expanded.png":::
+
+## Multihome On-premises virtual machines
+
+Multihoming allows you to monitor on-premises virtual machines by retaining the existing connection with Operations Manager (on-premises) and establishing a new connection with SCOM Managed Instance. When a virtual machine that is monitored by Operations Manager (on-premises) is multihomed with SCOM Managed Instance, the Azure extension replaces the existing monitoring agent (*Monagent.msi*) with the latest version of the SCOM Managed Instance agent. During this operation, the connection to Operations Manager (on-premises) is retained automatically to ensure continuity in the monitoring process.
+
+>[!NOTE]
+>- To multihome your on-premises virtual machines, they must be Arc-enabled.
+>- Multihome with Operations Manager is supported only by using Kerberos authentication.
+>- Multihome is limited to two connections: one with SCOM Managed Instance and another with Operations Manager. A virtual machine can't be multihomed to multiple Operations Manager management groups or multiple SCOM Managed Instances.
+
+### Multihome supported scenarios
+
+Multihome is supported with the following Operations Manager versions using the SCOM Managed Instance extension-based agent:
+
+| -|Operations Manager 2012|Operations Manager 2016|Operations Manager 2019|Operations Manager 2022|
+||||||
+|Supported |✅|✅|✅|✅|
+
+>[!NOTE]
+>Agents (Monagent) that are installed by Operations Manager (on-premises) can't be multihomed with SCOM Managed Instance.
azure-netapp-files Azacsnap Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md
Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer
For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page.
-## May-2024
+## Jul-2024
### AzAcSnap 10 (Build: 1B55F1*)
azure-resource-manager Linter Rule No Deployments Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-deployments-resources.md
In ARM templates, you can reuse or modularize a template through nesting or link
"resources": [ { "type": "Microsoft.Resources/deployments",
- "apiVersion": "2022-09-01",
+ "apiVersion": "2024-03-01",
"name": "nestedTemplate1", "properties": { "mode": "Incremental",
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
Run MSBuild to convert the Bicep file and the Bicep parameter file to JSON.
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2022-05-01",
+ "apiVersion": "2023-04-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": {
azure-resource-manager Patterns Configuration Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-configuration-set.md
Last updated 06/23/2023 + # Configuration set pattern Rather than define lots of individual parameters, create predefined sets of values. During deployment, select the set of values to use.
azure-resource-manager Patterns Logical Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-logical-parameter.md
Last updated 06/23/2023 + # Logical parameter pattern Use parameters to specify the logical definition of a resource, or even of multiple resources. The Bicep file converts the logical parameter to deployable resource definitions. By following this pattern, you can separate *what's* deployed from *how* it's deployed.
azure-resource-manager Patterns Name Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-name-generation.md
Last updated 06/23/2023 + # Name generation pattern Within your Bicep files, use string interpolation and Bicep functions to create resource names that are unique, deterministic, meaningful, and different for each environment that you deploy to.
azure-resource-manager Patterns Shared Variable File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-shared-variable-file.md
Last updated 07/28/2023 + # Shared variable file pattern Reduce the repetition of shared values in your Bicep files. Instead, load those values from a shared JSON file within your Bicep file. When using arrays, concatenate the shared values with deployment-specific values in your Bicep code.
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
When finished, you have:
@maxLength(24) param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
-resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
+resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-11-01' = {
name: 'exampleVNet' location: resourceGroup().location properties: {
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
} }
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: 'eastus' sku: {
azure-resource-manager Quickstart Create Bicep Use Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio.md
In *main.bicep*, type **vnet**. Select **res-vnet** from the list, and then pres
Your Bicep file now contains the following code: ```bicep
-resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
+resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-11-01' = {
name: 'name' location: location properties: {
After the single quote for the resource type, add `=` and a space. You're presen
This option adds all of the properties for the resource type that are required for deployment. After selecting this option, your storage account has the following properties: ```bicep
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 1 location: 2 sku: {
When you've finished, you have:
param storageName string param location string = resourceGroup().location
-resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
+resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-11-01' = {
name: storageName location: location properties: {
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
} }
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-09-01' = {
+resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageName location: location sku: {
azure-resource-manager Quickstart Create Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-template-specs.md
param location string = resourceGroup().location
var storageAccountName = 'storage${uniqueString(resourceGroup().id)}'
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
You can create a template spec with a Bicep file but the `mainTemplate` must be
'resources': [ { 'type': 'Microsoft.Storage/storageAccounts'
- 'apiVersion': '2022-09-01'
+ 'apiVersion': '2023-04-01'
'name': '[variables(\'storageAccountName\')]' 'location': '[parameters(\'location\')]' 'sku': {
Rather than create a new template spec for the revised template, add a new versi
'resources': [ { 'type': 'Microsoft.Storage/storageAccounts'
- 'apiVersion': '2022-09-01'
+ 'apiVersion': '2023-04-01'
'name': '[variables(\'storageAccountName\')]' 'location': '[parameters(\'location\')]' 'sku': {
azure-resource-manager Quickstart Private Module Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-private-module-registry.md
param location string
var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'
-resource stg 'Microsoft.Storage/storageAccounts@2021-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: uniqueStorageName location: location sku: {
azure-resource-manager Resource Declaration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-declaration.md
resource <symbolic-name> '<full-type-name>@<api-version>' = {
So, a declaration for a storage account can start with: ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
... } ```
resource <symbolic-name> '<full-type-name>@<api-version>' = {
Each resource has a name. When setting the resource name, pay attention to the [rules and restrictions for resource names](../management/resource-name-rules.md). ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' ... }
Typically, you'd set the name to a parameter so you can pass in different values
@maxLength(24) param storageAccountName string
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName ... }
resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
Many resources require a location. You can determine if the resource needs a location either through intellisense or [template reference](/azure/templates/). The following example adds a location parameter that is used for the storage account. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: 'eastus' ...
Typically, you'd set location to a parameter so you can deploy to different loca
```bicep param location string = resourceGroup().location
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: location ...
You can use either system-assigned or user-assigned identities.
The following example shows how to configure a system-assigned identity for an Azure Kubernetes Service cluster. ```bicep
-resource aks 'Microsoft.ContainerService/managedClusters@2020-09-01' = {
+resource aks 'Microsoft.ContainerService/managedClusters@2024-02-01' = {
name: clusterName location: location tags: tags
The next example shows how to configure a user-assigned identity for a virtual m
```bicep param userAssignedIdentity string
-resource vm 'Microsoft.Compute/virtualMachines@2020-06-01' = {
+resource vm 'Microsoft.Compute/virtualMachines@2024-03-01' = {
name: vmName location: location identity: {
The preceding properties are generic to most resource types. After setting those
Use intellisense or [Bicep resource reference](/azure/templates/) to determine which properties are available and which ones are required. The following example sets the remaining properties for a storage account. ```bicep
-resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'examplestorage' location: 'eastus' sku: {
azure-resource-manager Resource Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-dependencies.md
Azure Resource Manager evaluates the dependencies between resources, and deploys
An implicit dependency is created when one resource declaration references another resource in the same deployment. In the following example, `otherResource` gets a property from `exampleDnsZone`. The resource named `otherResource` is implicitly dependent on `exampleDnsZone`. ```bicep
-resource exampleDnsZone 'Microsoft.Network/dnszones@2018-05-01' = {
+resource exampleDnsZone 'Microsoft.Network/dnsZones@2023-07-01-preview' = {
name: 'myZone' location: 'global' }
-resource otherResource 'Microsoft.Example/examples@2023-05-01' = {
+resource otherResource 'Microsoft.Example/examples@2024-05-01' = {
name: 'exampleResource' properties: { // get read-only DNS zone property
resource otherResource 'Microsoft.Example/examples@2023-05-01' = {
A nested resource also has an implicit dependency on its containing resource. ```bicep
-resource myParent 'My.Rp/parentType@2023-05-01' = {
+resource myParent 'My.Rp/parentType@2024-05-01' = {
name: 'myParent' location: 'West US'
An explicit dependency is declared with the `dependsOn` property. The property a
The following example shows a DNS zone named `otherZone` that depends on a DNS zone named `dnsZone`: ```bicep
-resource dnsZone 'Microsoft.Network/dnszones@2018-05-01' = {
+resource dnsZone 'Microsoft.Network/dnszones@2023-07-01-preview' = {
name: 'demoeZone1' location: 'global' }
-resource otherZone 'Microsoft.Network/dnszones@2018-05-01' = {
+resource otherZone 'Microsoft.Network/dnszones@2023-07-01-preview' = {
name: 'demoZone2' location: 'global' dependsOn: [
azure-resource-manager Scenarios Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-secrets.md
When you use Bicep modules, you can provide secure parameters by using [the `get
You can also reference a key vault defined in another resource group by using the `existing` and `scope` keywords together. In the following example, the Bicep file is deployed to a resource group named *Networking*. The value for the module's parameter *mySecret* is defined in a key vault named *contosonetworkingsecrets* located in the *Secrets* resource group: ```bicep
-resource networkingSecretsKeyVault 'Microsoft.KeyVault/vaults@2019-09-01' existing = {
+resource networkingSecretsKeyVault 'Microsoft.KeyVault/vaults@2023-07-01' existing = {
scope: resourceGroup('Secrets') name: 'contosonetworkingsecrets' }
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scope-extension-resources.md
To apply an extension resource type at the target deployment scope, add the reso
When deployed to a resource group, the following template adds a lock to that resource group. ```bicep
-resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' = {
+resource createRgLock 'Microsoft.Authorization/locks@2020-05-01' = {
name: 'rgLock' properties: { level: 'CanNotDelete'
var role = {
Reader: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7' }
-resource roleAssignSub 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
+resource roleAssignSub 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(subscription().id, principalId, role[builtInRoleType]) properties: { roleDefinitionId: role[builtInRoleType]
var role = {
} var uniqueStorageName = 'storage${uniqueString(resourceGroup().id)}'
-resource demoStorageAcct 'Microsoft.Storage/storageAccounts@2019-04-01' = {
+resource demoStorageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: uniqueStorageName location: location sku: {
resource demoStorageAcct 'Microsoft.Storage/storageAccounts@2019-04-01' = {
properties: {} }
-resource roleAssignStorage 'Microsoft.Authorization/roleAssignments@2020-04-01-preview' = {
+resource roleAssignStorage 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(demoStorageAcct.id, principalId, role[builtInRoleType]) properties: { roleDefinitionId: role[builtInRoleType]
resource roleAssignStorage 'Microsoft.Authorization/roleAssignments@2020-04-01-p
You can apply an extension resource to an existing resource. The following example adds a lock to an existing storage account. ```bicep
-resource demoStorageAcct 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
+resource demoStorageAcct 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: 'examplestore' }
-resource createStorageLock 'Microsoft.Authorization/locks@2016-09-01' = {
+resource createStorageLock 'Microsoft.Authorization/locks@2020-05-01' = {
name: 'storeLock' scope: demoStorageAcct properties: {
The following example shows how to apply a lock on a storage account that reside
```bicep param storageAccountName string-
- resource storage 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
+
+ resource storage 'Microsoft.Storage/storageAccounts@2023-04-01' existing = {
name: storageAccountName }-
- resource storeLock 'Microsoft.Authorization/locks@2017-04-01' = {
+
+ resource storeLock 'Microsoft.Authorization/locks@2020-05-01' = {
scope: storage name: 'storeLock' properties: {
azure-resource-manager Template Specs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/template-specs.md
The following example shows a simple Bicep file for creating a storage account i
]) param storageAccountType string = 'Standard_LRS'
-resource stg 'Microsoft.Storage/storageAccounts@2021-04-01' = {
+resource stg 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: 'store${uniqueString(resourceGroup().id)}' location: resourceGroup().location sku: {
param templateSpecName string = 'CreateStorageAccount'
param templateSpecVersionName string = '0.1' param location string = resourceGroup().location
-resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = {
+resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2022-02-01' = {
name: templateSpecName location: location properties: {
resource createTemplateSpec 'Microsoft.Resources/templateSpecs@2021-05-01' = {
} }
-resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2021-05-01' = {
+resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2022-02-01' = {
parent: createTemplateSpec name: templateSpecVersionName location: location
resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2
'resources': [ { 'type': 'Microsoft.Storage/storageAccounts'
- 'apiVersion': '2019-06-01'
+ 'apiVersion': '2023-04-01'
'name': 'store$uniquestring(resourceGroup().id)' 'location': resourceGroup().location 'kind': 'StorageV2'
resource createTemplateSpecVersion 'Microsoft.Resources/templateSpecs/versions@2
} } }- ``` The JSON template embedded in the Bicep file needs to make these changes:
az deployment group create \
For more information, see [Bicep parameters file](./parameter-files.md). - To pass parameter file with: # [PowerShell](#tab/azure-powershell)
az deployment group create \
- - Use JSON parameters file - The following JSON is a sample JSON parameters file: ```json
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
param storageAccountName string
]) param storageAccountSKU string = 'Standard_LRS'
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
type storageAccountConfigType = {
param storageAccountConfig storageAccountConfigType
-resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountConfig.name location: location sku: {
azure-resource-manager Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/visual-studio-code.md
param location string = resourceGroup().location
var storageAccountName = '${uniqueString(resourceGroup().id)}storage'
-resource storageAccount 'Microsoft.Storage/storageAccounts@2021-08-01' = {
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-04-01' = {
name: storageAccountName location: location sku: {
And, paste the following JSON:
```json { "type": "Microsoft.Batch/batchAccounts",
- "apiVersion": "2021-06-01",
+ "apiVersion": "2024-02-01",
"name": "[parameters('batchAccountName')]", "location": "[parameters('location')]", "tags": {
azure-vmware Architecture Private Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/architecture-private-clouds.md
AV64 SKUs are available per Availability Zone, the table below lists the Azure r
| Azure region | Availability Zone | SKU | Multi-AZ SDDC | AV64 FDs Supported | | : | :: | :: | :: | :: |
-| Australia East | AZ01 | AV36P, AV64 | Yes | 5 (7 Planned H2 2024) |
-| Australia East | AZ02 | AV36 | No | N/A |
-| Australia East | AZ03 | AV36P, AV64 | Yes | 5 (7 Planned H2 2024) |
+| Australia East | AZ01 | AV36P, AV64 | Yes |7|
+| Australia East | AZ02 | AV36, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Australia East | AZ03 | AV36P, AV64 | Yes |7|
| Australia South East | AZ01 | AV36 | No | N/A | | Brazil South | AZ02 | **AV36** | No | N/A |
-| Canada Central | AZ02 | AV36, **AV36P** | No | N/A |
-| Canada East | N/A | AV36 | No | N/A |
-| Central India | AZ03 | AV36P | No | N/A |
-| Central US | AZ01 | AV36P | No | N/A |
-| Central US | AZ02 | **AV36** | No | N/A |
-| Central US | AZ03 | AV36P | No | N/A |
-| East Asia | AZ01 | AV36 | No | N/A |
-| East US | AZ01 | **AV36P** | Yes | N/A |
+| Canada Central | AZ02 | AV36, **AV36P**, (AV64 Planned H2 2024) | No | N/A (7 Planned H2 2024) |
+| Canada East | N/A | AV36| No | N/A |
+| Central India | AZ03 | AV36P, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| Central US | AZ01 | AV36P, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| Central US | AZ02 | **AV36**, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Central US | AZ03 | AV36P, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| East Asia | AZ01 | AV36, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| East US | AZ01 | **AV36P**, (AV64 Planned H2 2024) | Yes | N/A (7 Planned H2 2024) |
| East US | AZ02 | **AV36P**, AV64 | Yes | 7 | | East US | AZ03 | **AV36**, **AV36P**, AV64 | Yes | 7 |
-| East US 2 | AZ01 | **AV36**, AV64 | No | 5 (7 Planned H2 2024) |
-| East US 2 | AZ02 | AV36P, **AV52**, AV64 | No | 5 (7 Planned H2 2024) |
-| France Central | AZ01 | **AV36** | No | N/A |
-| Germany West Central | AZ01 | AV36P | Yes | N/A |
-| Germany West Central | AZ02 | **AV36** | Yes | N/A |
-| Germany West Central | AZ03 | AV36, **AV36P** | Yes | N/A |
-| Italy North | AZ03 | AV36P | No | N/A |
-| Japan East | AZ02 | **AV36** | No | N/A |
-| Japan West | AZ01 | **AV36** | No | N/A |
-| North Central US | AZ01 | **AV36**, AV64 | No | 5 (7 Planned H2 2024) |
-| North Central US | AZ02 | AV36P, AV64 | No | 5 (7 Planned H2 2024) |
+| East US 2 | AZ01 | **AV36**, AV64 | No |7|
+| East US 2 | AZ02 | AV36P, **AV52**, AV64 | No | 7|
+| France Central | AZ01 | **AV36**, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Germany West Central | AZ01 | AV36P, (AV64 Planned H2 2024)| Yes |N/A (7 Planned H2 2024) |
+| Germany West Central | AZ02 | **AV36**, (AV64 Planned H2 2024)| Yes |N/A (7 Planned H2 2024) |
+| Germany West Central | AZ03 | AV36, **AV36P**, AV64 | Yes |7|
+| Italy North | AZ03 | AV36P, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Japan East | AZ02 | **AV36**, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| Japan West | AZ01 | **AV36**, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| North Central US | AZ01 | **AV36**, AV64 | No |7|
+| North Central US | AZ02 | AV36P, AV64 | No |7|
| North Europe | AZ02 | AV36, AV64 | No | 5 (7 Planned H2 2024) |
-| Qatar Central | AZ03 | AV36P | No | N/A |
-| South Africa North | AZ03 | AV36 | No | N/A |
-| South Central US | AZ01 | AV36, AV64 | No | 5 (7 Planned H2 2024) |
-| South Central US | AZ02 | **AV36P**, AV52, AV64 | No | 5 (7 Planned H2 2024) |
+| Qatar Central | AZ03 | AV36P, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
+| South Africa North | AZ03 | AV36, (AV64 Planned H2 2024) | No |N/A (7 Planned H2 2024) |
+| South Central US | AZ01 | AV36, AV64 | No | 7 |
+| South Central US | AZ02 | **AV36P**, AV52, AV64 | No | 7 |
| South East Asia | AZ02 | **AV36** | No | N/A | | Sweden Central | AZ01 | AV36 | No | N/A | | Switzerland North | AZ01 | **AV36**, AV64 | No | 7 |
-| Switzerland North | AZ03 | AV36P | No | N/A |
+| Switzerland North | AZ03 | AV36P, (AV64 Planned H2 2024)| No |N/A (7 Planned H2 2024) |
| Switzerland West | AZ01 | **AV36**, AV64 | No | 7 | | UAE North | AZ03 | AV36P | No | N/A | | UK South | AZ01 | AV36, AV36P, AV52, AV64 | Yes | 7 | | UK South | AZ02 | **AV36**, AV64 | Yes | 7 | | UK South | AZ03 | AV36P, AV64 | Yes | 7 | | UK West | AZ01 | AV36 | No | N/A |
-| West Europe | AZ01 | **AV36**, AV36P, AV52, AV64 | Yes | 5 (7 Planned H2 2024) |
+| West Europe | AZ01 | **AV36**, AV36P, AV52, AV64 | Yes | 7 |
| West Europe | AZ02 | **AV36**, AV64 | Yes | 7 |
-| West Europe | AZ03 | AV36P, AV64 | Yes | 5 (7 Planned H2 2024) |
+| West Europe | AZ03 | AV36P, AV64 | Yes | N/A (7 Planned H2 2024) |
| West US | AZ01 | AV36, AV36P | No | N/A | | West US 2 | AZ01 | AV36 | No | N/A | | West US 2 | AZ02 | AV36P | No | N/A |
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
Follow the instructions on the form; you have two verification options:
- You can approve just the specific host name used in this request. Another approval is required for later requests.
-After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year and will be autorenewed before it's expired.
+After approval, DigiCert completes the certificate creation for your custom domain name. The certificate is valid for one year. If the CNAME record for your custom domain is added or updated to map to your endpoint hostname after verification, the certificate is autorenewed before it expires.
+
+>[!NOTE]
+> Managed certificate autorenewal requires that your custom domain be directly mapped to your CDN endpoint by a CNAME record.
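To check that a custom domain is directly mapped to the CDN endpoint by a CNAME record (the condition for managed certificate autorenewal noted above), a quick PowerShell lookup like the following can help; the host names are hypothetical examples.

```powershell
# Hypothetical names: your custom domain and the CDN endpoint hostname it should point to.
$customDomain = 'cdn.contoso.com'
$endpointHost = 'contoso.azureedge.net'

# Resolve the CNAME record for the custom domain and compare it with the endpoint hostname.
$cname = (Resolve-DnsName -Name $customDomain -Type CNAME -ErrorAction Stop).NameHost
if ($cname -eq $endpointHost) { "CNAME maps directly to $endpointHost" }
else { "CNAME resolves to '$cname' instead of '$endpointHost'" }
```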
## Wait for propagation
cdn Cdn How Caching Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-how-caching-works.md
Not all resources can be cached. The following table shows what resources can be
For **Azure CDN Standard from Microsoft** caching to work on a resource, the origin server must support HEAD and GET HTTP requests, and the content-length values must be the same for the HEAD and GET HTTP responses for the asset. For a HEAD request, the origin server must respond with the same headers as if it received a GET request.
+> [!NOTE]
+> Requests that include an authorization header aren't cached.
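As a rough way to verify that an origin meets the HEAD/GET requirement described above, a sketch like the following compares the `Content-Length` returned by a HEAD and a GET request; the asset URL is a hypothetical placeholder.

```powershell
# Hypothetical origin asset URL; replace with a cacheable asset on your origin server.
$uri = 'https://origin.contoso.com/images/banner.png'

$headResponse = Invoke-WebRequest -Uri $uri -Method Head -UseBasicParsing
$getResponse  = Invoke-WebRequest -Uri $uri -Method Get  -UseBasicParsing

$headLength = $headResponse.Headers['Content-Length']
$getLength  = $getResponse.Headers['Content-Length']

if ($headLength -eq $getLength) { "Content-Length matches for HEAD and GET: $getLength" }
else { "Mismatch: HEAD=$headLength GET=$getLength (caching may not work as expected)" }
```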
+ ## Default caching behavior The following table describes the default caching behavior for the Azure Content Delivery Network products and their optimizations.
container-apps Aspire Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/aspire-dashboard.md
You can enable the .NET Aspire Dashboard on any existing container app using the
::: zone pivot="azurecli"
-You can enable the .NET Aspire Dashboard on any existing container app using the following commands.
+You can enable the .NET Aspire Dashboard on any existing container app environment by using the following commands.
```azurecli az containerapp env dotnet-component create \
container-registry Container Registry Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-artifact-cache.md
Artifact cache currently supports the following upstream registries:
| Upstream Registries | Support | Availability | |-|-|--|
-| Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
+| Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI |
+| Docker Hub | Supports authenticated pulls only. | Azure portal |
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
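For reference, an authenticated Docker Hub cache rule is created with the Azure CLI along the lines of the following sketch (runnable from PowerShell). The registry, rule, credential set, and repository names are hypothetical, and the credential set (backed by Docker Hub credentials) is assumed to exist already.

```powershell
# Hypothetical names; the credential set for Docker Hub is assumed to already exist on the registry.
$registry = 'myregistry'
$ruleName = 'dockerhub-ubuntu'
$credSet  = 'dockerhub-credset'

# Create a cache rule that pulls docker.io/library/ubuntu through the registry as 'ubuntu'.
az acr cache create `
    --registry $registry `
    --name $ruleName `
    --source-repo docker.io/library/ubuntu `
    --target-repo ubuntu `
    --cred-set $credSet
```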
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
It can also be configured to synchronize a subset of the repositories from the c
A connected registry can work in one of two modes: *ReadWrite* or *ReadOnly* -- **ReadWrite mode** - The default mode allows clients to pull and push artifacts (read and write) to the connected registry. Artifacts that are pushed to the connected registry will be synchronized with the cloud registry.
+- **ReadWrite mode** - This mode allows clients to pull and push artifacts (read and write) to the connected registry. Artifacts that are pushed to the connected registry are synchronized with the cloud registry.
- The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud.
+ The ReadWrite mode is useful when a local development environment is in place. The images are pushed to the local connected registry and from there synchronized to the cloud.
- **ReadOnly mode** - When the connected registry is in ReadOnly mode, clients can only pull (read) artifacts. This configuration is used for nested IoT Edge scenarios, or other scenarios where clients need to pull a container image to operate.
+- **Default mode** - *ReadOnly* is now the default mode for connected registries, starting with Azure CLI version 2.60.0. This change reflects security considerations and customer preferences.
+ ### Registry hierarchy Each connected registry must be connected to a parent. The top parent is the cloud registry. For hierarchical scenarios such as [nested IoT Edge](overview-connected-registry-and-iot-edge.md), you can nest connected registries in either mode. The parent connected to the cloud registry can operate in either mode.
container-registry Troubleshoot Artifact Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-artifact-cache.md
Artifact cache currently supports the following upstream registries:
| Upstream Registries | Support | Availability | |-|-|--|
-| Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
+| Docker Hub | Supports both authenticated and unauthenticated pulls. | Azure CLI |
+| Docker Hub | Supports authenticated pulls only. | Azure portal |
| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | AWS Elastic Container Registry (ECR) Public Gallery | Supports unauthenticated pulls only. | Azure CLI, Azure portal | | GitHub Container Registry | Supports both authenticated and unauthenticated pulls. | Azure CLI, Azure portal |
copilot Use Guided Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/use-guided-deployments.md
If a template isn't available, Copilot in Azure provides information to help you
- "Azure AI search + OpenAI template?" - "Can you suggest a template for app services using SAP cloud SDK?" - "Java app with Azure OpenAI?"-- "Can I use Azure Open AI with React?"
+- "Can I use Azure OpenAI with React?"
- "Enterprise chat with GPT using Java?" - "How can I deploy a sample app using Enterprise chat with GPT and java?" - "I want to use Azure functions to build an OpenAI app"
cosmos-db Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/release-notes.md
Previously updated : 05/09/2024 Last updated : 07/02/2024 #Customer intent: As a database administrator, I want to review the release notes, so I can understand what new features are released for the service.
Last updated 05/09/2024
This article contains release notes for the API for MongoDB vCore. These release notes are composed of feature release dates, and feature updates.
-## Latest release: May 06, 2024
+## Latest release: July 02, 2024
+
+- Metrics added
+ - Customer Activity.
+ - Requests.
+
+(Preview feature list)
+- Support for accumulators
+ - $mergeObjects.
+- Support for aggregation operator
+ - $let.
+- Geospatial query operators
+ - $minDistance.
+ - $maxDistance.
+
+## Previous releases
+
+### May 06, 2024
- Query operator enhancements.
- - $geoNear aggregation. This can be enabled through Flag - `Geospatial support for vcore "MongoDB for CosmosDB"` (Public Preview)
+ - $geoNear aggregation. This can be enabled through Flag - `Geospatial support for vcore "MongoDB for CosmosDB"`
+
+(Preview feature list)
- Support for accumulators - $push.
- - $mergeObjects.
- $addToSet. - $tsSecond/$tsIncrement. - $map/$reduce.
This article contains release notes for the API for MongoDB vCore. These release
- Improved performance of group and distinct. - Improved performance for $geoWithin queries with $centerSphere when radius is greater than π.
-## Previous releases
- ### April 16, 2024 - Query operator enhancements.
cosmos-db Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/reserved-capacity.md
Azure Cosmos DB Reserved Capacity allows you to benefit from discounted prices on the throughput provisioned for your Azure Cosmos DB resources. You can enjoy up to 63% savings by committing to a reservation for Azure Cosmos DB resources for either one year or three years. Examples of resources are databases and containers (tables, collections, and graphs).
+> [!IMPORTANT]
+> Currently, the Azure Pricing Calculator shows only reservations larger than one million RU/s. This temporary limitation is being fixed, and reservations of any size, starting at 100 RU/s, will soon be available. The Azure portal isn't affected by this issue.
+ ## How Azure Cosmos DB pricing and discounts work with Reserved Capacity The size of the Reserved Capacity purchase should be based on the total amount of throughput that the existing or soon-to-be-deployed Azure Cosmos DB resources use on an hourly basis.
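Before you size a reservation, it can help to review the throughput already provisioned on your databases and containers. A minimal Azure CLI sketch follows; the account, resource group, database, and container names are placeholders, and each command only returns a value where throughput is provisioned at that level.

```azurecli
# Minimal sketch: inspect currently provisioned RU/s to estimate a reservation size.
az cosmosdb sql database throughput show \
  --account-name mycosmosaccount \
  --resource-group myresourcegroup \
  --name mydatabase

az cosmosdb sql container throughput show \
  --account-name mycosmosaccount \
  --resource-group myresourcegroup \
  --database-name mydatabase \
  --name mycontainer
```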
cosmos-db Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/synapse-link.md
Azure Synapse Link isn't recommended if you're looking for traditional data ware
## Limitations
-* Azure Synapse Link for Azure Cosmos DB is supported for NoSQL and MongoDB APIs. It is not supported for Cassandra or Table APIs and remains in preview for Gremlin API.
+* Azure Synapse Link for Azure Cosmos DB is supported for NoSQL, Gremlin, and MongoDB APIs. It is not supported for Cassandra or Table APIs.
+
+* Data Explorer in Synapse Workspaces doesn't list Gremlin graphs in the tree view. But you can still run queries.
* Accessing the Azure Cosmos DB analytics store with Azure Synapse Dedicated SQL Pool currently isn't supported.
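For context, Azure Synapse Link relies on the analytical store being enabled on the account and on individual containers. The following Azure CLI sketch is illustrative only; the account, database, and container names are placeholders, and flag availability can vary by CLI version.

```azurecli
# Minimal sketch: enable the analytical store on an existing account, then create
# a container with analytical TTL set to infinite (-1).
az cosmosdb update \
  --name mycosmosaccount \
  --resource-group myresourcegroup \
  --enable-analytical-storage true

az cosmosdb sql container create \
  --account-name mycosmosaccount \
  --resource-group myresourcegroup \
  --database-name mydatabase \
  --name mycontainer \
  --partition-key-path "/id" \
  --analytical-storage-ttl -1
```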
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
Previously updated : 06/25/2024 Last updated : 07/11/2024 # Copy data from and to Salesforce using Azure Data Factory or Azure Synapse Analytics
To copy data from and to Salesforce, set the type property of the dataset to **S
| Property | Description | Required | |: |: |: | | type | The type property must be set to **SalesforceV2Object**. | Yes |
-| objectApiName | The Salesforce object name to retrieve data from. | No for source (if "SOQLQuery" in source is specified), Yes for sink |
-| reportId | The ID of the Salesforce report to retrieve data from. It isn't supported in sink. There are [limitations](https://developer.salesforce.com/docs/atlas.en-us.api_analytics.meta/api_analytics/sforce_analytics_rest_api_limits_limitations.htm) when you use reports. | No for source (if "SOQLQuery" in source is specified), not support sink |
+| objectApiName | The Salesforce object name to retrieve data from. | No for source (if "query" in source is specified), Yes for sink |
+| reportId | The ID of the Salesforce report to retrieve data from. It isn't supported in sink. There are [limitations](https://developer.salesforce.com/docs/atlas.en-us.api_analytics.meta/api_analytics/sforce_analytics_rest_api_limits_limitations.htm) when you use reports. | No for source (if "query" in source is specified), not supported in sink |
> [!IMPORTANT] > The "__c" part of **API Name** is needed for any custom object.
To copy data from Salesforce, set the source type in the copy activity to **Sale
| Property | Description | Required | |: |: |: | | type | The type property of the copy activity source must be set to **SalesforceV2Source**. | Yes |
-| SOQLQuery | Use the custom query to read data. You can only use [Salesforce Object Query Language (SOQL)](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm) query with limitations. For SOQL limitations, see this [article](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/queries.htm#SOQL%20Considerations). If query isn't specified, all the data of the Salesforce object specified in "ObjectApiName/reportId" in dataset is retrieved. | No (if "ObjectApiName/reportId" in the dataset is specified) |
+| query | Use the custom query to read data. You can only use [Salesforce Object Query Language (SOQL)](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm) query with limitations. For SOQL limitations, see this [article](https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/queries.htm#SOQL%20Considerations). If query isn't specified, all the data of the Salesforce object specified in "ObjectApiName/reportId" in dataset is retrieved. | No (if "ObjectApiName/reportId" in the dataset is specified) |
| includeDeletedObjects | Indicates whether to query the existing records, or query all records including the deleted ones. If not specified, the default behavior is false. <br>Allowed values: **false** (default), **true**. | No | > [!IMPORTANT]
To copy data from Salesforce, set the source type in the copy activity to **Sale
"typeProperties": { "source": { "type": "SalesforceV2Source",
- "SOQLQuery": "SELECT Col_Currency__c, Col_Date__c, Col_Email__c FROM AllDataType__c",
+ "query": "SELECT Col_Currency__c, Col_Date__c, Col_Email__c FROM AllDataType__c",
"includeDeletedObjects": false }, "sink": {
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
This article summarizes what's new in Microsoft Defender for Cloud. It includes
|Date | Category | Update| |--|--|--|
+| July 11 | Upcoming update | [GitHub application permissions update](#github-application-permissions-update) |
| July 10 | GA | [Compliance standards are now GA](#compliance-standards-are-now-ga) | | July 9 | Upcoming update | [Inventory experience improvement](#inventory-experience-improvement) | |July 8 | Upcoming update | [Container mapping tool to run by default in GitHub](#container-mapping-tool-to-run-by-default-in-github) |
+### GitHub application permissions update
+
+July 11, 2024
+
+**Estimated date for change**: July 18, 2024
+
+DevOps security in Defender for Cloud is constantly making updates that require customers with GitHub connectors in Defender for Cloud to update the permissions for the Microsoft Security DevOps application in GitHub.
+
+As part of this update, the GitHub application will require GitHub Copilot Business read permissions. This permission will be used to help customers better secure their GitHub Copilot deployments. We suggest updating the application as soon as possible.
+
+Permissions can be granted in two different ways:
+
+1. In your GitHub organization, navigate to the Microsoft Security DevOps application within **Settings > GitHub Apps** and accept the permissions request.
+
+1. In an automated email from GitHub Support, select **Review permission request** to accept or reject this change.
+ ### Compliance standards are now GA July 10, 2024
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
- Title: Integrate with Azure Maps-
-description: Learn how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map.
-- Previously updated : 09/27/2022----
-# Optional fields. Don't forget to remove # if you need a field.
-#
-
-#
--
-# Integrate Azure Digital Twins data into an Azure Maps indoor map
-
-This article shows how to use Azure Digital Twins data to update information displayed on an *indoor map* from [Azure Maps](../azure-maps/about-azure-maps.md). Because Azure Digital Twins stores a graph of your IoT device relationships and routes device data to different endpoints, it's a great service for updating informational overlays on maps.
-
-This guide covers the following information:
-
-1. Configuring your Azure Digital Twins instance to send twin update events to a function in [Azure Functions](../azure-functions/functions-overview.md).
-2. Creating a function to update an Azure Maps indoor maps feature stateset.
-3. Storing your maps ID and feature stateset ID in the Azure Digital Twins graph.
-
-## Get started
-
-This section sets additional context for the information in this article.
-
-### Prerequisites
-
-Before proceeding with this article, start by setting up your individual Azure Digital Twins and Azure Maps resources.
-
-* For Azure Digital Twins: Follow the instructions in [Connect an end-to-end solution](./tutorial-end-to-end.md) to set up an Azure Digital Twins instance with a sample twin graph and simulated data flow.
- * In this article, you'll extend that solution with another endpoint and route. You'll also add another function to the function app from that tutorial.
-* For Azure Maps: Follow the instructions in [Use Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) and create an Azure Maps indoor map with a *feature stateset*.
- * Feature statesets are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps instructions above, the feature stateset stores room status that you'll be displaying on a map.
- * You'll need your Azure Maps **subscription key**, feature **stateset ID**, and **mapConfiguration**.
-
-### Topology
-
-The image below illustrates where the indoor maps integration elements in this tutorial fit into a larger, end-to-end Azure Digital Twins scenario.
--
-## Route twin update notifications from Azure Digital Twins
-
-Azure Digital Twins instances can emit twin update events whenever a twin's state is updated. The Azure Digital Twins [Connect an end-to-end solution](./tutorial-end-to-end.md) linked above walks through a scenario where a thermometer is used to update a temperature attribute attached to a room's twin. This tutorial extends that solution by subscribing an Azure function to update notifications from twins, and using that function to update your maps.
-
-This pattern reads from the room twin directly, rather than the IoT device, which gives you the flexibility to change the underlying data source for temperature without needing to update your mapping logic. For example, you can add multiple thermometers or set this room to share a thermometer with another room, all without needing to update your map logic.
-
-First, you'll create a route in Azure Digital Twins to forward all twin update events to an Event Grid topic.
-
-1. Create an Event Grid topic, which will receive events from your Azure Digital Twins instance, using the CLI command below:
- ```azurecli-interactive
- az eventgrid topic create --resource-group <your-resource-group-name> --name <your-topic-name> --location <region>
- ```
-
-2. Create an endpoint to link your Event Grid topic to Azure Digital Twins, using the CLI command below:
- ```azurecli-interactive
- az dt endpoint create eventgrid --endpoint-name <Event-Grid-endpoint-name> --eventgrid-resource-group <Event-Grid-resource-group-name> --eventgrid-topic <your-Event-Grid-topic-name> --dt-name <your-Azure-Digital-Twins-instance-name>
- ```
-
-3. Create a route in Azure Digital Twins to send twin update events to your endpoint, using the CLI command below. For the Azure Digital Twins instance name placeholder in this command, you can use the friendly name or the host name for a boost in performance.
-
- >[!NOTE]
- >There is currently a known issue in Cloud Shell affecting these command groups: `az dt route`, `az dt model`, `az dt twin`.
- >
- >To resolve, either run `az login` in Cloud Shell prior to running the command, or use the [local CLI](/cli/azure/install-azure-cli) instead of Cloud Shell. For more detail on this, see [Azure Digital Twins known issues](troubleshoot-known-issues.md#400-client-error-bad-request-in-cloud-shell).
-
- ```azurecli-interactive
- az dt route create --dt-name <your-Azure-Digital-Twins-instance-hostname-or-name> --endpoint-name <Event-Grid-endpoint-name> --route-name <my-route> --filter "type = 'Microsoft.DigitalTwins.Twin.Update'"
- ```
-
-## Create an Azure function to receive events and update maps
-
-In this section, you'll create a function that listens for events sent to the Event Grid topic. The function will read those update notifications and send corresponding updates to an Azure Maps feature stateset, to update the temperature of one room.
-
-In the Azure Digital Twins tutorial [prerequisite](#prerequisites), you created a function app to store Azure functions Azure Digital Twins. Now, create a new [Event Grid-triggered Azure function](../azure-functions/functions-bindings-event-grid-trigger.md) inside the function app.
-
-Replace the function code with the following code. It will filter out only updates to space twins, read the updated temperature, and send that information to Azure Maps.
--
-You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-subscription-key-for-your-account), and one is your Azure Maps stateset ID.
-
-```azurecli-interactive
-az functionapp config appsettings set --name <your-function-app-name> --resource-group <your-resource-group> --settings "subscription-key=<your-Azure-Maps-primary-subscription-key>"
-az functionapp config appsettings set --name <your-function-app-name> --resource-group <your-resource-group> --settings "statesetID=<your-Azure-Maps-stateset-ID>"
-```
-
-### View live updates in the map
-
-To see live-updating temperature, follow the steps below:
-
-1. Begin sending simulated IoT data by running the *DeviceSimulator* project from the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md). The instructions for this process are in the [Configure and run the simulation](././tutorial-end-to-end.md#configure-and-run-the-simulation) section.
-2. Use [the Azure Maps Indoor module](../azure-maps/how-to-use-indoor-module.md) to render your indoor maps created in Azure Maps Creator.
- 1. Copy the example indoor map HTML file from [Example: Custom Styling: Consume map configuration in WebSDK (Preview)](../azure-maps/how-to-use-indoor-module.md#example-custom-styling-consume-map-configuration-in-websdk-preview).
- 1. Replace the **subscription key**, **mapConfiguration**, **statesetID**, and **region** in the local HTML file with your values.
- 1. Open that file in your browser.
-
-Both samples send temperature in a compatible range, so you should see the color of room 121 update on the map about every 30 seconds.
--
-## Store map information in Azure Digital Twins
-
-Now that you have a hardcoded solution to updating your maps information, you can use the Azure Digital Twins graph to store all of the information necessary for updating your indoor map. This information would include the stateset ID, maps subscription ID, and feature ID of each map and location respectively.
-
-A solution for this specific example would involve updating each top-level space to have a stateset ID and maps subscription ID attribute, and updating each room to have a feature ID. You would need to set these values once when initializing the twin graph, then query those values for each twin update event.
-
-Depending on the configuration of your topology, storing these three attributes at different levels correlating to the granularity of your map will be possible.
-
-## Next steps
-
-To read more about managing, upgrading, and retrieving information from the twins graph, see the following references:
-
-* [Manage digital twins](./how-to-manage-twin.md)
-* [Query the twin graph](./how-to-query-graph.md)
event-grid How To Event Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/how-to-event-domains.md
And then use your favorite method of making an HTTP POST to publish your events
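For example, you can retrieve the domain endpoint and an access key with the Azure CLI and then post an event with curl. This is a hedged sketch rather than the only supported approach; the domain name, resource group, domain topic, and event payload are placeholders.

```azurecli
# Minimal sketch: publish one event to an Event Grid domain over HTTP.
endpoint=$(az eventgrid domain show --name mydomain --resource-group myrg --query endpoint --output tsv)
key=$(az eventgrid domain key list --name mydomain --resource-group myrg --query key1 --output tsv)

curl -X POST "$endpoint" \
  -H "aeg-sas-key: $key" \
  -H "Content-Type: application/json" \
  -d '[{
        "id": "1",
        "topic": "mydomaintopic",
        "subject": "demo/event",
        "eventType": "recordInserted",
        "eventTime": "2024-07-12T00:00:00Z",
        "data": { "make": "Contoso" },
        "dataVersion": "1.0"
      }]'
```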
> [!NOTE] > For samples that use programming language SDKs to publish events to an Event Grid domain, use the following links: > - [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/eventgrid/Azure.Messaging.EventGrid/samples/Sample2_PublishEventsToDomain.md#publishing-events-to-an-event-grid-domain)
-> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/eventgrid/azure-eventgrid/samples/sync_samples/sample_publish_eg_events_to_a_domain.py)
+> - [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/eventgrid/azure-eventgrid/samples/basic/sync_samples/sample_publish_eg_events_to_a_domain.py)
> - [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/eventgrid/azure-messaging-eventgrid-cloudnative-cloudevents/src/samples/java/com/azure/messaging/eventgrid/cloudnative/cloudevents/samples/PublishNativeCloudEventToDomainAsync.java) ## Search lists of topics or subscriptions
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
If you have any questions, contact us at <a href="mailto:ask-grid-and-ms-sap@mic
## Enable events to flow to your partner topic
-SAP's capability to send events to Azure Event Grid is available through SAP's [beta program](https://influence.sap.com/sap/ino/#campaign/3314). Using this program, you can let SAP know about your desire to have your S4/HANA events available on Azure. You can find the SAP's announcement of this new feature [here](https://blogs.sap.com/2022/10/11/sap-event-mesh-event-bridge-to-microsoft-azure-to-go-beta/). Through SAP's Beta program, you'll be provided with the documentation on how to configure your SAP S4/HANA system to flow events to Event Grid.
+SAP's capability to send events to Azure Event Grid is available through SAP's beta program. Using this program, you can let SAP know about your desire to have your S4/HANA events available on Azure. You can find SAP's announcement of this new feature [here](https://blogs.sap.com/2022/10/11/sap-event-mesh-event-bridge-to-microsoft-azure-to-go-beta/). Through SAP's beta program, you'll be provided with the documentation on how to configure your SAP S4/HANA system to flow events to Event Grid.
SAP's BETA program started in October 2022 and will last a couple of months. Thereafter, the feature will be released by SAP as a generally available (GA) capability. Event Grid's capability to receive events from a partner, like SAP, is already a GA feature.
expressroute Expressroute Asymmetric Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-asymmetric-routing.md
Previously updated : 06/30/2023 Last updated : 07/11/2024
This article explains how network traffic might take different paths when multiple routes are available between network source and destination.
+> [!NOTE]
+> * This article discusses the issues that may occur with asymmetric routing in a network with multiple links to a destination. It should not be used as a reference for designing a network with asymmetric routing, as Microsoft does not recommend or support this architecture.
++ There are two concepts you need to know to understand asymmetric routing. The first is the effect of multiple network paths. The other is how devices, like a firewall keep state. These types of devices are called stateful devices. When these two factors are combined, they can create a scenario in which network traffic gets dropped by the stateful device. The traffic is dropped because it didn't detect that the traffic originated from itself. ## Multiple network paths
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Previously updated : 04/30/2024 Last updated : 07/11/2024
The following table shows connectivity locations and the service providers for e
| **Sydney** | [Equinix SY2](https://www.equinix.com/locations/asia-colocation/australia-colocation/sydney-data-centers/sy2/) | 2 | Australia East | Supported | AARNet<br/>AT&T NetBond<br/>British Telecom<br/>Cello<br/>Devoli<br/>Equinix<br/>GTT<br/>Kordia<br/>Megaport<br/>NEXTDC<br/>NTT Communications<br/>Optus<br/>Orange<br/>Spark NZ<br/>Telstra Corporation<br/>TPG Telecom<br/>Verizon<br/>Vocus Group NZ | | **Sydney2** | [NextDC S1](https://www.nextdc.com/data-centres/s1-sydney-data-centre) | 2 | Australia East | Supported | Megaport<br/>NETSG<br/>NextDC | | **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom<br/>Chunghwa Telecom<br/>FarEasTone |
+| **Taipei2** | Chunghwa Telecom | 2 | n/a | Supported | |
| **Tel Aviv** | Bezeq International | 2 | Israel Central | Supported | Bezeq International | | **Tel Aviv2** | SDS | 2 | Israel Central | Supported | | | **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks<br/>AT&T NetBond<br/>BBIX<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Colt<br/>Equinix<br/>Intercloud<br/>Internet Initiative Japan Inc. - IIJ<br/>Megaport<br/>NTT Communications<br/>NTT EAST<br/>Orange<br/>Softbank<br/>Telehouse - KDDI<br/>Verizon </br></br> |
expressroute Metro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md
The following diagram allows for a comparison between the standard ExpressRoute
| Metro location | Peering locations | Location address | Zone | Local Azure Region | ER Direct | Service Provider | |--|--|--|--|--|--|--|
-| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Realty AMS8 | 1 | West Europe | &check; | Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Realty<br>Equinix<sup>1</sup><br>euNetworks<br><br>Megaport<br> |
-| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | Southeast Asia | &check; | Console Connect<sup>1</sup><br>Equinix<sup>1</sup><br>Megaport |
-| Zurich Metro | Zurich<br>Zurich2 | Digital Realty ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Realty<sup>1</sup> |
+| Amsterdam Metro | Amsterdam<br>Amsterdam2 | Equinix AM5<br>Digital Realty AMS8 | 1 | West Europe | &check; | Colt<sup>1</sup><br>Console Connect<sup>1</sup><br>Digital Realty<br>Equinix<br>euNetworks<br><br>Megaport<br> |
+| Singapore Metro | Singapore<br>Singapore2 | Equinix SG1<br>Global Switch Tai Seng | 2 | Southeast Asia | &check; | Console Connect<sup>1</sup><br>Equinix<br>Megaport |
+| Zurich Metro | Zurich<br>Zurich2 | Digital Realty ZUR2<br>Equinix ZH5 | 1 | Switzerland North | &check; | Colt<sup>1</sup><br>Digital Realty |
<sup>1<sup> These service providers will be available in the future.
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
The following request headers don't get forwarded to the origin when caching is
- `Accept-Charset` - `Accept-Language`
+> [!NOTE]
+> Requests that include an authorization header aren't cached.
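One way to observe this behavior is to compare cached and uncached responses. The following curl sketch is illustrative; the endpoint hostname and path are placeholders, and the `X-Cache` response header is assumed to be emitted by your Front Door profile.

```bash
# Minimal sketch: the second request carries an Authorization header, so it is
# expected to bypass the cache (compare the X-Cache and Age response headers).
curl -sI "https://contoso.z01.azurefd.net/images/logo.png" | grep -iE "x-cache|age"
curl -sI "https://contoso.z01.azurefd.net/images/logo.png" \
  -H "Authorization: Bearer placeholder-token" | grep -iE "x-cache|age"
```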
+ ## Response headers If the origin response is cacheable, then the `Set-Cookie` header is removed before the response is sent to the client. If an origin response isn't cacheable, Front Door doesn't strip the header. For example, if the origin response includes a `Cache-Control` header with a `max-age` value indicates to Front Door that the response is cacheable, and the `Set-Cookie` header is stripped.
governance Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/keyboard-shortcuts.md
Title: Keyboard shortcuts in the Azure portal for Azure Resource Graph Explorer description: Azure Resource Graph Explorer in the Azure portal supports keyboard shortcuts to help you perform actions and navigate. Previously updated : 08/17/2021 Last updated : 07/11/2024 + # Keyboard shortcuts for Azure Resource Graph Explorer
-This article lists the keyboard shortcuts that work in the Azure Resource Graph Explorer page of the
-Azure portal. For a list of global keyboard shortcuts or a list of keyboard shortcuts available for
-other pages, visit
-[Keyboard shortcuts in the Azure portal](../../../azure-portal/azure-portal-keyboard-shortcuts.md).
+This article lists the keyboard shortcuts that work in the Azure Resource Graph Explorer page of the Azure portal. For a list of global keyboard shortcuts or a list of keyboard shortcuts available for other pages, visit [Keyboard shortcuts in the Azure portal](../../../azure-portal/azure-portal-keyboard-shortcuts.md).
## Keyboard shortcuts for editing queries
other pages, visit
## Next steps - [Keyboard shortcuts in the Azure portal](../../../azure-portal/azure-portal-keyboard-shortcuts.md)-- [Query language for Resource Graph](../concepts/query-language.md)
+- [Understanding the Azure Resource Graph query language](../concepts/query-language.md)
governance General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/general.md
Title: Troubleshoot common errors
+ Title: Troubleshoot common errors for Azure Resource Graph
description: Learn how to troubleshoot issues with the various SDKs while querying Azure resources with Azure Resource Graph. Previously updated : 08/17/2021 Last updated : 07/11/2024 # Troubleshoot errors using Azure Resource Graph
-You may run into errors when querying Azure resources with Azure Resource Graph. This article
-describes various errors that may occur and how to resolve them.
+You might run into errors when querying Azure resources with Azure Resource Graph. This article describes various errors that might occur and how to resolve them.
## Finding error details
-Most errors are the result of an issue while running a query with Azure Resource Graph. When a query
-fails, the SDK provides details about the failed query. This information indicates the issue so that
-it can be fixed and a later query succeeds.
+Most errors are the result of an issue while running a query with Azure Resource Graph. When a query fails, the SDK provides details about the failed query. This information indicates the issue so that it can be fixed and a later query succeeds.
## General errors
Customers making large or frequent resource queries have requests throttled.
#### Cause
-Azure Resource Graph allocates a quota number for each user based on a time window. For example, a
-user can send at most 15 queries within every 5-second window without being throttled. The quota
-value is determined by many factors and is subject to change. For more information, see
-[Throttling in Azure Resource Graph](../overview.md#throttling).
+Azure Resource Graph allocates a quota number for each user based on a time window. For example, a user can send at most 15 queries within every 5-second window without being throttled. The quota value is determined by many factors and is subject to change. For more information, see [Throttling in Azure Resource Graph](../overview.md#throttling).
#### Resolution
There are several methods of dealing with throttled requests:
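For example, batching work into fewer, larger queries and paging through results reduces the number of calls made inside a throttling window. The following Azure CLI sketch assumes the `resource-graph` extension; the query text and the skip token are placeholders.

```azurecli
# Minimal sketch: page through results instead of issuing many small queries.
az extension add --name resource-graph   # one-time setup, if not already installed

az graph query -q "Resources | project name, type | order by name asc" --first 1000

# Pass the skipToken returned by the previous call to fetch the next page.
az graph query -q "Resources | project name, type | order by name asc" \
  --first 1000 --skip-token "<token-from-previous-response>"
```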
#### Issue
-Customers with access to more than 1,000 subscriptions, including cross-tenant subscriptions with
-[Azure Lighthouse](../../../lighthouse/overview.md), can't fetch data across all subscriptions in a
-single call to Azure Resource Graph.
+Customers with access to more than 1,000 subscriptions, including cross-tenant subscriptions with [Azure Lighthouse](../../../lighthouse/overview.md), can't fetch data across all subscriptions in a single call to Azure Resource Graph.
#### Cause
-Azure CLI and PowerShell forward only the first 1,000 subscriptions to Azure Resource Graph. The
-REST API for Azure Resource Graph accepts a maximum number of subscriptions to perform the query on.
+Azure CLI and PowerShell forward only the first 1,000 subscriptions to Azure Resource Graph. The REST API for Azure Resource Graph accepts a maximum number of subscriptions to perform the query on.
#### Resolution
-Batch requests for the query with a subset of subscriptions to stay under the 1,000 subscription
-limit. The solution is using the **Subscription** parameter in PowerShell.
+Batch requests for the query with a subset of subscriptions to stay under the 1,000 subscription limit. The solution is using the **Subscription** parameter in PowerShell.
```azurepowershell-interactive # Replace this query with your own
$response
#### Issue
-Customers querying the Azure Resource Graph REST API get a _500_ (Internal Server Error) response
-returned.
+Customers querying the Azure Resource Graph REST API get a _500_ (Internal Server Error) response returned.
#### Cause
-The Azure Resource Graph REST API only supports a `Content-Type` of **application/json**. Some REST
-tools or agents default to **text/plain**, which is unsupported by the REST API.
+The Azure Resource Graph REST API only supports a `Content-Type` of `application/json`. Some REST tools or agents default to `text/plain`, which is unsupported by the REST API.
#### Resolution
-Validate that the tool or agent you're using to query Azure Resource Graph has the REST API header
-`Content-Type` configured for **application/json**.
+Validate that the tool or agent you're using to query Azure Resource Graph has the REST API header `Content-Type` configured for `application/json`.
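As a hedged example, `az rest` sends a JSON `Content-Type` when the header is set explicitly as shown; the API version in the URL is an assumption and should be replaced with a current one.

```azurecli
# Minimal sketch: call the Resource Graph REST API with an explicit JSON content type.
az rest --method post \
  --url "https://management.azure.com/providers/Microsoft.ResourceGraph/resources?api-version=2021-03-01" \
  --headers "Content-Type=application/json" \
  --body '{"query": "Resources | project name, type | limit 5"}'
```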
### Scenario: No read permission to all subscriptions in list #### Issue
-Customers that explicitly pass a list of subscriptions with an Azure Resource Graph query get a
-_403_ (Forbidden) response.
+Customers that explicitly pass a list of subscriptions with an Azure Resource Graph query get a _403_ (Forbidden) response.
#### Cause
-If the customer doesn't have read permission to all the provided subscriptions, the request is
-denied because of lack of appropriate security rights.
+If the customer doesn't have read permission to all the provided subscriptions, the request is denied because of lack of appropriate security rights.
#### Resolution
-Include at least one subscription in the subscription list that the customer running the query has
-at least read access to. For more information, see
-[Permissions in Azure Resource Graph](../overview.md#permissions-in-azure-resource-graph).
+Include at least one subscription in the subscription list that the customer running the query has at least read access to. For more information, see [Permissions in Azure Resource Graph](../overview.md#permissions-in-azure-resource-graph).
## Next steps
-If you didn't see your problem or are unable to solve your issue, visit one of the following
-channels for more support:
+If you didn't see your problem or are unable to solve your issue, visit one of the following channels for more support:
-- Get answers from Azure experts through
- [Azure Forums](https://azure.microsoft.com/support/forums/).
-- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure
- account for improving customer experience by connecting the Azure community to the right
- resources: answers, support, and experts.
-- If you need more help, you can file an Azure support incident. Go to the
- [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
+- Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
+- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
+- If you need more help, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
hdinsight Apache Hadoop Hive Java Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-java-udf.md
description: Learn how to create a Java-based user-defined function (UDF) that w
Previously updated : 07/20/2023 Last updated : 07/12/2024 # Use a Java UDF with Apache Hive in HDInsight
Learn how to create a Java-based user-defined function (UDF) that works with Apa
* A Hadoop cluster on HDInsight. See [Get Started with HDInsight on Linux](./apache-hadoop-linux-tutorial-get-started.md). * [Java Developer Kit (JDK) version 8](/azure/developer/java/fundamentals/java-support-on-azure) * [Apache Maven](https://maven.apache.org/download.cgi) properly [installed](https://maven.apache.org/install.html) according to Apache. Maven is a project build system for Java projects.
-* The [URI scheme](../hdinsight-hadoop-linux-information.md#URI-and-scheme) for your clusters primary storage. This would be wasb:// for Azure Storage, abfs:// for Azure Data Lake Storage Gen2 or adl:// for Azure Data Lake Storage Gen1. If secure transfer is enabled for Azure Storage, the URI would be `wasbs://`. See also, [secure transfer](../../storage/common/storage-require-secure-transfer.md).
+* The [URI scheme](../hdinsight-hadoop-linux-information.md#URI-and-scheme) for your cluster's primary storage. This would be `wasb://` for Azure Storage, `abfs://` for Azure Data Lake Storage Gen2, or `adl://` for Azure Data Lake Storage Gen1. If secure transfer is enabled for Azure Storage, the URI would be `wasbs://`. See also, [secure transfer](../../storage/common/storage-require-secure-transfer.md).
* A text editor or Java IDE
hdinsight Apache Hadoop On Premises Migration Best Practices Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-on-premises-migration-best-practices-infrastructure.md
description: Learn infrastructure best practices for migrating on-premises Hadoo
Previously updated : 07/25/2023 Last updated : 07/12/2024 # Migrate on-premises Apache Hadoop clusters to Azure HDInsight - infrastructure best practices
hdinsight Apache Hadoop Use Hive Ambari View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-ambari-view.md
description: Learn how to use the Hive View from your web browser to submit Hive
Previously updated : 07/12/2023 Last updated : 07/12/2024 # Use Apache Ambari Hive View with Apache Hadoop in HDInsight
A Hadoop cluster on HDInsight. See [Get Started with HDInsight on Linux](./apach
> [!TIP] > Download or save results from the **Actions** drop-down dialog box under the **Results** tab.
-### Visual explain
+### Visual explains
-To display a visualization of the query plan, select the **Visual Explain** tab below the worksheet.
+To display a visualization of the query plan, select the **Visual Explains** tab below the worksheet.
-The **Visual Explain** view of the query can be helpful in understanding the flow of complex queries.
+The **Visual Explains** view of the query can be helpful in understanding the flow of complex queries.
### Tez UI
hdinsight Hdinsight Hdfs Troubleshoot Safe Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-hdfs-troubleshoot-safe-mode.md
Title: Local HDFS stuck in safe mode on Azure HDInsight cluster
description: Troubleshoot local Apache HDFS stuck in safe mode on Apache cluster in Azure HDInsight Previously updated : 07/20/2023 Last updated : 07/12/2024 # Scenario: Local HDFS stuck in safe mode on Azure HDInsight cluster
hdinsight Hdinsight Troubleshoot Invalidnetworksecuritygroupsecurityrules Cluster Creation Fails https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-invalidnetworksecuritygroupsecurityrules-cluster-creation-fails.md
Title: InvalidNetworkSecurityGroupSecurityRules error - Azure HDInsight
description: Cluster Creation Fails with the ErrorCode InvalidNetworkSecurityGroupSecurityRules Previously updated : 07/25/2023 Last updated : 07/12/2024 # Scenario: InvalidNetworkSecurityGroupSecurityRules - cluster creation fails in Azure HDInsight
hdinsight Troubleshoot Disk Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/troubleshoot-disk-space.md
Title: Manage disk space in Azure HDInsight
description: Troubleshooting steps and possible resolutions for managing disk space issues when interacting with Azure HDInsight clusters. Previously updated : 07/20/2023 Last updated : 07/12/2024 # Manage disk space in Azure HDInsight
Review the following configurations:
* Ensure that the cluster size is appropriate for the workload. The workload might have changed recently or the cluster might have been resized. [Scale up](../hdinsight-scaling-best-practices.md) the cluster to match a higher workload.
-* `/mnt/resource` might be filled with orphaned files (as if resource manager restart). If necessary, manually clean `/mnt/resource/hadoop/yarn/log` and `/mnt/resource/hadoop/yarn/local`.
+* `/mnt/resource` might be filled with orphaned files (for example, after a Resource Manager restart). If necessary, manually clean `/mnt/resource/hadoop/yarn/log` and `/mnt/resource/hadoop/yarn/local`.
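A hedged sketch of checking and cleaning these paths on an affected node follows; the seven-day retention threshold is a placeholder, and you should confirm that no running jobs still need the files before deleting anything.

```bash
# Minimal sketch: find the largest consumers, then review old files before removal.
sudo du -h --max-depth=1 /mnt/resource/hadoop/yarn | sort -hr | head

sudo find /mnt/resource/hadoop/yarn/log -type f -mtime +7 -print    # review first
# sudo find /mnt/resource/hadoop/yarn/log -type f -mtime +7 -delete  # then delete
```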
## Next steps
hdinsight Hdinsight Go Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-go-sdk-overview.md
ms.devlang: golang Previously updated : 07/10/2023 Last updated : 07/12/2024 # HDInsight SDK for Go (Preview)
client.Resize(context.Background(), "<Resource Group Name>", "<Cluster Name>", h
## Cluster monitoring
-The HDInsight Management SDK can also be used to manage monitoring on your clusters via the Operations Management Suite (OMS).
+The HDInsight Management SDK can also be used to manage monitoring of your clusters via the Operations Management Suite (OMS).
Similarly to how you created `ClusterClient` to use for management operations, you need to create an `ExtensionClient` to use for monitoring operations. Once you've completed the Authentication section above, you can create an `ExtensionClient` like so:
extClient.DisableMonitoring(context.Background(), "<Resource Group Name", "Clust
## Script actions
-HDInsight provides a configuration function called script actions that invokes custom scripts to customize the cluster.
+HDInsight provides a configuration function called script actions that invoke custom scripts to customize the cluster.
> [!NOTE] > More information on how to use script actions can be found here: [Customize Linux-based HDInsight clusters using script actions](./hdinsight-hadoop-customize-cluster-linux.md)
hdinsight Hdinsight Hadoop Customize Cluster Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-linux.md
description: Add custom components to HDInsight clusters by using script actions
Previously updated : 07/31/2023 Last updated : 07/12/2024 # Customize Azure HDInsight clusters by using script actions
In this section, you use the [Add-AzHDInsightScriptAction](/powershell/module/az
The following script shows how to apply a script action when you create a cluster by using PowerShell:
-[!code-powershell[main](../../powershell_scripts/hdinsight/use-script-action/use-script-action.ps1?range=5-90)]
+[!code-powershell[main](../../powershell_scripts/hdinsight/use-script-action/use-script-action.ps1?range=5-90)]
It can take several minutes before the cluster is created.
This section explains how to apply script actions on a running cluster.
To use these PowerShell commands, you need the [AZ Module](/powershell/azure/). The following example shows how to apply a script action to a running cluster:
-[!code-powershell[main](../../powershell_scripts/hdinsight/use-script-action/use-script-action.ps1?range=105-117)]
+[!code-powershell[main](../../powershell_scripts/hdinsight/use-script-action/use-script-action.ps1?range=105-117)]
After the operation finishes, you receive information similar to the following text:
For an example of using the .NET SDK to apply scripts to a cluster, see [Apply a
The following example script demonstrates using the cmdlets to promote and then demote a script.
-[!code-powershell[main](../../powershell_scripts/hdinsight/use-script-action/use-script-action.ps1?range=123-140)]
+[!code-powershell[main](../../powershell_scripts/hdinsight/use-script-action/use-script-action.ps1?range=123-140)]
### Azure CLI
hdinsight Hdinsight Hadoop Stack Trace Error Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-stack-trace-error-messages.md
description: Index of Hadoop stack trace error messages in Azure HDInsight. Find
Previously updated : 07/25/2023 Last updated : 07/12/2024 # Index of Apache Hadoop in HDInsight troubleshooting articles
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
To subscribe, click the "watch" button in the banner and watch out for [HDIn
## Release Information
+### Release date: May 16, 2024
+
+This release note applies to
+++
+HDInsight release will be available to all regions over several days. This release note is applicable for image number **2405081840**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+> [!NOTE]
+> Ubuntu 18.04 is supported under [Extended Security Maintenance(ESM)](https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/canonical-ubuntu-18-04-lts-reaching-end-of-standard-support/ba-p/3822623) by the Azure Linux team for [Azure HDInsight July 2023](/azure/hdinsight/hdinsight-release-notes-archive#release-date-july-25-2023), release onwards.
+
+For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md).
+
+## Fixed issues
+
+* Added API in gateway to get token for Keyvault, as part of the SFI initiative.
+* In the new Log monitor `HDInsightSparkLogs` table, for log type `SparkDriverLog`, some of the fields were missing, for example, `LogLevel & Message`. This release adds the missing fields to the schemas and fixes the formatting for `SparkDriverLog`.
+* Livy logs weren't available in the Log Analytics monitoring `SparkDriverLog` table, due to an issue with the Livy log source path and the log parsing regex in the `SparkLivyLog` configs.
+* Any HDInsight cluster using ADLS Gen2 as the primary storage account can use MSI-based access to any of the Azure resources (for example, SQL, Key Vault) that are used within the application code.
+
+
+
+* [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/).
+ * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+ * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
+* Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/).
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight).
+
+We're listening: You're welcome to add more ideas and other topics here and vote for them - [HDInsight Ideas](https://feedback.azure.com/d365community/search/?q=HDInsight) and follow us for more updates on [AzureHDInsight Community](https://www.linkedin.com/groups/14313521/).
+
+> [!NOTE]
+> We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
++ ### Release date: April 15, 2024 This release note applies to :::image type="icon" source="./media/hdinsight-release-notes/yes-icon.svg" border="false"::: HDInsight 5.1 version.
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md).
-## Fixed issues
+**Fixed issues**
* Bug fixes for Ambari DB, Hive Warehouse Controller (HWC), Spark, HDFS * Bug fixes for Log analytics module for HDInsightSparkLogs
For workload specific versions, see
- Apache Ranger support for Spark SQL in Spark 3.3.0 (HDInsight version 5.1) with Enterprise security package. Learn more about it [here](./spark/ranger-policies-for-spark.md).
-### Fixed issues
+**Fixed issues**
- Security fixes from Ambari and Oozie components
hdinsight Hdinsight Supported Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-supported-node-configuration.md
keywords: vm sizes, cluster sizes, cluster configuration
Previously updated : 07/10/2023 Last updated : 07/12/2024 # What are the default and recommended node configurations for Azure HDInsight?
The following tables list default and recommended virtual machine (VM) sizes for
If you need more than 32 worker nodes in a cluster, select a head node size with at least 8 cores and 14 GB of RAM.
-The only cluster types that have data disks are Kafka and HBase clusters with the Accelerated Writes feature enabled. HDInsight supports P30 and S30 disk sizes in these scenarios. For all other cluster types, HDInsight provides managed disk space with the cluster. Starting 11/07/2019, the managed disk size of each node in the newly created cluster is 128 GB. This can't be changed.
+The only cluster types that have data disks are Kafka and HBase clusters with the Accelerated Writes feature enabled. HDInsight supports P30 and S30 disk sizes in these scenarios. For all other cluster types, HDInsight provides managed disk space with the cluster. From 11/07/2019 onwards, the managed disk size of each node in the newly created cluster is 128 GB. This can't be changed.
The specifications of all minimum recommended VM types used in this document are summarized in the following table.
The specifications of all minimum recommended VM types used in this document are
For more information on the specifications of each VM type, see the following documents:
-* [General purpose virtual machine sizes: Dv2 series 1-5](../virtual-machines/dv2-dsv2-series.md)
-* [Memory optimized virtual machine sizes: Dv2 series 11-15](../virtual-machines/dv2-dsv2-series-memory.md)
-* [General purpose virtual machine sizes: Av2 series 1-8](../virtual-machines/av2-series.md)
+* [General purpose virtual machine sizes: `Dv2` series 1-5](../virtual-machines/dv2-dsv2-series.md)
+* [Memory optimized virtual machine sizes: `Dv2` series 11-15](../virtual-machines/dv2-dsv2-series-memory.md)
+* [General purpose virtual machine sizes: `Av2` series 1-8](../virtual-machines/av2-series.md)
### All supported regions
hdinsight Hdinsight Troubleshoot Failed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-failed-cluster.md
description: Diagnose and troubleshoot a slow or failing job on an Azure HDInsig
Previously updated : 07/20/2023 Last updated : 07/12/2024 # Troubleshoot a slow or failing job on a HDInsight cluster
An HDInsight Gateway times out responses that take longer than two minutes, retu
In this case, review the following logs in the `/var/log/webhcat` directory:
-* **webhcat.log** is the log4j log to which server writes logs
+* **webhcat.log** is the Log4j log to which server writes logs
* **webhcat-console.log** is the stdout of the server when started * **webhcat-console-error.log** is the stderr of the server process
At the YARN level, there are two types of timeouts:
If you open the `/var/log/webhcat/webhcat.log` log file and search for "queued job", you may see multiple entries where the execution time is excessively long (>2000 ms), with entries showing increasing wait times.
- The time for the queued jobs continues to increase because the rate at which new jobs get submitted is higher than the rate at which the old jobs are completed. Once the YARN memory is 100% used, the *joblauncher queue* can no longer borrow capacity from the *default queue*. Therefore, no more new jobs can be accepted into the joblauncher queue. This behavior can cause the waiting time to become longer and longer, causing a timeout error that is usually followed by many others.
+ The time for the queued jobs continues to increase because the rate at which new jobs get submitted is higher than the rate at which the old jobs are completed. Once the YARN memory is 100% used, the `joblauncher queue` can no longer borrow capacity from the *default queue*. Therefore, no more new jobs can be accepted into the job launcher queue. This behavior can cause the waiting time to become longer and longer, causing a timeout error that is usually followed by many others.
- The following image shows the joblauncher queue at 714.4% overused. This is acceptable so long as there is still free capacity in the default queue to borrow from. However, when the cluster is fully utilized and the YARN memory is at 100% capacity, new jobs must wait, which eventually causes timeouts.
+ The following image shows the job launcher queue at 714.4% overused. This is acceptable so long as there is still free capacity in the default queue to borrow from. However, when the cluster is fully utilized and the YARN memory is at 100% capacity, new jobs must wait, which eventually causes timeouts.
:::image type="content" source="./media/hdinsight-troubleshoot-failed-cluster/hdi-job-launcher-queue.png" alt-text="HDInsight Job launcher queue view.":::
At the YARN level, there are two types of timeouts:
2. YARN processing can take a long time, which can cause timeouts.
- * List all jobs: This is a time-consuming call. This call enumerates the applications from the YARN ResourceManager, and for each completed application, gets the status from the YARN JobHistoryServer. With higher numbers of jobs, this call can time out.
+ * List all jobs: This is a time-consuming call. This call enumerates the applications from the YARN Resource Manager, and for each completed application, gets the status from the YARN JobHistoryServer. With higher numbers of jobs, this call can time out.
* List jobs older than seven days: The HDInsight YARN JobHistoryServer is configured to retain completed job information for seven days (`mapreduce.jobhistory.max-age-ms` value). Trying to enumerate purged jobs results in a timeout.
hdinsight Hdinsight Use External Metadata Stores https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-external-metadata-stores.md
description: Use external metadata stores with Azure HDInsight clusters.
Previously updated : 07/12/2023 Last updated : 07/12/2024 # Use external metadata stores in Azure HDInsight
HDInsight also supports custom metastores, which are recommended for production
Create or have an existing Azure SQL Database before setting up a custom Hive metastore for a HDInsight cluster. For more information, see [Quickstart: Create a single database in Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart?tabs=azure-portal).
-While creating the cluster, HDInsight service needs to connect to the external metastore and verify your credentials. Configure Azure SQL Database firewall rules to allow Azure services and resources to access the server. Enable this option in the Azure portal by selecting **Set server firewall**. Then select **No** underneath **Deny public network access**, and **Yes** underneath **Allow Azure services and resources to access this server** for Azure SQL Database. For more information, see [Create and manage IP firewall rules](/azure/azure-sql/database/firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules)
+When you create the cluster, the HDInsight service needs to connect to the external metastore and verify your credentials. Configure Azure SQL Database firewall rules to allow Azure services and resources to access the server. Enable this option in the Azure portal by selecting **Set server firewall**. Then select **No** underneath **Deny public network access**, and **Yes** underneath **Allow Azure services and resources to access this server** for Azure SQL Database. For more information, see [Create and manage IP firewall rules](/azure/azure-sql/database/firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules).
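A hedged Azure CLI equivalent of that portal setting is shown below; the server and resource group names are placeholders, and the special `0.0.0.0` start and end addresses correspond to allowing Azure services and resources to access the server.

```azurecli
# Minimal sketch: allow Azure services (including HDInsight) to reach the SQL server.
az sql server firewall-rule create \
  --resource-group myresourcegroup \
  --server mysqlserver \
  --name AllowAllAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0
```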
-Private endpoints for SQL stores is only supported on the clusters created with `outbound` ResourceProviderConnection. To learn more, see this [documentation](./hdinsight-private-link.md).
+Private endpoints for SQL stores are only supported on the clusters created with `outbound` ResourceProviderConnection. To learn more, see this [documentation](./hdinsight-private-link.md).
:::image type="content" source="./media/hdinsight-use-external-metadata-stores/configure-azure-sql-database-firewall1.png" alt-text="set server firewall button."::: ### Select a custom metastore during cluster creation
hdinsight Gateway Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/gateway-best-practices.md
Title: Gateway deep dive and best practices for Apache Hive in Azure HDInsight
description: Learn how to navigate the best practices for running Hive queries over the Azure HDInsight gateway Previously updated : 07/25/2023 Last updated : 07/12/2024 # Gateway deep dive and best practices for Apache Hive in Azure HDInsight
hdinsight Interactive Query Troubleshoot Error Message Hive View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-error-message-hive-view.md
Title: Error message not shown in Apache Hive View - Azure HDInsight
description: Query fails in Apache Hive View without any details on Azure HDInsight cluster. Previously updated : 07/25/2023 Last updated : 07/12/2024 # Scenario: Query error message not displayed in Apache Hive View in Azure HDInsight
hdinsight Interactive Query Troubleshoot Outofmemory Overhead Exceeded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-outofmemory-overhead-exceeded.md
Title: Joins in Apache Hive leads to OutOfMemory error - Azure HDInsight
description: Dealing with OutOfMemory errors "GC overhead limit exceeded error" Previously updated : 07/20/2023 Last updated : 07/12/2024 # Scenario: Joins in Apache Hive leads to an OutOfMemory error in Azure HDInsight
hdinsight Troubleshoot Gateway Timeout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/troubleshoot-gateway-timeout.md
Title: Exception when running queries from Apache Ambari Hive View in Azure HDIn
description: Troubleshooting steps when running Apache Hive queries through Apache Ambari Hive View in Azure HDInsight. Previously updated : 07/25/2023 Last updated : 07/12/2024 # Exception when running queries from Apache Ambari Hive View in Azure HDInsight
Cannot create property 'errors' on string '<!DOCTYPE html PUBLIC '-//W3C//DTD XH
A Gateway timeout.
-The Gateway timeout value is 2 minutes. Queries from Ambari Hive View are submitted to the `/hive2` endpoint through the gateway. Once the query is successfully compiled and accepted, the HiveServer returns a `queryid`. Clients then keep polling for the status of the query. During this process, if the HiveServer doesn't return an HTTP response within 2 minutes, the HDI Gateway throws a 502.3 Gateway timeout error to the caller. The errors could happen when the query is submitted for processing (more likely) and also in the get status call (less likely). Users could see either of them.
+The Gateway timeout value is 2 minutes. Queries from Ambari Hive View are submitted to the `/hive2` endpoint through the gateway. Once the query is successfully compiled and accepted, the HiveServer returns a `queryid`. Clients then keep polling for the status of the query. During this process, if the HiveServer doesn't return an HTTP response within 2 minutes, the HDI Gateway throws a 502.3 Gateway timeout error to the caller. The errors could happen when the query is submitted for processing (more likely) and also in the get status call (less likely). Users could see either of them.
-The http handler thread is supposed to be quick: prepare the job and return a `queryid`. However, due to several reasons, all the handler threads could be busy resulting in timeouts for new queries and the get status calls.
+The HTTP handler thread is supposed to be quick: prepare the job and return a `queryid`. However, due to several reasons, all the handler threads could be busy, resulting in timeouts for new queries and the get status calls.
### Responsibilities of the HTTP handler thread
Some general recommendations to you to improve the situation:
* If using an external hive metastore, check the DB metrics and make sure that the database isn't overloaded. Consider scaling the metastore database layer.
-* Ensure that parallel ops is turned on (this enables the HTTP handler threads to run in parallel). To verify the value, launch [Apache Ambari](../hdinsight-hadoop-manage-ambari.md) and navigate to **Hive** > **Configs** > **Advanced** > **Custom hive-site**. The value for `hive.server2.parallel.ops.in.session` should be `true`.
+* Ensure that parallel ops are turned on (this enables the HTTP handler threads to run in parallel). To verify the value, launch [Apache Ambari](../hdinsight-hadoop-manage-ambari.md) and navigate to **Hive** > **Configs** > **Advanced** > **Custom hive-site**. The value for `hive.server2.parallel.ops.in.session` should be `true`. A verification sketch follows this list.
* Ensure that the cluster's VM SKU isn't too small for the load. Consider splitting the work among multiple clusters. For more information, see [Choose a cluster type](../hdinsight-capacity-planning.md#choose-a-cluster-type).
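If you'd rather check the parallel-ops setting programmatically than through the Ambari UI, the following is a minimal sketch that reads it over the Ambari REST API. The cluster name and cluster-login credentials are placeholders.

```python
# Minimal sketch: verify hive.server2.parallel.ops.in.session through the Ambari REST API.
# Cluster name and admin (cluster login) credentials are placeholders.
import requests

CLUSTER = "mycluster"                                  # hypothetical HDInsight cluster name
BASE = f"https://{CLUSTER}.azurehdinsight.net/api/v1/clusters/{CLUSTER}"
AUTH = ("admin", "<cluster-login-password>")

# 1. Find the tag of the currently desired hive-site configuration.
desired = requests.get(f"{BASE}?fields=Clusters/desired_configs", auth=AUTH).json()
tag = desired["Clusters"]["desired_configs"]["hive-site"]["tag"]

# 2. Fetch that configuration version and read the property (custom hive-site
#    properties are merged into the hive-site config type).
cfg = requests.get(f"{BASE}/configurations?type=hive-site&tag={tag}", auth=AUTH).json()
props = cfg["items"][0]["properties"]
print("hive.server2.parallel.ops.in.session =",
      props.get("hive.server2.parallel.ops.in.session"))
```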
hdinsight Apache Spark Intellij Tool Failure Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
keywords: debug remotely intellij, remote debugging intellij, ssh, intellij, hdi
Previously updated : 07/31/2023 Last updated : 07/12/2024 # Failure spark job debugging with Azure Toolkit for IntelliJ (preview)
hdinsight Apache Spark Jupyter Notebook Use External Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-use-external-packages.md
description: Step-by-step instructions on how to configure Jupyter Notebooks ava
Previously updated : 07/12/2023 Last updated : 07/12/2024 # Use external packages with Jupyter Notebooks in Apache Spark clusters on HDInsight
hdinsight Apache Spark Load Data Run Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-load-data-run-query.md
description: Tutorial - Learn how to load data and run interactive queries on Sp
Previously updated : 07/12/2023 Last updated : 07/12/2024 #Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to load data into a Spark cluster, so I can run interactive SQL queries against the data.
-# Tutorial: Load data and run queries on an Apache Spark cluster in Azure HDInsight
+# Tutorial: Load data, and run queries on an Apache Spark cluster in Azure HDInsight
In this tutorial, you learn how to create a dataframe from a CSV file, and how to run interactive Spark SQL queries against an [Apache Spark](https://spark.apache.org/) cluster in Azure HDInsight. In Spark, a dataframe is a distributed collection of data organized into named columns. A dataframe is conceptually equivalent to a table in a relational database or a data frame in R/Python.
Applications can create dataframes directly from files or folders on the remote
from pyspark.sql.types import * ```
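As a minimal sketch of the pattern the tutorial describes, the following snippet loads a CSV file into a dataframe and queries it with Spark SQL. The file path and column handling are placeholders rather than the tutorial's exact sample data.

```python
# Sketch: build a dataframe from a CSV file and query it with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()   # already available as `spark` in a PySpark notebook

df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("wasbs:///example/data/sample.csv"))   # hypothetical path on the cluster's default storage

df.createOrReplaceTempView("sampletable")        # register the dataframe as a temporary SQL view
spark.sql("SELECT COUNT(*) AS row_count FROM sampletable").show()
```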
- When running an interactive query in Jupyter, the web browser window or tab caption shows a **(Busy)** status along with the notebook title. You also see a solid circle next to the **PySpark** text in the top-right corner. After the job is completed, it changes to a hollow circle.
+ When you run an interactive query in Jupyter, the web browser window or tab caption shows a **(Busy)** status along with the notebook title. You also see a solid circle next to the **PySpark** text in the top-right corner. After the job is completed, it changes to a hollow circle.
:::image type="content" source="./media/apache-spark-load-data-run-query/hdinsight-spark-interactive-spark-query-status.png " alt-text="Status of interactive Spark SQL query." border="true":::
hdinsight Apache Spark Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-resource-manager.md
description: Learn how to manage resources for Spark clusters on Azure HDInsight
Previously updated : 07/20/2023 Last updated : 07/12/2024 # Manage resources for Apache Spark cluster on Azure HDInsight
The three configuration parameters can be configured at the cluster level (for a
### Change the parameters using Ambari UI
-1. From the Ambari UI navigate to **Spark2** > **Configs** > **Custom spark2-defaults**.
+1. From the Ambari UI navigate to **Spark 2** > **Configs** > **Custom spark2-defaults**.
:::image type="content" source="./media/apache-spark-resource-manager/ambari-ui-spark2-configs.png " alt-text="Set parameters using Ambari custom." border="true":::
Because of Spark dynamic allocation, the only resources that are consumed by thr
1. From the Ambari UI, from the left pane, select **Spark2**.
-2. In the next page, select **Spark2 Thrift Servers**.
+2. In the next page, select **Spark 2 Thrift Servers**.
:::image type="content" source="./media/apache-spark-resource-manager/ambari-ui-spark2-thrift-servers.png " alt-text="Restart thrift server1." border="true":::
-3. You should see the two headnodes on which the Spark2 Thrift Server is running. Select one of the headnodes.
+3. You should see the two headnodes on which the Spark 2 Thrift Server is running. Select one of the headnodes.
:::image type="content" source="./media/apache-spark-resource-manager/restart-thrift-server-2.png " alt-text="Restart thrift server2." border="true":::
-4. The next page lists all the services running on that headnode. From the list, select the drop-down button next to Spark2 Thrift Server, and then select **Stop**.
+4. The next page lists all the services running on that headnode. From the list, select the drop-down button next to Spark 2 Thrift Server, and then select **Stop**.
:::image type="content" source="./media/apache-spark-resource-manager/ambari-ui-spark2-thriftserver-restart.png " alt-text="Restart thrift server3." border="true"::: 5. Repeat these steps on the other headnode as well.
Launch the Yarn UI as shown in the beginning of the article. In Cluster Metrics
1. In the Yarn UI, from the left panel, select **Running**. From the list of running applications, determine the application to be killed and select the **ID**.
- :::image type="content" source="./media/apache-spark-resource-manager/apache-ambari-kill-app1.png " alt-text="Kill App1." border="true":::
+ :::image type="content" source="./media/apache-spark-resource-manager/apache-ambari-kill-app1.png " alt-text="Kill App 1." border="true":::
2. Select **Kill Application** on the top-right corner, then select **OK**.
- :::image type="content" source="./media/apache-spark-resource-manager/apache-ambari-kill-app2.png " alt-text="Kill App2." border="true":::
+ :::image type="content" source="./media/apache-spark-resource-manager/apache-ambari-kill-app2.png " alt-text="Kill App 2." border="true":::
## See also
hdinsight Apache Spark Troubleshoot Application Stops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-application-stops.md
Title: Apache Spark Streaming application stops after 24 days in Azure HDInsight
description: An Apache Spark Streaming application stops after executing for 24 days and there are no errors in the log files. Previously updated : 07/12/2023 Last updated : 07/12/2024 # Scenario: Apache Spark Streaming application stops after executing for 24 days in Azure HDInsight
hdinsight Apache Spark Troubleshoot Sparkexception Kryo Serialization Failed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-sparkexception-kryo-serialization-failed.md
Title: Issues with JDBC/ODBC & Apache Thrift framework - Azure HDInsight
description: Unable to download large data sets using JDBC/ODBC and Apache Thrift software framework in Azure HDInsight Previously updated : 07/20/2023 Last updated : 07/12/2024 # Unable to download large data sets using JDBC/ODBC and Apache Thrift software framework in HDInsight
hdinsight Safely Manage Jar Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/safely-manage-jar-dependency.md
description: This article discusses best practices for managing Java Archive (JA
Previously updated : 07/20/2023 Last updated : 07/12/2024 # Safely manage jar dependencies
hdinsight Spark Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-best-practices.md
Title: Apache Spark guidelines on Azure HDInsight
description: Learn guidelines for using Apache Spark in Azure HDInsight. Previously updated : 07/12/2023 Last updated : 07/12/2024 # Apache Spark guidelines
This article provides various guidelines for using Apache Spark on Azure HDInsig
| Option | Documents | |||
-| VSCode | [Use Spark & Hive Tools for Visual Studio Code](../hdinsight-for-vscode.md) |
+| Visual Studio Code | [Use Spark & Hive Tools for Visual Studio Code](../hdinsight-for-vscode.md) |
| Jupyter Notebooks | [Tutorial: Load data and run queries on an Apache Spark cluster in Azure HDInsight](./apache-spark-load-data-run-query.md) | | IntelliJ | [Tutorial: Use Azure Toolkit for IntelliJ to create Apache Spark applications for an HDInsight cluster](./apache-spark-intellij-tool-plugin.md) | | IntelliJ | [Tutorial: Create a Scala Maven application for Apache Spark in HDInsight using IntelliJ](./apache-spark-create-standalone-application.md) |
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
Title: What are device templates in Azure IoT Central
description: Device templates let you specify the behavior of the devices connected to your application. They also define a UI for the device in IoT Central. Previously updated : 06/05/2023 Last updated : 07/12/2024
# What are device templates?
-A device template in Azure IoT Central is a blueprint that defines the characteristics and behaviors of a type of device that connects to your application. For example, the device template defines the telemetry that a device sends so that IoT Central can create visualizations that use the correct units and data types.
+A device template in Azure IoT Central is a blueprint that defines the characteristics and behaviors of a type of device that connects to your application. For example, the device template defines the telemetry that a device sends so that IoT Central can create visualizations that use the correct units and data types. Telemetry that matches the device template definition is referred to as *modeled* data. Telemetry that doesn't match the device template definition is referred to as *unmodeled* data.
-A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md).
+A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about how to create a device template or have one automatically generated, see [Create a device template in your Azure IoT Central application](howto-set-up-template.md). To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md).
A device template includes the following sections: - _A device model_. This part of the device template defines how the device interacts with your application. Every device model has a unique ID. A device developer implements the behaviors defined in the model. - _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model.
- - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example, several phone device models could use the same camera interface.
+ - _Components_. A device model can include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces can be reused in other device models. For example, several phone device models could use the same camera interface.
- _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component. - _Views_. This part of the device template lets the solution developer define visualizations to view data from the device, and forms to manage and control a device. Views don't affect the code that a device developer writes to implement the device model.
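A device developer implements the capabilities defined in the device model. As a hedged sketch (not the article's own sample), the following snippet uses the `azure-iot-device` Python SDK to send a telemetry message whose field matches a telemetry capability in the template, so it arrives as modeled data. The field name and connection string are hypothetical.

```python
# Sketch: send telemetry that matches the device template (modeled data).
import json
from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")
client.connect()

payload = {"temperature": 21.5}        # matches a telemetry capability -> modeled data
# payload = {"temp_reading_v2": 21.5}  # unknown field -> would arrive as unmodeled data

msg = Message(json.dumps(payload))
msg.content_type = "application/json"
msg.content_encoding = "utf-8"
client.send_message(msg)
client.disconnect()
```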
You can also mark a property as writable on an interface. A device can receive a
Devices don't need to be connected to set property values. The updated values are transferred when the device next connects to the application. This behavior applies to both read-only and writable properties.
-Don't use properties to send telemetry from your device. For example, a readonly property such as `temperatureSetting=80` should mean that the device temperature has been set to 80, and the device is trying to get to, or stay at, this temperature.
+Don't use properties to send telemetry from your device. For example, a readonly property such as `temperatureSetting=80` should mean that the device temperature is set to 80, and the device is trying to get to, or stay at, this target temperature.
For writable properties, the device application returns a desired state status code, version, and description to indicate whether it received and applied the property value.
A solution developer creates views that let operators monitor and manage connect
- Tiles to let the operator call commands, including commands that expect a payload. - Tiles to display labels, images, or markdown text.
-## Next steps
+## Next step
Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](../../iot/concepts-message-payloads.md) to learn more about the data a device exchanges with IoT Central.
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
Title: Add device templates in Azure IoT Central with the REST API
description: How to use the IoT Central REST API to add, update, delete, and manage device templates in an application Previously updated : 06/14/2023 Last updated : 07/12/2024
The request body has some required fields:
* `@id`: a unique ID in the form of a simple Uniform Resource Name. * `@type`: declares that the top-level object is a `"ModelDefinition","DeviceModel"`. * `@context`: specifies the DTDL version used for the interface.
-* `contents`: lists the properties, telemetry, and commands that make up your device. The capabilities may be defined in multiple interfaces.
+* `contents`: lists the properties, telemetry, and commands that make up your device. The capabilities can be defined in multiple interfaces.
* `capabilityModel` : Every device template has a capability model. A relationship is established between each module capability model and a device model. A capability model implements one or more module interfaces. > [!TIP]
The response to this request looks like the following example:
] } ```-
-## Next steps
-
-Now that you've learned how to manage device templates with the REST API, a suggested next step is to [How to create device templates from IoT Central GUI](howto-set-up-template.md#create-a-device-template).
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
Title: Transform telemetry on ingress to IoT Central
description: To use complex telemetry from devices, you can use mappings to transform it as it arrives in your IoT Central application. Previously updated : 06/13/2023 Last updated : 07/12/2024
Data mapping lets you transform complex device telemetry into structured data in
* Normalize telemetry from different devices by mapping JSON paths on multiple devices to a common alias. * Export to destinations outside IoT Central.
+> [!TIP]
+> If you want to autogenerate a device template from unmodeled telemetry, see [Autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template).
+ :::image type="content" source="media/howto-map-data/map-data-summary.png" alt-text="Diagram that summarizes the mapping process in IoT Central." border="false"::: The following video walks you through the data mapping process:
The results of these mapping rules look like the following examples:
``` Now you can use the mapped aliases to display telemetry on a chart or dashboard. You can also use the mapped aliases when you export telemetry.-
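Conceptually, a mapping walks a JSON path in the incoming payload and republishes the value under an alias. The following pure-Python illustration mimics that idea with hypothetical paths and aliases; it isn't IoT Central code.

```python
# Conceptual illustration only: flatten a nested device payload into aliased values.
payload = {"device": {"sensors": {"temp": {"value": 21.5}, "hum": {"value": 48}}}}

mappings = {
    "device.sensors.temp.value": "Temperature",   # JSON path -> alias
    "device.sensors.hum.value": "Humidity",
}

def resolve(doc, path):
    """Walk a dotted JSON path through nested dictionaries."""
    for key in path.split("."):
        doc = doc[key]
    return doc

mapped = {alias: resolve(payload, path) for path, alias in mappings.items()}
print(mapped)   # {'Temperature': 21.5, 'Humidity': 48}
```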
-## Next steps
-
-Now that you've learned how to map data for your device, a suggested next step is to learn [How to use data explorer to analyze device data](howto-create-analytics.md).
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
Title: Define a new IoT device type in Azure IoT Central
-description: How to create a device template in your Azure IoT Central application. You define the telemetry, state, properties, and commands for your device type.
+ Title: Create a device template in Azure IoT Central
+description: How to create a device template. You define the telemetry, state, properties, and commands for your template. Device templates can also be autogenerated.
Previously updated : 03/01/2024 Last updated : 07/12/2024
#customer intent: As a solution builder, I want to define the device types that can connect to my application so that I can manage and monitor them effectively.
-# Define a new IoT device type in your Azure IoT Central application
+# Create a device template in your Azure IoT Central application
-A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an Azure IoT Central application.
+A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an Azure IoT Central application. For example, you can create a device template for a sensor that sends telemetry, such as temperature, and properties, such as location. To learn more, see [What are device templates?](concepts-device-templates.md).
-This article describes how to create a device template in IoT Central. For example, you can create a device template for a sensor that sends telemetry, such as temperature and properties, such as location. From this device template, an operator can create and connect real devices.
+This article describes some of the ways to create a device template in IoT Central, such as [autogenerating a device template from a telemetry message](#autogenerate-a-device-template) or defining one in the [IoT Central UI](#create-a-device-template-in-your-azure-iot-central-application).
+
+From a device template, an operator can create and connect real devices.
The following screenshot shows an example of a device template:
The device template has the following sections:
- Raw data - View the raw data sent by your designated preview device. This view is useful when you're debugging or troubleshooting a device template. - Views - Use views to visualize the data from the device and forms to manage and control a device.
-To learn more, see [What are device templates?](concepts-device-templates.md).
- To learn how to manage device templates by using the IoT Central REST API, see [How to use the IoT Central REST API to manage device templates.](../core/howto-manage-device-templates-with-rest-api.md)
-## Create a device template
- You have several options to create device templates: -- Design the device template in the IoT Central GUI.
+- Design the device template in the IoT Central UI.
- Import a device template from the list of featured device templates. Optionally, customize the device template to your requirements in IoT Central. - When the device connects to IoT Central, have it send the model ID of the model it implements. IoT Central uses the model ID to retrieve the model from the model repository and to create a device template. Add any cloud properties and views your IoT Central application needs to the device template. - When the device connects to IoT Central, let IoT Central [autogenerate a device template](#autogenerate-a-device-template) definition from the data the device sends.
You have several options to create device templates:
> [!NOTE] > In each case, the device code must implement the capabilities defined in the model. The device code implementation isn't affected by the cloud properties and views sections of the device template.
-This section shows you how to import a device template from the list of featured device templates and how to customize it using the IoT Central GUI. This example uses the **Onset Hobo MX-100 Temp Sensor** device template from the list of featured device templates:
+## Import a device template
+
+This section shows you how to import a device template from the list of featured device templates and how to customize it using the IoT Central UI. This example uses the **Onset Hobo MX-100 Temp Sensor** device template from the list of featured device templates:
1. To add a new device template, select **+ New** on the **Device templates** page. 1. On the **Select type** page, scroll down until you find the **Onset Hobo MX-100 Temp Sensor** tile in the **Featured device templates** section.
The following steps show how to use this feature:
:::image type="content" source="media/howto-set-up-template/infer-model-3.png" alt-text="Screenshot that shows how to rename the autogenerated device template." lightbox="media/howto-set-up-template/infer-model-3.png":::
-## Manage a device template
+## Manage device templates in the UI
-You can rename or delete a template from the template's editor page.
+You can create, edit, rename, or delete a template from the template's editor page.
After you define the template, you can publish it. Until the template is published, you can't connect a device to it, and it doesn't appear on the **Devices** page.
Add forms to a device template to enable operators to manage a device by viewing
Before you can connect a device that implements your device model, you must publish your device template.
-To publish a device template, go to you your device template, and select **Publish**.
+To publish a device template, go to your device template, and select **Publish**.
After you publish a device template, an operator can go to the **Devices** page, and add either real or simulated devices that use your device template. You can continue to modify and save your device template as you're making changes. When you want to push these changes out to the operator to view under the **Devices** page, you must select **Publish** each time.
load-balancer Backend Pool Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/backend-pool-management.md
az vm create \
### Limitations * IP based backends can only be used for Standard Load Balancers * The backend resources must be in the same virtual network as the load balancer for IP based LBs
+ * IP-based load balancer backend instances must still be virtual machines or virtual machine scale sets. Attaching other PaaS services to the backend pool of an IP-based load balancer is not supported.
* A load balancer with IP based Backend Pool can't function as a Private Link service * [Private endpoint resources](../private-link/private-endpoint-overview.md) can't be placed in an IP based backend pool
- * IP-based load balancers don't support ACI containers
+ * IP-based load balancers don't support ACI containers
* Load balancers or services such as Application Gateway can't be placed in the backend pool of the load balancer * Inbound NAT Rules can't be specified by IP address * You can configure IP based and NIC based backend pools for the same load balancer. You can't create a single backend pool that mixes backend addresses targeted by NIC and IP addresses within the same pool.
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
ms.suite: integration + Last updated 06/14/2024
logic-apps Quickstart Create Example Consumption Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-example-consumption-workflow.md
ms.suite: integration
+ Last updated 06/13/2024 #Customer intent: As a developer, I want to create my first example Consumption logic app workflow that runs in multitenant Azure Logic Apps using the Azure portal.
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Phi-3-medium-4k-instruct, Phi-3-medium-128k-instruct | [Microsoft Managed Count
[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
-For language models deployed to MaaS, Azure Machine Learning implements a default configuration of [Azure AI Content Safety](../ai-services/content-safety/overview.md) text moderation filters that detect harmful content such as hate, self-harm, sexual, and violent content. To learn more about content filtering (preview), see [harm categories in Azure AI Content Safety](../ai-services/content-safety/concepts/harm-categories.md).
-
-Content filtering (preview) occurs synchronously as the service processes prompts to generate content, and you might be billed separately as per [AACS pricing](https://azure.microsoft.com/pricing/details/cognitive-services/content-safety/) for such use. You can disable content filtering (preview) for individual serverless endpoints either at the time when you first deploy a language model or in the deployment details page by selecting the content filtering toggle. If you use a model in MaaS via an API other than the [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md), content filtering isn't enabled unless you implement it separately by using [Azure AI Content Safety](../ai-services/content-safety/quickstart-text.md). If you use a model in MaaS without content filtering, you run a higher risk of exposing users to harmful content.
### Network isolation for models deployed via Serverless APIs
managed-instance-apache-cassandra Best Practice Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md
Our default settings are already suitable for low latency workloads. To ensure b
Like every database system, Cassandra works best if the CPU utilization is around 50% and never gets above 80%. You can view CPU metrics in the Metrics tab within Monitoring from the portal:
- :::image type="content" source="./media/best-practice-performance/metrics.png" alt-text="Screenshot of CPU metrics." lightbox="./media/best-practice-performance/metrics.png" border="true":::
+ :::image type="content" source="./media/best-practice-performance/metrics-cpu.png" alt-text="Screenshot of CPU metrics by idle usage." lightbox="./media/best-practice-performance/metrics-cpu.png" border="true":::
+
+ > [!TIP]
+ > For a realistic CPU view, add a filter and split the property by `Usage kind=usage_idle`. If this value is lower than 20%, you can apply splitting to obtain usage by all usage kinds.
+ :::image type="content" source="./media/best-practice-performance/metrics-cpu-by-usage.png" alt-text="Screenshot of CPU metrics by usage kind." lightbox="./media/best-practice-performance/metrics-cpu-by-usage.png" border="true":::
If the CPU is permanently above 80% for most nodes, the database becomes overloaded, manifesting in multiple client timeouts. In this scenario, we recommend taking the following actions:
operator-nexus Howto Create Access Control List For Network To Network Interconnects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-create-access-control-list-for-network-to-network-interconnects.md
The table below provides guidance on the usage of parameters when creating ACLs:
| matchConditions | Conditions required to be matched | | | ttlValues | TTL [Time To Live] | 0-255 | | dscpMarking | DSCP Markings that need to be matched | 0-63 |
+| fragments | Specify the IP fragment packets | Range: 1-8191<br> Example: [1, 5, 1250-1300, 8000-8191] |
| portCondition | Port condition that needs to be matched | | | portType | Port type that needs to be matched | Example: SourcePort |
+| ports | Port number that needs to be matched | Range: 0-65535<br> Example: [1, 10, 500, 1025-1050, 64000-65535] |
| protocolTypes | Protocols that need to be matched | [tcp, udp, range[1-2, 1, 2]] | | vlanMatchCondition | VLAN match condition that needs to be matched | | | layer4Protocol | Layer 4 Protocol | should be either TCP or UDP |
The table below provides guidance on the usage of parameters when creating ACLs:
> - IPGroupNames and IpPrefixValues cannot be combined.<br> > - Egress ACLs do not support certain options like IP options, IP length, fragment, ether-type, DSCP marking, and TTL values.<br> > - Ingress ACLs do not support the following options: etherType.<br>
+> - Ports inputs can be `port-number` or `range-of-ports`.<br>
+> - Fragments inputs can be a single fragment value or a range of fragment values.<br>
### Example payload for ACL creation
postgresql Concepts Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-geo-disaster-recovery.md
Both geo-replication with read replicas and geo-backup are solutions for geo-dis
| <b> Can be in non-paired region | Yes | No | | <b> Supports read scale | Yes | No | | <b> Can be configured after the creation of the server | Yes | No |
-| <b> Restore to specific point in time | No | Yes |
+| <b> Restore to specific point in time | No | No |
| <b> Capacity guaranteed | Yes | No |
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
Title: Major version upgrades in Azure Database for PostgreSQL - Flexible Server
description: Learn how to use Azure Database for PostgreSQL - Flexible Server to do in-place major version upgrades of PostgreSQL on a server. Previously updated : 7/8/2024 Last updated : 7/12/2024 -
- - references_regions
# Major version upgrades in Azure Database for PostgreSQL - Flexible Server
Here are some of the important considerations with in-place major version upgrad
- After an in-place major version upgrade is successful, there are no automated ways to revert to the earlier version. However, you can perform a point-in-time recovery (PITR) to a time before the upgrade to restore the previous version of the database instance.
+- Azure Database for PostgreSQL Flexible Server takes a snapshot of your database during an upgrade. The snapshot is taken before the upgrade starts. If the upgrade fails, the system will automatically restore your database to its state from the snapshot.
+ ## Post upgrade/migrate After the major version upgrade is complete, we recommend running the `ANALYZE` command in each database to refresh the `pg_statistic` table. Otherwise, you may run into performance issues.
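As a minimal sketch (not part of the official guidance), you could run `ANALYZE` across your databases from Python with `psycopg2`; the server name, credentials, and database list are placeholders.

```python
# Sketch: run ANALYZE in each database after the upgrade.
import psycopg2

HOST = "<server-name>.postgres.database.azure.com"
USER = "<admin-user>"
PASSWORD = "<password>"

for dbname in ["app_db", "reporting_db"]:        # hypothetical databases on the server
    conn = psycopg2.connect(host=HOST, dbname=dbname, user=USER,
                            password=PASSWORD, sslmode="require")
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("ANALYZE;")                  # refreshes pg_statistic for this database
    conn.close()
```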
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Previously updated : 6/8/2024 Last updated : 7/12/2024 #customer intent: As a reader, I want the title and description to meet the required length and include the relevant information about the release notes for Azure DB for PostgreSQL - Flexible Server.
This page provides latest news and updates regarding feature additions, engine v
* General availability of [Major Version Upgrade Support for PostgreSQL 16](concepts-major-version-upgrade.md) for Azure Database for PostgreSQL flexible server. * General availability of [Pgvector 0.7.0](concepts-extensions.md) extension. * General availability support for [Storage-Autogrow with read replicas](concepts-read-replicas.md)-
+* [SCRAM authentication](how-to-connect-scram.md) is now set as the default for new PostgreSQL 14+ server deployments.
## Release: June 2024 * Support for new [minor versions](concepts-supported-versions.md) 16.3, 15.7, 14.12, 13.15, and 12.19 <sup>$</sup>
This page provides latest news and updates regarding feature additions, engine v
* General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL flexible server in all Azure public regions ## Release: December 2022- * Support for [extensions](concepts-extensions.md) pg_hint_plan with new servers<sup>$</sup> * General availability of [Customer Managed Key feature](./concepts-data-encryption.md) with Azure Database for PostgreSQL flexible server in Canada East, Canada Central, Southeast Asia, Switzerland North, Switzerland West, Brazil South and East Asia Azure regions ## Release: November 2022- * Public preview of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics) for Azure Database for PostgreSQL flexible server * Support for [minor versions](./concepts-supported-versions.md) 14.5, 13.8, 12.12, 11.17. <sup>$</sup> * General availability of Azure Database for PostgreSQL flexible server in China North 3 & China East 3 Regions. - ## Release: October 2022- * Support for [Read Replica](./concepts-read-replicas.md) feature in public preview. * Support for [Azure Active Directory](concepts-azure-ad-authentication.md) authentication in public preview. * Support for [Customer managed keys](concepts-data-encryption.md) in public preview.
This page provides latest news and updates regarding feature additions, engine v
* Postgres 14 is now the default PostgreSQL version. ## Release: September 2022- * Support for [Fast Restore](./concepts-backup-restore.md) feature. * General availability of [Geo-Redundant Backups](./concepts-backup-restore.md). See the [regions](overview.md#azure-regions) where Geo-redundant backup is currently available. ## Release: August 2022- * Support for [PostgreSQL minor version](./concepts-supported-versions.md) 14.4. <sup>$</sup> * Support for [new regions](overview.md#azure-regions) Qatar Central, Switzerland West, France South. <sup>**$**</sup> New PostgreSQL 14 servers are provisioned with version 14.4. Your existing PostgreSQL 14.3 servers will be upgraded to 14.4 in your server's future maintenance window. ## Release: July 2022- * Support for [Geo-redundant backup](./concepts-backup-restore.md#geo-redundant-backup-and-restore) in [more regions](./overview.md#azure-regions) - Australia East, Australia Southeast, Canada Central, Canada East, UK South, UK West, East US, West US, East Asia, Southeast Asia, North Central US, South Central US, and France Central. ## Release: June 2022- * Support for [**PostgreSQL version 14**](./concepts-supported-versions.md). * Support for [minor versions](./concepts-supported-versions.md) 14.3, 13.7, 12.11, 11.16. <sup>$</sup> * Support for [Same-zone high availability](concepts-high-availability.md) deployment option.
reliability Cross Region Replication Azure No Pair https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure-no-pair.md
+
+ Title: Cross-region replication for non-paired regions
+description: Learn about cross-region replication for non-paired regions
++++ Last updated : 06/14/2024++++
+# Cross-region replication solutions for non-paired regions
+
+Some Azure services support cross-region replication to ensure business continuity and protect against data loss. These services make use of another secondary region that uses *cross-region replication*. Both the primary and secondary regions together form a [region pair](./cross-region-replication-azure.md#azure-paired-regions).
+
+However, there are some [regions that are non-paired](./cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair) and so require alternative methods to achieve geo-replication.
+
+This document lists some of the services and possible solutions that support geo-replication methods without requiring paired regions.
++
+## Azure App Service
+For App Service, custom backups are stored on a selected storage account. As a result, there's a dependency for cross-region restore on GRS and paired regions. For the automatic backup type, you can't back up or restore across regions. As a workaround, you can implement a custom file copy mechanism for the saved data set to manually copy across non-paired regions and different storage accounts.
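One possible shape for such a custom copy mechanism is sketched below using the `azure-storage-blob` SDK: it copies every blob in a backup container to an account in another region. The account URLs, container name, and the read-only SAS used for the server-side copy are assumptions, and the target container is assumed to exist.

```python
# Sketch: copy backup blobs from one storage account to another in a non-paired region.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

credential = DefaultAzureCredential()
source = BlobServiceClient("https://<source-account>.blob.core.windows.net", credential=credential)
target = BlobServiceClient("https://<target-account>.blob.core.windows.net", credential=credential)

source_container = source.get_container_client("appservice-backups")
target_container = target.get_container_client("appservice-backups")

SOURCE_SAS = "<read-only SAS token for the source container>"  # needed for server-side copy

for blob in source_container.list_blobs():
    src_url = f"{source_container.url}/{blob.name}?{SOURCE_SAS}"
    target_container.get_blob_client(blob.name).start_copy_from_url(src_url)
```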
+
+## Azure Backup
+
+To achieve geo-replication in non-paired regions:
+
+- Use [Azure Site Recovery](/azure/site-recovery/azure-to-azure-enable-global-disaster-recovery). Azure Site Recovery is the Disaster Recovery service from Azure that provides business continuity and disaster recovery by replicating workloads from the primary location to the secondary location. The secondary location can be a non-paired region if it is supported by Azure Site Recovery. You can retain data for up to 15 days with Azure Site Recovery.
+- Use [Zone-redundant Storage](../backup/backup-overview.md#why-use-azure-backup) to replicate your data in availability zones, guaranteeing data residency and resiliency in the same region.
+++
+## Azure Database for MySQL
++
+Choose any [Azure Database for MySQL available Azure regions](/azure/mysql/flexible-server/overview#azure-region) to spin up your [read replicas](/azure/mysql/flexible-server/concepts-read-replicas#cross-region-replication).
++
+## Azure Database for PostgreSQL
+
+For geo-replication in non-paired regions with Azure Database for PostgreSQL, you can use:
+
+**Managed service with geo-replication**: Azure PostgreSQL Managed service supports active [geo-replication](/azure/postgresql/flexible-server/concepts-read-replicas) to create a continuously readable secondary replica of your primary server. The readable secondary may be in the same Azure region as the primary or, more commonly, in a different region. This kind of readable secondary replica is also known as *geo-replica*.
+
+You can also use either of the two customer-managed data migration methods listed below to replicate the data to a non-paired region. A minimal dump-and-restore sketch follows the list.
+
+- [Copy](/azure/postgresql/migrate/how-to-migrate-using-dump-and-restore?tabs=psql).
+
+- [Logical Replication & Logical Decoding](/azure/postgresql/flexible-server/concepts-logical).
++
+
+## Azure Data Factory
+
+For geo-replication in non-paired regions, Azure Data Factory (ADF) supports Infrastructure-as-code provisioning of ADF pipelines combined with [Source Control for ADF](/azure/data-factory/concepts-data-redundancy#using-source-control-in-azure-data-factory).
++
+## Azure Event Grid
+
+For geo-replication of Event Grid topics in non-paired regions, you can implement [client-side failover](/azure/event-grid/custom-disaster-recovery-client-side).
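A hedged sketch of client-side failover with the `azure-eventgrid` SDK follows: the client tries the primary topic endpoint and publishes to a topic in another region if the first attempt fails. The endpoints, keys, and event schema are placeholders, and the error handling is deliberately simplistic.

```python
# Sketch: publish to a secondary Event Grid topic when the primary is unreachable.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

primary = EventGridPublisherClient(
    "https://<primary-topic>.<region1>-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<primary-key>"),
)
secondary = EventGridPublisherClient(
    "https://<secondary-topic>.<region2>-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<secondary-key>"),
)

event = EventGridEvent(
    data={"orderId": "1234"},
    subject="orders/1234",
    event_type="Contoso.Orders.Created",
    data_version="1.0",
)

try:
    primary.send(event)
except Exception:
    # Primary topic unreachable: publish to the secondary topic instead.
    secondary.send(event)
```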
+
+## Azure IoT Hub
+
+For geo-replication in non-paired regions, use the [concierge pattern](/azure/iot-hub/iot-hub-ha-dr#achieve-cross-region-ha) for routing to a secondary IoT Hub.
++
+## Azure Key Vault
++++
+## Azure Kubernetes Service (AKS)
+
+Azure Backup can provide protection for AKS clusters, including a [cross-region restore (CRR)](/azure/backup/tutorial-restore-aks-backups-across-regions) feature that's currently in preview and only supports Azure Disks. Although the CRR feature relies on GRS paired-region replicas, any dependency on CRR can be avoided if the AKS cluster stores data only in external storage and avoids using "in-cluster" solutions.
++
+## Azure Monitor Logs
+
+Log Analytics workspaces in Azure Monitor Logs don't use paired regions. To ensure business continuity and protect against data loss, enable cross-region workspace replication.
+
+For more information, see [Enhance resilience by replicating your Log Analytics workspace across regions](/azure/azure-monitor/logs/workspace-replication)
++
+## Azure SQL Database
+
+For geo-replication in non-paired regions with Azure SQL Database, you can use:
+
+- [Failover group feature](/azure/azure-sql/database/failover-group-sql-db?view=azuresql&preserve-view=true) that replicates across any combination of Azure regions without any dependency on underlying storage GRS.
+
+- [Active geo-replication feature](/azure/azure-sql/database/active-geo-replication-overview?view=azuresql&preserve-view=true) to create a continuously synchronized readable secondary database for a primary database. The readable secondary database may be in the same Azure region as the primary or, more commonly, in a different region. This kind of readable secondary database is also known as a *geo-secondary* or *geo-replica*.
+
+## Azure SQL Managed Instance
+
+For geo-replication in non-paired regions with Azure SQL Managed Instance, you can use:
+
+- [Failover group feature](/azure/azure-sql/managed-instance/failover-group-sql-mi?view=azuresql&preserve-view=true) that replicates across any combination of Azure regions without any dependency on underlying storage GRS.
++
+## Azure Storage
++
+To achieve geo-replication in non-paired regions:
+
+- **For Azure Object Storage**:
+
+ - For blob storage and Azure Data Lake Storage, you can use tools such as [AZCopy](../storage/common/storage-use-azcopy-blobs-copy.md) or [Azure Data Factory](/azure/data-factory/connector-azure-blob-storage?tabs=data-factory.md).
+
+ - For general-purpose v2 storage accounts and premium block blob accounts, you can use [Azure Storage Object Replication](../storage/blobs/object-replication-overview.md).
+
+ >[!NOTE]
+ >Object replication isn't supported for [Azure Data Lake Storage](../storage/blobs/data-lake-storage-best-practices.md).
+++
+- **For Azure NetApp Files (ANF)**, you can replicate to a set of non-standard pairs besides Azure region pairs. See [Azure NetApp Files (ANF) cross-region replication](/azure/azure-netapp-files/cross-region-replication-introduction).
+
+- **For Azure Files:**
+
+ - To copy your files to another storage account in a different region, use tools such as:
+
+ - [AZCopy](../storage/common/storage-use-azcopy-blobs-copy.md)
+ - [Azure PowerShell](/powershell/module/az.storage/?view=azps-12.0.0&preserve-view=true)
+ - [Azure Data Factory](/azure/data-factory/connector-azure-blob-storage?tabs=data-factory)
+
+ For a sample script, see [Sync between two Azure file shares for Backup and Disaster Recovery](https://github.com/Azure-Samples/azure-files-samples/tree/master/SyncBetweenTwoAzureFileSharesForDR).
+
+ - To sync between your Azure file share (cloud endpoint), an on-premises Windows file server, and a mounted file share running on a virtual machine in another Azure region (your server endpoint for disaster recovery purposes), use [Azure File Sync](/azure/storage/file-sync/file-sync-introduction).
+
+ > [!IMPORTANT]
+ > You must disable cloud tiering to ensure that all data is present locally, and provision enough storage on the Azure Virtual Machine to hold the entire dataset. To ensure changes replicate quickly to the secondary region, files should only be accessed and modified on the server endpoint rather than in Azure.
+++++++
+## Next steps
+
+- [Azure services and regions that support availability zones](availability-zones-service-support.md)
+- [Disaster recovery guidance by service](disaster-recovery-guidance-overview.md)
+- [Reliability guidance](./reliability-guidance-overview.md)
+- [Business continuity management program in Azure](./business-continuity-management-program.md)
+
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
# Azure cross-region replication
-Many Azure regions provide availability zones, which are separated groups of datacenters. Within a region, availability zones are close enough to have low-latency connections to other availability zones, but they're far enough apart to reduce the likelihood that more than one will be affected by local outages or weather. Availability zones have independent power, cooling, and networking infrastructure. They're designed so that if one zone experiences an outage, then regional services, capacity, and high availability are supported by the remaining zones.
-
-While Azure regions are designed to offer protection against local disasters with availability zones, they can also provide protection from regional or large geography disasters with disaster recovery by making use of another secondary region that uses *cross-region replication*. Both the primary and secondary regions together form a [region pair](#azure-paired-regions).
--
-## Cross-region replication
-
-To ensure customers are supported across the world, Azure maintains multiple geographies. These discrete demarcations define a disaster recovery and data residency boundary across one or multiple Azure regions.
+While Azure regions are designed to offer protection against local disasters with [availability zones](./availability-zones-overview.md), they can also provide protection from regional or large geography disasters with disaster recovery by making use of another secondary region that uses *cross-region replication*. Both the primary and secondary regions together form a [region pair](#azure-paired-regions).
Cross-region replication is one of several important pillars in the Azure business continuity and disaster recovery strategy. Cross-region replication builds on the synchronous replication of your applications and data that exists by using availability zones within your primary Azure region for high availability. Cross-region replication asynchronously replicates the same applications and data across other Azure regions for disaster recovery protection. ![Image depicting high availability via asynchronous replication of applications and data across other Azure regions for disaster recovery protection.](./media/cross-region-replication.png)
-Some Azure services take advantage of cross-region replication to ensure business continuity and protect against data loss. Azure provides several [storage solutions](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) that make use of cross-region replication to ensure data availability. For example, [Azure geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS) replicates data to a secondary region automatically. This approach ensures that data is durable even if the primary region isn't recoverable.
+Some Azure services support cross-region replication to ensure business continuity and protect against data loss. Azure provides several [storage solutions](../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region) that make use of cross-region replication to ensure data availability. For example, [Azure geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) (GRS) replicates data to a secondary region automatically. This approach ensures that data is durable even if the primary region isn't recoverable.
-Not all Azure services automatically replicate data or automatically fall back from a failed region to cross-replicate to another enabled region. In these scenarios, recovery and replication must be configured by the customer. These examples are illustrations of the *shared responsibility model*. It's a fundamental pillar in your disaster recovery strategy. For more information about the shared responsibility model and to learn about business continuity and disaster recovery in Azure, see [Business continuity management in Azure](business-continuity-management-program.md).
+## Shared responsibility
+
+Not all Azure services automatically replicate data or automatically fall back from a failed region to cross-replicate to another enabled region. In these scenarios, you are responsible for recovery and replication. These examples are illustrations of the *shared responsibility model*. It's a fundamental pillar in your disaster recovery strategy. For more information about the shared responsibility model and to learn about business continuity and disaster recovery in Azure, see [Business continuity management in Azure](business-continuity-management-program.md).
Shared responsibility becomes the crux of your strategic decision-making when it comes to disaster recovery. Azure doesn't require you to use cross-region replication, and you can use services to build resiliency without cross-replicating to another enabled region. But we strongly recommend that you configure your essential services across regions to benefit from [isolation](../security/fundamentals/isolation-choices.md) and improve [availability](availability-zones-service-support.md).
You aren't limited to using services within your regional pairs. Although an Azu
## Azure paired regions
-Many regions also have a paired region to support cross-region replication based on proximity and other factors.
+Many regions have a paired region to support cross-region replication based on proximity and other factors.
>[!IMPORTANT]
Many regions also have a paired region to support cross-region replication based
## Regions with availability zones and no region pair
-Azure continues to expand globally in regions without a regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). Such regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines to allow for the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RTO/RPO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency ΓÇô Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf).
+Azure continues to expand globally in regions without a regional pair and achieves high availability by leveraging [availability zones](../reliability/availability-zones-overview.md) and [locally redundant or zone-redundant storage (LRS/ZRS)](../storage/common/storage-redundancy.md#zone-redundant-storage). Regions without a pair will not have [geo-redundant storage (GRS)](../storage/common/storage-redundancy.md#geo-redundant-storage). However, [some services offer alternative options for cross-region replication](./cross-region-replication-azure-no-pair.md).
+
+Non-paired regions follow [data residency](https://azure.microsoft.com/global-infrastructure/data-residency/#overview) guidelines to allow for the option to keep data resident within the same region. Customers are responsible for data resiliency based on their Recovery Point Objective or Recovery Time Objective (RPO/RTO) needs and may move, copy, or access their data from any location globally. In the rare event that an entire Azure region is unavailable, customers will need to plan for their Cross Region Disaster Recovery per guidance from [Azure services that support high availability](../reliability/availability-zones-service-support.md#azure-services-with-availability-zone-support) and [Azure Resiliency – Business Continuity and Disaster Recovery](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/resiliency-whitepaper-2022.pdf).
The table below lists Azure regions without a region pair:
The table below lists Azure regions without a region pair:
| Italy | Italy North| | Austria | Austria East (Coming soon) | | Spain | Spain Central|+ ## Next steps
+- [Azure cross-region replication for non-paired regions](./cross-region-replication-azure-no-pair.md)
- [Azure services and regions that support availability zones](availability-zones-service-support.md) - [Disaster recovery guidance by service](disaster-recovery-guidance-overview.md) - [Reliability guidance](./reliability-guidance-overview.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
### Microsoft unified security platform now generally available
-Microsoft Sentinel is now generally available within the Microsoft unified security operations platform in the Microsoft Defender portal. The Microsoft unified security operations platform brings together the full capabilities of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Copilot for Security in Microsoft Defender. For more information, see the following resources:
+Microsoft Sentinel is now generally available within the Microsoft unified security operations platform in the Microsoft Defender portal. The Microsoft unified security operations platform brings together the full capabilities of Microsoft Sentinel, Microsoft Defender XDR, and Microsoft Copilot in Microsoft Defender. For more information, see the following resources:
- Blog post: [General availability of the Microsoft unified security operations platform](https://aka.ms/unified-soc-announcement) - [Microsoft Sentinel in the Microsoft Defender portal](microsoft-sentinel-defender-portal.md) - [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard)-- [Microsoft Copilot for Security in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
+- [Microsoft Copilot in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
## June 2024
storage-mover Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/service-overview.md
The [resource hierarchy article](resource-hierarchy.md) has more information abo
[!INCLUDE [hybrid-service-explanation](includes/hybrid-service-explanation.md)]
+## Using Azure Storage Mover and Azure Data Box
+
+When transitioning on-premises workloads to Azure Storage, reducing downtime and ensuring predictable periods of unavailability are crucial for users and business operations. For the initial bulk migration, you can use [Azure Data Box](https://learn.microsoft.com/azure/databox/) and combine it with Azure Storage Mover for online catch-up.
+
+Using Azure Data Box conserves significant network bandwidth. However, active workloads on your source storage might undergo changes while the Data Box is in transit to an Azure Data Center. The "online catch-up" phase involves updating your cloud storage with these changes before fully cutting over the workload to use the cloud data. This typically requires minimal bandwidth since most data already resides in Azure, and only the delta needs to be transferred. Azure Storage Mover excels in this task.
+
+Azure Storage Mover detects differences between your on-premises storage and cloud storage, transferring updates and new files not captured by the Data Box transfer. Additionally, if only a file's metadata (such as permissions) has changed, Azure Storage Mover uploads just the new metadata instead of the entire file content.
+
+For more information about using Azure Storage Mover with Azure Data Box, see [Storage migration: Combine Azure Storage Mover and Azure Data Box](https://techcommunity.microsoft.com/t5/azure-storage-blog/storage-migration-combine-azure-storage-mover-and-azure-data-box/ba-p/4143354).
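To make the catch-up idea concrete, here is a purely conceptual Python sketch. It is not how Azure Storage Mover is implemented; it uses a hypothetical record of what the Data Box transfer already seeded, and only illustrates the per-file decision the paragraph above describes: upload new or changed content, send metadata only when just the permissions changed, and skip everything else.

```py
# Conceptual sketch only: not the Azure Storage Mover implementation.
import os
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SeededEntry:
    """Hypothetical record of what the Data Box transfer already placed in cloud storage."""
    size: int
    content_hash: str
    permissions: int

def plan_catch_up(local_root: str,
                  seeded: Dict[str, SeededEntry],
                  hash_file: Callable[[str], str]) -> Dict[str, str]:
    """Return an action per relative path: 'upload', 'metadata-only', or 'skip'."""
    plan: Dict[str, str] = {}
    for dirpath, _, filenames in os.walk(local_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, local_root)
            stat = os.stat(path)
            entry = seeded.get(rel)
            if entry is None or entry.size != stat.st_size or entry.content_hash != hash_file(path):
                plan[rel] = "upload"         # new file, or content changed while the device was in transit
            elif entry.permissions != (stat.st_mode & 0o777):
                plan[rel] = "metadata-only"  # only permissions changed; no need to resend content
            else:
                plan[rel] = "skip"           # already current in cloud storage
    return plan
```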
+ ## Next steps The following articles can help you become more familiar with the Storage Mover service.
storage Storage Blob Inventory Report Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-inventory-report-analytics.md
You might have to wait up to 24 hours after enabling inventory reports for your
> [!NOTE] > As part of creating the workspace, you'll create a storage account that has a hierarchical namespace. Azure Synapse stores Spark tables and application logs to this account. Azure Synapse refers to this account as the _primary storage account_. To avoid confusion, this article uses the term _inventory report account_ to refer to the account which contains inventory reports.
-2. In the Synapse workspace, assign the **Contributor** role to your user identity. See [Azure RBAC: Owner role for the workspace](../../synapse-analytics/get-started-add-admin.md#azure-rbac-owner-role-for-the-workspace).
+2. In the Synapse workspace, assign the **Contributor** role to your user identity. See [Azure RBAC: Owner role for the workspace](../../synapse-analytics/get-started-add-admin.md#azure-role-based-access-control-owner-role-for-the-workspace).
3. Give the Synapse workspace permission to access the inventory reports in your storage account by navigating to your inventory report account, and then assigning the **Storage Blob Data Contributor** role to the system managed identity of the workspace. See [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.yml).
You might have to wait up to 24 hours after enabling inventory reports for your
1. Open your Synapse workspace in Synapse Studio. See [Open Synapse Studio](../../synapse-analytics/get-started-create-workspace.md#open-synapse-studio).
-2. In Synapse Studio, Make sure that your identity is assigned the role of **Synapse Administrator**. See [Synapse RBAC: Synapse Administrator role for the workspace](../../synapse-analytics/get-started-add-admin.md#synapse-rbac-synapse-administrator-role-for-the-workspace).
+2. In Synapse Studio, Make sure that your identity is assigned the role of **Synapse Administrator**. See [Synapse RBAC: Synapse Administrator role for the workspace](../../synapse-analytics/get-started-add-admin.md#synapse-role-based-access-control-synapse-administrator-role-for-the-workspace).
3. Create an Apache Spark pool. See [Create a serverless Apache Spark pool](../../synapse-analytics/get-started-analyze-spark.md#create-a-serverless-apache-spark-pool).
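Once the Spark pool exists, a notebook cell along the following lines can read the Parquet inventory files and produce simple aggregates. This is a minimal sketch rather than the article's exact script: the `abfss` path is a placeholder for wherever your inventory rule writes its results, and it assumes the rule includes the `Name`, `BlobType`, and `Content-Length` fields.

```py
# Run in a Synapse notebook attached to the Spark pool; spark and display are
# provided by the notebook environment. Replace the placeholders in the path.
from pyspark.sql import functions as F

inventory_path = (
    "abfss://<inventory-container>@<inventoryreportaccount>.dfs.core.windows.net/"
    "<rule-name>/<run-date>/*.parquet"
)

df = spark.read.parquet(inventory_path)

# Blob count and total size per blob type, largest first.
summary = (
    df.groupBy("BlobType")
      .agg(F.count("Name").alias("BlobCount"),
           F.sum("Content-Length").alias("TotalBytes"))
      .orderBy(F.desc("TotalBytes"))
)
display(summary)
```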
In this section, you'll generate statistical data that you'll visualize in a rep
[Estimate the cost of archiving data](archive-cost-estimation.md)
- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
+ [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
The process of migrating a classic storage account involves four steps:
For more information about the migration process, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md)
+> [!NOTE]
+> Accounts left in the **Prepare** migration state for more than 30 days may have their migrations committed on your behalf. If you need more than 30 days to validate your migration to Azure Resource Manager, you can abort the current migration and restart it when you are ready.
+ You can migrate a classic storage account to the Azure Resource Manager deployment model with the Azure portal or PowerShell. # [Portal](#tab/azure-portal)
storage Classic Account Migration Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md
There are four steps to the migration process, as shown in the following diagram
> [!NOTE] > The operations described in the following sections are all idempotent. If you have a problem other than an unsupported feature or a configuration error, retry the prepare, abort, or commit operation.
+> [!NOTE]
+> Accounts left in the **Prepare** migration state for more than 30 days may have their migrations committed on your behalf. If you need more than 30 days to validate your migration to Azure Resource Manager, you can abort the current migration and restart it when you are ready.
+ ### Validate The Validation step is the first step in the migration process. The goal of this step is to analyze the state of the resources that you want to migrate from the classic deployment model. The Validation step evaluates whether the resources are capable of migration (success or failure). If the classic storage account isn't capable of migration, Azure lists the reasons why.
synapse-analytics How To Access Secured Purview Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/how-to-access-secured-purview-account.md
Last updated 09/02/2021 -+ # Access a secured Microsoft Purview account from Azure Synapse Analytics
synapse-analytics Quickstart Connect Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/catalog-and-governance/quickstart-connect-azure-purview.md
Last updated 09/29/2021 -+
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Last updated 02/15/2022 -+ # Data integration in Azure Synapse Analytics versus Azure Data Factory
synapse-analytics Data Integration Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/data-integration-data-lake.md
Last updated 02/15/2022-+ # Ingest data into Azure Data Lake Storage Gen2
synapse-analytics Data Integration Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/data-integration-sql-pool.md
Last updated 02/15/2022-+ # Ingest data into a dedicated SQL pool
synapse-analytics Linked Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/linked-service.md
Last updated 04/15/2020 -+ # Secure a linked service with Private Links
synapse-analytics Sql Pool Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/sql-pool-stored-procedure-activity.md
-+ Last updated 05/13/2021
synapse-analytics Clone Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/clone-lake-database.md
Title: Clone a lake database using the database designer. description: Learn how to clone an entire lake database or specific tables within a lake database using the database designer. -+
synapse-analytics Get Started Add Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-add-admin.md
Title: 'Quickstart: Get started add an Administrator' description: In this tutorial, you'll learn how to add another administrative user to your workspace.---+++
# Add an administrator to your Synapse workspace
-In this tutorial, you'll learn how to add an administrator to your Synapse workspace. This user will have full control over the workspace.
+In this tutorial, you'll learn how to add an administrator to your Synapse workspace. This user has full control over the workspace.
## Overview
-So far in the get started guide, we've focused on activities *you* do in the workspace. Because you created the workspace in STEP 1, you are an administrator of the Synapse workspace. Now, we will make another user Ryan (`ryan@contoso.com`) an administrator. When we are done, Ryan will be able to do everything you can do in the workspace.
+So far in the get started guide, we've focused on activities *you* do in the workspace. Because you created the workspace in STEP 1, you're an administrator of the Synapse workspace. Now, we'll make another user Ryan (`ryan@contoso.com`) an administrator. When we're done, Ryan will be able to do everything you can do in the workspace.
-## Azure RBAC: Owner role for the workspace
+## Azure role-based access control: Owner role for the workspace
1. Open the Azure portal and open your Synapse workspace. 1. On the left side, select **Access control (IAM)**.
So far in the get started guide, we've focused on activities *you* do in the wor
1. Select **Save**.
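If you prefer to script this assignment instead of using the portal, the following is a minimal sketch rather than part of the tutorial. It assumes a recent azure-mgmt-authorization package (where `RoleAssignmentCreateParameters` accepts `role_definition_id` and `principal_id` directly) plus azure-identity; the subscription ID, resource group, and Ryan's Microsoft Entra object ID are placeholders.

```py
# Scripted alternative (sketch) to the portal steps above.
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Synapse/workspaces/myworkspace"
)
owner_role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/8e3af657-a8ff-443c-a75c-047644f40ac4"  # built-in Owner role ID
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
assignment = client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),       # role assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=owner_role_definition_id,
        principal_id="<ryan-object-id>",          # placeholder: Ryan's Entra object ID
        principal_type="User",
    ),
)
print(assignment.id)
```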
-
-## Synapse RBAC: Synapse Administrator role for the workspace
+## Synapse role-based access control: Synapse Administrator role for the workspace
-Assign to `ryan@contoso.com` to Synapse RBAC **Synapse Administrator** role on the workspace.
+Assign `ryan@contoso.com` to the **Synapse Administrator** role on the workspace.
1. Open your workspace in Synapse Studio. 1. On the left side, select **Manage** to open the Manage hub.
Assign to `ryan@contoso.com` to Synapse RBAC **Synapse Administrator** role on t
1. Add `ryan@contoso.com` to the **Synapse Administrator** role. 1. Then select **Apply**.
-## Azure RBAC: Role assignments on the workspace's primary storage account
+## Azure role-based access control: Role assignments on the workspace's primary storage account
1. Open the workspace's primary storage account in the Azure portal. 1. On the left side, select **Access control (IAM)**.
synapse-analytics Get Started Analyze Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-spark.md
Title: 'Quickstart: Get started analyzing with Spark' description: In this tutorial, you'll learn to analyze data with Apache Spark.---+++
synapse-analytics Get Started Analyze Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-on-demand.md
Title: 'Tutorial: Get started analyze data with a serverless SQL pool' description: In this tutorial, you'll learn how to analyze data with a serverless SQL pool using data located in Spark databases.---+++
synapse-analytics Get Started Analyze Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-sql-pool.md
Title: "Tutorial: Get started analyze data with dedicated SQL pools" description: In this tutorial, use the NYC Taxi sample data to explore SQL pool's analytic capabilities.---+++ Last updated 10/16/2023
synapse-analytics Get Started Analyze Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-analyze-storage.md
Title: 'Tutorial: Get started analyze data in Storage accounts' description: In this tutorial, you'll learn how to analyze data located in a storage account.---+++
df.write.mode("overwrite").parquet("/NYCTaxi/PassengerCountStats_parquetformat")
### Analyze data in a storage account
-You can analyze the data in your workspace default ADLS Gen2 account or you can link an ADLS Gen2 or Blob storage account to your workspace through "**Manage**" > "**Linked Services**" > "**New**" (The steps below will refer to the primary ADLS Gen2 account).
+You can analyze the data in your workspace default Azure Data Lake Storage (ADLS) Gen2 account, or you can link an ADLS Gen2 or Blob storage account to your workspace through "**Manage**" > "**Linked Services**" > "**New**" (the following steps refer to the primary ADLS Gen2 account).
1. In Synapse Studio, go to the **Data** hub, and then select **Linked**. 1. Go to **Azure Data Lake Storage Gen2** > **myworkspace (Primary - contosolake)**. 1. Select **users (Primary)**. You should see the **NYCTaxi** folder. Inside you should see two folders called **PassengerCountStats_csvformat** and **PassengerCountStats_parquetformat**.
-1. Open the **PassengerCountStats_parquetformat** folder. Inside, you'll see a parquet file with a name like `part-00000-2638e00c-0790-496b-a523-578da9a15019-c000.snappy.parquet`.
+1. Open the **PassengerCountStats_parquetformat** folder. Inside, there's a parquet file with a name like `part-00000-2638e00c-0790-496b-a523-578da9a15019-c000.snappy.parquet`.
1. Right-click **.parquet**, then select **New notebook**, then select **Load to DataFrame**. A new notebook is created with a cell like this: ```py
You can analyze the data in your workspace default ADLS Gen2 account or you can
display(df.limit(10)) ```
-1. Attach to the Spark pool named **Spark1**. Run the cell. If you run into an error related to lack of cores, this spark pool may be used by another session. Cancel all the existing sessions and retry.
+1. Attach to the Spark pool named **Spark1**. Run the cell. If you run into an error related to a lack of cores, another session might be using this Spark pool. Cancel all the existing sessions and retry.
1. Select back to the **users** folder. Right-click the **.parquet** file again, and then select **New SQL script** > **SELECT TOP 100 rows**. It creates a SQL script like this: ```sql
synapse-analytics Get Started Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-create-workspace.md
Title: 'Quickstart: Get started - create a Synapse workspace' description: In this tutorial, you'll learn how to create a Synapse workspace, a dedicated SQL pool, and a serverless Apache Spark pool.---+++
In this tutorial, you'll learn how to create a Synapse workspace, a dedicated SQ
## Prerequisites
-To complete this tutorial's steps, you need to have access to a resource group for which you are assigned the **Owner** role. Create the Synapse workspace in this resource group.
+To complete this tutorial's steps, you need to have access to a resource group for which you're assigned the **Owner** role. Create the Synapse workspace in this resource group.
## Create a Synapse workspace in the Azure portal
Fill in the following fields:
Fill in the following fields: 1. **Workspace name** - Pick any globally unique name. In this tutorial, we'll use **myworkspace**.
-1. **Region** - Pick the region where you have placed your client applications/services (for example, Azure VM, Power BI, Azure Analysis Service) and storages that contain data (for example Azure Data Lake storage, Azure Cosmos DB analytical storage).
+1. **Region** - Pick the region where you have placed your client applications/services (for example, Azure Virtual Machine, Power BI, Azure Analysis Service) and storages that contain data (for example Azure Data Lake storage, Azure Cosmos DB analytical storage).
> [!NOTE] > A workspace that is not co-located with the client applications or storage can be the root cause of many performance issues. If your data or the clients are placed in multiple regions, you can create separate workspaces in different regions co-located with your data and clients.
After your Azure Synapse workspace is created, you have two ways to open Synapse
> To sign into your workspace, there are two **Account selection methods**. One is from **Azure subscription**, the other is from **Enter manually**. If you have the Synapse Azure role or higher level Azure roles, you can use both methods to log into the workspace. If you don't have the related Azure roles, and you were granted as the Synapse RBAC role, **Enter manually** is the only way to log into the workspace. To learn more about the Synapse RBAC, refer to [What is Synapse role-based access control (RBAC)](./security/synapse-workspace-synapse-rbac.md). ## Place sample data into the primary storage account
-We are going to use a small 100K row sample dataset of NYC Taxi Cab data for many examples in this getting started guide. We begin by placing it in the primary storage account you created for the workspace.
+We're going to use a small 100,000-row sample dataset of NYC Taxi Cab data for many examples in this getting started guide. We begin by placing it in the primary storage account you created for the workspace.
-* Download the [NYC Taxi - green trip dataset](../open-datasets/dataset-taxi-green.md?tabs=azureml-opendatasets#additional-information) to your computer. Navigate to the [original dataset location](https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page) from the above link, choose a specific year and download the Green taxi trip records in Parquet format.
+* Download the [NYC Taxi - green trip dataset](../open-datasets/dataset-taxi-green.md?tabs=azureml-opendatasets#additional-information) to your computer. Navigate to the [original dataset location](https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page) from the link, choose a specific year and download the Green taxi trip records in Parquet format.
* Rename the downloaded file to *NYCTripSmall.parquet*. * In Synapse Studio, navigate to the **Data** Hub. * Select **Linked**.
-* Under the category **Azure Data Lake Storage Gen2** you'll see an item with a name like **myworkspace ( Primary - contosolake )**.
+* Under the category **Azure Data Lake Storage Gen2**, you'll see an item with a name like **myworkspace ( Primary - contosolake )**.
* Select the container named **users (Primary)**. * Select **Upload** and select the `NYCTripSmall.parquet` file you downloaded.
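If you'd rather script the upload than use Synapse Studio, here is a minimal sketch assuming the azure-storage-file-datalake and azure-identity packages. The account name `contosolake` and container `users` match this guide's example; your signed-in identity needs a data-plane role such as Storage Blob Data Contributor on the account.

```py
# Upload NYCTripSmall.parquet to the users container of the primary ADLS Gen2 account.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://contosolake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_client = service.get_file_system_client("users").get_file_client("NYCTripSmall.parquet")

with open("NYCTripSmall.parquet", "rb") as data:
    file_client.upload_data(data, overwrite=True)  # creates or replaces the file
```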
-Once the parquet file is uploaded it is available through two equivalent URIs:
+Once the parquet file is uploaded, it's available through two equivalent URIs:
* `https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet` * `abfss://users@contosolake.dfs.core.windows.net/NYCTripSmall.parquet`
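As a quick check that the upload worked, a Synapse notebook cell like the following reads the file through the abfss URI. This is a minimal example; `spark` and `display` are provided by the Synapse notebook environment.

```py
# Read the uploaded sample file from the primary ADLS Gen2 account.
df = spark.read.load(
    "abfss://users@contosolake.dfs.core.windows.net/NYCTripSmall.parquet",
    format="parquet",
)
display(df.limit(10))
```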
synapse-analytics Get Started Knowledge Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-knowledge-center.md
Title: 'Tutorial: Get started explore the Synapse Knowledge center' description: In this tutorial, you'll learn how to use the Synapse Knowledge center.---+++
In this tutorial, you'll learn how to use the Synapse Studio **Knowledge center*
There are two ways of finding the **Knowledge center** in Synapse Studio:
- 1. In the Home hub, near the top-right of the page click on **Learn**.
- 2. In the menu bar at the top, click **?** and then **Knowledge center**.
+ 1. In the Home hub, near the top-right of the page, select **Learn**.
+ 2. In the menu bar at the top, select **?** and then **Knowledge center**.
Pick either method and open the **Knowledge center**. ## Exploring the Knowledge center
-Once it is visible, you will see that the **Knowledge center** allows you to do three things:
+Once it's visible, you'll see that the **Knowledge center** allows you to do three things:
* **Use samples immediately**. If you want a quick example of how Synapse works, choose this option. * **Browse gallery**. This option lets you link sample data sets and add sample code in the form of SQL scripts, notebooks, and pipelines. * **Tour Synapse Studio**. This option takes you on a brief tour of the basic parts of Synapse Studio. This is useful if you have never used Synapse Studio before.
There are three items in this section:
* Query data with SQL * Create external table with SQL
-1. In the **Knowledge center**, click **Use samples immediately**.
+1. In the **Knowledge center**, select **Use samples immediately**.
1. Select **Query data with SQL**.
-1. Click **Use sample**.
+1. Select **Use sample**.
1. A new sample SQL script will open. 1. Scroll to the first query (lines 28 to 32) and select the query text.
-1. Click Run. It will run only code you have selected.
+1. Select **Run**. It runs only the code you selected.
## Gallery: A collection of sample datasets and sample code
-1. Go to the **Knowledge center**, click **Browse gallery**.
+1. Go to the **Knowledge center**, select **Browse gallery**.
1. Select the **SQL scripts** tab at the top.
-1. Select **Load the New York Taxicab dataset** Data ingestion sample, click **Continue**.
+1. Select the **Load the New York Taxicab dataset** data ingestion sample, and then select **Continue**.
1. Under **SQL pool**, choose **Select an existing pool** and select **SQLPOOL1**, and select the **SQLPOOL1** database you created earlier.
-1. Click **Open Script**.
+1. Select **Open Script**.
1. A new sample SQL script will open.
-1. Click **Run**
-1. This will create several tables for all of the NYC Taxi data and load them using the T-SQL COPY command. If you had created these tables in the previous quick start steps, select and execute only code to CREATE and COPY for tables that do not exist.
+1. Select **Run**.
+1. This creates several tables for all of the NYC Taxi data and loads them using the T-SQL COPY command. If you created these tables in the previous quickstart steps, select and execute only the CREATE and COPY code for tables that don't exist.
> [!NOTE] > When using the sample gallery for SQL script with a dedicated SQL pool (formerly SQL DW), you will only be able to use an existing dedicated SQL pool (formerly SQL DW).
synapse-analytics Get Started Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-monitor.md
Title: 'Tutorial: Get started with Azure Synapse Analytics - monitor your Synapse workspace' description: In this tutorial, you'll learn how to monitor activities in your Synapse workspace.---+++
synapse-analytics Get Started Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-pipelines.md
Title: 'Tutorial: Get started integrate with pipelines' description: In this tutorial, you'll learn how to integrate pipelines and activities using Synapse Studio.---+++
synapse-analytics Get Started Visualize Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started-visualize-power-bi.md
Title: "'Tutorial: Get started with Azure Synapse Analytics - visualize workspace data with Power BI'" description: In this tutorial, you learn how to use Power BI to visualize data in Azure Synapse Analytics.---+++ Last updated 10/16/2023
synapse-analytics Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/get-started.md
Title: 'Tutorial: Get started with Azure Synapse Analytics' description: In this tutorial, you'll learn the basic steps to set up and use Azure Synapse Analytics.---+++
synapse-analytics Implementation Success Assess Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-assess-environment.md
Title: "Synapse implementation success methodology: Assess environment" description: "Learn how to assess your environment to help evaluate the solution design and make informed technology decisions to implement Azure Synapse Analytics." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Data Integration Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-data-integration-design.md
Title: "Synapse implementation success methodology: Evaluate data integration design" description: "Learn how to evaluate the data integration design and validate that it meets guidelines and requirements." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Dedicated Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-dedicated-sql-pool-design.md
Title: "Synapse implementation success methodology: Evaluate dedicated SQL pool design" description: "Learn how to evaluate your dedicated SQL pool design to identify issues and validate that it meets guidelines and requirements." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Project Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-project-plan.md
Title: "Synapse implementation success methodology: Evaluate project plan" description: "Learn how to evaluate your modern data warehouse project plan before the project starts." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Serverless Sql Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-serverless-sql-pool-design.md
Title: "Synapse implementation success methodology: Evaluate serverless SQL pool design" description: "Learn how to evaluate your serverless SQL pool design to identify issues and validate that it meets guidelines and requirements." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Solution Development Environment Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-solution-development-environment-design.md
Title: "Synapse implementation success methodology: Evaluate solution development environment design" description: "Learn how to set up multiple environments for your modern data warehouse project to support development, testing, and production." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Spark Pool Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-spark-pool-design.md
Title: "Synapse implementation success methodology: Evaluate Spark pool design" description: "Learn how to evaluate your Spark pool design to identify issues and validate that it meets guidelines and requirements." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Team Skill Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-team-skill-sets.md
Title: "Synapse implementation success methodology: Evaluate team skill sets" description: "Learn how to evaluate your team of skilled resources that will implement your Azure Synapse solution." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Evaluate Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-evaluate-workspace-design.md
Title: "Synapse implementation success methodology: Evaluate workspace design" description: "Learn how to evaluate the Synapse workspace design and validate that it meets guidelines and requirements." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-overview.md
Title: Azure Synapse implementation success by design description: "Learn about the Azure Synapse success series of articles that's designed to help you deliver a successful implementation of Azure Synapse Analytics." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Perform Monitoring Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-monitoring-review.md
Title: "Synapse implementation success methodology: Perform monitoring review" description: "Learn how to perform monitoring of your Azure Synapse solution." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Perform Operational Readiness Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-operational-readiness-review.md
Title: "Synapse implementation success methodology: Perform operational readiness review" description: "Learn how to perform an operational readiness review to evaluate your solution for its preparedness to provide optimal services to users." --++ Last updated 05/31/2022
synapse-analytics Implementation Success Perform User Readiness And Onboarding Plan Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/implementation-success-perform-user-readiness-and-onboarding-plan-review.md
Title: "Synapse implementation success methodology: Perform user readiness and onboarding plan review" description: "Learn how to perform user readiness and onboarding of new users to ensure successful adoption of your data warehouse." --++ Last updated 05/31/2022
synapse-analytics Proof Of Concept Playbook Dedicated Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-dedicated-sql-pool.md
Title: "Synapse POC playbook: Data warehousing with dedicated SQL pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for dedicated SQL pool." --++ Last updated 05/23/2022
synapse-analytics Proof Of Concept Playbook Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-overview.md
Title: Azure Synapse proof of concept playbook description: "Introduction to a series of articles that provide a high-level methodology for planning, preparing, and running an effective Azure Synapse Analytics proof of concept project." --++ Last updated 05/23/2022
synapse-analytics Proof Of Concept Playbook Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-serverless-sql-pool.md
Title: "Synapse POC playbook: Data lake exploration with serverless SQL pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for serverless SQL pool." --++ Last updated 05/23/2022
synapse-analytics Proof Of Concept Playbook Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/proof-of-concept-playbook-spark-pool.md
Title: "Synapse POC playbook: Big data analytics with Apache Spark pool in Azure Synapse Analytics" description: "A high-level methodology for preparing and running an effective Azure Synapse Analytics proof of concept (POC) project for Apache Spark pool." --++ Last updated 05/23/2022
synapse-analytics Security White Paper Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-access-control.md
Title: "Azure Synapse Analytics security white paper: Access control" description: Use different approaches or a combination of techniques to control access to data with Azure Synapse Analytics. --++ Last updated 01/14/2022
synapse-analytics Security White Paper Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-authentication.md
Title: "Azure Synapse Analytics security white paper: Authentication" description: Implement authentication mechanisms with Azure Synapse Analytics. --++ Last updated 01/14/2022
synapse-analytics Security White Paper Data Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-data-protection.md
Title: "Azure Synapse Analytics security white paper: Data protection" description: Protect data to comply with federal, local, and company guidelines with Azure Synapse Analytics. --++ Last updated 01/14/2022
synapse-analytics Security White Paper Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-introduction.md
Title: Azure Synapse Analytics security white paper description: Overview of the Azure Synapse Analytics security white paper series of articles. --++ Last updated 01/14/2022
synapse-analytics Security White Paper Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-network-security.md
Title: "Azure Synapse Analytics security white paper: Network security" description: Manage secure network access with Azure Synapse Analytics. --++ Last updated 01/14/2022
synapse-analytics Security White Paper Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-threat-protection.md
Title: "Azure Synapse Analytics security white paper: Threat detection" description: Audit, protect, and monitor Azure Synapse Analytics. --++ Last updated 01/14/2022
synapse-analytics Success By Design Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/success-by-design-introduction.md
Title: Success by design description: Azure Synapse Customer Success Engineering Success by Design repository. --++ Last updated 05/23/2022
synapse-analytics How To Analyze Complex Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/how-to-analyze-complex-schema.md
Last updated 06/15/2020 -+ # Analyze complex data types in Azure Synapse Analytics
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
description: Enrich your data with artificial intelligence (AI) in Azure Synapse
-+ Last updated 05/13/2024
synapse-analytics Quickstart Integrate Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning.md
description: Link your Synapse workspace to an Azure Machine Learning workspace
-+ Last updated 02/29/2024
synapse-analytics Setup Environment Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/setup-environment-cognitive-services.md
Title: "Setup environment for Azure AI services for big data"
description: Set up your SynapseML or MMLSpark pipeline with Azure AI services in Azure Databricks and run a sample. -+
synapse-analytics Synapse Machine Learning Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/synapse-machine-learning-library.md
Last updated 08/31/2022-+ # What is SynapseML?
synapse-analytics Tutorial Automl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-automl.md
description: Tutorial on how to train a machine learning model without code in A
-+ Last updated 03/06/2024
synapse-analytics Tutorial Cognitive Services Anomaly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-anomaly.md
description: Learn how to use Azure AI Anomaly Detector for anomaly detection in
-+ Last updated 07/01/2021
synapse-analytics Tutorial Cognitive Services Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-cognitive-services-sentiment.md
description: Learn how to use Azure AI Language for sentiment analysis in Azure
-+ Last updated 11/20/2020
synapse-analytics Tutorial Configure Cognitive Services Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md
description: Learn how to configure the prerequisites for using Azure AI service
-+ Last updated 11/20/2020
synapse-analytics Tutorial Score Model Predict Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-score-model-predict-spark-pool.md
description: Learn how to use PREDICT functionality in serverless Apache Spark p
-+ Last updated 11/02/2021
synapse-analytics Tutorial Sql Pool Model Scoring Wizard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-sql-pool-model-scoring-wizard.md
description: Tutorial for how to use the machine learning model scoring wizard t
-+ Last updated 09/25/2020
synapse-analytics What Is Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/what-is-machine-learning.md
description: An Overview of machine learning capabilities in Azure Synapse Analy
-+ Last updated 08/31/2022
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
description: Spark Advisor is a system to automatically analyze commands/queries
-+
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-terminology.md
Title: Terminology - Azure Synapse Analytics description: Reference guide walking user through Azure Synapse Analytics-+ Last updated 08/19/2022--++ # Azure Synapse Analytics terminology
synapse-analytics Overview What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-what-is.md
Title: What is Azure Synapse Analytics? description: An Overview of Azure Synapse Analytics-+ Previously updated : 11/02/2021-- Last updated : 07/10/2024++ # What is Azure Synapse Analytics?
**Apache Spark for Azure Synapse** deeply and seamlessly integrates Apache Spark--the most popular open source big data engine used for data preparation, data engineering, ETL, and machine learning.
-* ML models with SparkML algorithms and AzureML integration for Apache Spark 3.1 with built-in support for Linux Foundation Delta Lake.
+* ML models with SparkML algorithms and Azure Machine Learning integration for Apache Spark 3.1 with built-in support for Linux Foundation Delta Lake.
* Simplified resource model that frees you from having to worry about managing clusters. * Fast Spark start-up and aggressive autoscaling. * Built-in support for .NET for Spark allowing you to reuse your C# expertise and existing .NET code within a Spark application.
Azure Synapse contains the same Data Integration engine and experiences as Azure
## Data Explorer (Preview)
-Azure Synapse Data Explorer provides customers with an interactive query experience to unlock insights from log and telemetry data. To complement existing SQL and Apache Spark analytics runtime engines, Data Explorer analytics runtime is optimized for efficient log analytics using powerful indexing technology to automatically index free-text and semi-structured data commonly found in the telemetry data.
+Azure Synapse Data Explorer provides customers with an interactive query experience to unlock insights from system-generated logs. To complement existing SQL and Apache Spark analytics runtime engines, Data Explorer analytics runtime is optimized for efficient log analytics using powerful indexing technology to automatically index free-text and semi-structured data commonly found in the system-generated logs.
Use Data Explorer as a data platform for building near real-time log analytics and IoT analytics solutions to:
-* Consolidate and correlate your logs and events data across on-premises, cloud, third-party data sources.
+* Consolidate and correlate your logs and events data across on-premises, cloud, and third-party data sources.
* Accelerate your AI Ops journey (pattern recognition, anomaly detection, forecasting, and more) * Replace infrastructure-based log search solutions to save cost and increase productivity. * Build IoT Analytics solution for your IoT data.
Use Data Explorer as a data platform for building near real-time log analytics a
* Perform key tasks: ingest, explore, prepare, orchestrate, visualize * Monitor resources, usage, and users across SQL, Spark, and Data Explorer * Use Role-based access control to simplify access to analytics resources
-* Write SQL, Spark or KQL code and integrate with enterprise CI/CD processes
+* Write SQL, Spark, or KQL code and integrate with enterprise CI/CD processes
## Engage with the Synapse community
synapse-analytics Compatibility Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/compatibility-issues.md
Title: Compatibility issues with third-party applications and Azure Synapse Anal
description: Describes known issues that third-party applications may find with Azure Synapse -+ Last updated 06/16/2023
synapse-analytics System Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/partner/system-integration.md
Title: System integration partners
description: List of industry system integrators building customer solutions with Azure Synapse Analytics -+ Last updated 09/21/2023
synapse-analytics Quickstart Apache Spark Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-apache-spark-notebook.md
Title: 'Quickstart: Create a serverless Apache Spark pool using web tools'
description: This quickstart shows how to use the web tools to create a serverless Apache Spark pool in Azure Synapse Analytics and how to run a Spark SQL query. -+
synapse-analytics Quickstart Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-azure-data-explorer.md
Last updated 02/15/2022--++
synapse-analytics Quickstart Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-connect-synapse-link-cosmos-db.md
Last updated 04/21/2020 -+
synapse-analytics Quickstart Create Apache Spark Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-spark-pool-portal.md
Title: "Quickstart: Create a serverless Apache Spark pool using the Azure portal
description: Create a serverless Apache Spark pool using the Azure portal by following the steps in this guide. -+ Last updated 03/11/2024
synapse-analytics Quickstart Create Apache Spark Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-spark-pool-studio.md
Title: "Quickstart: Create a serverless Apache Spark pool using Synapse Studio"
description: Create a serverless Apache Spark pool using Synapse Studio by following the steps in this guide. -+ Last updated 03/11/2024
synapse-analytics Quickstart Create Sql Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-sql-pool-portal.md
Last updated 04/15/2020 -+
synapse-analytics Quickstart Create Sql Pool Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-sql-pool-studio.md
Title: "Quickstart: Create a dedicated SQL pool using Synapse Studio"
description: Create a dedicated SQL pool using Synapse Studio by following the steps in this guide. -+ Last updated 02/21/2023
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-cli.md
Last updated 02/04/2022 -+
synapse-analytics Quickstart Create Workspace Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-powershell.md
Last updated 02/04/2022 -+
synapse-analytics Quickstart Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace.md
Title: 'Quickstart: create a Synapse workspace' description: Create an Synapse workspace by following the steps in this guide.-+ Last updated 03/23/2022--++
synapse-analytics Quickstart Load Studio Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-load-studio-sql-pool.md
Last updated 12/11/2020 -+
synapse-analytics Quickstart Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-power-bi.md
Last updated 10/27/2020 -+
synapse-analytics Quickstart Read From Gen2 To Pandas Dataframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-read-from-gen2-to-pandas-dataframe.md
description: Read data from an Azure Data Lake Storage Gen2 account into a Panda
-+ Last updated 07/11/2022
synapse-analytics Quickstart Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-serverless-sql-pool.md
Last updated 04/15/2020 -+
synapse-analytics Connect To A Secure Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/connect-to-a-secure-storage-account.md
Last updated 02/10/2021 -+ # Connect to a secure Azure storage account from your Synapse workspace
synapse-analytics How To Connect To Workspace From Restricted Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network.md
Last updated 06/05/2023-+ # Connect to workspace resources from a restricted network
synapse-analytics How To Connect To Workspace With Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-connect-to-workspace-with-private-links.md
Last updated 01/20/2022 -+ # Connect to your Azure Synapse workspace using private links
synapse-analytics How To Create A Workspace With Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-create-a-workspace-with-data-exfiltration-protection.md
Last updated 09/19/2022 -+ # Create a workspace with data exfiltration protection enabled
synapse-analytics How To Create Managed Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-create-managed-private-endpoints.md
Last updated 04/15/2020 -+ # Create a Managed private endpoint to your data source
synapse-analytics How To Grant Workspace Managed Identity Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-grant-workspace-managed-identity-permissions.md
Title: Grant permissions to managed identity in Synapse workspace
description: An article that explains how to configure permissions for managed identity in Azure Synapse workspace. -+ Last updated 09/01/2022
synapse-analytics How To Manage Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md
Last updated 3/7/2022 -+ # How to manage Synapse RBAC role assignments in Synapse Studio
synapse-analytics How To Review Synapse Rbac Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-review-synapse-rbac-role-assignments.md
Last updated 3/07/2022 -+ # How to review Synapse RBAC role assignments
synapse-analytics How To Set Up Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md
Last updated 5/23/2022 -+
synapse-analytics Synapse Private Link Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-private-link-hubs.md
Last updated 12/01/2020 -+ # Connect to Azure Synapse Studio using Azure Private Link Hubs
synapse-analytics Synapse Workspace Managed Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-managed-private-endpoints.md
Last updated 01/12/2020 -+ # Synapse Managed private endpoints
synapse-analytics Synapse Workspace Synapse Rbac Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md
Title: Azure Synapse RBAC roles
description: This article describes the built-in Synapse RBAC (role-based access control) roles, the permissions they grant, and the scopes at which they can be used. -+ Last updated 06/16/2023
synapse-analytics Synapse Workspace Synapse Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac.md
Last updated 3/07/2022 -+ # What is Synapse role-based access control (RBAC)?
synapse-analytics Synapse Workspace Understand What Role You Need https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md
Last updated 04/22/2022 -+ # Understand the roles required to perform common tasks in Azure Synapse
synapse-analytics Workspace Data Exfiltration Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspace-data-exfiltration-protection.md
Last updated 10/17/2022 -+ # Data exfiltration protection for Azure Synapse Analytics workspaces This article will explain data exfiltration protection in Azure Synapse Analytics
synapse-analytics Workspaces Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/workspaces-encryption.md
Last updated 03/24/2022 -+
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.1 (unsupported)
description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1. -+
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.2
description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.2. -+
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.3
description: New runtime is GA and ready for production workloads. Spark 3.3.1, Python 3.10, Delta Lake 2.2. -+
Last updated 11/17/2022
-# Azure Synapse Runtime for Apache Spark 3.3 (GA)
+# Azure Synapse Runtime for Apache Spark 3.3 (EOSA)
+ Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.3.
-> [!TIP]
-> We strongly recommend proactively upgrading workloads to a more recent GA version of the runtime which currently is [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md).
+> [!Warning]
+> End of support for Azure Synapse Runtime for Apache Spark 3.3 was announced on July 12, 2024.
+>
+> We strongly recommend you upgrade your Apache Spark 3.3 based workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md).
+> For up-to-date information, a detailed list of changes, and specific release notes for Spark runtimes, check and subscribe to [Spark Runtimes Releases and Updates](https://github.com/microsoft/synapse-spark-runtime).
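If you manage Spark pools programmatically, the following is a minimal sketch of one way to move a pool to the 3.4 runtime, not an official upgrade procedure. It assumes the 2021-06-01 Microsoft.Synapse ARM API version and the azure-identity and requests packages; the subscription, resource group, workspace, and pool names are placeholders, and you should validate your workloads on 3.4 before changing production pools.

```py
# Update an existing Synapse Spark pool's runtime version with a direct ARM REST call.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
workspace = "<workspace-name>"          # placeholder
pool_name = "<spark-pool-name>"         # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Synapse"
    f"/workspaces/{workspace}/bigDataPools/{pool_name}"
)
params = {"api-version": "2021-06-01"}
headers = {"Authorization": f"Bearer {token}"}

# Read the current pool definition, bump the runtime, and PUT it back.
pool = requests.get(url, params=params, headers=headers).json()
pool["properties"]["sparkVersion"] = "3.4"
body = {"location": pool["location"], "properties": pool["properties"]}
resp = requests.put(url, params=params, headers=headers, json=body)
resp.raise_for_status()
print(resp.status_code)  # 200 or 202 while the update is applied
```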
## Component versions | Component | Version |
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| Python | 3.10 | | R (Preview) | 4.2.2 |
->[!TIP]
-> For up-to-date information, a detailed list of changes, and specific release notes for Spark runtimes, check and subscribe [Spark Runtimes Releases and Updates](https://github.com/microsoft/synapse-spark-runtime).
[Synapse-Python310-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python310-CPU.yml) contains the list of libraries shipped in the default Python 3.10 environment in Azure Synapse Spark.
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
> * We will continue to support .NET for Apache Spark in all previous versions of the Azure Synapse Runtime according to [their lifecycle stages](runtime-for-apache-spark-lifecycle-and-supportability.md). However, we do not have plans to support .NET for Apache Spark in Azure Synapse Runtime for Apache Spark 3.3 and future versions. We recommend that users with existing workloads written in C# or F# migrate to Python or Scala. Users are advised to take note of this information and plan accordingly. ## Libraries
-The following sections present the libraries included in Azure Synapse Runtime for Apache Spark 3.3.
-
-### Scala and Java default libraries
-
-| GroupID | ArtifactID | Version |
-|-||--|
-| com.aliyun | aliyun-java-sdk-core | 4.5.10 |
-| com.aliyun | aliyun-java-sdk-kms | 2.11.0 |
-| com.aliyun | aliyun-java-sdk-ram | 3.1.0 |
-| com.aliyun | aliyun-sdk-oss | 3.13.0 |
-| com.amazonaws | aws-java-sdk-bundle | 1.12.1026 |
-| com.chuusai | shapeless_2.12 | 2.3.7 |
-| com.clearspring.analytics | stream | 2.9.6 |
-| com.esotericsoftware | kryo-shaded | 4.0.2 |
-| com.esotericsoftware | minlog | 1.3.0 |
-| com.fasterxml.jackson | jackson-annotations | 2.13.4 |
-| com.fasterxml.jackson | jackson-core | 2.13.4 |
-| com.fasterxml.jackson | jackson-core-asl | 1.9.13 |
-| com.fasterxml.jackson | jackson-databind | 2.13.4.1 |
-| com.fasterxml.jackson | jackson-dataformat-cbor | 2.13.4 |
-| com.fasterxml.jackson | jackson-mapper-asl | 1.9.13 |
-| com.fasterxml.jackson | jackson-module-scala_2.12 | 2.13.4 |
-| com.github.joshelser | dropwizard-metrics-hadoop-metrics2-reporter | 0.1.2 |
-| com.github.luben | zstd-jni | 1.5.2-1 |
-| com.github.luben | zstd-jni | 1.5.2-1 |
-| com.github.vowpalwabbit | vw-jni | 9.3.0 |
-| com.github.wendykierp | JTransforms | 3.1 |
-| com.google.code.findbugs | jsr305 | 3.0.0 |
-| com.google.code.gson | gson | 2.8.6 |
-| com.google.crypto.tink | tink | 1.6.1 |
-| com.google.flatbuffers | flatbuffers-java | 1.12.0 |
-| com.google.guava | guava | 14.0.1 |
-| com.google.protobuf | protobuf-java | 2.5.0 |
-| com.googlecode.json-simple | json-simple | 1.1.1 |
-| com.jcraft | jsch | 0.1.54 |
-| com.jolbox | bonecp | 0.8.0.RELEASE |
-| com.linkedin.isolation-forest | isolation-forest_3.2.0_2.12 | 2.0.8 |
-| com.microsoft.azure | azure-data-lake-store-sdk | 2.3.9 |
-| com.microsoft.azure | azure-eventhubs | 3.3.0 |
-| com.microsoft.azure | azure-eventhubs-spark_2.12 | 2.3.22 |
-| com.microsoft.azure | azure-keyvault-core | 1.0.0 |
-| com.microsoft.azure | azure-storage | 7.0.1 |
-| com.microsoft.azure | cosmos-analytics-spark-3.4.1-connector_2.12 | 1.8.10 |
-| com.microsoft.azure | qpid-proton-j-extensions | 1.2.4 |
-| com.microsoft.azure | synapseml_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-cognitive_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-core_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-deep-learning_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-internal_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-lightgbm_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-opencv_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure | synapseml-vw_2.12 | 0.11.3-spark3.3 |
-| com.microsoft.azure.kusto | kusto-data | 3.2.1 |
-| com.microsoft.azure.kusto | kusto-ingest | 3.2.1 |
-| com.microsoft.azure.kusto | kusto-spark_3.0_2.12 | 3.1.16 |
-| com.microsoft.azure.kusto | spark-kusto-synapse-connector_3.1_2.12 | 1.3.3 |
-| com.microsoft.cognitiveservices.speech | client-jar-sdk | 1.14.0 |
-| com.microsoft.sqlserver | msslq-jdbc | 8.4.1.jre8 |
-| com.ning | compress-lzf | 1.1 |
-| com.sun.istack | istack-commons-runtime | 3.0.8 |
-| com.tdunning | json | 1.8 |
-| com.thoughtworks.paranamer | paranamer | 2.8 |
-| com.twitter | chill-java | 0.10.0 |
-| com.twitter | chill_2.12 | 0.10.0 |
-| com.typesafe | config | 1.3.4 |
-| com.univocity | univocity-parsers | 2.9.1 |
-| com.zaxxer | HikariCP | 2.5.1 |
-| commons-cli | commons-cli | 1.5.0 |
-| commons-codec | commons-codec | 1.15 |
-| commons-collections | commons-collections | 3.2.2 |
-| commons-dbcp | commons-dbcp | 1.4 |
-| commons-io | commons-io | 2.11.0 |
-| commons-lang | commons-lang | 2.6 |
-| commons-logging | commons-logging | 1.1.3 |
-| commons-pool | commons-pool | 1.5.4 |
-| dev.ludovic.netlib | arpack | 2.2.1 |
-| dev.ludovic.netlib | blas | 2.2.1 |
-| dev.ludovic.netlib | lapack | 2.2.1 |
-| io.airlift | aircompressor | 0.21 |
-| io.delta | delta-core_2.12 | 2.2.0.9 |
-| io.delta | delta-storage | 2.2.0.9 |
-| io.dropwizard.metrics | metrics-core | 4.2.7 |
-| io.dropwizard.metrics | metrics-graphite | 4.2.7 |
-| io.dropwizard.metrics | metrics-jmx | 4.2.7 |
-| io.dropwizard.metrics | metrics-json | 4.2.7 |
-| io.dropwizard.metrics | metrics-jvm | 4.2.7 |
-| io.github.resilience4j | resilience4j-core | 1.7.1 |
-| io.github.resilience4j | resilience4j-retry | 1.7.1 |
-| io.netty | netty-all | 4.1.74.Final |
-| io.netty | netty-buffer | 4.1.74.Final |
-| io.netty | netty-codec | 4.1.74.Final |
-| io.netty | netty-codec-http2 | 4.1.74.Final |
-| io.netty | netty-codec-http-4 | 4.1.74.Final |
-| io.netty | netty-codec-socks | 4.1.74.Final |
-| io.netty | netty-common | 4.1.74.Final |
-| io.netty | netty-handler | 4.1.74.Final |
-| io.netty | netty-resolver | 4.1.74.Final |
-| io.netty | netty-tcnative-classes | 2.0.48 |
-| io.netty | netty-transport | 4.1.74.Final |
-| io.netty | netty-transport-classes-epoll | 4.1.87.Final |
-| io.netty | netty-transport-classes-kqueue | 4.1.87.Final |
-| io.netty | netty-transport-native-epoll | 4.1.87.Final-linux-aarch_64 |
-| io.netty | netty-transport-native-epoll | 4.1.87.Final-linux-x86_64 |
-| io.netty | netty-transport-native-kqueue | 4.1.87.Final-osx-aarch_64 |
-| io.netty | netty-transport-native-kqueue | 4.1.87.Final-osx-x86_64 |
-| io.netty | netty-transport-native-unix-common | 4.1.87.Final |
-| io.opentracing | opentracing-api | 0.33.0 |
-| io.opentracing | opentracing-noop | 0.33.0 |
-| io.opentracing | opentracing-util | 0.33.0 |
-| io.spray | spray-json_2.12 | 1.3.5 |
-| io.vavr | vavr | 0.10.4 |
-| io.vavr | vavr-match | 0.10.4 |
-| jakarta.annotation | jakarta.annotation-api | 1.3.5 |
-| jakarta.inject | jakarta.inject | 2.6.1 |
-| jakarta.servlet | jakarta.servlet-api | 4.0.3 |
-| jakarta.validation-api | | 2.0.2 |
-| jakarta.ws.rs | jakarta.ws.rs-api | 2.1.6 |
-| jakarta.xml.bind | jakarta.xml.bind-api | 2.3.2 |
-| javax.activation | activation | 1.1.1 |
-| javax.jdo | jdo-api | 3.0.1 |
-| javax.transaction | jta | 1.1 |
-| javax.transaction | transaction-api | 1.1 |
-| javax.xml.bind | jaxb-api | 2.2.11 |
-| javolution | javolution | 5.5.1 |
-| jline | jline | 2.14.6 |
-| joda-time | joda-time | 2.10.13 |
-| mysql | mysql-connector-java | 8.0.18 |
-| net.razorvine | pickle | 1.2 |
-| net.sf.jpam | jpam | 1.1 |
-| net.sf.opencsv | opencsv | 2.3 |
-| net.sf.py4j | py4j | 0.10.9.5 |
-| net.sf.supercsv | super-csv | 2.2.0 |
-| net.sourceforge.f2j | arpack_combined_all | 0.1 |
-| org.antlr | ST4 | 4.0.4 |
-| org.antlr | antlr-runtime | 3.5.2 |
-| org.antlr | antlr4-runtime | 4.8 |
-| org.apache.arrow | arrow-format | 7.0.0 |
-| org.apache.arrow | arrow-memory-core | 7.0.0 |
-| org.apache.arrow | arrow-memory-netty | 7.0.0 |
-| org.apache.arrow | arrow-vector | 7.0.0 |
-| org.apache.avro | avro | 1.11.0 |
-| org.apache.avro | avro-ipc | 1.11.0 |
-| org.apache.avro | avro-mapred | 1.11.0 |
-| org.apache.commons | commons-collections4 | 4.4 |
-| org.apache.commons | commons-compress | 1.21 |
-| org.apache.commons | commons-crypto | 1.1.0 |
-| org.apache.commons | commons-lang3 | 3.12.0 |
-| org.apache.commons | commons-math3 | 3.6.1 |
-| org.apache.commons | commons-pool2 | 2.11.1 |
-| org.apache.commons | commons-text | 1.10.0 |
-| org.apache.curator | curator-client | 2.13.0 |
-| org.apache.curator | curator-framework | 2.13.0 |
-| org.apache.curator | curator-recipes | 2.13.0 |
-| org.apache.derby | derby | 10.14.2.0 |
-| org.apache.hadoop | hadoop-aliyun | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-annotations | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-aws | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-azure | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-azure-datalake | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-client-api | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-client-runtime | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-cloud-storage | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-openstack | 3.3.3.5.2-106693326 |
-| org.apache.hadoop | hadoop-shaded-guava | 1.1.1 |
-| org.apache.hadoop | hadoop-yarn-server-web-proxy | 3.3.3.5.2-106693326 |
-| org.apache.hive | hive-beeline | 2.3.9 |
-| org.apache.hive | hive-cli | 2.3.9 |
-| org.apache.hive | hive-common | 2.3.9 |
-| org.apache.hive | hive-exec | 2.3.9 |
-| org.apache.hive | hive-jdbc | 2.3.9 |
-| org.apache.hive | hive-llap-common | 2.3.9 |
-| org.apache.hive | hive-metastore | 2.3.9 |
-| org.apache.hive | hive-serde | 2.3.9 |
-| org.apache.hive | hive-service-rpc | 2.3.9 |
-| org.apache.hive | hive-shims-0.23 | 2.3.9 |
-| org.apache.hive | hive-shims | 2.3.9 |
-| org.apache.hive | hive-shims-common | 2.3.9 |
-| org.apache.hive | hive-shims-scheduler | 2.3.9 |
-| org.apache.hive | hive-storage-api | 2.7.2 |
-| org.apache.httpcomponents | httpclient | 4.5.13 |
-| org.apache.httpcomponents | httpcore | 4.4.14 |
-| org.apache.httpcomponents | httpmime | 4.5.13 |
-| org.apache.httpcomponents.client5 | httpclient5 | 5.1.3 |
-| org.apache.iceberg | delta-iceberg | 2.2.0.9 |
-| org.apache.ivy | ivy | 2.5.1 |
-| org.apache.kafka | kafka-clients | 2.8.1 |
-| org.apache.logging.log4j | log4j-1.2-api | 2.17.2 |
-| org.apache.logging.log4j | log4j-api | 2.17.2 |
-| org.apache.logging.log4j | log4j-core | 2.17.2 |
-| org.apache.logging.log4j | log4j-slf4j-impl | 2.17.2 |
-| org.apache.orc | orc-core | 1.7.6 |
-| org.apache.orc | orc-mapreduce | 1.7.6 |
-| org.apache.orc | orc-shims | 1.7.6 |
-| org.apache.parquet | parquet-column | 1.12.3 |
-| org.apache.parquet | parquet-common | 1.12.3 |
-| org.apache.parquet | parquet-encoding | 1.12.3 |
-| org.apache.parquet | parquet-format-structures | 1.12.3 |
-| org.apache.parquet | parquet-hadoop | 1.12.3 |
-| org.apache.parquet | parquet-jackson | 1.12.3 |
-| org.apache.qpid | proton-j | 0.33.8 |
-| org.apache.spark | spark-avro_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-catalyst_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-core_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-graphx_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-hadoop-cloud_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-hive_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-kvstore_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-launcher_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-mllib_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-mllib-local_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-network-common_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-network-shuffle_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-repl_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-sketch_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-sql_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-sql-kafka-0-10_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-streaming_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-streaming-kafka-0-10-assembly_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-tags_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-token-provider-kafka-0-10_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-unsafe_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.spark | spark-yarn_2.12 | 3.3.1.5.2-106693326 |
-| org.apache.thrift | libfb303 | 0.9.3 |
-| org.apache.thrift | libthrift | 0.12.0 |
-| org.apache.velocity | velocity | 1.5 |
-| org.apache.xbean | xbean-asm9-shaded | 4.2 |
-| org.apache.yetus | audience-annotations | 0.5.0 |
-| org.apache.zookeeper | zookeeper | 3.6.2.5.2-106693326 |
-| org.apache.zookeeper | zookeeper-jute | 3.6.2.5.2-106693326 |
-| org.apache.zookeeper | zookeeper | 3.6.2.5.2-106693326 |
-| org.apache.zookeeper | zookeeper-jute | 3.6.2.5.2-106693326 |
-| org.apiguardian | apiguardian-api | 1.1.0 |
-| org.codehaus.janino | commons-compiler | 3.0.16 |
-| org.codehaus.janino | janino | 3.0.16 |
-| org.codehaus.jettison | jettison | 1.1 |
-| org.datanucleus | datanucleus-api-jdo | 4.2.4 |
-| org.datanucleus | datanucleus-core | 4.1.17 |
-| org.datanucleus | datanucleus-rdbms | 4.1.19 |
-| org.datanucleusjavax.jdo | | 3.2.0-m3 |
-| org.eclipse.jetty | jetty-util | 9.4.48.v20220622 |
-| org.eclipse.jetty | jetty-util-ajax | 9.4.48.v20220622 |
-| org.fusesource.leveldbjni | leveldbjni-all | 1.8 |
-| org.glassfish.hk2 | hk2-api | 2.6.1 |
-| org.glassfish.hk2 | hk2-locator | 2.6.1 |
-| org.glassfish.hk2 | hk2-utils | 2.6.1 |
-| org.glassfish.hk2 | osgi-resource-locator | 1.0.3 |
-| org.glassfish.hk2.external | aopalliance-repackaged | 2.6.1 |
-| org.glassfish.jaxb | jaxb-runtime | 2.3.2 |
-| org.glassfish.jersey.containers | jersey-container-servlet | 2.36 |
-| org.glassfish.jersey.containers | jersey-container-servlet-core | 2.36 |
-| org.glassfish.jersey.core | jersey-client | 2.36 |
-| org.glassfish.jersey.core | jersey-common | 2.36 |
-| org.glassfish.jersey.core | jersey-server | 2.36 |
-| org.glassfish.jersey.inject | jersey-hk2 | 2.36 |
-| org.ini4j | ini4j | 0.5.4 |
-| org.javassist | javassist | 3.25.0-GA |
-| org.javatuples | javatuples | 1.2 |
-| org.jdom | jdom2 | 2.0.6 |
-| org.jetbrains | annotations | 17.0.0 |
-| org.jodd | jodd-core | 3.5.2 |
-| org.json | json | 20210307 |
-| org.json4s | json4s-ast_2.12 | 3.7.0-M11 |
-| org.json4s | json4s-core_2.12 | 3.7.0-M11 |
-| org.json4s | json4s-jackson_2.12 | 3.7.0-M11 |
-| org.json4s | json4s-scalap_2.12 | 3.7.0-M11 |
-| org.junit.jupiter | junit-jupiter | 5.5.2 |
-| org.junit.jupiter | junit-jupiter-api | 5.5.2 |
-| org.junit.jupiter | junit-jupiter-engine | 5.5.2 |
-| org.junit.jupiter | junit-jupiter-params | 5.5.2 |
-| org.junit.platform | junit-platform-commons | 1.5.2 |
-| org.junit.platform | junit-platform-engine | 1.5.2 |
-| org.lz4 | lz4-java | 1.8.0 |
-| org.mlflow | mlfow-spark | 2.1.1 |
-| org.objenesis | objenesis | 3.2 |
-| org.openpnp | opencv | 3.2.0-1 |
-| org.opentest4j | opentest4j | 1.2.0 |
-| org.postgresql | postgresql | 42.2.9 |
-| org.roaringbitmap | RoaringBitmap | 0.9.25 |
-| org.roaringbitmap | shims | 0.9.25 |
-| org.rocksdb | rocksdbjni | 6.20.3 |
-| org.scalactic | scalactic_2.12 | 3.2.14 |
-| org.scala-lang | scala-compiler | 2.12.15 |
-| org.scala-lang | scala-library | 2.12.15 |
-| org.scala-lang | scala-reflect | 2.12.15 |
-| org.scala-lang.modules | scala-collection-compat_2.12 | 2.1.1 |
-| org.scala-lang.modules | scala-java8-compat_2.12 | 0.9.0 |
-| org.scala-lang.modules | scala-parser-combinators_2.12 | 1.1.2 |
-| org.scala-lang.modules | scala-xml_2.12 | 1.2.0 |
-| org.scalanlp | breeze-macros_2.12 | 1.2 |
-| org.scalanlp | breeze_2.12 | 1.2 |
-| org.slf4j | jcl-over-slf4j | 1.7.32 |
-| org.slf4j | jul-to-slf4j | 1.7.32 |
-| org.slf4j | slf4j-api | 1.7.32 |
-| org.threeten | threeten-extra | 1.5.0 |
-| org.tukaani | xz | 1.8 |
-| org.typelevel | algebra_2.12 | 2.0.1 |
-| org.typelevel | cats-kernel_2.12 | 2.1.1 |
-| org.typelevel | spire_2.12 | 0.17.0 |
-| org.typelevel | spire-macros_2.12 | 0.17.0 |
-| org.typelevel | spire-platform_2.12 | 0.17.0 |
-| org.typelevel | spire-util_2.12 | 0.17.0 |
-| org.wildfly.openssl | wildfly-openssl | 1.0.7.Final |
-| org.xerial.snappy | snappy-java | 1.1.8.4 |
-| oro | oro | 2.0.8 |
-| pl.edu.icm | JLargeArrays | 1.5 |
-| stax | stax-api | 1.0.1 |
-
-### Python libraries (Normal VMs)
-| Library | Version | Library | Version | Library | Version |
-||--|||--||
-| _libgcc_mutex | 0.1=conda_forge | hdf5 | 1.12.2=nompi_h2386368_100 | parquet-cpp | g1.5.1=2 |
-| _openmp_mutex | 4.5=2_kmp_llvm | html5lib | 1.1=pyh9f0ad1d_0 | parso | g0.8.3=pyhd8ed1ab_0 |
-| _py-xgboost-mutex | 2.0=cpu_0 | humanfriendly | 10.0=py310hff52083_4 | partd | g1.3.0=pyhd8ed1ab_0 |
-| _tflow_select | 2.3.0=mkl | hummingbird-ml | 0.4.0=pyhd8ed1ab_0 | pathos | g0.3.0=pyhd8ed1ab_0 |
-| absl-py | 1.3.0=pyhd8ed1ab_0 | icu | 58.2=hf484d3e_1000 | pathspec | 0.10.1 |
-| adal | 1.2.7=pyhd8ed1ab_0 | idna | 3.4=pyhd8ed1ab_0 | patsy | g0.5.3=pyhd8ed1ab_0 |
-| adlfs | 0.7.7=pyhd8ed1ab_0 | imagecodecs | 2022.9.26=py310h90cd304_3 | pcre2 | g10.40=hc3806b6_0 |
-| aiohttp | 3.8.3=py310h5764c6d_1 | imageio | 2.9.0=py_0 | pexpect | g4.8.0=pyh1a96a4e_2 |
-| aiosignal | 1.3.1=pyhd8ed1ab_0 | importlib-metadata | 5.0.0=pyha770c72_1 | pickleshare | g0.7.5=py_1003 |
-| anyio | 3.6.2 | interpret | 0.2.4=py37_0 | pillow | g9.2.0=py310h454ad03_3 |
-| aom | 3.5.0=h27087fc_0 | interpret-core | 0.2.4=py37h21ff451_0 | pip | g22.3.1=pyhd8ed1ab_0 |
-| applicationinsights | 0.11.10 | ipykernel | 6.17.0=pyh210e3f2_0 | pkginfo | 1.8.3 |
-| argcomplete | 2.0.0 | ipython | 8.6.0=pyh41d4057_1 | platformdirs | 2.5.3 |
-| argon2-cffi | 21.3.0 | ipython-genutils | 0.2.0 | plotly | g4.14.3=pyh44b312d_0 |
-| argon2-cffi-bindings | 21.2.0 | ipywidgets | 7.7.0 | pmdarima | g2.0.1=py310h5764c6d_0 |
-| arrow-cpp | 9.0.0=py310he7aa4d3_2_cpu | isodate | 0.6.0=py_1 | portalocker | g2.6.0=py310hff52083_1 |
-| asttokens | 2.1.0=pyhd8ed1ab_0 | itsdangerous | 2.1.2=pyhd8ed1ab_0 | pox | g0.3.2=pyhd8ed1ab_0 |
-| astunparse | 1.6.3=pyhd8ed1ab_0 | jdcal | 1.4.1=py_0 | ppft | g1.7.6.6=pyhd8ed1ab_0 |
-| async-timeout | 4.0.2=pyhd8ed1ab_0 | jedi | 0.18.1=pyhd8ed1ab_2 | prettytable | 3.2.0 |
-| attrs | 22.1.0=pyh71513ae_1 | jeepney | 0.8.0 | prometheus-client | 0.15.0 |
-| aws-c-cal | 0.5.11=h95a6274_0 | jinja2 | 3.1.2=pyhd8ed1ab_1 | prompt-toolkit | g3.0.32=pyha770c72_0 |
-| aws-c-common | 0.6.2=h7f98852_0 | jmespath | 1.0.1 | protobuf | g3.20.1=py310hd8f1fbe_0 |
-| aws-c-event-stream | 0.2.7=h3541f99_13 | joblib | 1.2.0=pyhd8ed1ab_0 | psutil | g5.9.4=py310h5764c6d_0 |
-| aws-c-io | 0.10.5=hfb6a706_0 | jpeg | 9e=h166bdaf_2 | pthread-stubs | g0.4=h36c2ea0_1001 |
-| aws-checksums | 0.1.11=ha31a3da_7 | jsonpickle | 2.2.0 | ptyprocess | g0.7.0=pyhd3deb0d_0 |
-| aws-sdk-cpp | 1.8.186=hecaee15_4 | jsonschema | 4.17.0 | pure_eval | g0.2.2=pyhd8ed1ab_0 |
-| azure-common | 1.1.28 | jupyter_client | 7.4.4=pyhd8ed1ab_0 | py-xgboost | g1.7.1=cpu_py310hd1aba9c_0 |
-| azure-core | 1.26.1=pyhd8ed1ab_0 | jupyter_core | 4.11.2=py310hff52083_0 | py4j | g0.10.9.5=pyhd8ed1ab_0 |
-| azure-datalake-store | 0.0.51=pyh9f0ad1d_0 | jupyter-server | 1.23.0 | pyarrow | g9.0.0=py310h9be7b57_2_cpu |
-| azure-graphrbac | 0.61.1 | jupyterlab-pygments | 0.2.2 | pyasn1 | g0.4.8=py_0 |
-| azure-identity | 1.7.0 | jupyterlab-widgets | 3.0.3 | pyasn1-modules | g0.2.7=py_0 |
-| azure-mgmt-authorization | 2.0.0 | jxrlib | 1.1=h7f98852_2 | pycosat | g0.6.4=py310h5764c6d_1 |
-| azure-mgmt-containerregistry | 10.0.0 | keras | 2.8.0 | pycparser | g2.21=pyhd8ed1ab_0 |
-| azure-mgmt-core | 1.3.2 | keras-applications | 1.0.8 | pygments | g2.13.0=pyhd8ed1ab_0 |
-| azure-mgmt-keyvault | 10.1.0 | keras-preprocessing | 1.1.2 | pyjwt | g2.6.0=pyhd8ed1ab_0 |
-| azure-mgmt-resource | 21.2.1 | keras2onnx | 1.6.5=pyhd8ed1ab_0 | pynacl | 1.5.0 |
-| azure-mgmt-storage | 20.1.0 | keyutils | 1.6.1=h166bdaf_0 | pyodbc | g4.0.34=py310hd8f1fbe_1 |
-| azure-storage-blob | 12.13.0 | kiwisolver | 1.4.4=py310hbf28c38_1 | pyopenssl | g22.1.0=pyhd8ed1ab_0 |
-| azureml-core | 1.47.0 | knack | 0.10.0 | pyparsing | g3.0.9=pyhd8ed1ab_0 |
-| azureml-dataprep | 4.5.7 | kqlmagiccustom | 0.1.114.post16 | pyperclip | 1.8.2 |
-| azureml-dataprep-native | 38.0.0 | krb5 | 1.19.3=h3790be6_0 | pyqt | g5.9.2=py310h295c915_6 |
-| azureml-dataprep-rslex | 2.11.4 | lcms2 | 2.14=h6ed2654_0 | pyrsistent | 0.19.2 |
-| azureml-dataset-runtime | 1.47.0 | ld_impl_linux-64 | 2.39=hc81fddc_0 | pysocks | g1.7.1=pyha2e5f31_6 |
-| azureml-mlflow | 1.47.0 | lerc | 4.0.0=h27087fc_0 | pyspark | g3.3.1=pyhd8ed1ab_0 |
-| azureml-opendatasets | 1.47.0 | liac-arff | 2.5.0=pyhd8ed1ab_1 | python | g3.10.6=h582c2e5_0_cpython |
-| azureml-telemetry | 1.47.0 | libabseil | 20220623.0=cxx17_h48a1fff_5 | python_abi | g3.10=2_cp310 |
-| backcall | 0.2.0=pyh9f0ad1d_0 | libaec | 1.0.6=h9c3ff4c_0 | python-dateutil | g2.8.2=pyhd8ed1ab_0 |
-| backports | 1.0=py_2 | libavif | 0.11.1=h5cdd6b5_0 | python-flatbuffers | g2.0=pyhd8ed1ab_0 |
-| backports-tempfile | 1.0 | libblas | 3.9.0=16_linux64_mkl | pytorch | g1.13.0=py3.10_cpu_0 |
-| backports-weakref | 1.0.post1 | libbrotlicommon | 1.0.9=h166bdaf_8 | pytorch-mutex | g1.0=cpu |
-| backports.functools_lru_cache | 1.6.4=pyhd8ed1ab_0 | libbrotlidec | 1.0.9=h166bdaf_8 | pytz | g2022.6=pyhd8ed1ab_0 |
-| bcrypt | 4.0.1 | libbrotlienc | 1.0.9=h166bdaf_8 | pyu2f | g0.1.5=pyhd8ed1ab_0 |
-| beautifulsoup4 | 4.9.3=pyhb0f4dca_0 | libcblas | 3.9.0=16_linux64_mkl | pywavelets | g1.3.0=py310hde88566_2 |
-| blas | 2.116=mkl | libclang | 14.0.6 | pyyaml | g6.0=py310h5764c6d_5 |
-| blas-devel | 3.9.0=16_linux64_mkl | libcrc32c | 1.1.2=h9c3ff4c_0 | pyzmq | g24.0.1=py310h330234f_1 |
-| bleach | 5.0.1 | libcurl | 7.86.0=h7bff187_1 | qt | g5.9.7=h5867ecd_1 |
-| blinker | 1.5=pyhd8ed1ab_0 | libdeflate | 1.14=h166bdaf_0 | re2 | g2022.06.01=h27087fc_0 |
-| blosc | 1.21.1=h83bc5f7_3 | libedit | 3.1.20191231=he28a2e2_2 | readline | g8.1.2=h0f457ee_0 |
-| bokeh | 3.0.1=pyhd8ed1ab_0 | libev | 4.33=h516909a_1 | regex | g2022.10.31=py310h5764c6d_0 |
-| brotli | 1.0.9=h166bdaf_8 | libevent | 2.1.10=h9b69904_4 | requests | g2.28.1=pyhd8ed1ab_1 |
-| brotli-bin | 1.0.9=h166bdaf_8 | libffi | 3.4.2=h7f98852_5 | requests-oauthlib | g1.3.1=pyhd8ed1ab_0 |
-| brotli-python | 1.0.9=py310hd8f1fbe_8 | libgcc-ng | 12.2.0=h65d4601_19 | retrying | g1.3.3=py_2 |
-| brotlipy | 0.7.0=py310h5764c6d_1005 | libgfortran-ng | 12.2.0=h69a702a_19 | rsa | g4.9=pyhd8ed1ab_0 |
-| brunsli | 0.1=h9c3ff4c_0 | libgfortran5 | 12.2.0=h337968e_19 | ruamel_yaml | g0.15.80=py310h5764c6d_1008 |
-| bzip2 | 1.0.8=h7f98852_4 | libglib | 2.74.1=h606061b_1 | ruamel-yaml | 0.17.4 |
-| c-ares | 1.18.1=h7f98852_0 | libgoogle-cloud | 2.1.0=hf2e47f9_1 | ruamel-yaml-clib | 0.2.6 |
-| c-blosc2 | 2.4.3=h7a311fb_0 | libiconv | 1.17=h166bdaf_0 | s2n | g1.0.10=h9b69904_0 |
-| ca-certificates | 2022.9.24=ha878542_0 | liblapack | 3.9.0=16_linux64_mkl | salib | g1.4.6.1=pyhd8ed1ab_0 |
-| cached_property | 1.5.2=pyha770c72_1 | liblapacke | 3.9.0=16_linux64_mkl | scikit-image | g0.19.3=py310h769672d_2 |
-| cached-property | 1.5.2=hd8ed1ab_1 | libllvm11 | 11.1.0=he0ac6c6_5 | scikit-learn | g1.1.3=py310h0c3af53_1 |
-| cachetools | 5.2.0=pyhd8ed1ab_0 | libnghttp2 | 1.47.0=hdcd2b5c_1 | scipy | g1.9.3=py310hdfbd76f_2 |
-| certifi | 2022.9.24=pyhd8ed1ab_0 | libnsl | 2.0.0=h7f98852_0 | seaborn | g0.11.1=hd8ed1ab_1 |
-| cffi | 1.15.1=py310h255011f_2 | libpng | 1.6.38=h753d276_0 | seaborn-base | g0.11.1=pyhd8ed1ab_1 |
-| cfitsio | 4.1.0=hd9d235c_0 | libprotobuf | 3.20.1=h6239696_4 | secretstorage | 3.3.3 |
-| charls | 2.3.4=h9c3ff4c_0 | libsodium | 1.0.18=h36c2ea0_1 | send2trash | 1.8.0 |
-| charset-normalizer | 2.1.1=pyhd8ed1ab_0 | libsqlite | 3.39.4=h753d276_0 | setuptools | g65.5.1=pyhd8ed1ab_0 |
-| click | 8.1.3=unix_pyhd8ed1ab_2 | libssh2 | 1.10.0=haa6b8db_3 | shap | g0.39.0=py310hb5077e9_1 |
-| cloudpickle | 2.2.0=pyhd8ed1ab_0 | libstdcxx-ng | 12.2.0=h46fd767_19 | sip | g4.19.13=py310h295c915_0 |
-| colorama | 0.4.6=pyhd8ed1ab_0 | libthrift | 0.16.0=h491838f_2 | six | g1.16.0=pyh6c4a22f_0 |
-| coloredlogs | 15.0.1=pyhd8ed1ab_3 | libtiff | 4.4.0=h55922b4_4 | skl2onnx | g1.8.0.1=pyhd8ed1ab_1 |
-| conda-package-handling | 1.9.0=py310h5764c6d_1 | libutf8proc | 2.8.0=h166bdaf_0 | sklearn-pandas | g2.2.0=pyhd8ed1ab_0 |
-| configparser | 5.3.0=pyhd8ed1ab_0 | libuuid | 2.32.1=h7f98852_1000 | slicer | g0.0.7=pyhd8ed1ab_0 |
-| contextlib2 | 21.6.0 | libuv | 1.44.2=h166bdaf_0 | smart_open | g6.2.0=pyha770c72_0 |
-| contourpy | 1.0.6=py310hbf28c38_0 | libwebp-base | 1.2.4=h166bdaf_0 | smmap | g3.0.5=pyh44b312d_0 |
-| cryptography | 38.0.3=py310h597c629_0 | libxcb | 1.13=h7f98852_1004 | snappy | g1.1.9=hbd366e4_2 |
-| cycler | 0.11.0=pyhd8ed1ab_0 | libxgboost | 1.7.1=cpu_ha3b9936_0 | sniffio | 1.3.0 |
-| cython | 0.29.32=py310hd8f1fbe_1 | libxml2 | 2.9.9=h13577e0_2 | soupsieve | g2.3.2.post1=pyhd8ed1ab_0 |
-| cytoolz | 0.12.0=py310h5764c6d_1 | libzlib | 1.2.13=h166bdaf_4 | sqlalchemy | 1.4.43 |
-| dash | 1.21.0=pyhd8ed1ab_0 | libzopfli | 1.0.3=h9c3ff4c_0 | sqlite | g3.39.4=h4ff8645_0 |
-| dash_cytoscape | 0.2.0=pyhd8ed1ab_1 | lightgbm | 3.2.1=py310h295c915_0 | sqlparse | g0.4.3=pyhd8ed1ab_0 |
-| dash-core-components | 1.17.1=pyhd8ed1ab_0 | lime | 0.2.0.1=pyhd8ed1ab_1 | stack_data | g0.6.0=pyhd8ed1ab_0 |
-| dash-html-components | 1.1.4=pyhd8ed1ab_0 | llvm-openmp | 15.0.4=he0ac6c6_0 | statsmodels | g0.13.5=py310hde88566_2 |
-| dash-renderer | 1.9.1=pyhd8ed1ab_0 | llvmlite | 0.39.1=py310h58363a5_1 | sympy | g1.11.1=py310hff52083_2 |
-| dash-table | 4.12.0=pyhd8ed1ab_0 | locket | 1.0.0=pyhd8ed1ab_0 | tabulate | g0.9.0=pyhd8ed1ab_1 |
-| dask-core | 2022.10.2=pyhd8ed1ab_0 | lxml | 4.8.0 | tbb | g2021.6.0=h924138e_1 |
-| databricks-cli | 0.17.3=pyhd8ed1ab_0 | lz4-c | 1.9.3=h9c3ff4c_1 | tensorboard | 2.8.0 |
-| dav1d | 1.0.0=h166bdaf_1 | markdown | 3.3.4=gpyhd8ed1ab_0 | tensorboard-data-server | g0.6.0=py310h597c629_3 |
-| dbus | 1.13.6=h5008d03_3 | markupsafe | g2.1.1=py310h5764c6d_2 | tensorboard-plugin-wit | g1.8.1=pyhd8ed1ab_0 |
-| debugpy | 1.6.3=py310hd8f1fbe_1 | matplotlib | g3.6.2=py310hff52083_0 | tensorflow | 2.8.0 |
-| decorator | 5.1.1=pyhd8ed1ab_0 | matplotlib-base | g3.6.2=py310h8d5ebf3_0 | tensorflow-base | g2.10.0=mkl_py310hb9daa73_0 |
-| defusedxml | 0.7.1 | matplotlib-inline | g0.1.6=pyhd8ed1ab_0 | tensorflow-estimator | 2.8.0 |
-| dill | 0.3.6=pyhd8ed1ab_1 | mistune | 2.0.4 | tensorflow-io-gcs-filesystem | 0.27.0 |
-| distlib | 0.3.6 | mkl | g2022.1.0=h84fe81f_915 | termcolor | g2.1.0=pyhd8ed1ab_0 |
-| distro | 1.8.0 | mkl-devel | g2022.1.0=ha770c72_916 | terminado | 0.17.0 |
-| docker | 6.0.1 | mkl-include | g2022.1.0=h84fe81f_915 | textblob | g0.15.3=py_0 |
-| dotnetcore2 | 3.1.23 | mleap | g0.17.0=pyhd8ed1ab_0 | tf-estimator-nightly | 2.8.0.dev2021122109 |
-| entrypoints | 0.4=pyhd8ed1ab_0 | mlflow-skinny | g1.30.0=py310h1d0e22c_0 | threadpoolctl | g3.1.0=pyh8a188c0_0 |
-| et_xmlfile | 1.0.1=py_1001 | mpc | g1.2.1=h9f54685_0 | tifffile | g2022.10.10=pyhd8ed1ab_0 |
-| executing | 1.2.0=pyhd8ed1ab_0 | mpfr | g4.1.0=h9202a9a_1 | tinycss2 | 1.2.1 |
-| expat | 2.5.0=h27087fc_0 | mpmath | g1.2.1=pyhd8ed1ab_0 | tk | g8.6.12=h27826a3_0 |
-| fastjsonschema | 2.16.2 | msal | g2022.09.01=py_0 | toolz | g0.12.0=pyhd8ed1ab_0 |
-| filelock | 3.8.0 | msal-extensions | 0.3.1 | torchvision | 0.14.0 |
-| fire | 0.4.0=pyh44b312d_0 | msrest | 0.7.1 | tornado | g6.2=py310h5764c6d_1 |
-| flask | 2.2.2=pyhd8ed1ab_0 | msrestazure | 0.6.4 | tqdm | g4.64.1=pyhd8ed1ab_0 |
-| flask-compress | 1.13=pyhd8ed1ab_0 | multidict | g6.0.2=py310h5764c6d_2 | traitlets | g5.5.0=pyhd8ed1ab_0 |
-| flatbuffers | 2.0.7=h27087fc_0 | multiprocess | g0.70.14=py310h5764c6d_3 | typed-ast | 1.4.3 |
-| fontconfig | 2.14.1=hc2a2eb6_0 | munkres | g1.1.4=pyh9f0ad1d_0 | typing_extensions | g4.4.0=pyha770c72_0 |
-| fonttools | 4.38.0=py310h5764c6d_1 | mypy | 0.780 | typing-extensions | g4.4.0=hd8ed1ab_0 |
-| freetype | 2.12.1=hca18f0e_0 | mypy-extensions | 0.4.3 | tzdata | g2022fgh191b570_0 |
-| frozenlist | 1.3.3=py310h5764c6d_0 | nbclassic | 0.4.8 | unicodedata2 | g15.0.0gpy310h5764c6d_0 |
-| fsspec | 2022.10.0=pyhd8ed1ab_0 | nbclient | 0.7.0 | unixodbc | g2.3.10gh583eb01_0 |
-| fusepy | 3.0.1 | nbconvert | 7.2.3 | urllib3 | g1.26.4=pyhd8ed1ab_0 |
-| future | 0.18.2=pyhd8ed1ab_6 | nbformat | 5.7.0 | virtualenv | 20.14.0 |
-| gast | 0.4.0=pyh9f0ad1d_0 | ncurses | g6.3=h27087fc_1 | wcwidth | g0.2.5=pyh9f0ad1d_2 |
-| gensim | 4.2.0=py310h769672d_0 | ndg-httpsclient | 0.5.1 | webencodings | g0.5.1=py_1 |
-| geographiclib | 1.52=pyhd8ed1ab_0 | nest-asyncio | g1.5.6=pyhd8ed1ab_0 | websocket-client | 1.4.2 |
-| geopy | 2.1.0=pyhd3deb0d_0 | networkx | g2.8.8=pyhd8ed1ab_0 | werkzeug | g2.2.2=pyhd8ed1ab_0 |
-| gettext | 0.21.1=h27087fc_0 | nltk | g3.6.2=pyhd8ed1ab_0 | wheel | g0.38.3=pyhd8ed1ab_0 |
-| gevent | 22.10.1=py310hab16fe0_1 | notebook | 6.5.2 | widgetsnbextension | 3.6.1 |
-| gflags | 2.2.2=he1b5a44_1004 | notebook-shim | 0.2.2 | wrapt | g1.14.1=py310h5764c6d_1 |
-| giflib | 5.2.1=h36c2ea0_2 | numba | g0.56.3=py310ha5257ce_0 | xgboost | g1.7.1=cpu_py310hd1aba9c_0 |
-| gitdb | 4.0.9=pyhd8ed1ab_0 | numpy | g1.23.4=py310h53a5b5f_1 | xorg-libxau | g1.0.9=h7f98852_0 |
-| gitpython | 3.1.29=pyhd8ed1ab_0 | oauthlib | g3.2.2=pyhd8ed1ab_0 | xorg-libxdmcp | g1.1.3=h7f98852_0 |
-| glib | 2.74.1=h6239696_1 | onnx | g1.12.0=py310h3d64581_0 | xyzservices | g2022.9.0=pyhd8ed1ab_0 |
-| glib-tools | 2.74.1=h6239696_1 | onnxconverter-common | g1.7.0=pyhd8ed1ab_0 | xz | g5.2.6=h166bdaf_0 |
-| glog | 0.6.0=h6f12383_0 | onnxmltools | g1.7.0=pyhd8ed1ab_0 | yaml | g0.2.5=h7f98852_2 |
-| gmp | 6.2.1=h58526e2_0 | onnxruntime | g1.13.1=py310h00a7d45_1 | yarl | g1.8.1=py310h5764c6d_0 |
-| gmpy2 | 2.1.2=py310h3ec546c_1 | openjpeg | g2.5.0=h7d73246_1 | zeromq | g4.3.4=h9c3ff4c_1 |
-| google-auth | 2.14.0=pyh1a96a4e_0 | openpyxl | g3.0.7=pyhd8ed1ab_0 | zfp | g1.0.0=h27087fc_3 |
-| google-auth-oauthlib | 0.4.6=pyhd8ed1ab_0 | openssl | g1.1.1s=h166bdaf_0 | zipp | g3.10.0=pyhd8ed1ab_0 |
-| google-pasta | 0.2.0=pyh8c360ce_0 | opt_einsum | g3.3.0=pyhd8ed1ab_1 | zlib | g1.2.13=h166bdaf_4 |
-| greenlet | 1.1.3.post0=py310hd8f1fbe_0 | orc | g1.7.6=h6c59b99_0 | zlib-ng | g2.0.6=h166bdaf_0 |
-| grpc-cpp | 1.46.4=hbad87ad_7 | packaging | g21.3=pyhd8ed1ab_0 | zope.event | g4.5.0gpyh9f0ad1d_0 |
-| grpcio | 1.46.4=py310h946def9_7 | pandas | g1.5.1=py310h769672d_1 | zope.interface | g5.5.1=py310h5764c6d_0 |
-| gst-plugins-base | 1.14.0=hbbd80ab_1 | pandasql | 0.7.3 | zstd | g1.5.2=h6239696_4 |
-| gstreamer | 1.14.0=h28cd5cc_2 | pandocfilters | 1.5.0 | | |
-| h5py | 3.7.0=nompi_py310h416281c_102 | paramiko | 2.12.0 | | |
-
-### R libraries (Preview)
-
-| **Library** | **Version** | ** Library** | **Version** | ** Library** | **Version** |
-|:-:|:--:|::|:--:|::|:--:|
-| askpass | 1.1 | highcharter | 0.9.4 | readr | 2.1.3 |
-| assertthat | 0.2.1 | highr | 0.9 | readxl | 1.4.1 |
-| backports | 1.4.1 | hms | 1.1.2 | recipes | 1.0.3 |
-| base64enc | 0.1-3 | htmltools | 0.5.3 | rematch | 1.0.1 |
-| bit | 4.0.5 | htmlwidgets | 1.5.4 | rematch2 | 2.1.2 |
-| bit64 | 4.0.5 | httpcode | 0.3.0 | remotes | 2.4.2 |
-| blob | 1.2.3 | httpuv | 1.6.6 | reprex | 2.0.2 |
-| brew | 1.0-8 | httr | 1.4.4 | reshape2 | 1.4.4 |
-| brio | 1.1.3 | ids | 1.0.1 | rjson | 0.2.21 |
-| broom | 1.0.1 | igraph | 1.3.5 | rlang | 1.0.6 |
-| bslib | 0.4.1 | infer | 1.0.3 | rlist | 0.4.6.2 |
-| cachem | 1.0.6 | ini | 0.3.1 | rmarkdown | 2.18 |
-| callr | 3.7.3 | ipred | 0.9-13 | RODBC | 1.3-19 |
-| caret | 6.0-93 | isoband | 0.2.6 | roxygen2 | 7.2.2 |
-| cellranger | 1.1.0 | iterators | 1.0.14 | rprojroot | 2.0.3 |
-| cli | 3.4.1 | jquerylib | 0.1.4 | rsample | 1.1.0 |
-| clipr | 0.8.0 | jsonlite | 1.8.3 | rstudioapi | 0.14 |
-| clock | 0.6.1 | knitr | 1.41 | rversions | 2.1.2 |
-| colorspace | 2.0-3 | labeling | 0.4.2 | rvest | 1.0.3 |
-| commonmark | 1.8.1 | later | 1.3.0 | sass | 0.4.4 |
-| config | 0.3.1 | lava | 1.7.0 | scales | 1.2.1 |
-| conflicted | 1.1.0 | lazyeval | 0.2.2 | selectr | 0.4-2 |
-| coro | 1.0.3 | lhs | 1.1.5 | sessioninfo | 1.2.2 |
-| cpp11 | 0.4.3 | lifecycle | 1.0.3 | shiny | 1.7.3 |
-| crayon | 1.5.2 | lightgbm | 3.3.3 | slider | 0.3.0 |
-| credentials | 1.3.2 | listenv | 0.8.0 | sourcetools | 0.1.7 |
-| crosstalk | 1.2.0 | lobstr | 1.1.2 | sparklyr | 1.7.8 |
-| crul | 1.3 | lubridate | 1.9.0 | SQUAREM | 2021.1 |
-| curl | 4.3.3 | magrittr | 2.0.3 | stringi | 1.7.8 |
-| data.table | 1.14.6 | maps | 3.4.1 | stringr | 1.4.1 |
-| DBI | 1.1.3 | memoise | 2.0.1 | sys | 3.4.1 |
-| dbplyr | 2.2.1 | mime | 0.12 | systemfonts | 1.0.4 |
-| desc | 1.4.2 | miniUI | 0.1.1.1 | testthat | 3.1.5 |
-| devtools | 2.4.5 | modeldata | 1.0.1 | textshaping | 0.3.6 |
-| dials | 1.1.0 | modelenv | 0.1.0 | tibble | 3.1.8 |
-| DiceDesign | 1.9 | ModelMetrics | 1.2.2.2 | tidymodels | 1.0.0 |
-| diffobj | 0.3.5 | modelr | 0.1.10 | tidyr | 1.2.1 |
-| digest | 0.6.30 | munsell | 0.5.0 | tidyselect | 1.2.0 |
-| downlit | 0.4.2 | numDeriv | 2016.8-1.1 | tidyverse | 1.3.2 |
-| dplyr | 1.0.10 | openssl | 2.0.4 | timechange | 0.1.1 |
-| dtplyr | 1.2.2 | parallelly | 1.32.1 | timeDate | 4021.106 |
-| e1071 | 1.7-12 | parsnip | 1.0.3 | tinytex | 0.42 |
-| ellipsis | 0.3.2 | patchwork | 1.1.2 | torch | 0.9.0 |
-| evaluate | 0.18 | pillar | 1.8.1 | triebeard | 0.3.0 |
-| fansi | 1.0.3 | pkgbuild | 1.4.0 | TTR | 0.24.3 |
-| farver | 2.1.1 | pkgconfig | 2.0.3 | tune | 1.0.1 |
-| fastmap | 1.1.0 | pkgdown | 2.0.6 | tzdb | 0.3.0 |
-| fontawesome | 0.4.0 | pkgload | 1.3.2 | urlchecker | 1.0.1 |
-| forcats | 0.5.2 | plotly | 4.10.1 | urltools | 1.7.3 |
-| foreach | 1.5.2 | plyr | 1.8.8 | usethis | 2.1.6 |
-| forge | 0.2.0 | praise | 1.0.0 | utf8 | 1.2.2 |
-| fs | 1.5.2 | prettyunits | 1.1.1 | uuid | 1.1-0 |
-| furrr | 0.3.1 | pROC | 1.18.0 | vctrs | 0.5.1 |
-| future | 1.29.0 | processx | 3.8.0 | viridisLite | 0.4.1 |
-| future.apply | 1.10.0 | prodlim | 2019.11.13 | vroom | 1.6.0 |
-| gargle | 1.2.1 | profvis | 0.3.7 | waldo | 0.4.0 |
-| generics | 0.1.3 | progress | 1.2.2 | warp | 0.2.0 |
-| gert | 1.9.1 | progressr | 0.11.0 | whisker | 0.4 |
-| ggplot2 | 3.4.0 | promises | 1.2.0.1 | withr | 2.5.0 |
-| gh | 1.3.1 | proxy | 0.4-27 | workflows | 1.1.2 |
-| gistr | 0.9.0 | pryr | 0.1.5 | workflowsets | 1.0.0 |
-| gitcreds | 0.1.2 | ps | 1.7.2 | xfun | 0.35 |
-| globals | 0.16.2 | purrr | 0.3.5 | xgboost | 1.6.0.1 |
-| glue | 1.6.2 | quantmod | 0.4.20 | XML | 3.99-0.12 |
-| googledrive | 2.0.0 | r2d3 | 0.2.6 | xml2 | 1.3.3 |
-| googlesheets4 | 1.0.1 | R6 | 2.5.1 | xopen | 1.0.0 |
-| gower | 1.0.0 | ragg | 1.2.4 | xtable | 1.8-4 |
-| GPfit | 1.0-8 | rappdirs | 0.3.3 | xts | 0.12.2 |
-| gtable | 0.3.1 | rbokeh | 0.5.2 | yaml | 2.3.6 |
-| hardhat | 1.2.0 | rcmdcheck | 1.4.0 | yardstick | 1.1.0 |
-| haven | 2.5.1 | RColorBrewer | 1.1-3 | zip | 2.2.2 |
-| hexbin | 1.28.2 | Rcpp | 1.0.9 | zoo | 1.8-11 |
+To check the libraries included in Azure Synapse Runtime for Apache Spark 3.3 for Java/Scala, Python, and R, go to [Azure Synapse Runtime for Apache Spark 3.3](https://github.com/microsoft/synapse-spark-runtime/tree/main/Synapse/spark3.3).
## Next steps - [Manage libraries for Apache Spark pools in Azure Synapse Analytics](apache-spark-manage-pool-packages.md)
synapse-analytics Apache Spark 34 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-34-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.4
description: New runtime is in GA stage. Try it and use Spark 3.4.1, Python 3.10, Delta Lake 2.4. -+
synapse-analytics Apache Spark Azure Create Spark Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-create-spark-configuration.md
Title: Manage Apache Spark configuration
description: Learn how to create an Apache Spark configuration for your synapse studio. -+
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
Title: Monitor Apache Spark applications with Azure Log Analytics
description: Learn how to enable the Synapse Studio connector for collecting and sending the Apache Spark application metrics and logs to your Log Analytics workspace. -+
synapse-analytics Apache Spark Azure Portal Add Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-azure-portal-add-libraries.md
Title: Manage Apache Spark packages
description: Learn how to add and manage libraries used by Apache Spark in Azure Synapse Analytics. -+ Last updated 04/15/2023
synapse-analytics Apache Spark Intelligent Cache Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-intelligent-cache-concept.md
Last updated 7/7/2022 -+
synapse-analytics Apache Spark Machine Learning Mllib Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-machine-learning-mllib-notebook.md
Title: 'Tutorial: Build a machine learning app with Apache Spark MLlib'
description: A tutorial on how to use Apache Spark MLlib to create a machine learning app that analyzes a dataset by using classification through logistic regression. -+ Last updated 02/29/2024
synapse-analytics Apache Spark Notebook Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-notebook-concept.md
Last updated 11/18/2020 -+
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
Title: Hyperspace indexes for Apache Spark
description: Performance optimization for Apache Spark using Hyperspace indexes -+
zone_pivot_groups: programming-languages-spark-all-minus-sql-r
Hyperspace introduces the ability for Apache Spark users to create indexes on their datasets, such as CSV, JSON, and Parquet, and use them for potential query and workload acceleration.
-In this article, we highlight the basics of Hyperspace, emphasize its simplicity, and show how it can be used by just about anyone.
+In this article, we highlight the basics of Hyperspace, emphasize its simplicity, and show how just about anyone can use it.
Disclaimer: Hyperspace helps accelerate your workloads or queries under two circumstances:
This document is also available in notebook form, for [Python](https://github.co
>[!Note] > Hyperspace is supported in Azure Synapse Runtime for Apache Spark 3.1 (unsupported) and Azure Synapse Runtime for Apache Spark 3.2 (End of Support announced). However, Hyperspace is not supported in Azure Synapse Runtime for Apache Spark 3.3 (GA).
-To begin with, start a new Spark session. Since this document is a tutorial merely to illustrate what Hyperspace can offer, you will make a configuration change that allows us to highlight what Hyperspace is doing on small datasets.
+To begin with, start a new Spark session. Since this document is a tutorial merely to illustrate what Hyperspace can offer, you'll make a configuration change that allows us to highlight what Hyperspace is doing on small datasets.
By default, Spark uses broadcast join to optimize join queries when the data size for one side of the join is small (which is the case for the sample data we use in this tutorial). Therefore, we disable broadcast joins so that later, when we run join queries, Spark uses sort-merge join. This is mainly to show how Hyperspace indexes would be used at scale for accelerating join queries.
-The output of running the following cell shows a reference to the successfully created Spark session and prints out '-1' as the value for the modified join config, which indicates that broadcast join is successfully disabled.
+The output of running the following cell shows a reference to the successfully created Spark session and prints '-1' as the value for the modified join config, which indicates that broadcast join is successfully disabled.
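For reference, a minimal PySpark sketch of that configuration change might look like the following; it assumes an active notebook session where `spark` is already defined and uses the standard `spark.sql.autoBroadcastJoinThreshold` setting.

```python
# Disable broadcast joins so that Spark falls back to sort-merge join for the
# join queries later in this walkthrough.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)

# Should print -1, confirming that broadcast joins are disabled.
print(spark.conf.get("spark.sql.autoBroadcastJoinThreshold"))
```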
:::zone pivot = "programming-language-scala"
After indexes are created, you can perform several actions:
* **Vacuum if an index is no longer required.** You can vacuum an index, which forces a physical deletion of the index contents and associated metadata completely from Hyperspace's metadata. **Refresh if the underlying data changes.** You can refresh an existing index to capture that.
-Delete if the index is not needed, you can perform a soft-delete that is, index is not physically deleted but is marked as 'deleted' so it is no longer used in your workloads.
+**Delete if the index isn't needed.** You can perform a soft delete, that is, the index isn't physically deleted but is marked as 'deleted' so it's no longer used in your workloads.
The following sections show how such index management operations can be done in Hyperspace.
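As a quick orientation before those sections, here is a condensed sketch of the management operations using the Hyperspace Python bindings; the index name `deptIndex1` is illustrative, and the exact method names should be checked against the Hyperspace version shipped with your runtime.

```python
from hyperspace import Hyperspace

# Assumes an active Spark session with the Hyperspace package available.
hs = Hyperspace(spark)

hs.refreshIndex("deptIndex1")   # refresh: rebuild the index after the underlying data changes
hs.deleteIndex("deptIndex1")    # delete: soft delete only; the index is marked as 'deleted'
hs.restoreIndex("deptIndex1")   # restore: bring a soft-deleted index back into use
hs.deleteIndex("deptIndex1")    # a vacuum is preceded by a soft delete
hs.vacuumIndex("deptIndex1")    # vacuum: physically remove the index contents and metadata
```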
hyperspace.CreateIndex(deptDF, deptIndexConfig2);
## List indexes
-The code that follows shows how you can list all available indexes in a Hyperspace instance. It uses "indexes" API that returns information about existing indexes as a Spark DataFrame so you can perform additional operations.
+The code that follows shows how you can list all available indexes in a Hyperspace instance. It uses the "indexes" API that returns information about existing indexes as a Spark DataFrame so you can perform more operations.
For instance, you can invoke valid operations on this DataFrame to check its content or analyze it further (for example, filtering specific indexes or grouping them according to some desired property), as sketched below.
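As a sketch of that kind of analysis (the `state` column name is an assumption about the shape of the returned DataFrame; inspect the schema on your runtime before relying on it):

```python
from hyperspace import Hyperspace

hs = Hyperspace(spark)

# "indexes" returns a regular Spark DataFrame, so the usual DataFrame
# operations apply.
all_indexes = hs.indexes()
all_indexes.printSchema()        # field names can differ across Hyperspace versions
all_indexes.show(truncate=False)

# Example follow-up: keep only active indexes (assumes a 'state' column).
all_indexes.filter("state = 'ACTIVE'").show(truncate=False)
```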
-The following cell uses DataFrame's 'show' action to fully print the rows and show details of our indexes in a tabular form. For each index, you can see all information Hyperspace has stored about it in the metadata. You will immediately notice the following:
+The following cell uses DataFrame's 'show' action to fully print the rows and show details of our indexes in a tabular form. For each index, you can see all information Hyperspace has stored about it in the metadata. You'll immediately notice the following:
* config.indexName, config.indexedColumns, config.includedColumns, and status.status are the fields that a user normally refers to.
* dfSignature is automatically generated by Hyperspace and is unique for each index. Hyperspace uses this signature internally to maintain the index and exploit it at query time.
Results in:
You can drop an existing index by using the "deleteIndex" API and providing the index name. Index deletion does a soft delete: it mainly updates the index's status in the Hyperspace metadata from "ACTIVE" to "DELETED". This excludes the dropped index from any future query optimization, and Hyperspace no longer picks that index for any query.
-However, index files for a deleted index still remain available (since it is a soft-delete), so that the index could be restored if user asks for.
+However, index files for a deleted index remain available (since it's a soft delete), so the index can be restored if the user asks for it.
The following cell deletes the index named "deptIndex2" and lists the Hyperspace metadata after that. The output should be similar to the above cell for "List Indexes", except for "deptIndex2", which should now have its status changed to "DELETED".
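A minimal sketch of that soft delete and the follow-up check, assuming the Hyperspace Python bindings used earlier:

```python
from hyperspace import Hyperspace

hs = Hyperspace(spark)

# Soft delete: the index status changes from "ACTIVE" to "DELETED", but the
# underlying index files are kept so the index can be restored later.
hs.deleteIndex("deptIndex2")

# Listing the indexes again should show "deptIndex2" with a deleted status.
hs.indexes().show(truncate=False)

# If the delete was a mistake, the index can be brought back into use.
hs.restoreIndex("deptIndex2")
```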
deptDFrame: org.apache.spark.sql.DataFrame = [deptId: int, deptName: string ...
&nbsp; &nbsp;
-This only shows the top 5 rows
+This only shows the top five rows
&nbsp; &nbsp;
appendData.Write().Mode("Append").Parquet(testDataLocation);
::: zone-end
-Hybrid scan is disabled by default. Therefore, you will see that because we appended new data, Hyperspace will decide *not* to use the index.
+Hybrid scan is disabled by default. Therefore, you'll see that because we appended new data, Hyperspace will decide *not* to use the index.
-In the output, you will see no plan differences (hence, no highlighting).
+In the output, you'll see no plan differences (hence, no highlighting).
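If you want appended data to be covered without a full refresh, hybrid scan can be switched on explicitly. The configuration key below is an assumption based on the Hyperspace documentation and may differ by version, so verify it before relying on it:

```python
# Assumed Hyperspace configuration key for hybrid scan; check the Hyperspace
# documentation for the runtime version you are using.
spark.conf.set("spark.hyperspace.index.hybridscan.enabled", "true")
```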
:::zone pivot = "programming-language-scala"
productIndex2:abfss://datasets@hyperspacebenchmark.dfs.core.windows.net/hyperspa
When you're ready to update your indexes but don't want to rebuild your entire index, Hyperspace supports updating indexes in an incremental manner using the `hs.refreshIndex("name", "incremental")` API. This eliminates the need for a full rebuild of the index from scratch, utilizing previously created index files and updating indexes only on the newly added data.
-Of course, be sure to use the complementary `optimizeIndex` API (shown below) periodically to make sure you do not see performance regressions. We recommend calling optimize at least once for every 10 times you call `refreshIndex(..., "incremental")`, assuming the data you added/removed is < 10% of the original dataset. For instance, if your original dataset is 100 GB, and you've added/removed data in increments/decrements of 1 GB, you can call `refreshIndex` 10 times before calling `optimizeIndex`. Please note that this example is simply used for illustration and you have to adapt this for your workloads.
+Of course, be sure to use the complementary `optimizeIndex` API (shown below) periodically to make sure you don't see performance regressions. We recommend calling optimize at least once for every 10 times you call `refreshIndex(..., "incremental")`, assuming the data you added/removed is < 10% of the original dataset. For instance, if your original dataset is 100 GB, and you've added/removed data in increments/decrements of 1 GB, you can call `refreshIndex` 10 times before calling `optimizeIndex`. Note that this example is for illustration and you have to adapt this for your workloads.
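A sketch of that cadence, using the API names quoted above (the index name `productIndex2` comes from this walkthrough, and the batch loop is purely illustrative):

```python
from hyperspace import Hyperspace

hs = Hyperspace(spark)

# Roughly one optimize per ten incremental refreshes, assuming each increment
# is small (about 1% of the original dataset in the example above).
for batch in range(10):
    # ... append or remove a small batch of data in the underlying table ...
    hs.refreshIndex("productIndex2", "incremental")

# Compact the small index files produced by the incremental refreshes.
hs.optimizeIndex("productIndex2")
```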
-In the example below, notice the addition of a Sort node in the query plan when indexes are used. This is because partial indexes are created on the appended data files, causing Spark to introduce a `Sort`. Please also note that `Shuffle` i.e. Exchange is still eliminated from the plan, giving you the appropriate acceleration.
+In the example below, notice the addition of a Sort node in the query plan when indexes are used. This is because partial indexes are created on the appended data files, causing Spark to introduce a `Sort`. Also note that `Shuffle`, that is, Exchange, is still eliminated from the plan, giving you the appropriate acceleration.
:::zone pivot = "programming-language-scala"
Project [name#820, qty#821, date#822, qty#827, date#828]
## Optimize index layout
-After calling incremental refreshes multiple times on newly appended data (e.g. if the user writes to data in small batches or in case of streaming scenarios), the number of index files tend to become large affecting the performance of the index (large number of small files problem). Hyperspace provides `hyperspace.optimizeIndex("indexName")` API to optimize the index layout and reduce the large files problem.
+After calling incremental refreshes multiple times on newly appended data (for example, if the user writes to data in small batches or in streaming scenarios), the number of index files tends to become large, affecting the performance of the index (the large number of small files problem). Hyperspace provides the `hyperspace.optimizeIndex("indexName")` API to optimize the index layout and reduce the large files problem.
-In the plan below, notice that Hyperspace has removed the additional Sort node in the query plan. Optimize can help avoiding sorting for any index bucket which contains only one file. However, this will only be true if ALL the index buckets have at most 1 file per bucket, after `optimizeIndex`.
+In the plan below, notice that Hyperspace has removed the extra Sort node in the query plan. Optimize can help avoid sorting for any index bucket that contains only one file. However, this is only true if ALL the index buckets have at most one file per bucket after `optimizeIndex`.
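A short sketch of the optimize call; the optional mode argument is an assumption about the Hyperspace API and may not be available in every version:

```python
from hyperspace import Hyperspace

hs = Hyperspace(spark)

# Default optimize pass: compacts the small index files created by
# incremental refreshes.
hs.optimizeIndex("productIndex2")

# Assumed optional mode ("full" rewrites all index buckets rather than only
# the small files); verify availability before using it.
# hs.optimizeIndex("productIndex2", "full")
```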
:::zone pivot = "programming-language-scala"
synapse-analytics Apache Spark Pool Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-pool-configurations.md
-+ Last updated 09/07/2022 # Apache Spark pool configurations in Azure Synapse Analytics
-A Spark pool is a set of metadata that defines the compute resource requirements and associated behavior characteristics when a Spark instance is instantiated. These characteristics include but aren't limited to name, number of nodes, node size, scaling behavior, and time to live. A Spark pool in itself doesn't consume any resources. There are no costs incurred with creating Spark pools. Charges are only incurred once a Spark job is executed on the target Spark pool and the Spark instance is instantiated on demand.
+A Spark pool is a set of metadata that defines the compute resource requirements and associated behavior characteristics when a Spark instance is instantiated. These characteristics include but aren't limited to name, number of nodes, node size, scaling behavior, and time to live. A Spark pool in itself doesn't consume any resources. There are no costs incurred with creating Spark pools. Charges are only incurred once a Spark job is executed on the target Spark pool and the Spark instance is instantiated on demand.
You can read how to create a Spark pool and see all of its properties in [Get started with Spark pools in Synapse Analytics](../quickstart-create-apache-spark-pool-portal.md). ## Isolated Compute
-The Isolated Compute option provides more security to Spark compute resources from untrusted services by dedicating the physical compute resource to a single customer. Isolated compute option is best suited for workloads that require a high degree of isolation from other customer's workloads for reasons that include meeting compliance and regulatory requirements. The Isolate Compute option is only available with the XXXLarge (80 vCPU / 504 GB) node size and only available in the following regions. The isolated compute option can be enabled or disabled after pool creation although the instance may need to be restarted. If you expect to enable this feature in the future, ensure that your Synapse workspace is created in an isolated compute supported region.
+The Isolated Compute option provides more security to Spark compute resources from untrusted services by dedicating the physical compute resource to a single customer. The Isolated Compute option is best suited for workloads that require a high degree of isolation from other customers' workloads, for reasons that include meeting compliance and regulatory requirements. The Isolated Compute option is only available with the XXXLarge (80 vCPU / 504 GB) node size and only in the following regions. The Isolated Compute option can be enabled or disabled after pool creation, although the instance might need to be restarted. If you expect to enable this feature in the future, ensure that your Synapse workspace is created in an isolated compute supported region.
* East US * West US 2
The Isolated Compute option provides more security to Spark compute resources fr
## Nodes
-Apache Spark pool instance consists of one head node and two or more worker nodes with a minimum of three nodes in a Spark instance. The head node runs extra management services such as Livy, Yarn Resource Manager, Zookeeper, and the Spark driver. All nodes run services such as Node Agent and Yarn Node Manager. All worker nodes run the Spark Executor service.
+An Apache Spark pool instance consists of one head node and two or more worker nodes, with a minimum of three nodes in a Spark instance. The head node runs extra management services such as Livy, Yarn Resource Manager, Zookeeper, and the Spark driver. All nodes run services such as Node Agent and Yarn Node Manager. All worker nodes run the Spark Executor service.
## Node Sizes
-A Spark pool can be defined with node sizes that range from a Small compute node with 4 vCore and 32 GB of memory up to a XXLarge compute node with 64 vCore and 432 GB of memory per node. Node sizes can be altered after pool creation although the instance may need to be restarted.
+A Spark pool can be defined with node sizes that range from a Small compute node with 4 vCores and 32 GB of memory up to an XXLarge compute node with 64 vCores and 432 GB of memory per node. Node sizes can be altered after pool creation, although the instance might need to be restarted.
|Size | vCore | Memory| |--||-|
A Spark pool can be defined with node sizes that range from a Small compute node
## Autoscale
-Autoscale for Apache Spark pools allows automatic scale up and down of compute resources based on the amount of activity. When the autoscale feature is enabled, you set the minimum, and maximum number of nodes to scale. When the autoscale feature is disabled, the number of nodes set will remain fixed. This setting can be altered after pool creation although the instance may need to be restarted.
+Autoscale for Apache Spark pools allows automatic scale-up and scale-down of compute resources based on the amount of activity. When the autoscale feature is enabled, you set the minimum and maximum number of nodes to scale. When the autoscale feature is disabled, the number of nodes set remains fixed. This setting can be altered after pool creation, although the instance might need to be restarted.
## Elastic pool storage
-Apache Spark pools now support elastic pool storage. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach extra disks if needed. Apache Spark pools utilize temporary disk storage while the pool is instantiated. Spark jobs write shuffle map outputs, shuffle data and spilled data to local VM disks. Examples of operations that may utilize local disk are sort, cache, and persist. When temporary VM disk space runs out, Spark jobs may fail due to ΓÇ£Out of Disk SpaceΓÇ¥ error (java.io.IOException: No space left on device). With ΓÇ£Out of Disk SpaceΓÇ¥ errors, much of the burden to prevent jobs from failing shifts to the customer to reconfigure the Spark jobs (for example, tweak the number of partitions) or clusters (for example, add more nodes to the cluster). These errors might not be consistent, and the user may end up experimenting heavily by running production jobs. This process can be expensive for the user in multiple dimensions:
+Apache Spark pools now support elastic pool storage. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach extra disks if needed. Apache Spark pools utilize temporary disk storage while the pool is instantiated. Spark jobs write shuffle map outputs, shuffle data, and spilled data to local VM disks. Examples of operations that could utilize local disk are sort, cache, and persist. When temporary VM disk space runs out, Spark jobs could fail due to an "Out of Disk Space" error (java.io.IOException: No space left on device). With "Out of Disk Space" errors, much of the burden to prevent jobs from failing shifts to the customer to reconfigure the Spark jobs (for example, tweak the number of partitions) or clusters (for example, add more nodes to the cluster). These errors might not be consistent, and the user might end up experimenting heavily by running production jobs. This process can be expensive for the user in multiple dimensions:
-* Wasted time. Customers are required to experiment heavily with job configurations via trial and error and are expected to understand SparkΓÇÖs internal metrics to make the correct decision.
-* Wasted resources. Since production jobs can process varying amount of data, Spark jobs can fail non-deterministically if resources aren't over-provisioned. For instance, consider the problem of data skew, which may result in a few nodes requiring more disk space than others. Currently in Synapse, each node in a cluster gets the same size of disk space and increasing disk space across all nodes isn't an ideal solution and leads to tremendous waste.
-* Slowdown in job execution. In the hypothetical scenario where we solve the problem by autoscaling nodes (assuming costs aren't an issue to the end customer), adding a compute node is still expensive (takes a few minutes) as opposed to adding storage (takes a few seconds).
+* Wasted time. Customers are required to experiment heavily with job configurations via trial and error and are expected to understand Spark's internal metrics to make the correct decision.
+* Wasted resources. Since production jobs can process varying amounts of data, Spark jobs can fail non-deterministically if resources aren't over-provisioned. For instance, consider the problem of data skew, which could result in a few nodes requiring more disk space than others. Currently in Synapse, each node in a cluster gets the same size of disk space, and increasing disk space across all nodes isn't an ideal solution and leads to tremendous waste.
+* Slowdown in job execution. In the hypothetical scenario where we solve the problem by autoscaling nodes (assuming costs aren't an issue to the end customer), adding a compute node is still expensive (takes a few minutes) as opposed to adding storage (takes a few seconds).
No action is required by you, and you should see fewer job failures as a result. > [!NOTE]
-> Azure Synapse Elastic pool storage is currently in Public Preview. During Public Preview there is no charge for use of Elastic pool storage.
+> Azure Synapse Elastic pool storage is currently in Public Preview. During Public Preview there is no charge for use of Elastic pool storage.
## Automatic pause
-The automatic pause feature releases resources after a set idle period, reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether the autoscale is enabled or disabled. This setting can be altered after pool creation although active sessions will need to be restarted.
+The automatic pause feature releases resources after a set idle period, reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether the autoscale is enabled or disabled. This setting can be altered after pool creation although active sessions will need to be restarted.
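If you manage pools programmatically, autoscale and automatic pause can be set together at pool creation time. The sketch below uses the `azure-mgmt-synapse` Python SDK; the client, model, and field names are assumptions based on that SDK and should be verified against its current reference before use.

```python
# Sketch: create a Spark pool with autoscale and automatic pause enabled.
# Requires the azure-identity and azure-mgmt-synapse packages; names below are
# assumptions and should be checked against the SDK version you install.
from azure.identity import DefaultAzureCredential
from azure.mgmt.synapse import SynapseManagementClient
from azure.mgmt.synapse.models import (
    AutoPauseProperties,
    AutoScaleProperties,
    BigDataPoolResourceInfo,
)

client = SynapseManagementClient(DefaultAzureCredential(), "<subscription-id>")

pool = BigDataPoolResourceInfo(
    location="westus2",
    spark_version="3.4",
    node_size="Small",
    node_size_family="MemoryOptimized",
    auto_scale=AutoScaleProperties(enabled=True, min_node_count=3, max_node_count=10),
    auto_pause=AutoPauseProperties(enabled=True, delay_in_minutes=15),
)

poller = client.big_data_pools.begin_create_or_update(
    "<resource-group>", "<workspace-name>", "<pool-name>", pool
)
print(poller.result().provisioning_state)
```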
## Next steps
synapse-analytics Apache Spark To Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-to-power-bi.md
Title: 'Azure Synapse Studio notebooks'
description: This tutorial provides an overview on how to create a Power BI dashboard using Apache Spark and a Serverless SQL pool. -+
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Title: Apache Spark version support
description: Supported versions of Spark, Scala, Python -+ Last updated 03/08/2024
The following table lists the runtime name, Apache Spark version, and release da
| Runtime name | Release date | Release stage | End of Support announcement date | End of Support effective date | | | || | | | [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | GA (as of Apr 8, 2024) | Q2 2025| Q1 2026|
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | 3/31/2025 |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | **End of Support announced** | July 12, 2024 | 3/31/2025 |
| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __deprecated and soon disabled__ | July 8, 2023 | __July 8, 2024__ | | [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __deprecated and soon disabled__ | January 26, 2023 | __January 26, 2024__ | | [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __deprecated and soon disabled__ | July 29, 2022 | __September 29, 2023__ |
synapse-analytics Azure Synapse Diagnostic Emitters Azure Eventhub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-eventhub.md
Title: Collect your Apache Spark applications logs and metrics using Azure Event
description: In this tutorial, you learn how to use the Synapse Apache Spark diagnostic emitter extension to emit Apache Spark applicationsΓÇÖ logs, event logs and metrics to your Azure Event Hubs. -+
synapse-analytics Azure Synapse Diagnostic Emitters Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/azure-synapse-diagnostic-emitters-azure-storage.md
Title: Collect your Apache Spark applications logs and metrics using Azure Stora
description: This article shows how to use the Synapse Spark diagnostic emitter extension to collect logs, event logs, and metrics from your cluster, and learn how to integrate the Grafana dashboards. -+
synapse-analytics Connect Monitor Azure Synapse Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/connect-monitor-azure-synapse-spark-application-level-metrics.md
Title: Collect Apache Spark applications metrics using APIs
description: Tutorial - Learn how to integrate your existing on-premises Prometheus server with Azure Synapse workspace for near real-time Azure Spark application metrics using the Synapse Prometheus connector. -+
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/intellij-tool-synapse.md
Title: Tutorial - Azure Toolkit for IntelliJ (Spark application)
description: Tutorial - Use the Azure Toolkit for IntelliJ to develop Spark applications, which are written in Scala, and submit them to a serverless Apache Spark pool. -+
synapse-analytics Optimize Write For Apache Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/optimize-write-for-apache-spark.md
Last updated 08/03/2022 -+ # The need for optimize write on Apache Spark
-Analytical workloads on Big Data processing engines such as Apache Spark perform most efficiently when using standardized larger file sizes. The relation between the file size, the number of files, the number of Spark workers and its configurations, play a critical role on performance. Ingestion workloads into data lake tables may have the inherited characteristic of constantly writing lots of small files; this scenario is commonly known as the "small file problem".
+Analytical workloads on Big Data processing engines such as Apache Spark perform most efficiently when using standardized larger file sizes. The relationship between the file size, the number of files, the number of Spark workers, and their configurations plays a critical role in performance. Ingestion workloads into data lake tables could have the inherent characteristic of constantly writing lots of small files; this scenario is commonly known as the "small file problem".
-Optimize Write is a Delta Lake on Synapse feature that reduces the number of files written and aims to increase individual file size of the written data. It dynamically optimizes partitions while generating files with a default 128 MB size. The target file size may be changed per workload requirements using [configurations](apache-spark-azure-create-spark-configuration.md).
+Optimize Write is a Delta Lake on Synapse feature that reduces the number of files written and aims to increase individual file size of the written data. It dynamically optimizes partitions while generating files with a default 128-MB size. The target file size might be changed per workload requirements using [configurations](apache-spark-azure-create-spark-configuration.md).
This feature achieves the file size by using an extra data shuffle phase over partitions, causing an extra processing cost while writing the data. The small write penalty should be outweighed by read efficiency on the tables.
This feature achieves the file size by using an extra data shuffle phase over pa
### When to use it
-* Delta lake partitioned tables subject to write patterns that generate suboptimal (less than 128 MB) or non-standardized files sizes (files with different sizes between itself).
+* Delta lake partitioned tables subject to write patterns that generate suboptimal (less than 128 MB) or nonstandardized file sizes (files with inconsistent sizes within the same table).
* Repartitioned data frames that will be written to disk with suboptimal file sizes.
* Delta lake partitioned tables targeted by small batch SQL commands like UPDATE, DELETE, MERGE, CREATE TABLE AS SELECT, INSERT INTO, etc.
* Streaming ingestion scenarios with append data patterns to Delta lake partitioned tables where the extra write latency is tolerable.
This feature achieves the file size by using an extra data shuffle phase over pa
## How to enable and disable the optimize write feature
-The optimize write feature is disabled by default. In Spark 3.3 Pool, it is enabled by default for partitioned tables.
+The optimize write feature is disabled by default. In Spark 3.3 Pool, it's enabled by default for partitioned tables.
Once the configuration is set for the pool or session, all Spark write patterns will use the functionality.
spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "true")
SET `spark.microsoft.delta.optimizeWrite.enabled` = true ```
-To check the current configuration value, use the command as shown below:
+To check the current configuration value, use the command as shown:
1. Scala and PySpark
spark.conf.get("spark.microsoft.delta.optimizeWrite.enabled")
SET `spark.microsoft.delta.optimizeWrite.enabled` ```
-To disable the optimize write feature, change the following configuration as shown below:
+To disable the optimize write feature, change the following configuration as shown:
1. Scala and PySpark
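To pull the enable, check, and disable commands above together, here's a minimal PySpark sketch. The `spark.microsoft.delta.optimizeWrite.enabled` key and its session-level use through `spark.conf` come from this article; the commented write path and column names are illustrative assumptions only.

```python
# Minimal PySpark sketch of the session-level Optimize Write commands above.
# The config key comes from the article; the example path/column are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Enable Optimize Write for the current session (disabled by default,
# except for partitioned tables on the Spark 3.3 pool).
spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "true")

# Check the current value.
print(spark.conf.get("spark.microsoft.delta.optimizeWrite.enabled"))

# Subsequent Delta writes shuffle data to produce fewer, larger (~128 MB) files, e.g.:
# df.write.format("delta").partitionBy("date").save("abfss://<container>@<account>.dfs.core.windows.net/events")

# Disable the feature again for this session if it isn't needed.
spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "false")
```

As the article notes, the same key can also be set at the pool level through a Spark configuration, in which case every session on the pool inherits it.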
synapse-analytics Reservation Of Executors In Dynamic Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/reservation-of-executors-in-dynamic-allocation.md
Last updated 11/07/2022 -+ # Reservation of Executors as part of Dynamic Allocation in Synapse Spark Pools
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Title: Synapse runtime for Apache Spark lifecycle and supportability description: Lifecycle and support policies for Synapse runtime for Apache Spark---+++ Last updated 03/08/2024
synapse-analytics Spark Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/spark-dotnet.md
Last updated 05/01/2020-+ # Use .NET for Apache Spark with Azure Synapse Analytics
synapse-analytics Synapse File Mount Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-file-mount-api.md
Last updated 07/27/2022 -+
synapse-analytics Tutorial Spark Pool Filesystem Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-spark-pool-filesystem-spec.md
description: Tutorial for how to use FSSPEC in PySpark notebook to read/write AD
-+ Last updated 11/02/2021
synapse-analytics Tutorial Use Pandas Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/tutorial-use-pandas-spark-pool.md
description: Tutorial for how to use Pandas in a PySpark notebook to read/write
-+ Last updated 11/02/2021
synapse-analytics Use Prometheus Grafana To Monitor Apache Spark Application Level Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/use-prometheus-grafana-to-monitor-apache-spark-application-level-metrics.md
Title: Tutorial - Monitor Apache Spark Applications metrics with Prometheus and
description: Tutorial - Learn how to deploy the Apache Spark application metrics solution to an Azure Kubernetes Service (AKS) cluster and learn how to integrate the Grafana dashboards. -+
synapse-analytics Gen2 Migration Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/gen2-migration-schedule.md
Title: Migrate your dedicated SQL pool (formerly SQL DW) to Gen2
description: Instructions for migrating an existing dedicated SQL pool (formerly SQL DW) to Gen2 and the migration schedule by region. -+ Last updated 01/21/2020
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
Title: Maintenance schedules for Synapse SQL pool
description: Maintenance scheduling enables customers to plan around the necessary scheduled maintenance events that Azure Synapse Analytics uses to roll out new features, upgrades, and patches. -+ Last updated 01/10/2024
synapse-analytics Memory Concurrency Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md
Title: Memory and concurrency limits
description: View the memory and concurrency limits allocated to the various performance levels and resource classes for dedicated SQL pool in Azure Synapse Analytics. -+ Last updated 04/04/2021
synapse-analytics Quickstart Bulk Load Copy Tsql Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql-examples.md
Last updated 06/15/2022 -+
synapse-analytics Quickstart Bulk Load Copy Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md
Title: "Quickstart: Bulk load data using a single T-SQL statement"
description: Bulk load data using the COPY statement -+ Last updated 11/20/2020
synapse-analytics Quickstart Configure Workload Isolation Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-portal.md
Title: "Quickstart: Configure workload isolation - Portal"
description: Use Azure portal to configure workload isolation for dedicated SQL pool. -+ Last updated 11/28/2022
synapse-analytics Quickstart Configure Workload Isolation Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-configure-workload-isolation-tsql.md
Title: "Quickstart: Configure workload isolation - T-SQL"
description: Use T-SQL to configure workload isolation. -+ Last updated 04/27/2020
synapse-analytics Quickstart Create A Workload Classifier Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-portal.md
Title: "Quickstart: Create a workload classifier - Portal"
description: Use Azure portal to create a workload classifier with high importance. -+ Last updated 05/04/2020
synapse-analytics Quickstart Create A Workload Classifier Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-create-a-workload-classifier-tsql.md
Title: "Quickstart: Create a workload classifier - T-SQL"
description: Use T-SQL to create a workload classifier with high importance. -+ Last updated 02/04/2020
synapse-analytics Quickstart Scale Compute Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-portal.md
Title: "Quickstart: Scale compute for an Azure Synapse dedicated SQL pool (forme
description: You can scale compute for an Azure Synapse dedicated SQL pool (formerly SQL DW) with the Azure portal. -+ Last updated 02/22/2023
synapse-analytics Quickstart Scale Compute Workspace Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/quickstart-scale-compute-workspace-portal.md
Title: "Quickstart: Scale compute for an Azure Synapse dedicated SQL pool in a S
description: Learn how to scale compute for an Azure Synapse dedicated SQL pool in a Synapse workspace with the Azure portal. -+ Last updated 02/22/2023
synapse-analytics Release Notes 10 0 10106 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/release-notes-10-0-10106-0.md
Title: Release notes for dedicated SQL pool (formerly SQL DW)
description: Release notes for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+ Last updated 3/24/2022
synapse-analytics Resource Classes For Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/resource-classes-for-workload-management.md
Title: Resource classes for workload management
description: Guidance for using resource classes to manage concurrency and compute resources for queries in Azure Synapse Analytics. -+ Last updated 02/04/2020
synapse-analytics Sql Data Warehouse Concept Resource Utilization Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity.md
Title: Manageability and monitoring - query activity, resource utilization
description: Learn what capabilities are available to manage and monitor Azure Synapse Analytics. Use the Azure portal and Dynamic Management Views (DMVs) to understand query activity and resource utilization of your data warehouse. -+ Last updated 04/08/2024
synapse-analytics Sql Data Warehouse How To Configure Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-configure-workload-importance.md
Title: Configure workload importance for dedicated SQL pool
description: Learn how to set request level importance in Azure Synapse Analytics. -+ Last updated 05/15/2020
synapse-analytics Sql Data Warehouse How To Manage And Monitor Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-how-to-manage-and-monitor-workload-importance.md
Title: Manage and monitor workload importance in dedicated SQL pool
description: Learn how to manage and monitor request level importance dedicated SQL pool for Azure Synapse Analytics. -+ Last updated 02/04/2020
synapse-analytics Sql Data Warehouse Monitor Workload Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-monitor-workload-portal.md
Title: Monitor workload - Azure portal
description: Monitor Synapse SQL using the Azure portal -+ Last updated 09/13/2022
synapse-analytics Sql Data Warehouse Predict https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
Title: Score machine learning models with PREDICT
description: Learn how to score machine learning models using the T-SQL PREDICT function in dedicated SQL pool. -+ Last updated 07/21/2020
synapse-analytics Sql Data Warehouse Reference Collation Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-reference-collation-types.md
Title: Data warehouse collation types
description: Collation types supported for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+ Last updated 01/22/2024
synapse-analytics Sql Data Warehouse Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-data-types.md
Title: Table data types in dedicated SQL pool (formerly SQL DW)
description: Recommendations for defining table data types for dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics. -+ Last updated 01/06/2020
synapse-analytics Sql Data Warehouse Workload Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-classification.md
Title: Workload classification for dedicated SQL pool
description: Guidance for using classification to manage query concurrency, importance, and compute resources for dedicated SQL pool in Azure Synapse Analytics. -+ Last updated 01/24/2022
synapse-analytics Sql Data Warehouse Workload Importance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-importance.md
Title: Workload importance
description: Guidance for setting importance for dedicated SQL pool queries in Azure Synapse Analytics. -+ Last updated 02/04/2020
synapse-analytics Sql Data Warehouse Workload Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-isolation.md
Title: Workload isolation
description: Guidance for setting workload isolation with workload groups in Azure Synapse Analytics. -+ Last updated 11/16/2021
synapse-analytics Sql Data Warehouse Workload Management Portal Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management-portal-monitor.md
Title: Workload management portal monitoring
description: Guidance for workload management portal monitoring in Azure Synapse Analytics. -+ Last updated 03/01/2021
synapse-analytics Sql Data Warehouse Workload Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-workload-management.md
Title: Workload management
description: Guidance for implementing workload management in Azure Synapse Analytics. -+ Last updated 02/04/2020
synapse-analytics Upgrade To Latest Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/upgrade-to-latest-generation.md
Title: Upgrade to the latest generation of dedicated SQL pool (formerly SQL DW)
description: Upgrade Azure Synapse Analytics dedicated SQL pool (formerly SQL DW) to latest generation of Azure hardware and storage architecture. -+ Last updated 02/19/2019
synapse-analytics Workspace Connected Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-create.md
Last updated 03/07/2022-+ # Enable Azure Synapse workspace features for a dedicated SQL pool (formerly SQL DW)
synapse-analytics Workspace Connected Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-experience.md
Last updated 03/07/2022-+ # Enabling Synapse workspace features on an existing dedicated SQL pool (formerly SQL DW)
synapse-analytics Workspace Connected Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/workspace-connected-regions.md
Last updated 02/15/2022-+
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/active-directory-authentication.md
Last updated 3/07/2022 -+ # Use Microsoft Entra authentication for authentication with Synapse SQL
synapse-analytics Best Practices Serverless Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/best-practices-serverless-sql-pool.md
Title: Best practices for serverless SQL pool
description: Recommendations and best practices for working with serverless SQL pool. -+ Last updated 02/15/2023
synapse-analytics Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/connect-overview.md
Last updated 04/15/2020 -+
synapse-analytics Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/connection-strings.md
Last updated 04/15/2020 -+
synapse-analytics Create External Table As Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-external-table-as-select.md
Last updated 04/15/2020 -+ # Store query results to storage using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-external-tables.md
Last updated 02/02/2022 -+ # Create and use native external tables using SQL pools in Azure Synapse Analytics
synapse-analytics Create Use Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-views.md
Last updated 05/20/2020 -+ # Create and use views using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Data Processed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/data-processed.md
Last updated 11/05/2020 -+ # Cost management for serverless SQL pool in Azure Synapse Analytics
synapse-analytics Develop Dynamic Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-dynamic-sql.md
Title: Use dynamic SQL in Synapse SQL
description: Tips for using dynamic SQL in Synapse SQL. -+ Last updated 04/15/2020
synapse-analytics Develop Group By Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-group-by-options.md
Title: Use GROUP BY options in Synapse SQL
description: Synapse SQL allows for developing solutions by implementing different GROUP BY options. -+ Last updated 04/15/2020
synapse-analytics Develop Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-label.md
Title: Use query labels in Synapse SQL
description: Included in this article are essential tips for using query labels in Synapse SQL. -+ Last updated 04/15/2020
synapse-analytics Develop Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-loops.md
Title: Use T-SQL loops
description: Tips for using T-SQL loops, replacing cursors, and developing related solutions with Synapse SQL in Azure Synapse Analytics. -+ Last updated 04/15/2020
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
Last updated 03/23/2022 -+ # How to use OPENROWSET using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Develop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-overview.md
Title: Resources for developing Synapse SQL features
description: Development concepts, design decisions, recommendations, and coding techniques for Synapse SQL. -+ Last updated 03/23/2022
synapse-analytics Develop Storage Files Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-overview.md
Last updated 04/19/2020 -+ # Access external storage using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Develop Storage Files Spark Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-spark-tables.md
Last updated 02/15/2022 -+ # Synchronize Apache Spark for Azure Synapse external table definitions in serverless SQL pool
synapse-analytics Develop Tables Cetas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-cetas.md
Last updated 02/17/2023 -+ # CETAS with Synapse SQL
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-data-types.md
Title: Table data types in Synapse SQL
description: Recommendations for defining table data types in Synapse SQL. -+ Last updated 04/15/2020
synapse-analytics Develop Tables Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-overview.md
Title: Design tables using Synapse SQL
description: Introduction to designing tables in Synapse SQL. -+ Last updated 04/15/2020
synapse-analytics Develop Tables Statistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-statistics.md
Title: Create and update statistics using Azure Synapse SQL resources
description: Recommendations and examples for creating and updating query-optimization statistics in Azure Synapse SQL. -+ Last updated 10/11/2022
synapse-analytics Develop User Defined Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-user-defined-schemas.md
Last updated 04/15/2020 -+
synapse-analytics Develop Variable Assignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-variable-assignment.md
Last updated 04/15/2020 -+ # Assign variables with Synapse SQL
synapse-analytics Develop Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-views.md
Last updated 04/15/2020 -+ # T-SQL views with dedicated SQL pool and serverless SQL pool in Azure Synapse Analytics
synapse-analytics Get Started Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-azure-data-studio.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with Azure Data Studio
synapse-analytics Get Started Connect Sqlcmd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-connect-sqlcmd.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with sqlcmd
synapse-analytics Get Started Power Bi Professional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-power-bi-professional.md
Last updated 04/15/2020 -+
synapse-analytics Get Started Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-ssms.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with SQL Server Management Studio (SSMS)
synapse-analytics Get Started Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-visual-studio.md
Last updated 04/15/2020 -+ # Connect to Synapse SQL with Visual Studio and SSDT
synapse-analytics Mfa Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/mfa-authentication.md
Last updated 04/15/2020 -+
synapse-analytics On Demand Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/on-demand-workspace-overview.md
Last updated 12/06/2022 -+ # Serverless SQL pool in Azure Synapse Analytics
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
Last updated 03/24/2022-+ # Transact-SQL features supported in Azure Synapse SQL
synapse-analytics Query Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-data-storage.md
Last updated 04/15/2020 -+ # Query storage files with serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
Last updated 02/15/2023 -+ # Query Delta Lake (v1) files using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Folders Multiple Csv Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-folders-multiple-csv-files.md
Last updated 04/15/2020 -+ # Query folders and multiple files
synapse-analytics Query Json Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-json-files.md
Last updated 05/20/2020 -+ # Query JSON files using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Parquet Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-parquet-files.md
Last updated 02/15/2023 -+ # Query Parquet files using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Parquet Nested Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-parquet-nested-types.md
Last updated 05/20/2020 -+ # Query nested types in Parquet and JSON files by using serverless SQL pool in Azure Synapse Analytics
synapse-analytics Query Single Csv File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-single-csv-file.md
Last updated 05/20/2020 -+ # Query CSV files
synapse-analytics Query Specific Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-specific-files.md
Last updated 05/20/2020 -+ # Use file metadata in serverless SQL pool queries
synapse-analytics Reference Tsql System Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/reference-tsql-system-views.md
Last updated 04/15/2020 -+ # System views supported in Synapse SQL
synapse-analytics Resource Consumption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resource-consumption-models.md
Last updated 04/15/2020 -+ # Synapse SQL resource consumption
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Title: Serverless SQL pool self-help
description: This article contains information that can help you troubleshoot problems with serverless SQL pool. -+ Last updated 12/08/2022
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
Last updated 03/07/2022 -+ # SQL Authentication in Azure Synapse Analytics
synapse-analytics Tutorial Connect Power Bi Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-connect-power-bi-desktop.md
Last updated 05/20/2020 -+ # Tutorial: Use serverless SQL pool with Power BI Desktop & create a report
synapse-analytics Tutorial Data Analyst https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-data-analyst.md
Last updated 05/25/2022 -+ # Tutorial: Explore and Analyze data lakes with serverless SQL pool
synapse-analytics Tutorial Logical Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/tutorial-logical-data-warehouse.md
Last updated 02/17/2023 -+ # Tutorial: Create Logical Data Warehouse with serverless SQL pool
synapse-analytics Concept Synapse Link Cosmos Db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/concept-synapse-link-cosmos-db-support.md
Last updated 06/02/2021 -+
synapse-analytics Connect Synapse Link Sql Database Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database-vnet.md
Last updated 11/16/2022 -+ # Configure Azure Synapse Link for Azure SQL Database with network security
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
Last updated 11/16/2022--++ # Get started with Azure Synapse Link for Azure SQL Database
synapse-analytics Connect Synapse Link Sql Server 2022 Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022-vnet.md
Last updated 03/15/2023 -+ # Configure Azure Synapse Link for SQL Server 2022 with network security
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
Last updated 11/16/2022--++ # Get started with Azure Synapse Link for SQL Server 2022
synapse-analytics How To Connect Synapse Link Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-connect-synapse-link-cosmos-db.md
Last updated 03/02/2021 -+
synapse-analytics How To Copy To Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-copy-to-sql-pool.md
Last updated 08/10/2020 -+
synapse-analytics How To Query Analytical Store Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/how-to-query-analytical-store-spark.md
Last updated 11/02/2021 -+
synapse-analytics Sql Database Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-database-synapse-link.md
Last updated 11/16/2022--++ # Azure Synapse Link for Azure SQL Database
synapse-analytics Sql Server 2022 Synapse Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-server-2022-synapse-link.md
Last updated 11/16/2022--++ # Azure Synapse Link for SQL Server 2022
synapse-analytics Sql Synapse Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/sql-synapse-link-overview.md
Last updated 11/16/2022--++ # What is Azure Synapse Link for SQL?
synapse-analytics Troubleshoot Synapse Studio And Storage Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-and-storage-connectivity.md
Last updated 02/15/2022-+ # Troubleshoot connectivity between Azure Synapse Analytics Synapse Studio and storage
synapse-analytics Troubleshoot Synapse Studio Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio-powershell.md
Title: Troubleshoot Synapse Studio connectivity description: Troubleshoot Azure Synapse Studio connectivity using PowerShell-+ Last updated 10/30/2020 --++ # Troubleshoot Synapse Studio connectivity with PowerShell
-Azure Synapse Studio depends on a set of Web API endpoints to work properly. This guide will help you identify causes of connectivity issues when you're:
-- configuring your local network (such as network behind a corporate firewall) for accessing Azure Synapse Studio.-- experiencing connectivity issues using Azure Synapse Studio.
+Azure Synapse Studio depends on a set of Web API endpoints to work properly. This guide will help you identify causes of connectivity issues when:
+
+- You're configuring your local network (such as network behind a corporate firewall) for accessing Azure Synapse Studio.
+- You're experiencing connectivity issues using Azure Synapse Studio.
## Prerequisite
Right-click on the following link, and select "Save target as":
- [Test-AzureSynapse.ps1](https://go.microsoft.com/fwlink/?linkid=2119734)
-Alternatively, you may open the link directly, and save the opened script file. Don't save the address of the link above, as it may change in the future.
+Alternatively, you can open the link directly, and save the opened script file. Don't save the address of the link, as it could change in the future.
In file explorer, right-click on the downloaded script file, and select "Run with PowerShell".
The diagnostic session will be started. Wait for it to complete.
![Wait for diagnosis to complete](media/troubleshooting-synapse-studio-powershell/wait-for-diagnosis.png)
-In the end, a diagnosis summary will be shown. If your PC cannot connect to one or more of the endpoints, it will show some suggestions in the "Summary" section.
+In the end, a diagnosis summary will be shown. If your PC can't connect to one or more of the endpoints, it will show some suggestions in the "Summary" section.
![Review diagnostic summary](media/troubleshooting-synapse-studio-powershell/diagnosis-summary.png)
-Additionally, a diagnostic log file for this session will be generated in the same folder as the troubleshooting script. Its location is shown in "General tips" section (`D:\TestAzureSynapse_2020....log`). You may send this file to technical support if necessary.
+Additionally, a diagnostic log file for this session will be generated in the same folder as the troubleshooting script. Its location is shown in "General tips" section (`D:\TestAzureSynapse_2020....log`). You can send this file to technical support if necessary.
-If you're a network administrator and tuning your firewall configuration for Azure Synapse Studio, the technical details shown above the "Summary" section may help.
+If you're a network administrator and tuning your firewall configuration for Azure Synapse Studio, the technical details shown above the "Summary" section might help.
* All the test items (requests) marked with "Passed" mean they have passed connectivity tests, regardless of the HTTP status code.
- For the failed requests, the reason is shown in yellow, such as `NamedResolutionFailure` or `ConnectFailure`. These reasons may help you figure out whether there are misconfigurations with your network environment.
+ For the failed requests, the reason is shown in yellow, such as `NamedResolutionFailure` or `ConnectFailure`. These reasons might help you figure out whether there are misconfigurations with your network environment.
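If running the PowerShell script isn't an option, the same kind of connectivity probe can be sketched in Python. This is not the official Test-AzureSynapse.ps1 script, and the endpoint list below is an illustrative assumption; substitute the hosts the official script reports for your own workspace (for example, `<workspace>.dev.azuresynapse.net`).

```python
# Rough sketch of an endpoint connectivity probe, similar in spirit to
# Test-AzureSynapse.ps1. The endpoint list is illustrative only.
import socket
import ssl

ENDPOINTS = [
    ("web.azuresynapse.net", 443),       # Synapse Studio (assumed host name)
    ("management.azure.com", 443),       # Azure Resource Manager
    ("login.microsoftonline.com", 443),  # Microsoft Entra sign-in
]

def probe(host: str, port: int, timeout: float = 5.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            # Complete a TLS handshake so proxies that allow TCP but break TLS
            # still show up as failures.
            with ssl.create_default_context().wrap_socket(sock, server_hostname=host):
                return "Passed"
    except OSError as exc:
        return f"Failed ({exc.__class__.__name__}: {exc})"

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        print(f"{host}:{port} -> {probe(host, port)}")
```

A name-resolution failure (`socket.gaierror`) usually points at DNS or proxy configuration, while a timeout typically indicates a firewall block, which mirrors the failure reasons (`NamedResolutionFailure`, `ConnectFailure`) reported by the official script.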
## Next steps
synapse-analytics Troubleshoot Synapse Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/troubleshoot/troubleshoot-synapse-studio.md
Last updated 10/01/2023 -+ # Synapse Studio troubleshooting
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Title: Previous monthly updates in Azure Synapse Analytics
description: Archive of the new features and documentation improvements for Azure Synapse Analytics -+ Last updated 07/21/2023
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Title: What's new?
description: Learn about the new features and documentation improvements for Azure Synapse Analytics -+ Last updated 08/01/2023
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Previously updated : 06/06/2024 Last updated : 07/12/2024 # What's new in Azure Virtual Desktop?
Make sure to check back here often to keep up with new updates.
> [!TIP] > See [What's new in documentation](whats-new-documentation.md), where we highlight new and updated articles for Azure Virtual Desktop.
+## June 2024
+
+Here's what changed in June 2024:
+
+### Configuring the default chroma value for Azure Virtual Desktop is now in public preview
+
+The chroma value determines the color space used for encoding. By default, the chroma value is set to 4:2:0, which provides a good balance between image quality and network bandwidth. You can increase the default chroma value to 4:4:4 to improve image quality. You don't need to use GPU acceleration to change the default chroma value.
+
+For more information, see [Configure default chroma value for Azure Virtual Desktop](configure-default-chroma-value.md).
+
+### New Teams SlimCore changes are now available in preview
+
+Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality by redirecting it to the local device when using Windows App or the Remote Desktop client on a supported platform.
+
+There are two versions of Teams, Classic Teams and [New Teams](/microsoftteams/new-teams-desktop-admin), and you can use either with Azure Virtual Desktop. New Teams has feature parity with Classic Teams, and improves performance, reliability, and security.
+
+New Teams can use either SlimCore or the WebRTC Redirector Service. SlimCore is available in preview and you need to [opt in to the preview](/microsoftteams/public-preview-doc-updates?tabs=new-teams-client) to use it. If you use SlimCore, you should also install the WebRTC Redirector Service. This allows a user to fall back to WebRTC, such as if they roam between different devices that don't support the new optimization architecture. For more information about SlimCore and how to opt into the preview, see [New VDI solution for Teams](/microsoftteams/vdi-2).
+
+For more information, see [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
+
+### Preferred application group type behavior for pooled host pools in Azure Virtual Desktop has been updated
+
+An application group is a logical grouping of applications that are available on session hosts in a host pool. Application groups control whether a full desktop or which applications from a host pool are available to users to connect to. An application group can only be assigned to a single host pool, but you can assign multiple application groups to the same host pool. Users can be assigned to multiple application groups across multiple host pools, which enables you to vary the applications and desktops that users can access.
+
+For more information, see [Preferred application group type behavior for pooled host pools in Azure Virtual Desktop](preferred-application-group-type.md).
+
+### Additional data and metrics for Connection Reliability for Azure Virtual Desktop are now available
+
+Using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop. It can help with checks such as which client versions are connecting, opportunities for cost saving, or knowing if you have resource limitations or connectivity issues.
+
+The reliability of a connection can have a significant impact on the end-user experience. Azure Virtual Desktop Insights can help you understand disconnection events and correlations between errors that affect end users.
+
+For more information and instructions, see [Use cases for Azure Virtual Desktop Insights](insights-use-cases.md).
++
+### Configuring RDP Shortpath for Azure Virtual Desktop now supports host pool settings
+
+You can granularly control how RDP Shortpath is used by configuring the networking settings of a host pool using the Azure portal or Azure PowerShell. Configuring RDP Shortpath on the host pool enables you to optionally set which of the four RDP Shortpath options you want to use and is used alongside the session host configuration.
+
+For more information, see [Configure RDP Shortpath for Azure Virtual Desktop](configure-rdp-shortpath.md).
+
+### Adding and managing app attach and MSIX app attach applications in Azure Virtual Desktop are now available
+
+MSIX app attach enables you to dynamically attach applications from an application package to a user session in Azure Virtual Desktop. Applications aren't installed locally on session hosts or images, making it easier to create custom images for your session hosts, and reducing operational overhead and costs for your organization. Delivering applications with MSIX app attach also gives you greater control over which applications your users can access in a remote session.
+
+For more information and instructions, see [Add and manage app attach and MSIX app attach applications](app-attach-setup.md).
++ ## May 2024 Here's what changed in May 2024:
virtual-machine-scale-sets Virtual Machine Scale Sets Configure Rolling Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-configure-rolling-upgrades.md
Stop-AzVmssRollingUpgrade `
## Restart a rolling upgrade
-If you decide to cancel a rolling upgrade or the upgrade has stopped due to any policy breach, any more changes that result in another scale set model change trigger a new rolling upgrade. If you want to restart a rolling upgrade, to trigger a generic model update. This tells the scale set to check if all the instances are up to date with the latest model.
+If you decide to cancel a rolling upgrade or the upgrade has stopped due to any policy breach, any more changes that result in another scale set model change trigger a new rolling upgrade. If you want to restart a rolling upgrade, trigger a generic model update. This tells the scale set to check if all the instances are up to date with the latest model.
### [CLI](#tab/cli4)
To restart a rolling upgrade after it's been canceled, you need to trigger the scale set to check if the instances in the scale set are up to date with the latest scale set model. You can do this by running [az vmss update](/cli/azure/vmss#az-vmss-update).
virtual-wan Route Maps About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-about.md
Before using Route-maps, take into consideration the following limitations:
* When adding ASNs to the AS-PATH, only use the private ASN range 64512 - 65535, but don't use ASNs reserved by Azure:
  * Public ASNs: 8074, 8075, 12076
  * Private ASNs: 65515, 65517, 65518, 65519, 65520
+* When using Route-maps, do not remove the Azure BGP communities:
+  * 65517:12001, 65517:12002, 65517:12003, 65517:12005, 65517:12006, 65518:65518, 65517:65517, 65517:12076, 65518:12076, 65515:10000, 65515:20000
* You can't apply Route-maps to connections between on-premises and SD-WAN/Firewall NVAs in the virtual hub. These connections aren't supported during Preview. You can still apply route-maps to other supported connections when an NVA in the virtual hub is deployed. This doesn't apply to the Azure Firewall, as the routing for Azure Firewall is provided through Virtual WAN [routing intent features](how-to-routing-policies.md).
* Route-maps supports only 2-byte ASN numbers.
* The point-to-site (P2S) Multipool feature isn't currently supported with Route-maps.
* Modifying the *Default* route is only supported when the default route is learned from on-premises or an NVA.
* A prefix can be modified either by Route-maps, or by NAT, but not both.
* Route-maps won't be applied to the [hub address space](virtual-wan-site-to-site-portal.md#hub).
-* Applying Route-Maps on NVAs in a spoke VNet is not supported.
+* Applying Route-Maps on NVAs in a spoke VNet isn't supported.
## Configuration workflow
Route-maps allows you to match routes using Route-prefix, BGP community, and AS-
Match conditions are used to select a set of routes. Once those routes are selected, they can be dropped or modified. You can configure the following **Actions**:
-* **Drop:** All the matched routes are dropped (i.e filtered-out) from the route advertisement. For example, a site-to-site VPN connection has routes 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24 being advertised from Azure to a branch office. A route-map can be configured to drop 10.2.1.0/24, 10.2.2.0/24, resulting in only 10.2.3.0/24 being advertised from Azure to a branch office.
+* **Drop:** All the matched routes are dropped (i.e. filtered-out) from the route advertisement. For example, a site-to-site VPN connection has routes 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24 being advertised from Azure to a branch office. A route-map can be configured to drop 10.2.1.0/24, 10.2.2.0/24, resulting in only 10.2.3.0/24 being advertised from Azure to a branch office.
* **Modify:** The possible route modifications are aggregating route-prefixes or modifying route BGP attributes. For example, a site-to-site VPN connection has routes 10.2.1.0/24 with an AS Path of 65535 and a BGP community of 65535:100 being advertised from Azure to a branch office. A route-map can be configured to add the AS Path of [65535, 65005].
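As a purely conceptual illustration of the Drop action described above (not how Azure evaluates Route-maps internally), the following Python sketch reproduces the article's example: matching 10.2.1.0/24 and 10.2.2.0/24 on a site-to-site connection and dropping them, so only 10.2.3.0/24 remains advertised.

```python
# Conceptual illustration of a Route-map Drop action; not Azure code.
import ipaddress

advertised = ["10.2.1.0/24", "10.2.2.0/24", "10.2.3.0/24"]
match_prefixes = {
    ipaddress.ip_network("10.2.1.0/24"),
    ipaddress.ip_network("10.2.2.0/24"),
}

def apply_drop_rule(routes, matches):
    """Return only the routes that survive a Drop action on the matched prefixes."""
    return [r for r in routes if ipaddress.ip_network(r) not in matches]

print(apply_drop_rule(advertised, match_prefixes))  # ['10.2.3.0/24']
```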
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Virtual WAN is a networking-as-a-service platform that has a 99.95% SLA. However
The SLA for each component is calculated individually. For example, if ExpressRoute has a 10 minute downtime, the availability of ExpressRoute would be calculated as (Maximum Available Minutes - downtime) / Maximum Available Minutes * 100.

### Can you change the VNet address space in a spoke VNet connected to the hub?
-Yes, this can be done automatically with no update or reset required on the peering connection. You can find more information on how to change the VNet address space [here](../virtual-network/manage-virtual-network.yml).
+Yes, this can be done automatically with no update or reset required on the peering connection. Please note the following:
+* You do not need to click the ["Sync" button](../virtual-network/update-virtual-network-peering-address-space.yml#modify-the-address-range-prefix-of-an-existing-address-range) under the Peering blade. Once the VNet's address space is changed, the VNet peering will automatically sync with the virtual hub's VNet.
+* Please ensure the updated address space does not overlap with the address space for any existing spoke VNets in your Virtual WAN.
+
+You can find more information on how to change the VNet address space [here](../virtual-network/manage-virtual-network.yml).
++ ## <a name="vwan-customer-controlled-maintenance"></a>Virtual WAN customer-controlled gateway maintenance
vpn-gateway Point To Site About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-about.md
The tables in this section show the values for the default policies. However, th
|**Policies** |
|---|
-|TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 |
-|TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 |
-|TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 |
|TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 |
-|TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 |
-|TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 |
+|TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 |
+|TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384|
|TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 |
-|TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 |
-|TLS_RSA_WITH_AES_128_GCM_SHA256 |
-|TLS_RSA_WITH_AES_256_GCM_SHA384 |
-|TLS_RSA_WITH_AES_128_CBC_SHA256 |
-|TLS_RSA_WITH_AES_256_CBC_SHA256 |
+|**TLS_AES_256_GCM_SHA384 |
+|**TLS_AES_128_GCM_SHA256 |
+
+**Only supported on TLS1.3 with OpenVPN
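If you need to confirm that a client can actually negotiate one of these default policies, a quick local check is sketched below. It assumes the client's TLS stack is OpenSSL (as used by Python's `ssl` module); the IANA-to-OpenSSL cipher-name mapping is an assumption to verify for your environment, and TLS 1.3 suite names are identical in both naming schemes.

```python
# Sketch: list which of the default P2S TLS policies the local OpenSSL build offers.
# The IANA name -> OpenSSL name mapping is an assumption to verify locally.
import ssl

POLICIES = {
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384": "ECDHE-RSA-AES256-GCM-SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256": "ECDHE-RSA-AES128-GCM-SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384": "ECDHE-RSA-AES256-SHA384",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256": "ECDHE-RSA-AES128-SHA256",
    "TLS_AES_256_GCM_SHA384": "TLS_AES_256_GCM_SHA384",  # TLS 1.3 only
    "TLS_AES_128_GCM_SHA256": "TLS_AES_128_GCM_SHA256",  # TLS 1.3 only
}

ctx = ssl.create_default_context()
offered = {cipher["name"] for cipher in ctx.get_ciphers()}

for iana_name, openssl_name in POLICIES.items():
    status = "offered" if openssl_name in offered else "not offered by this client"
    print(f"{iana_name:45} {status}")
```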
+ ## <a name="configure"></a>How do I configure a P2S connection?
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
To proper functionalities and healthy state to your VPN Gateway, consider one of
1. Revert to the default native Azure DNS by removing the custom DNS within the VNet settings (recommended configuration).
2. Add a DNS forwarder in your custom DNS configuration pointing to the native Azure DNS (IP address: 168.63.129.16). Considering the specific rules and nature of your custom DNS, this setup might not resolve and fix the issue as expected.
+### Can two VPN clients connected in Point-to-Site to the same VPN Gateway communicate?
+
+Communication between VPN clients connected in Point-to-Site to the same VPN Gateway is not supported. When two VPN clients are connected to the same Point-to-Site (P2S) VPN Gateway instance, the VPN Gateway instance can automatically route traffic between them by determining the IP address each client is assigned from the address pool. However, if the VPN clients are connected to different VPN Gateway instances, routing between the VPN clients is not possible because each VPN Gateway instance is unaware of the IP address assigned to the client by the other instance.
### Could point-to-site VPN connections be affected by a potential vulnerability known as "tunnel vision"?

Microsoft is aware of reports discussing a network technique that bypasses VPN encapsulation. This is an industry-wide issue impacting any operating system that implements a DHCP client according to its RFC specification and has support for DHCP option 121 routes, including Windows.