Updates from: 12/05/2023 02:15:25
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
The following table shows the available models for each current preview and stable API:
|Model|[2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)|
|-|--|--|--|--|
|[Add-on capabilities](concept-add-on-capabilities.md) | ✔️| ✔️| n/a| n/a|
-|[Business Card](concept-business-card.md) | deprecated|✔️|✔️|✔️ |
+|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ |
|[Contract](concept-contract.md) | ✔️| ✔️| n/a| n/a|
|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|
|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️|
|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a|
|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️|
-|[General Document](concept-general-document.md) | deprecated| ✔️| ✔️| n/a|
-|[Health Insurance Card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a|
-|[ID Document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️|
+|[General document](concept-general-document.md) | deprecated| ✔️| ✔️| n/a|
+|[Health insurance card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a|
+|[ID document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️|
|[Invoice](concept-invoice.md) | ✔️| ✔️| ✔️| ✔️|
|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️|
|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a|
ai-services Language Support Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-custom.md
Last updated 11/15/2023
Azure AI Document Intelligence models provide multilingual document processing support. Our language support capabilities enable your users to communicate with your applications in natural ways and empower global outreach. Custom models are trained using your labeled datasets to extract distinct data from structured, semi-structured, and unstructured documents specific to your use cases. Standalone custom models can be combined to create composed models. The following tables list the available language and locale support by model and feature:
-## [Custom classifier](#tab/custom-classifier)
-
-***custom classifier model***
+## Custom classifier
:::moniker range="doc-intel-3.1.0"
+
| Language—Locale code | Default |
|:-|:-|
| English (United States)—en-US| English (United States)—en-US|

:::moniker-end

:::moniker range="doc-intel-4.0.0"
+
|Language| Code (optional) |
|:--|:-:|
|Afrikaans| `af`|
Azure AI Document Intelligence models provide multilingual document processing s
|Ukrainian|`uk`|
|Urdu|`ur`|
|Vietnamese|`vi`|
-## [Custom neural](#tab/custom-neural)
-
-***custom neural model***
-
-#### Handwritten text
-The following table lists the supported languages for extracting handwritten texts.
+## Custom neural
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
-#### Printed text
+## [**Printed text**](#tab/printed)
The following table lists the supported languages for printed text.
|Albanian| `sq`|
|Arabic|`ar`|
|Bulgarian|`bg`|
-|Chinese (Han (Simplified variant))| `zh-Hans`|
-|Chinese (Han (Traditional variant))|`zh-Hant`|
+|Chinese Simplified| `zh-Hans`|
+|Chinese Traditional|`zh-Hant`|
|Croatian|`hr`|
|Czech|`cs`|
|Danish|`da`|
The following table lists the supported languages for printed text.
|Urdu|`ur`|
|Vietnamese|`vi`|
+## [**Handwritten text**](#tab/handwritten)
+
+The following table lists the supported languages for extracting **handwritten** texts.
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
++ Neural models support added languages for the `v3.1` and later APIs.
Neural models support added languages for the `v3.1` and later APIs.
:::moniker-end
-## [Custom template](#tab/custom-template)
-
-***custom template model***
+## Custom template
-#### Handwritten text
-The following table lists the supported languages for extracting handwritten texts.
-
-|Language| Language code (optional) | Language| Language code (optional) |
-|:--|:-:|:--|:-:|
-|English|`en`|Japanese |`ja`|
-|Chinese Simplified |`zh-Hans`|Korean |`ko`|
-|French |`fr`|Portuguese |`pt`|
-|German |`de`|Spanish |`es`|
-|Italian |`it`|
+## [**Printed**](#tab/printed)
-#### Printed text
+The following table lists the supported languages for **printed** text.</br>
-The following table lists the supported languages for printed text.
:::row:::
   :::column span="":::
      |Language| Code (optional) |
The following table lists the supported languages for printed text.
   :::column-end:::
:::row-end:::
+## [**Handwritten**](#tab/handwritten)
+
+The following table lists the supported languages for extracting handwritten texts.
+
+|Language| Language code (optional) | Language| Language code (optional) |
+|:--|:-:|:--|:-:|
+|English|`en`|Japanese |`ja`|
+|Chinese Simplified |`zh-Hans`|Korean |`ko`|
+|French |`fr`|Portuguese |`pt`|
+|German |`de`|Spanish |`es`|
+|Italian |`it`|
+
ai-services Language Support Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-ocr.md
Azure AI Document Intelligence models provide multilingual document processing s
::: moniker-end
-## Read model
-
-##### Model ID: **prebuilt-read**
> [!NOTE]
> **Language code optional**
>
> * Document Intelligence's deep learning based universal models extract all multi-lingual text in your documents, including text lines with mixed languages, and don't require specifying a language code.
-> * Don't provide the language code as the parameter unless you are sure about the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
+>
+> * Don't provide the language code as the parameter unless you are sure of the language and want to force the service to apply only the relevant model. Otherwise, the service may return incomplete and incorrect text.
>
> * Also, it's not necessary to specify a locale. This is an optional parameter; the Document Intelligence deep-learning technology auto-detects the text language in your image.
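If you do choose to pass a language hint, the optional `locale` query parameter can be added to the analyze request. The following is a minimal, hedged sketch only; the endpoint, key, and document URL are placeholders, and the parameter can be omitted entirely to let the service auto-detect the language.

```bash
# Hedged example: placeholder endpoint, key, and document URL -- replace with your own values.
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-read:analyze?api-version=2023-07-31&locale=en" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-document.pdf"}'
```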
-### [Read: handwritten text](#tab/read-hand)
+## Read model
+
+##### Model ID: **prebuilt-read**
+
+### [**Read: handwritten text**](#tab/read-hand)
:::moniker range="doc-intel-4.0.0"
The following table lists read model language support for extracting and analyzi
:::moniker-end
-### [Read: printed text](#tab/read-print)
+### [**Read: printed text**](#tab/read-print)
:::moniker range=">=doc-intel-3.1.0"
The following table lists read model language support for extracting and analyzi
:::row:::
   :::column span="":::
- |Language| Code (optional) |
+ |Language| Code (optional) |
|:--|:-:|
|Abaza|abq|
|Abkhazian|ab|
The following table lists read model language support for extracting and analyzi
|Finnish|fi|
   :::column-end:::
   :::column span="":::
- |Language| Code (optional) |
+ |Language| Code (optional) |
|:--|:-:|
|Fon|fon|
|French|fr|
The following table lists read model language support for extracting and analyzi
:::moniker-end
-### [Read: language detection](#tab/read-detection)
+### [**Read: language detection**](#tab/read-detection)
The [Read model API](concept-read.md) supports **language detection** for the following languages in your documents. This list can include languages not currently supported for text extraction.
The [Read model API](concept-read.md) supports **language detection** for the fo
##### Model ID: **prebuilt-layout**
-### [Layout: handwritten text](#tab/layout-hand)
+### [**Layout: handwritten text**](#tab/layout-hand)
:::moniker range="doc-intel-4.0.0"
The following table lists layout model language support for extracting and analy
|Thai (preview) | `th` | Arabic (preview) | `ar` |

:::moniker-end
-### [Layout: printed text](#tab/layout-print)
+### [**Layout: printed text**](#tab/layout-print)
:::moniker range=">=doc-intel-3.1.0"
ai-services Language Support Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-prebuilt.md
Last updated 11/15/2023
Azure AI Document Intelligence models provide multilingual document processing support. Our language support capabilities enable your users to communicate with your applications in natural ways and empower global outreach. Prebuilt models enable you to add intelligent domain-specific document processing to your apps and flows without having to train and build your own models. The following tables list the available language and locale support by model and feature:
+## Business card
+
+ > [!IMPORTANT]
+> Starting with Document Intelligence **v4.0 (preview)**, and going forward, the business card model (prebuilt-businessCard) is deprecated. To extract data from business card formats, use the following:
-## [Business card](#tab/business-card)
+| Feature | version| Model ID |
+|--|--|--|
+| Business card model|&bullet; v3.1:2023-07-31 (GA)</br>&bullet; v3.0:2022-08-31 (GA)</br>&bullet; v2.1 (GA)|**`prebuilt-businessCard`**|
+ ***Model ID: prebuilt-businessCard***
Azure AI Document Intelligence models provide multilingual document processing s
:::moniker-end
+## Contract
-## [Contract](#tab/contract)
***Model ID: prebuilt-contract***
Azure AI Document Intelligence models provide multilingual document processing s
:::moniker-end
+## Health insurance card
-## [Health insurance card](#tab/health-insurance-card)
***Model ID: prebuilt-healthInsuranceCard.us***
Azure AI Document Intelligence models provide multilingual document processing s
:::moniker-end
-## [ID document](#tab/id-document)
+## ID document
+ ***Model ID: prebuilt-idDocument***
Azure AI Document Intelligence models provide multilingual document processing s
|Canada|Driver License, Identification Card, Residency Permit (Maple Card)|
|Australia|Driver License, Photo Card, Key-pass ID (including digital version)|
-## [Invoice](#tab/invoice)
++
+| Region | Document types |
+|--|-|
+|Worldwide|Passport Book, Passport Card|
+|United States|Driver License, Identification Card|
++
+## Invoice
***Model ID: prebuilt-invoice***

:::moniker range=">=doc-intel-3.1.0"
-| Supported languages | Details |
+### [Supported languages](#tab/languages)
+
+| Languages | Details |
|:-|:-|
| &bullet; English (`en`) | United States (`us`), Australia (`au`), Canada (`ca`), United Kingdom (-uk), India (-in)|
| &bullet; Spanish (`es`) |Spain (`es`)|
Azure AI Document Intelligence models provide multilingual document processing s
| &bullet; Chinese (simplified (zh-hans)) | China (zh-hans-cn)|
| &bullet; Chinese (traditional (zh-hant)) | Hong Kong SAR (zh-hant-hk), Taiwan (zh-hant-tw)|
-| Supported Currency Codes | Details |
+### [Supported Currency Codes](#tab/currency)
+
+| Currency Code | Details |
|:-|:-|
| &bullet; ARS | Argentine Peso (`ar`) |
| &bullet; AUD | Australian Dollar (`au`) |
Azure AI Document Intelligence models provide multilingual document processing s
| &bullet; TWD | New Taiwan Dollar (`tw`) |
| &bullet; USD | United States Dollar (`us`) |
+
:::moniker-end

:::moniker range="doc-intel-3.0.0"
+### [Supported languages](#tab/languages)
| Supported languages | Details |
|:-|:-|
| &bullet; English (`en`) | United States (`us`), Australia (`au`), Canada (`ca`), United Kingdom (-uk), India (-in)|
Azure AI Document Intelligence models provide multilingual document processing s
| &bullet; Portuguese (`pt`) | Portugal (`pt`), Brazil (`br`)|
| &bullet; Dutch (`nl`) | Netherlands (`nl`)|
+### [Supported Currency Codes](#tab/currency)
| Supported Currency Codes | Details |
|:-|:-|
| &bullet; BRL | Brazilian Real (`br`) |
Azure AI Document Intelligence models provide multilingual document processing s
| &bullet; GGP | Guernsey Pound (`gg`) |
| &bullet; INR | Indian Rupee (`in`) |
| &bullet; USD | United States (`us`) |
+
:::moniker-end
-## [Receipt](#tab/receipt)
+ | Supported languages | Details |
+ |:-|:|
+ |English (`en`) | United States (`us`)|
-***Model ID: prebuilt-receipt***
+## Receipt
:::moniker range=">=doc-intel-3.0.0"
-#### Thermal receipts (retail, meal, parking, etc.)
+***Model ID: prebuilt-receipt***
+
+### [Thermal receipts](#tab/thermal)
| Language name | Language code | Language name | Language code |
|:--|:-:|:--|:-:|
Azure AI Document Intelligence models provide multilingual document processing s
|Latvian|``lv``|Xitsonga|`ts`|
|Lingala|``ln``|||
-#### Hotel receipts
-| Supported Languages | Details |
-|:--|:-:|
-|English|United States (`en-US`)|
-|French|France (`fr-FR`)|
-|German|Germany (`de-DE`)|
-|Italian|Italy (`it-IT`)|
-|Japanese|Japan (`ja-JP`)|
-|Portuguese|Portugal (`pt-PT`)|
-|Spanish|Spain (`es-ES`)|
+### [Hotel receipts](#tab/hotel)
+| Supported Languages|Language code |
+|:--|:|
+|English (United States)|`en-US`|
+|French|`fr-FR`|
+|German|`de-DE`|
+|Italian|`it-IT`|
+|Japanese|`ja-JP`|
+|Portuguese|`pt-PT`|
+|Spanish|`es-ES`|
::: moniker-end

::: moniker range="doc-intel-2.1.0"
-### Supported languages and locales v2.1
| Model | Language—Locale code | Default |
|--|:-|:-|
|Receipt| &bullet; English (United States)—en-US</br> &bullet; English (Australia)—en-AU</br> &bullet; English (Canada)—en-CA</br> &bullet; English (United Kingdom)—en-GB</br> &bullet; English (India)—en-IN | Autodetected |

::: moniker-end
-### [Tax Documents](#tab/tax)
+## Tax documents
- Model ID | Language—Locale code | Default |
-|--|:-|:|
-|**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
-|**prebuilt-tax.us.1098**|English (United States)|English (United States)—en-US|
-|**prebuilt-tax.us.1098E**|English (United States)|English (United States)—en-US|
-|**prebuilt-tax.us.1098T**|English (United States)|English (United States)—en-US|
+ | Model ID | Language—Locale code | Default |
+ |--|:-|:|
+ |**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1098**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1098E**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1098T**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1099**|English (United States)|English (United States)—en-US|
-
+ | Model ID | Language—Locale code | Default |
+ |--|:-|:|
+ |**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1098**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1098E**|English (United States)|English (United States)—en-US|
+ |**prebuilt-tax.us.1098T**|English (United States)|English (United States)—en-US|
+
+ | Model ID | Language—Locale code | Default |
+ |--|:-|:|
+ |**prebuilt-tax.us.w2**|English (United States)|English (United States)—en-US|
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2023-10-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
- )
+)
```
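If you need to populate the `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_KEY` environment variables that this snippet reads, one possible approach is the Azure CLI; the resource and group names below are placeholders, not values from the original article.

```azurecli
# Hedged sketch: placeholder resource and group names.
export AZURE_OPENAI_ENDPOINT=$(az cognitiveservices account show \
    --name my-openai-resource --resource-group my-resource-group \
    --query properties.endpoint --output tsv)
export AZURE_OPENAI_KEY=$(az cognitiveservices account keys list \
    --name my-openai-resource --resource-group my-resource-group \
    --query key1 --output tsv)
```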
ai-services Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/regions.md
The following regions are supported for Speech service features such as speech t
| Europe | France Central | `francecentral` |
| Europe | Germany West Central | `germanywestcentral` |
| Europe | Norway East | `norwayeast` |
-| Europe | Sweden Central | `swedencentral` |
+| Europe | Sweden Central | `swedencentral`<sup>8</sup> |
| Europe | Switzerland North | `switzerlandnorth` <sup>6</sup>|
| Europe | Switzerland West | `switzerlandwest` |
| Europe | UK South | `uksouth` <sup>1,2,3,4,7</sup>|
The following regions are supported for Speech service features such as speech t
<sup>5</sup> The region supports keyword verification.
-<sup>6</sup> The region does not support Speaker Recognition.
+<sup>6</sup> The region doesn't support Speaker Recognition.
<sup>7</sup> The region supports the [high performance](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint) endpoint type for Custom Neural Voice.
+<sup>8</sup> The region doesn't support Custom Neural Voice.
+
## Intent recognition

Available regions for intent recognition via the Speech SDK are in the following table.
ai-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md
These limits aren't adjustable. For more information on batch synthesis latency,
| Max number of simultaneous model trainings | N/A | 4 |
| Max number of custom endpoints | N/A | 50 |
+#### Real-time text to speech avatar
+
+| Quota | Free (F0)| Standard (S0) |
+|--|--|--|
+| New connections per minute | Not available for F0 | 2 new connections per minute |
#### Audio Content Creation tool

| Quota | Free (F0)| Standard (S0) |
aks Ai Toolchain Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md
- Title: Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview)
-description: Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment.
-- Previously updated : 11/03/2023--
-# Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview)
-
-The AI toolchain operator (KAITO) is a managed add-on for AKS that simplifies the experience of running OSS AI models on your AKS clusters. The AI toolchain operator automatically provisions the necessary GPU nodes and sets up the associated inference server as an endpoint server to your AI models. Using this add-on reduces your onboarding time and enables you to focus on AI model usage and development rather than infrastructure setup.
-
-This article shows you how to enable the AI toolchain operator add-on and deploy an AI model on AKS.
--
-## Before you begin
-
-* This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md).
-* If you aren't familiar with Microsoft Entra Workload Identity, see the [Workload Identity overview](../active-directory/workload-identities/workload-identities-overview.md).
-* For ***all hosted model inference files*** and recommended infrastructure setup, see the [KAITO GitHub repository](https://github.com/Azure/kaito).
-
-## Prerequisites
-
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- * If you have multiple Azure subscriptions, make sure you select the correct subscription in which the resources will be created and charged using the [`az account set`][az-account-set] command.
-
- > [!NOTE]
- > The subscription you use must have GPU VM quota.
-
-* Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* Helm v3 installed. For more information, see [Installing Helm](https://helm.sh/docs/intro/install/).
-* The Kubernetes command-line client, kubectl, installed and configured. For more information, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
-
-## Enable the Azure CLI preview extension
-
-* Enable the Azure CLI preview extension using the [`az extension add`][az-extension-add] command.
-
- ```azurecli-interactive
- az extension add --name aks-preview
- ```
-
-## Register the `AIToolchainOperatorPreview` feature flag
-
-1. Register the `AIToolchainOperatorPreview` feature flag using the [`az feature register`][az-feature-register] command.
-
- ```azurecli-interactive
- az feature register --name AIToolchainOperatorPreview --namespace Microsoft.ContainerService
- ```
-
- It takes a few minutes for the status to show as *Registered*.
-
-2. Verify the registration using the [`az feature show`][az-feature-show] command.
-
- ```azurecli-interactive
- az feature show --name AIToolchainOperatorPreview --namespace Microsoft.ContainerService
- ```
-
-3. When the status reflects as *Registered*, refresh the registration of the Microsoft.ContainerService resource provider using the [`az provider register`][az-provider-register] command.
-
- ```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
- ```
-
-### Export environment variables
-
-* To simplify the configuration steps in this article, you can define environment variables using the following commands. Make sure to replace the placeholder values with your own.
-
- ```azurecli-interactive
- export AZURE_SUBSCRIPTION_ID="mySubscriptionID"
- export AZURE_RESOURCE_GROUP="myResourceGroup"
- export CLUSTER_NAME="myClusterName"
- ```
-
-## Enable the AI toolchain operator add-on on an AKS cluster
-
-1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-
- ```azurecli-interactive
-    az group create --name "${AZURE_RESOURCE_GROUP}" --location eastus
- ```
-
-2. Create an AKS cluster with the AI toolchain operator add-on enabled using the [`az aks create`][az-aks-create] command with the `--enable-ai-toolchain-operator`, `--enable-workload-identity`, and `--enable-oidc-issuer` flags.
-
- ```azurecli-interactive
-    az aks create --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --generate-ssh-keys --enable-managed-identity --enable-workload-identity --enable-oidc-issuer --enable-ai-toolchain-operator
- ```
-
- > [!NOTE]
- > AKS creates a managed identity once you enable the AI toolchain operator add-on. The managed identity is used to access the AI toolchain operator workspace CRD. The AI toolchain operator workspace CRD is used to create and manage AI toolchain operator workspaces.
- >
- > AI toolchain operator enablement requires the enablement of workload identity and OIDC issuer.
-
-## Connect to your cluster
-
-1. Configure `kubectl` to connect to your cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-
- ```azurecli-interactive
-    az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}"
- ```
-
-2. Verify the connection to your cluster using the `kubectl get` command.
-
- ```azurecli-interactive
- kubectl get nodes
- ```
-
-3. Export environment variables for the principal ID identity and client ID identity using the following commands:
-
- ```azurecli-interactive
-    export MC_RESOURCE_GROUP=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --query nodeResourceGroup -o tsv)
-    export PRINCIPAL_ID=$(az identity show --name "ai-toolchain-operator-${CLUSTER_NAME}" --resource-group "${MC_RESOURCE_GROUP}" --query 'principalId' -o tsv)
-    export CLIENT_ID=$(az identity show --name gpuIdentity --resource-group "${AZURE_RESOURCE_GROUP}" --subscription "${AZURE_SUBSCRIPTION_ID}" --query 'clientId' -o tsv)
- ```
-
-## Create a role assignment for the principal ID identity
-
-1. Create a new role assignment for the service principal using the [`az role assignment create`][az-role-assignment-create] command.
-
- ```azurecli-interactive
-    az role assignment create --role "Contributor" --assignee "${PRINCIPAL_ID}" --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}/providers/Microsoft.ContainerService/managedClusters/${CLUSTER_NAME}"
- ```
-
-2. Get the AKS OIDC Issuer URL and export it as an environment variable using the following command:
-
- ```azurecli-interactive
- export AKS_OIDC_ISSUER=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --subscription "${AZURE_SUBSCRIPTION_ID}" --query "oidcIssuerProfile.issuerUrl" -o tsv)
- ```
-
-## Establish a federated identity credential
-
-* Create the federated identity credential between the managed identity, AKS OIDC issuer, and subject using the [`az identity federated-credential create`][az-identity-federated-credential-create] command.
-
- ```azurecli-interactive
-    az identity federated-credential create --name "${FEDERATED_IDENTITY}" --identity-name "ai-toolchain-operator-${CLUSTER_NAME}" --resource-group "${AZURE_RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"kube-system":"gpu-provisioner" --audience api://AzureADTokenExchange --subscription "${AZURE_SUBSCRIPTION_ID}"
- ```
-
-## Deploy a default hosted AI model
-
-1. Deploy the Falcon 7B model YAML file from the GitHub repository using the `kubectl apply` command.
-
- ```azurecli-interactive
- kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/kaito_workspace_falcon_7b.yaml
- ```
-
-2. Track the live resource changes in your workspace using the `kubectl get` command.
-
- ```azurecli-interactive
- kubectl get workspace workspace-falcon-7b -w
- ```
-
-3. Check your service and get the service IP address using the `kubectl get svc` command.
-
- ```azurecli-interactive
- export SERVICE_IP=$(kubectl get svc workspace-falcon-7b -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
- ```
-
-4. Run the Falcon 7B model with a sample input of your choice using the following `curl` command:
-
- ```azurecli-interactive
-    curl -X POST "http://${SERVICE_IP}:80/chat" -H "accept: application/json" -H "Content-Type: application/json" -d '{"prompt":"YOUR_PROMPT_HERE"}'
- ```
-
-## Clean up resources
-
-If you no longer need these resources, you can delete them to avoid incurring extra Azure charges.
-
-* Delete the resource group and its associated resources using the [`az group delete`][az-group-delete] command.
-
- ```azurecli-interactive
-    az group delete --name "${AZURE_RESOURCE_GROUP}" --yes --no-wait
- ```
-
-## Next steps
-
-For more inference model options, see the [KAITO GitHub repository](https://github.com/Azure/kaito).
-
-<!-- LINKS -->
-[az-group-create]: /cli/azure/group#az_group_create
-[az-group-delete]: /cli/azure/group#az_group_delete
-[az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
-[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create
-[az-account-set]: /cli/azure/account#az_account_set
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-show]: /cli/azure/feature#az_feature_show
-[az-provider-register]: /cli/azure/provider#az_provider_register
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
description: Understand the advanced configuration options that are supported wi
Previously updated : 11/21/2023 Last updated : 12/04/2023 # Set up advanced Ingress configurations with the application routing add-on
az aks approuting update -g <ResourceGroupName> -n <ClusterName> --enable-kv --a
## Enable Azure DNS integration
-To enable support for DNS zones, see the following prerequisites:
+To enable support for DNS zones, review the following prerequisite:
-* The app routing add-on can be configured to automatically create records on one or more Azure public and private DNS zones for hosts defined on Ingress resources. All global Azure DNS zones need to be in the same resource group, and all private Azure DNS zones need to be in the same resource group. If you don't have an Azure DNS zone, you can [create one][create-an-azure-dns-zone].
+* The app routing add-on can be configured to automatically create records on one or more Azure public and private DNS zones for hosts defined on Ingress resources. All public Azure DNS zones need to be in the same resource group, and all private Azure DNS zones need to be in the same resource group. If you don't have an Azure DNS zone, you can [create one][create-an-azure-dns-zone].
-### Create a global Azure DNS zone
+### Create a public Azure DNS zone
> [!NOTE]
> If you already have an Azure DNS Zone, you can skip this step.
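For reference, here's a minimal sketch of creating a public zone and attaching it to the add-on with the Azure CLI. The resource group, zone, and cluster names are placeholders, and the `az aks approuting zone add` command assumes a recent CLI version with the app routing commands available.

```azurecli
# Hedged example with placeholder names.
az network dns zone create --resource-group myResourceGroup --name contoso.com

# Attach the zone to the application routing add-on.
ZONE_ID=$(az network dns zone show --resource-group myResourceGroup --name contoso.com --query id --output tsv)
az aks approuting zone add --resource-group myResourceGroup --name myAKSCluster --ids ${ZONE_ID} --attach-zones
```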
The application routing add-on creates an Ingress class on the cluster named *we
az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
```
+ The following example output shows the certificate URI returned from the command:
+
+ ```output
+ https://KeyVaultName.vault.azure.net/certificates/KeyVaultCertificateName/ea62e42260f04f17a9309d6b87aceb44
+ ```
+ 2. Copy the following YAML manifest into a new file named **ingress.yaml** and save the file to your local computer.
- > [!NOTE]
- > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault.
- > The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate will be presented in the browser when a client browses to the URL defined in the `<Hostname>` key. Make sure that the value of `secretName` is equal to `keyvault-` followed by the value of the Ingress resource name (from `metadata.name`). In the example YAML, secretName will need to be equal to `keyvault-<your Ingress name>`.
+ Update *`<Hostname>`* with the name of your DNS host and *`<KeyVaultCertificateUri>`* with the URI returned from the command to query Azure Key Vault in step 1 above. The string value for `*<KeyVaultCertificateUri>*` should only include `https://yourkeyvault.vault.azure.net/certificates/certname`. The *Certificate Version* at the end of the URI string should be omitted in order to get the current version.
+
+ The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate is presented in the browser when a client browses to the URL specified in the `<Hostname>` key. Make sure that the value of `secretName` is equal to `keyvault-` followed by the value of the Ingress resource name (from `metadata.name`). In the example YAML, `secretName` needs to be equal to `keyvault-<your Ingress name>`.
```yml
apiVersion: networking.k8s.io/v1
Learn about monitoring the Ingress-nginx controller metrics included with the ap
[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
[create-and-export-a-self-signed-ssl-certificate]: #create-and-export-a-self-signed-ssl-certificate
-[create-an-azure-dns-zone]: #create-a-global-azure-dns-zone
+[create-an-azure-dns-zone]: #create-a-public-azure-dns-zone
[azure-dns-overview]: ../dns/dns-overview.md
[az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az-keyvault-certificate-show
[prometheus-in-grafana]: app-routing-nginx-prometheus.md
aks App Routing Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md
After migrating to the application routing add-on, learn how to [monitor Ingress
<!-- INTERNAL LINKS -->
[install-azure-cli]: /cli/azure/install-azure-cli
-[app-routing-dns-create]: ./app-routing-dns-ssl.md#create-a-global-azure-dns-zone
+[app-routing-dns-create]: ./app-routing-dns-ssl.md#create-a-public-azure-dns-zone
[app-routing-dns-configure]: ./app-routing-dns-ssl.md#attach-azure-dns-zone-to-the-application-routing-add-on

<!-- EXTERNAL LINKS -->
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
Title: Concepts - Kubernetes basics for Azure Kubernetes Services (AKS)
description: Learn the basic cluster and workload components of Kubernetes and how they relate to features in Azure Kubernetes Service (AKS) Previously updated : 10/31/2022 Last updated : 12/04/2023 # Kubernetes core concepts for Azure Kubernetes Service (AKS)
To run your applications and supporting services, you need a Kubernetes *node*.
| -- | - |
| `kubelet` | The Kubernetes agent that processes the orchestration requests from the control plane along with scheduling and running the requested containers. |
| *kube-proxy* | Handles virtual networking on each node. The proxy routes network traffic and manages IP addressing for services and pods. |
-| *container runtime* | Allows containerized applications to run and interact with additional resources, such as the virtual network and storage. AKS clusters using Kubernetes version 1.19+ for Linux node pools use `containerd` as their container runtime. Beginning in Kubernetes version 1.20 for Windows node pools, `containerd` can be used in preview for the container runtime, but Docker is still the default container runtime. AKS clusters using prior versions of Kubernetes for node pools use Docker as their container runtime. |
+| *container runtime* | Allows containerized applications to run and interact with additional resources, such as the virtual network or storage. AKS clusters using Kubernetes version 1.19+ for Linux node pools use `containerd` as their container runtime. Beginning in Kubernetes version 1.20 for Windows node pools, `containerd` can be used in preview for the container runtime, but Docker is still the default container runtime. AKS clusters using prior versions of Kubernetes for node pools use Docker as their container runtime. |
![Azure virtual machine and supporting resources for a Kubernetes node](media/concepts-clusters-workloads/aks-node-resource-interactions.png)
Using the Kubernetes Scheduler, the Deployment Controller runs replicas on any a
Two Kubernetes resources, however, let you manage these types of applications:

-- *StatefulSets* maintain the state of applications beyond an individual pod lifecycle, such as storage.
+- *StatefulSets* maintain the state of applications beyond an individual pod lifecycle.
- *DaemonSets* ensure a running instance on each node, early in the Kubernetes bootstrap process.

### StatefulSets
Replicas in a StatefulSet are scheduled and run across any available node in an
### DaemonSets
-For specific log collection or monitoring, you may need to run a pod on all, or selected, nodes. You can use *DaemonSet* deploy on one or more identical pods, but the DaemonSet Controller ensures that each node specified runs an instance of the pod.
+For specific log collection or monitoring, you may need to run a pod on all nodes or a select set of nodes. You can use *DaemonSets* to deploy to one or more identical pods. The DaemonSet Controller ensures that each node specified runs an instance of the pod.
The DaemonSet Controller can schedule pods on nodes early in the cluster boot process, before the default Kubernetes scheduler has started. This ability ensures that the pods in a DaemonSet are started before traditional pods in a Deployment or StatefulSet are scheduled.
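As a quick illustration, listing the DaemonSets in the `kube-system` namespace shows system components that run one pod per node; the exact set of DaemonSets you see varies by cluster configuration.

```bash
# DaemonSets run one pod on each (selected) node; system add-ons commonly use them.
kubectl get daemonsets --namespace kube-system
```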
aks Free Standard Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/free-standard-pricing-tiers.md
In the Standard tier, the Uptime SLA feature is enabled by default per cluster.
## Before you begin
-[Azure CLI](/cli/azure/install-azure-cli) version 2.47.0 or later and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+Make sure you have installed [Azure CLI](/cli/azure/install-azure-cli) version 2.47.0 or later. Run `az --version` to find your current version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Create a new cluster in the Free tier or Paid tier
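As a minimal sketch with placeholder names, the `--tier` parameter (available in Azure CLI 2.47.0 and later) selects the tier at creation time or on an existing cluster:

```azurecli
# Create a cluster in the Standard tier (placeholder names).
az aks create --resource-group myResourceGroup --name myAKSCluster --tier standard

# Switch an existing cluster to the Free tier.
az aks update --resource-group myResourceGroup --name myAKSCluster --tier free
```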
aks Open Ai Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md
Now that the application is deployed, you can deploy the Python-based microservi
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
- - name: order-service
+ - name: ai-service
        image: ghcr.io/azure-samples/aks-store-demo/ai-service:latest
        ports:
        - containerPort: 5001
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://github.com/kubern
| K8s version | Upstream release | AKS preview | AKS GA | End of life | Platform support |
|--|-|--|--|-|--|
| 1.24 | Apr 2022 | May 2022 | Jul 2022 | Jul 2023 | Until 1.28 GA |
-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 2, 2024 | Until 1.29 GA |
+| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Jan 14, 2024 | Until 1.29 GA |
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | Until 1.30 GA |
| 1.27* | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024, LTS until Jul 2025 | Until 1.31 GA |
| 1.28 | Aug 2023 | Sep 2023 | Nov 2023 | Nov 2024 | Until 1.32 GA|
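To see which of the versions in the table above are currently offered in your region, you can query the Azure CLI; the region below is a placeholder.

```azurecli
# List available AKS Kubernetes versions for a region.
az aks get-versions --location eastus --output table
```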
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
Last updated 11/21/2023
An Azure Kubernetes Service (AKS) cluster will periodically need to be updated to ensure security and compatibility with the latest features. There are two components of an AKS cluster that are necessary to maintain:

-- *Cluster Kubernetes version*: Part of the AKS cluster lifecycle involves performing upgrades to the latest Kubernetes version. It's important you upgrade to apply the latest security releases and to get access to the latest Kubernetes features, as well as to stay within the [AKS support window][supported-k8s-versions].
+- *Cluster Kubernetes version*: Part of the AKS cluster lifecycle involves performing upgrades to the latest Kubernetes version. It's important that you upgrade to apply the latest security releases and to get access to the latest Kubernetes features, as well as to stay within the [AKS support window][supported-k8s-versions].
- *Node image version*: AKS regularly provides new node images with the latest OS and runtime updates. It's beneficial to upgrade your nodes' images regularly to ensure support for the latest AKS features and to apply essential security patches and hot fixes. For Linux nodes, node image security patches and hotfixes may be performed without your initiation as *unattended updates*. These updates are automatically applied, but AKS doesn't automatically reboot your Linux nodes to complete the update process. You're required to use a tool like [kured][node-updates-kured] or [node image upgrade][node-image-upgrade] to reboot the nodes and complete the cycle.
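Both maintenance tasks above map to Azure CLI operations; the following is a minimal sketch with placeholder names and versions.

```azurecli
# Check which Kubernetes versions the cluster can upgrade to.
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Upgrade the cluster Kubernetes version.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <newVersion>

# Upgrade only the node image, keeping the Kubernetes version unchanged.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --node-image-only
```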
For more information what cluster operations may trigger specific upgrade events
[ts-quota-exceeded]: /troubleshoot/azure/azure-kubernetes/error-code-quotaexceeded
[ts-subnet-full]: /troubleshoot/azure/azure-kubernetes/error-code-subnetisfull-upgrade
[node-security-patches]: ./concepts-vulnerability-management.md#worker-nodes
-[node-updates-kured]: ./node-updates-kured.md
+[node-updates-kured]: ./node-updates-kured.md
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Previously updated : 07/27/2022 Last updated : 11/30/2023
Restore is a long-running operation that may take up to 30 or more minutes to co
## Storage networking constraints
-### Access using storage access key
-
-If the storage account is **[firewall][azure-storage-ip-firewall] enabled** and a storage key is used for access, then the customer must **Allow** the set of [Azure API Management control plane IP addresses][control-plane-ip-address] on their storage account for backup or restore to work. The storage account can be in any Azure region except the one where the API Management service is located. For example, if the API Management service is in West US, then the Azure Storage account can be in West US 2 and the customer needs to open the control plane IP 13.64.39.16 (API Management control plane IP of West US) in the firewall. This is because the requests to Azure Storage aren't SNATed to a public IP from compute (Azure API Management control plane) in the same Azure region. Cross-region storage requests will be SNATed to the public IP address.
-
-### Access using managed identity
-If an API Management system-assigned managed identity is used to access a firewall-enabled storage account, ensure that the storage account [grants access to trusted Azure services](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services).
+If the storage account is **[firewall][azure-storage-ip-firewall] enabled**, it's recommended to use the API Management instance's system-assigned managed identity for access to the account. Ensure that the storage account [grants access to trusted Azure services](../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services).
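One possible way to grant that trusted-services exception on the storage account firewall is sketched below; the account and group names are placeholders, and the managed identity still needs an appropriate data-plane role (such as Storage Blob Data Contributor) on the account.

```azurecli
# Hedged sketch: allow trusted Azure services through the storage account firewall.
az storage account update --name mystorageaccount --resource-group myResourceGroup --bypass AzureServices
```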
## What is not backed up

- **Usage data** used for creating analytics reports **isn't included** in the backup. Use [Azure API Management REST API][azure api management rest api] to periodically retrieve analytics reports for safekeeping.
api-management Api Management Howto Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-policies.md
# Policies in Azure API Management
-In Azure API Management, API publishers can change API behavior through configuration using *policies*. Policies are a collection of statements that are run sequentially on the request or response of an API. Popular statements include:
+In Azure API Management, API publishers can change API behavior through configuration using *policies*. Policies are a collection of statements that are run sequentially on the request or response of an API. API Management provides more than 50 policies out of the box that you can configure to address common API scenarios such as authentication, rate limiting, caching, and transformation of requests or responses. For a complete list, see [API Management policy reference](api-management-policies.md).
+
+Popular policies include:
* Format conversion from XML to JSON
* Call rate limiting to restrict the number of incoming calls from a developer
* Filtering requests that come from certain IP addresses
-Many more policies are available out of the box. For a complete list, see [API Management policy reference](api-management-policies.md).
Policies are applied inside the gateway between the API consumer and the managed API. While the gateway receives requests and forwards them, unaltered, to the underlying API, a policy can apply changes to both the inbound request and outbound response.
In API Management, a [GraphQL resolver](configure-graphql-resolver.md) is config
For more information, see [Configure a GraphQL resolver](configure-graphql-resolver.md).

+
## Examples

### Apply policies specified at different scopes
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
+ [Policy overview](api-management-howto-policies.md)
+ [Set or edit policies](set-edit-policies.md)
+ [Policy expressions](api-management-policy-expressions.md)
++ [Author policies using Microsoft Copilot for Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)

> [!IMPORTANT]
> [Limit call rate by subscription](rate-limit-policy.md) and [Set usage quota by subscription](quota-policy.md) have a dependency on the subscription key. A subscription key isn't required when other policies are applied.
More information about policies:
- [Validate parameters](validate-parameters-policy.md) - Validates the request header, query, or path parameters against the API schema.
- [Validate headers](validate-headers-policy.md) - Validates the response headers against the API schema.
- [Validate status code](validate-status-code-policy.md) - Validates the HTTP status codes in responses against the API schema.
-## Next steps
-For more information about working with policies, see:
-
-+ [Tutorial: Transform and protect your API](transform-api.md)
-+ [Set or edit policies](set-edit-policies.md)
-+ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets)
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
The `context` variable is implicitly available in every policy [expression](api-
|`bool VerifyNoRevocation(input: this System.Security.Cryptography.X509Certificates.X509Certificate2)`|Performs an X.509 chain validation without checking certificate revocation status.<br /><br />`input` - certificate object<br /><br />Returns `true` if the validation succeeds; `false` if the validation fails.|
-## Next steps
+## Related content
For more information about working with policies, see:

+ [Policies in API Management](api-management-howto-policies.md)
-+ [Transform APIs](transform-api.md)
-+ [Policy Reference](./api-management-policies.md) for a full list of policy statements and their settings
-+ [Policy samples](./policy-reference.md)
++ [Tutorial: Transform and protect APIs](transform-api.md)
++ [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings
++ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets)
++ [Author policies using Microsoft Copilot for Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)

For more information:
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
After creating a policy fragment, you can view and update the properties of a po
1. Review **Policy document references** for policy definitions that include the fragment. Before a fragment can be deleted, you must remove the fragment references from all policy definitions.
1. After all references are removed, select **Delete**.
-## Next steps
+## Related content
For more information about working with policies, see:
For more information about working with policies, see:
+ [Set or edit policies](set-edit-policies.md)
+ [Policy reference](./api-management-policies.md) for a full list of policy statements
+ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets)
++ [Author policies using Microsoft Copilot for Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
api-management Set Edit Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/set-edit-policies.md
To configure a policy:
The **ip-filter** policy now appears in the **Inbound processing** section.
-## Get assistance creating policies using Microsoft Copilot for Azure (preview)
--
-[Microsoft Copilot for Azure](../copilot/overview.md) (preview) provides policy authoring capabilities for Azure API Management. Using Copilot for Azure in the context of API Management's policy editor, you can create policies that match your specific requirements without knowing the syntax or have already configured policies explained to you. This proves particularly useful for handling complex policies with multiple requirements.
-
-You can prompt Copilot for Azure to generate policy definitions, then copy the results into the policy editor and make any necessary adjustments. Ask questions to gain insights into different options, modify the provided policy, or clarify the policy you already have. [Learn more](../copilot/author-api-management-policies.md) about this capability.
-
-> [!NOTE]
-> Microsoft Copilot for Azure requires [registration](../copilot/limited-access.md#registration-process) (preview) and is currently only available to approved enterprise customers and partners.
- ## Configure policies at different scopes API Management gives you flexibility to configure policy definitions at multiple [scopes](api-management-howto-policies.md#scopes), in each of the policy sections.
To modify the policy evaluation order using the policy editor:
A globally scoped policy has no parent scope, and using the `base` element in it has no effect.

+
## Related content

For more information about working with policies, see:
For more information about working with policies, see:
+ [Set or edit policies](set-edit-policies.md)
+ [Policy reference](./api-management-policies.md) for a full list of policy statements and their settings
+ [Policy snippets repo](https://github.com/Azure/api-management-policy-snippets)
++ [Author policies using Microsoft Copilot for Azure](../copilot/author-api-management-policies.md?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
Previously updated : 10/19/2023 Last updated : 11/29/2023
When an API Management service instance is hosted in a VNet, the ports in the fo
>[!IMPORTANT]
> * **Bold** items in the *Purpose* column indicate port configurations required for successful deployment and operation of the API Management service. Configurations labeled "optional" enable specific features, as noted. They are not required for the overall health of the service.
>
-> * We recommend using [service tags](../virtual-network/service-tags-overview.md) instead of IP addresses in NSG rules to specify network sources and destinations. Service tags prevent downtime when infrastructure improvements necessitate IP address changes.
+> * We recommend using the indicated [service tags](../virtual-network/service-tags-overview.md) instead of IP addresses in NSG and other network rules to specify network sources and destinations. Service tags prevent downtime when infrastructure improvements necessitate IP address changes.
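For illustration, an NSG rule scoped with the `ApiManagement` service tag might look like the following sketch; the names are placeholders, and inbound TCP/3443 is the management endpoint used by the API Management control plane.

```azurecli
# Hedged example: allow the API Management control plane to reach the management endpoint.
az network nsg rule create --resource-group myResourceGroup --nsg-name myNSG \
    --name AllowApiManagementControlPlane --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes ApiManagement --source-port-ranges '*' \
    --destination-address-prefixes VirtualNetwork --destination-port-ranges 3443
```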
### [stv2](#tab/stv2)
The following settings and FQDNs are required to maintain and diagnose API Manag
## Control plane IP addresses
-The following IP addresses are divided by **Azure Environment** and **Region**. In some cases, two IP addresses are listed. Permit both IP addresses.
- > [!IMPORTANT]
-> Control plane IP addresses should be configured for network access rules only when needed in certain networking scenarios. We recommend using the **ApiManagement** [service tag](../virtual-network/service-tags-overview.md) instead of control plane IP addresses to prevent downtime when infrastructure improvements necessitate IP address changes.
-
-| **Azure Environment**| **Region**| **IP address**|
-|--|-||
-| Azure Public| Australia Central| 20.37.52.67|
-| Azure Public| Australia Central 2| 20.39.99.81|
-| Azure Public| Australia East| 20.40.125.155|
-| Azure Public| Australia Southeast| 20.40.160.107|
-| Azure Public| Brazil South| 191.233.24.179, 191.238.73.14|
-| Azure Public| Brazil Southeast| 191.232.18.181|
-| Azure Public| Canada Central| 52.139.20.34, 20.48.201.76|
-| Azure Public| Canada East| 52.139.80.117|
-| Azure Public| Central India| 13.71.49.1, 20.192.45.112|
-| Azure Public| Central US| 13.86.102.66|
-| Azure Public| Central US EUAP| 52.253.159.160|
-| Azure Public| East Asia| 52.139.152.27|
-| Azure Public| East US| 52.224.186.99|
-| Azure Public| East US 2| 20.44.72.3|
-| Azure Public| East US 2 EUAP| 52.253.229.253|
-| Azure Public| France Central| 40.66.60.111|
-| Azure Public| France South| 20.39.80.2|
-| Azure Public| Germany North| 51.116.0.0|
-| Azure Public| Germany West Central| 51.116.96.0, 20.52.94.112|
-| Azure Public| Japan East| 52.140.238.179|
-| Azure Public| Japan West| 40.81.185.8|
-| Azure Public| India Central| 20.192.234.160|
-| Azure Public| India West| 20.193.202.160|
-| Azure Public| Korea Central| 40.82.157.167, 20.194.74.240|
-| Azure Public| Korea South| 40.80.232.185|
-| Azure Public| North Central US| 40.81.47.216|
-| Azure Public| North Europe| 52.142.95.35|
-| Azure Public| Norway East| 51.120.2.185|
-| Azure Public| Norway West| 51.120.130.134|
-| Azure Public| South Africa North| 102.133.130.197, 102.37.166.220|
-| Azure Public| South Africa West| 102.133.0.79|
-| Azure Public| South Central US| 20.188.77.119, 20.97.32.190|
-| Azure Public| South India| 20.44.33.246|
-| Azure Public| Southeast Asia| 40.90.185.46|
-| Azure Public| Switzerland North| 51.107.246.176, 51.107.0.91|
-| Azure Public| Switzerland West| 51.107.96.8|
-| Azure Public| UAE Central| 20.37.81.41|
-| Azure Public| UAE North| 20.46.144.85|
-| Azure Public| UK South| 51.145.56.125|
-| Azure Public| UK West| 51.137.136.0|
-| Azure Public| West Central US| 52.253.135.58|
-| Azure Public| West Europe| 51.145.179.78|
-| Azure Public| West India| 40.81.89.24|
-| Azure Public| West US| 13.64.39.16|
-| Azure Public| West US 2| 51.143.127.203|
-| Azure Public| West US 3| 20.150.167.160|
-| Microsoft Azure operated by 21Vianet| China North (Global)| 139.217.51.16|
-| Microsoft Azure operated by 21Vianet| China East (Global)| 139.217.171.176|
-| Microsoft Azure operated by 21Vianet| China North| 40.125.137.220|
-| Microsoft Azure operated by 21Vianet| China East| 40.126.120.30|
-| Microsoft Azure operated by 21Vianet| China North 2| 40.73.41.178|
-| Microsoft Azure operated by 21Vianet| China East 2| 40.73.104.4|
-| Azure Government| USGov Virginia (Global)| 52.127.42.160|
-| Azure Government| USGov Texas (Global)| 52.127.34.192|
-| Azure Government| USGov Virginia| 52.227.222.92|
-| Azure Government| USGov Iowa| 13.73.72.21|
-| Azure Government| USGov Arizona| 52.244.32.39|
-| Azure Government| USGov Texas| 52.243.154.118|
-| Azure Government| USDoD Central| 52.182.32.132|
-| Azure Government| USDoD East| 52.181.32.192|
--
-## Next steps
+> Control plane IP addresses for Azure API Management should be configured for network access rules only when needed in certain networking scenarios. We recommend using the **ApiManagement** [service tag](../virtual-network/service-tags-overview.md) instead of control plane IP addresses to prevent downtime when infrastructure improvements necessitate IP address changes.
+++
+## Related content
Learn more about:

* [Connecting a virtual network to backend using VPN Gateway](../vpn-gateway/design.md#s2smulti)
* [Connecting a virtual network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
-* [Debug your APIs using request tracing](api-management-howto-api-inspector.md)
* [Virtual Network frequently asked questions](../virtual-network/virtual-networks-faq.md)
* [Service tags](../virtual-network/network-security-groups-overview.md#service-tags)
+For more guidance on configuration issues, see:
+* [API Management - Networking FAQs (Demystifying series I)](https://techcommunity.microsoft.com/t5/azure-paas-blog/api-management-networking-faqs-demystifying-series-i/ba-p/1500996)
+* [API Management - Networking FAQs (Demystifying series II)](https://techcommunity.microsoft.com/t5/azure-paas-blog/api-management-networking-faqs-demystifying-series-ii/ba-p/1502056)
+++

[api-management-using-vnet-menu]: ./media/api-management-using-with-vnet/api-management-menu-vnet.png
[api-management-setup-vpn-select]: ./media/api-management-using-with-vnet/api-management-using-vnet-select.png
[api-management-setup-vpn-add-api]: ./media/api-management-using-with-vnet/api-management-using-vnet-add-api.png
azure-arc Administer Arc Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/administer-arc-scvmm.md
+
+ Title: Perform ongoing administration for Arc-enabled System Center Virtual Machine Manager
+description: Learn how to perform administrator operations related to Azure Arc-enabled System Center Virtual Machine Manager
+ Last updated : 12/04/2023+++++++
+# Perform ongoing administration for Arc-enabled System Center Virtual Machine Manager
+
+In this article, you learn how to perform various administrative operations related to Azure Arc-enabled System Center Virtual Machine Manager (SCVMM):
+
+- Upgrade the Azure Arc resource bridge manually
+- Update the SCVMM account credentials
+- Collect logs from the Arc resource bridge
+
+Each of these operations requires either the SSH key to the resource bridge VM or the kubeconfig file that provides access to the Kubernetes cluster on the resource bridge VM.
+
+## Upgrade the Arc resource bridge manually
+
+Azure Arc-enabled SCVMM requires the Arc resource bridge to connect your SCVMM environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. The Arc resource bridge can be manually upgraded from the SCVMM server. You must meet all upgrade [prerequisites](../resource-bridge/upgrade.md#prerequisites) before attempting to upgrade. The SCVMM server must have the kubeconfig and appliance configuration files stored locally. If the SCVMM account credentials changed after the initial deployment of the resource bridge, [update the new account credentials](administer-arc-scvmm.md#update-the-scvmm-account-credentials-using-a-new-password-or-a-new-scvmm-account-after-onboarding) before attempting manual upgrade.
+
+> [!NOTE]
+> The manual upgrade feature is available for resource bridge version 1.0.14 and later. Resource bridges below version 1.0.14 must [perform the recovery option](./disaster-recovery.md) to upgrade to version 1.0.15 or later.
+
+The manual upgrade generally takes between 30 and 90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](../resource-bridge/upgrade.md#supported-versions). You can check your resource bridge version by viewing the Azure resource of your Arc resource bridge.
+
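+For example, one way to check the current version is to inspect the Arc resource bridge (appliance) resource with the Azure CLI. This is a minimal sketch, assuming the `arcappliance` extension is installed; the resource group and appliance names are placeholders, and the returned properties typically include the appliance version and status:
+
+```azurecli
+# Show the Arc resource bridge (appliance) Azure resource; the output includes its version and status.
+az arcappliance show --resource-group <resource group name> --name <name of the appliance>
+```
+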
+To manually upgrade your Arc resource bridge, make sure you've installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the SCVMM server:
+
+```azurecli
+az extension add --upgrade --name arcappliance
+```
+
+To manually upgrade your resource bridge, use the following command:
+
+```azurecli
+az arcappliance upgrade scvmm --config-file <file path to ARBname-appliance.yaml>
+```
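+
+For example, a hypothetical invocation on an SCVMM server where the appliance configuration file was saved to `C:\ArcBridge` (the path and file name here are illustrative placeholders, not values from your environment):
+
+```azurecli
+# Upgrade the resource bridge to the immediate next version; rerun the command if more than one upgrade is needed to reach a supported version.
+az arcappliance upgrade scvmm --config-file "C:\ArcBridge\contoso-arb-appliance.yaml"
+```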
+
+## Update the SCVMM account credentials (using a new password or a new SCVMM account after onboarding)
+
+Azure Arc-enabled SCVMM uses the SCVMM account credentials you provided during the onboarding to communicate with your SCVMM management server. These credentials are only persisted locally on the Arc resource bridge VM.
+
+As part of your security practices, you might need to rotate credentials for your SCVMM accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc so that Azure Arc-enabled SCVMM keeps functioning. You can also use the same steps if you need to use a different SCVMM account after onboarding. You must ensure the new account also has all the [required SCVMM permissions](quickstart-connect-system-center-virtual-machine-manager-to-arc.md#prerequisites).
+
+There are two different sets of credentials stored on the Arc resource bridge. You can use the same account credentials for both.
+
+- **Account for Arc resource bridge**. This account is used to deploy the Arc resource bridge VM and to upgrade it.
+- **Account for SCVMM cluster extension**. This account is used to discover inventory and perform all the VM operations through Azure Arc-enabled SCVMM.
+
+To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run the commands from a workstation that can locally access the cluster configuration IP address of the Arc resource bridge:
+
+```azurecli
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance update-infracredentials scvmm --kubeconfig kubeconfig
+```
+For more information on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials scvmm`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-scvmm).
++
+To update the credentials used by the SCVMM cluster extension on the resource bridge, run the following command. You can run this command from any machine that has the `connectedscvmm` CLI extension installed.
+
+```azurecli
+az connectedscvmm scvmm connect --custom-location <name of the custom location> --location <Azure region> --name <name of the SCVMM resource in Azure> --resource-group <resource group for the SCVMM resource> --username <username for the SCVMM account> --password <password to the SCVMM account>
+```
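+
+For example, a hypothetical invocation with placeholder values filled in (the names shown are illustrative only):
+
+```azurecli
+# Reconnect the SCVMM resource in Azure using the rotated account credentials.
+az connectedscvmm scvmm connect --custom-location "contoso-custom-location" --location "eastus" --name "contoso-scvmm" --resource-group "contoso-rg" --username "contoso\\svc-arc-scvmm" --password "<new password>"
+```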
+
+## Collect logs from the Arc resource bridge
+
+For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-scvmm) command.
+
+To save the logs to a destination folder, run the following commands. These commands require connectivity to the cluster configuration IP address.
+
+```azurecli
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs scvmm --kubeconfig kubeconfig --out-dir <path to specified output directory>
+```
+
+If the Kubernetes cluster on the resource bridge isn't in a functional state, use the following commands instead. These commands require SSH connectivity to the IP address of the Azure Arc resource bridge VM.
+
+```azurecli
+az account set -s <subscription id>
+az arcappliance get-credentials -n <name of the appliance> -g <resource group name>
+az arcappliance logs scvmm --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX
+```
+
+## Next steps
+
+- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md).
+- [Understand disaster recovery operations for resource bridge](./disaster-recovery.md).
azure-monitor Alerts Create Activity Log Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-activity-log-alert-rule.md
+
+ Title: Create or edit an activity log, service health, or resource health alert rule
+description: This article shows you how to create a new activity log, service health, and resource health alert rule.
+++ Last updated : 11/27/2023+++
+# Create or edit an activity log, service health, or resource health alert rule
+
+This article shows you how to create or edit an activity log, service health, or resource health alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md).
+
+You create an alert rule by combining the resources to be monitored, the monitoring data from the resource, and the conditions that you want to trigger the alert. You can then define [action groups](./action-groups.md) and [alert processing rules](alerts-action-rules.md) to determine what happens when an alert is triggered.
+
+Alerts triggered by these alert rules contain a payload that uses the [common alert schema](alerts-common-schema.md).
+++
+## Configure the alert rule conditions
+
+1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-popular-signals.png" alt-text="Screenshot that shows popular signals when creating an alert rule.":::
+
+1. (Optional) If you chose to **See all signals** in the previous step, use the **Select a signal** pane to search for the signal name or filter the list of signals. Filter by:
+ - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+ - **Signal source**: The service sending the signal.
+
+ This table describes the services available for activity log alert rules:
+
+ | Signal source | Description |
+ |--|--|
+ | Activity log ΓÇô Policy | The service that provides the Policy activity log events. |
+ | Activity log ΓÇô Autoscale | The service that provides the Autoscale activity log events. |
+ | Activity log ΓÇô Security | The service that provides the Security activity log events. |
+ | Resource health | The service that provides the resource-level health status. |
+ | Service health | The service that provides the subscription-level health status. |
+
+ Select the **Signal name** and **Apply**.
+
+ #### [Activity log alert](#tab/activity-log)
+
+ 1. On the **Conditions** pane, select the **Chart period**.
+ 1. The **Preview** chart shows you the results of your selection.
+ 1. Select values for each of these fields in the **Alert logic** section:
+
+ |Field |Description |
+ |||
+ |Event level| Select the level of the events for this alert rule. Values are **Critical**, **Error**, **Warning**, **Informational**, **Verbose**, and **All**.|
+ |Status|Select the status levels for the alert.|
+ |Event initiated by|Select the user or service principal that initiated the event.|
+
+ #### [Resource Health alert](#tab/resource-health)
+
+ 1. On the **Conditions** pane, select values for each of these fields:
+
+ |Field |Description |
+ |||
+ |Event status| Select the statuses of Resource Health events. Values are **Active**, **In Progress**, **Resolved**, and **Updated**.|
+ |Current resource status|Select the current resource status. Values are **Available**, **Degraded**, and **Unavailable**.|
+ |Previous resource status|Select the previous resource status. Values are **Available**, **Degraded**, **Unavailable**, and **Unknown**.|
+ |Reason type|Select the causes of the Resource Health events. Values are **Platform Initiated**, **Unknown**, and **User Initiated**.|
+
+ #### [Service Health alert](#tab/service-health)
+
+ 1. On the **Conditions** pane, select values for each of these fields:
+
+ |Field |Description |
+ |||
+ |Services| Select the Azure services.|
+ |Regions|Select the Azure regions.|
+ |Event types|Select the types of Service Health events. Values are **Service issue**, **Planned maintenance**, **Health advisories**, and **Security advisories**.|
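+
+For reference, if you later want to create a similar service health alert rule from a script, a minimal Azure CLI sketch is shown below. The names in braces are placeholders, and the `--condition` shown covers only the event category; the portal fields above (services, regions, event types) map to additional condition properties that aren't shown here:
+
+```azurecli
+# Create a service health alert rule scoped to a subscription (placeholder values in braces).
+az monitor activity-log alert create --name {AlertName} --resource-group {ResourceGroup} --scope /subscriptions/{SubscriptionID} --condition category=ServiceHealth --action-group {ActionGroupResourceID} --description "Service health alert"
+```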
+
+
++
+## Configure the alert rule details
++
+1. Enter values for the **Alert rule name** and the **Alert rule description**.
+1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.
+
+1. [!INCLUDE [alerts-wizard-custom=properties](../includes/alerts-wizard-custom-properties.md)]
++
+## Next steps
+ [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Alerts Create Log Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-log-alert-rule.md
+
+ Title: Create Azure Monitor log alert rules
+description: This article shows you how to create a new log alert rule.
+++ Last updated : 11/27/2023+++
+# Create or edit a log alert rule
+
+This article shows you how to create a new log alert rule or edit an existing log alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md).
+
+You create an alert rule by combining the resources to be monitored, the monitoring data from the resource, and the conditions that you want to trigger the alert. You can then define [action groups](./action-groups.md) and [alert processing rules](alerts-action-rules.md) to determine what happens when an alert is triggered.
+
+Alerts triggered by these alert rules contain a payload that uses the [common alert schema](alerts-common-schema.md).
+++
+## Configure the alert rule conditions
+
+1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-popular-signals.png" alt-text="Screenshot that shows popular signals when creating an alert rule.":::
+
+1. (Optional) If you chose to **See all signals** in the previous step, use the **Select a signal** pane to search for the signal name or filter the list of signals. Filter by:
+ - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+ - **Signal source**: The service that sends the "Custom log search" and "Log (saved query)" signals.
+ Select the **Signal name** and **Apply**.
+
+1. On the **Logs** pane, write a query that returns the log events for which you want to create an alert.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule.":::
+
+ To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries.
+
+1. (Optional) If you're querying an ADX or ARG cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example:
+
+ ```KQL
+ adx('https://help.kusto.windows.net/Samples').table
+ | where MyTS >= ago(5m) and MyTS <= now()
+ ```
+
+ ```KQL
+ arg("").Resources
+ | where type =~ 'Microsoft.Compute/virtualMachines'
+ | project _ResourceId=tolower(id), tags
+ ```
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-logs-conditions-tab.png" alt-text="Screenshot that shows the Condition tab when creating a new log alert rule.":::
+
+1. Select **Run** to run the query.
+1. The **Preview** section shows you the query results. When you're finished editing your query, select **Continue Editing Alert**.
+1. The **Condition** tab opens populated with your log query. By default, the rule counts the number of results in the last five minutes. If the system detects summarized query results, the rule is automatically updated with that information.
+
+1. In the **Measurement** section, select values for these fields:
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-measurements.png" alt-text="Screenshot that shows the Measurement tab when creating a new log alert rule.":::
+
+ |Field |Description |
+ |||
+ |Measure|Log alerts can measure two different things, which can be used for different monitoring scenarios:<br> **Table rows**: The number of rows returned can be used to work with events such as Windows event logs, Syslog, and application exceptions. <br>**Calculation of a numeric column**: Calculations based on any numeric column can be used to include any number of resources. An example is CPU percentage. |
+ |Aggregation type| The calculation performed on multiple records to aggregate them to one numeric value by using the aggregation granularity. Examples are Total, Average, Minimum, or Maximum. |
+ |Aggregation granularity| The interval for aggregating multiple records to one numeric value.|
++
+1. <a name="dimensions"></a>(Optional) In the **Split by dimensions** section, you can use dimensions to help provide context for the triggered alert.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-dimensions.png" alt-text="Screenshot that shows the splitting by dimensions section of a new log alert rule.":::
+
+ Dimensions are columns from your query results that contain additional data. When you use dimensions, the alert rule groups the query results by the dimension values and evaluates the results of each group separately. If the condition is met, the rule fires an alert for that group. The alert payload includes the combination that triggered the alert.
+
+ You can apply up to six dimensions per alert rule. Dimensions can only be string or numeric columns. If you want to use a column that isn't a number or string type as a dimension, you must convert it to a string or numeric value in your query. If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately.
+
+ For example:
+ - You could use dimensions to monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually, and notifications are sent for each instance where the CPU usage exceeds the configured value.
+ - You could decide not to split by dimensions when you want a condition applied to multiple resources in the scope. For example, you wouldn't use dimensions if you want to fire an alert if at least five machines in the resource group scope have CPU usage above the configured value.
+
+ Select values for these fields:
+
+ - **Resource ID column**: In general, if your alert rule scope is a workspace, the alerts are fired on the workspace. If you want a separate alert for each affected Azure resource, you can:
+ - use the ARM **Azure Resource ID** column as a dimension
+     - specify it as a dimension in the **Azure Resource ID** property. This makes the resource returned by your query the target of the alert, so alerts are fired on that resource (such as a virtual machine or a storage account) rather than on the workspace. When you use this option, if the workspace gets data from resources in more than one subscription, alerts can be triggered on resources from a subscription that is different from the alert rule subscription.
+
+ |Field |Description |
+ |||
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.|
+ |Operator|The operator used on the dimension name and value. |
+ |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values. |
+ |Include all future values| Select this field to include any future values added to the selected dimension. |
+
+1. In the **Alert logic** section, select values for these fields:
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule.":::
+
+ |Field |Description |
+ |||
+ |Operator| The query results are transformed into a number. In this field, select the operator to use to compare the number against the threshold.|
+ |Threshold value| A number value for the threshold. |
+ |Frequency of evaluation|How often the query is run. Can be set anywhere from one minute to one day (24 hours).|
+
+ > [!NOTE]
+    > There are some limitations to using a <a name="frequency">one minute</a> alert rule frequency. When you set the alert rule frequency to one minute, an internal manipulation is performed to optimize the query. This manipulation can cause the query to fail if it contains unsupported operations. The following are the most common reasons a query isn't supported:
+ > * The query contains the **search**, **union** or **take** (limit) operations
+ > * The query contains the **ingestion_time()** function
+ > * The query uses the **adx** pattern
+ > * The query calls a function that calls other tables
++
+1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot that shows the Advanced options section of a new log alert rule.":::
+
+ Select values for these fields under **Number of violations to trigger the alert**:
+
+ |Field |Description |
+ |||
+ |Number of violations|The number of violations that trigger the alert.|
+ |Evaluation period|The time period within which the number of violations occur. |
+    |Override query time range| If you want the alert evaluation period to be different than the query time range, enter a time range here.<br> The alert time range is limited to a maximum of two days. Even if the query contains an **ago** command with a time range of longer than two days, the two-day maximum time range is applied. For example, even if the query text contains **ago(7d)**, the query only scans up to two days of data. If the query requires more data than the alert evaluation period, you can change the time range manually. If the query contains the **ago** command, it's changed automatically to two days (48 hours).|
+
+ > [!NOTE]
+ > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage**. If you don't, the rule creation will fail because it won't meet the policy requirements.
+
+1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from a unique alert splitting by dimensions.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot that shows a preview of a new alert rule.":::
+
+1. Select **Done**. From this point on, you can select the **Review + create** button at any time.
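+
+If you prefer to script the rule instead of using the portal, the `az monitor scheduled-query create` command covers the same condition settings. The following is only a rough sketch: it assumes the `scheduled-query` CLI extension is available, the resource IDs and query are placeholders, and the condition grammar shown is approximate (check `az monitor scheduled-query create --help` for the exact syntax):
+
+```azurecli
+# Create a log alert rule that fires when the placeholder query returns more than 5 rows in a 15-minute window, evaluated every 5 minutes.
+az monitor scheduled-query create --name {AlertName} --resource-group {ResourceGroup} --scopes {WorkspaceResourceID} --condition "count 'ErrorRows' > 5" --condition-query ErrorRows="AppTraces | where SeverityLevel >= 3" --evaluation-frequency 5m --window-size 15m --severity 3 --description "Log alert rule sketch"
+```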
+++
+## Configure the alert rule details
+
+1. On the **Details** tab, define the **Project details**.
+ - Select the **Subscription**.
+ - Select the **Resource group**.
+
+1. Define the **Alert rule details**.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new log alert rule.":::
+
+ 1. Select the **Severity**.
+ 1. Enter values for the **Alert rule name** and the **Alert rule description**.
+ 1. Select the **Region**.
+ 1. <a name="managed-id"></a>In the **Identity** section, select which identity is used by the log alert rule to send the log query. This identity is used for authentication when the alert rule executes the log query.
+
+ Keep these things in mind when selecting an identity:
+ - A managed identity is required if you're sending a query to Azure Data Explorer.
+ - Use a managed identity if you want to be able to see or edit the permissions associated with the alert rule.
+ - If you don't use a managed identity, the alert rule permissions are based on the permissions of the last user to edit the rule, at the time the rule was last edited.
+ - Use a managed identity to help you avoid a case where the rule doesn't work as expected because the user that last edited the rule didn't have permissions for all the resources added to the scope of the rule.
+
+ The identity associated with the rule must have these roles:
+ - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them.
+     - If you're querying an ADX or ARG cluster, the identity must be assigned the **Reader** role for all data sources accessed by the query. For example, if the query is resource centric, it needs the Reader role on those resources.
+ - If the query is [accessing a remote Azure Data Explorer cluster](../logs/azure-monitor-data-explorer-proxy.md), the identity must be assigned:
+ - **Reader role** for all data sources accessed by the query. For example, if the query is calling a remote Azure Data Explorer cluster using the adx() function, it needs a reader role on that ADX cluster.
+ - **Database viewer** for all databases the query is accessing.
+
+ For detailed information on managed identities, see [managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+ Select one of the following options for the identity used by the alert rule:
+
+ |Identity |Description |
+ |||
+ |None|Alert rule permissions are based on the permissions of the last user who edited the rule, at the time the rule was edited.|
+ |System assigned managed identity| Azure creates a new, dedicated identity for this alert rule. This identity has no permissions and is automatically deleted when the rule is deleted. After creating the rule, you must assign permissions to this identity to access the workspace and data sources needed for the query. For more information about assigning permissions, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). |
+ |User assigned managed identity|Before you create the alert rule, you [create an identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) and assign it appropriate permissions for the log query. This is a regular Azure identity. You can use one identity in multiple alert rules. The identity isn't deleted when the rule is deleted. When you select this type of identity, a pane opens for you to select the associated identity for the rule. |
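+
+    For example, if you chose a system-assigned or user-assigned managed identity, a minimal sketch of granting it read access to a workspace with the Azure CLI (the principal ID and workspace resource ID are placeholders):
+
+    ```azurecli
+    # Grant the rule's managed identity the Reader role on the Log Analytics workspace used by the query.
+    az role assignment create --assignee {ManagedIdentityPrincipalID} --role "Reader" --scope {LogAnalyticsWorkspaceResourceID}
+    ```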
+
+1. (Optional) In the **Advanced options** section, you can set several options:
+
+ |Field |Description |
+ |||
+ |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
+ |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met for a specific time range. The time range differs based on the frequency of the alert:<br>**1 minute**: The alert condition isn't met for 10 minutes.<br>**5-15 minutes**: The alert condition isn't met for three frequency periods.<br>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.<br>**11 to 12 hours**: The alert condition isn't met for one frequency period. <br><br>Note that stateful log alerts have these limitations:<br> - they can trigger up to 300 alerts per evaluation.<br> - you can have a maximum of 5000 alerts with the `fired` alert condition.|
+ |Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.|
+ |Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.|
+
+1. [!INCLUDE [alerts-wizard-custom=properties](../includes/alerts-wizard-custom-properties.md)]
+++
+## Next steps
+ [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Alerts Create Metric Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-metric-alert-rule.md
+
+ Title: Create Azure Monitor metric alert rules
+description: This article shows you how to create a new metric alert rule.
+++ Last updated : 11/27/2023+++
+# Create or edit a metric alert rule
+
+This article shows you how to create a new metric alert rule or edit an existing metric alert rule. To learn more about alerts, see the [alerts overview](alerts-overview.md).
+
+You create an alert rule by combining the resources to be monitored, the monitoring data from the resource, and the conditions that you want to trigger the alert. You can then define [action groups](./action-groups.md) and [alert processing rules](alerts-action-rules.md) to determine what happens when an alert is triggered.
+
+Alerts triggered by these alert rules contain a payload that uses the [common alert schema](alerts-common-schema.md).
+++
+## Configure the alert rule conditions
+
+1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-popular-signals.png" alt-text="Screenshot that shows popular signals when creating an alert rule.":::
+
+1. (Optional) If you chose to **See all signals** in the previous step, use the **Select a signal** pane to search for the signal name or filter the list of signals. Filter by:
+ - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+ - **Signal source**: The service sending the signal.
+
+ This table describes the services available for metric alert rules:
+
+ |Signal source |Description |
+ |||
+ |Platform |For metric signals, the monitor service is the metric namespace. "Platform" means the metrics are provided by the resource provider, namely, Azure.|
+ |Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. |
+ |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters and custom perf counters. |
+ |\<your custom namespace\>|A custom metric namespace, containing custom metrics sent with the Azure Monitor Metrics API. |
+
+ Select the **Signal name** and **Apply**.
+
+1. Preview the results of the selected metric signal in the **Preview** section. Select values for the following fields.
+
+ |Field|Description|
+ |||
+ |Time range|The time range to include in the results. Can be from the last six hours to the last week.|
+ |Time series|The time series to include in the results.|
+
+1. In the **Alert logic** section:
+
+ |Field |Description |
+ |||
+ |Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A **static threshold** evaluates the rule by using the threshold value that you configure.<br>**Dynamic thresholds** use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#apply-advanced-machine-learning-with-dynamic-thresholds). |
+ |Operator|Select the operator for comparing the metric value against the threshold. <br>If you're using dynamic thresholds, alert rules can use tailored thresholds based on metric behavior for both upper and lower bounds in the same alert rule. Select one of these operators: <br> - Greater than the upper threshold or lower than the lower threshold (default) <br> - Greater than the upper threshold <br> - Lower than the lower threshold|
+ |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max.|
+ |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic.|
+ |Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.|
+ |Threshold sensitivity|If you selected a **dynamic** threshold, enter the sensitivity level. The sensitivity level affects the amount of deviation from the metric series pattern that's required to trigger an alert. <br> - **High**: Thresholds are tight and close to the metric series pattern. An alert rule is triggered on the smallest deviation, resulting in more alerts. <br> - **Medium**: Thresholds are less tight and more balanced. There are fewer alerts than with high sensitivity (default). <br> - **Low**: Thresholds are loose, allowing greater deviation from the metric series pattern. Alert rules are only triggered on large deviations, resulting in fewer alerts.|
+ |Aggregation granularity| Select the interval that's used to group the data points by using the aggregation type function. Choose an **Aggregation granularity** (period) that's greater than the **Frequency of evaluation** to reduce the likelihood of missing the first evaluation period of an added time series.|
+ |Frequency of evaluation|Select how often the alert rule is to be run. Select a frequency that's smaller than the aggregation granularity to generate a sliding window for the evaluation.|
+
+1. (Optional) You can configure splitting by dimensions.
+
+ Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values.
+
+ If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs.
+
+ |Field |Description |
+ |||
+ |Dimension name|Dimensions can be either number or string columns. Dimensions are used to monitor specific time series and provide context to a fired alert.<br>Splitting on the **Azure Resource ID** column makes the specified resource into the alert target. If detected, the **ResourceID** column is selected automatically and changes the context of the fired alert to the record's resource.|
+ |Operator|The operator used on the dimension name and value.|
+ |Dimension values|The dimension values are based on data from the last 48 hours. Select **Add custom value** to add custom dimension values.|
+ |Include all future values| Select this field to include any future values added to the selected dimension.|
+
+1. (Optional) In the **When to evaluate** section:
+
+ |Field |Description |
+ |||
+ |Check every|Select how often the alert rule checks if the condition is met. |
+ |Lookback period|Select how far back to look each time the data is checked. For example, every 1 minute, look back 5 minutes.|
+
+1. (Optional) In the **Advanced options** section, you can specify how many failures within a specific time period trigger an alert. For example, you can specify that you only want to trigger an alert if there were three failures in the last hour. Your application business policy should determine this setting.
+
+ Select values for these fields:
+
+ |Field |Description |
+ |||
+ |Number of violations|The number of violations within the configured time frame that trigger the alert.|
+ |Evaluation period|The time period within which the number of violations occur.|
+ |Ignore data before|Use this setting to select the date from which to start using the metric historical data for calculating the dynamic thresholds. For example, if a resource was running in testing mode and is moved to production, you may want to disregard the metric behavior while the resource was in testing.|
+
+1. Select **Done**. From this point on, you can select the **Review + create** button at any time.
+++
+## Configure the alert rule details
+
+1. On the **Details** tab, define the **Project details**.
+ - Select the **Subscription**.
+ - Select the **Resource group**.
+
+1. Define the **Alert rule details**.
+
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule.":::
+
+1. Select the **Severity**.
+1. Enter values for the **Alert rule name** and the **Alert rule description**.
+1. (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the following regions and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
+ - North Europe
+ - West Europe
+ - Sweden Central
+ - Germany West Central
+
+1. (Optional) In the **Advanced options** section, you can set several options.
+
+ |Field |Description |
+ |||
+ |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|
+    |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met.<br> If you don't select this checkbox, metric alerts are stateless. Stateless alerts fire each time the condition is met, even if the alert already fired.<br> The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:<br>**Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes after the condition is met.<br>**Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent at an interval between the configured frequency and double that frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 and 30 minutes.|
+
+1. [!INCLUDE [alerts-wizard-custom=properties](../includes/alerts-wizard-custom-properties.md)]
+++
+## Next steps
+ [View and manage your alert instances](alerts-manage-alert-instances.md)
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
To edit an existing alert rule:
|Field |Description | |||
- |Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A **static threshold** evaluates the rule by using the threshold value that you configure.<br>**Dynamic thresholds** use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |
+ |Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A **static threshold** evaluates the rule by using the threshold value that you configure.<br>**Dynamic thresholds** use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#apply-advanced-machine-learning-with-dynamic-thresholds). |
|Operator|Select the operator for comparing the metric value against the threshold. <br>If you're using dynamic thresholds, alert rules can use tailored thresholds based on metric behavior for both upper and lower bounds in the same alert rule. Select one of these operators: <br> - Greater than the upper threshold or lower than the lower threshold (default) <br> - Greater than the upper threshold <br> - Lower than the lower threshold| |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max.| |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic.|
azure-monitor Alerts Create Rule Cli Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md
+
+ Title: Create Azure Monitor alert rules using the CLI, PowerShell or an ARM template
+description: This article shows you how to create a new alert rule using the CLI, PowerShell or an ARM template.
+++ Last updated : 11/29/2023++
+# Create a new alert rule using the CLI, PowerShell, or an ARM template
+You can create a new alert rule using the [CLI](#create-a-new-alert-rule-using-the-cli), [PowerShell](#create-a-new-alert-rule-using-powershell), or an [Azure Resource Manager template](#create-a-new-alert-rule-using-an-arm-template).
+
+## Create a new alert rule using the CLI
+
+You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-with-azure-cli). The following code examples use [Azure Cloud Shell](../../cloud-shell/overview.md). You can see the full list of the [Azure CLI commands for Azure Monitor](/cli/azure/azure-cli-reference-for-monitor#azure-monitor-references).
+
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use the commands that follow.
+
+ To create a metric alert rule, use the `az monitor metrics alert create` command. You can see detailed documentation on the metric alert rule create command in the `az monitor metrics alert create` section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
+
+ To create a metric alert rule that monitors if average Percentage CPU on a VM is greater than 90:
+ ```azurecli
+ az monitor metrics alert create -n {nameofthealert} -g {ResourceGroup} --scopes {VirtualMachineResourceID} --condition "avg Percentage CPU > 90" --description {descriptionofthealert}
+ ```
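+
+    To filter on a metric dimension, the `--condition` argument also accepts a `where` clause. A hedged sketch follows; the storage account scope and dimension values are illustrative, and the exact condition grammar can vary by CLI version:
+    ```azurecli
+    # Alert when a storage account has more than 100 transactions for the GetBlob API.
+    az monitor metrics alert create -n {nameofthealert} -g {ResourceGroup} --scopes {StorageAccountResourceID} --condition "total transactions > 100 where ApiName includes GetBlob" --description {descriptionofthealert}
+    ```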
+## Create a new alert rule using PowerShell
+
+- To create a metric alert rule using PowerShell, use the [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2) cmdlet.
+- To create a log alert rule using PowerShell, use the [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule) cmdlet.
+- To create an activity log alert rule using PowerShell, use the [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) cmdlet.
+
+## Create a new alert rule using an ARM template
+
+You can use an [Azure Resource Manager template (ARM template)](../../azure-resource-manager/templates/syntax.md) to configure alert rules consistently in all of your environments.
+
+1. Create a new resource, using the following resource types:
+ - For metric alerts: `Microsoft.Insights/metricAlerts`
+ - For log alerts: `Microsoft.Insights/scheduledQueryRules`
+ - For activity log, service health, and resource health alerts: `microsoft.Insights/activityLogAlerts`
+ > [!NOTE]
+ > - Metric alerts for an Azure Log Analytics workspace resource type (`Microsoft.OperationalInsights/workspaces`) are configured differently than other metric alerts. For more information, see [Resource Template for Metric Alerts for Logs](alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
+    > - We recommend that you create the metric alert in the same resource group as your target resource.
+1. Copy one of the templates from these sample ARM templates.
+ - For metric alerts: [Resource Manager template samples for metric alert rules](resource-manager-alerts-metric.md)
+ - For log alerts: [Resource Manager template samples for log alert rules](resource-manager-alerts-log.md)
+ - For activity log alerts: [Resource Manager template samples for activity log alert rules](resource-manager-alerts-activity-log.md)
+ - For resource health alerts: [Resource Manager template samples for resource health alert rules](resource-manager-alerts-resource-health.md)
+1. Edit the template file to contain appropriate information for your alert, and save the file as \<your-alert-template-file\>.json.
+1. Edit the corresponding parameters file to customize the alert, and save as \<your-alert-template-file\>.parameters.json.
+1. Set the `metricName` parameter, using one of the values in [Azure Monitor supported metrics](../essentials/metrics-supported.md).
+1. Deploy the template using [PowerShell](../../azure-resource-manager/templates/deploy-powershell.md#deploy-local-template) or the [CLI](../../azure-resource-manager/templates/deploy-cli.md#deploy-local-template).
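+
+    For example, a minimal sketch of deploying the template and parameters files with the Azure CLI (assuming the files are in the current directory and the resource group already exists):
+
+    ```azurecli
+    # Deploy the alert rule template with its parameters file to an existing resource group.
+    az deployment group create --resource-group {ResourceGroup} --template-file <your-alert-template-file>.json --parameters @<your-alert-template-file>.parameters.json
+    ```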
+
+## Next steps
+[Manage alert rules](alerts-manage-alert-rules.md)
+[Manage alert instances](alerts-manage-alert-instances.md)
+
azure-monitor Alerts Metric Multiple Time Series Single Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-metric-multiple-time-series-single-rule.md
For example, assume we've set the preceding alert rule to monitor for CPU above
The alert rule triggers on *VM-a* but not *VM-b*. These triggered alerts are independent. They can also resolve at different times depending on the individual behavior of each of the virtual machines.
-For more information about multi-resource alert rules and the resource types supported for this capability, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-types.md#monitor-multiple-resources).
+For more information about multi-resource alert rules and the resource types supported for this capability, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-types.md#monitor-multiple-resources-with-one-alert-rule).
> [!NOTE] > In a metric alert rule that monitors multiple resources, only a single condition is allowed.
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
-# Types of Azure Monitor alerts
+# Choosing the right type of alert rule
This article describes the kinds of Azure Monitor alerts you can create. It helps you understand when to use each type of alert.
+For more information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
The types of alerts are: - [Metric alerts](#metric-alerts)
The types of alerts are:
- [Smart detection alerts](#smart-detection-alerts) - [Prometheus alerts](#prometheus-alerts)
-## Choose the right alert type
-
-The information in this table can help you decide when to use each type of alert. For more information about pricing, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).
+## Types of Azure Monitor alerts
|Alert type |When to use |Pricing information| ||||
You can create rules by using these metrics:
Metric alert rules include these features: - You can use multiple conditions on an alert rule for a single resource. - You can add granularity by [monitoring multiple metric dimensions](#narrow-the-target-using-dimensions). -- You can use [dynamic thresholds](#dynamic-thresholds), which are driven by machine learning.
+- You can use [dynamic thresholds](#apply-advanced-machine-learning-with-dynamic-thresholds), which are driven by machine learning.
- You can configure if metric alerts are [stateful or stateless](alerts-overview.md#alerts-and-state). Metric alerts are stateful by default. The target of the metric alert rule can be: - A single resource, such as a virtual machine (VM). For supported resource types, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md).-- [Multiple resources](#monitor-multiple-resources) of the same type in the same Azure region, such as a resource group.
+- [Multiple resources](#monitor-multiple-resources-with-one-alert-rule) of the same type in the same Azure region, such as a resource group.
-### Multiple conditions
+### Applying multiple conditions to a metric alert rule
When you create an alert rule for a single resource, you can apply multiple conditions. For example, you could create an alert rule to monitor an Azure virtual machine and alert when both "Percentage CPU is higher than 90%" and "Queue length is over 300 items". When an alert rule has multiple conditions, the alert fires when all the conditions in the alert rule are true and is resolved when at least one of the conditions is no longer true for three consecutive checks.
-### Narrow the target using Dimensions
+### Narrow the target using dimensions
For instructions on using dimensions in metric alert rules, see [Monitor multiple time series in a single metric alert rule](alerts-metric-multiple-time-series-single-rule.md).
-### Create resource-centric alerts by using splitting by dimensions
+### Monitor the same condition on multiple resources using splitting by dimensions
To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. When you use splitting by dimensions, you can create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations. Splitting on an Azure resource ID column makes the specified resource into the alert target. You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
-### Monitor multiple resources
+### Monitor multiple resources with one alert rule
You can monitor at scale by applying the same metric alert rule to multiple resources of the same type for resources that exist in the same Azure region. Individual notifications are sent for each monitored resource.
You can specify the scope of monitoring with a single metric alert rule in one o
- All VMs in one Azure region in one or more resource groups in a subscription. - All VMs in one Azure region in a subscription.
-### Dynamic thresholds
+### Apply advanced machine learning with dynamic thresholds
Dynamic thresholds use advanced machine learning to: - Learn the historical behavior of metrics.
Note that stateful log alerts have these limitations:
> [!NOTE] > Log alerts work best when you're trying to detect specific data in the logs, as opposed to when you're trying to detect a lack of data in the logs. Because logs are semi-structured data, they're inherently more latent than metric data on information like a VM heartbeat. To avoid misfires when you're trying to detect a lack of data in the logs, consider using [metric alerts](#metric-alerts). You can send data to the metric store from logs by using [metric alerts for logs](alerts-metric-logs.md).
-### Dimensions in log alert rules
+### Monitor multiple instances of a resource using dimensions
You can use dimensions when you create log alert rules to monitor the values of multiple instances of a resource with one rule. For example, you can monitor CPU usage on multiple instances running your website or app. Each instance is monitored individually. Notifications are sent for each instance.
-### Splitting by dimensions in log alert rules
+### Monitor the same condition on multiple resources using splitting by dimensions
To monitor for the same condition on multiple Azure resources, you can use splitting by dimensions. When you use splitting by dimensions, you can create resource-centric alerts at scale for a subscription or resource group. Alerts are split into separate alerts by grouping combinations by using numerical or string columns. Splitting on the Azure resource ID column makes the specified resource into the alert target. You might also decide not to split when you want a condition applied to multiple resources in the scope. For example, you might want to fire an alert if at least five machines in the resource group scope have CPU usage over 80%.
-### Use the API
+### Use the API for log alert rules
Manage new rules in your workspaces by using the [ScheduledQueryRules](/rest/api/monitor/scheduledqueryrule-2021-08-01/scheduled-query-rules) API. > [!NOTE] > Log alerts for Log Analytics used to be managed by using the legacy [Log Analytics Alert API](api-alerts.md). Learn more about [switching to the current ScheduledQueryRules API](alerts-log-api-switch.md).
-## Log alerts on your Azure bill
+### Log alerts on your Azure bill
Log alerts are listed under resource provider `microsoft.insights/scheduledqueryrules` with: - Log alerts on Application Insights shown with the exact resource name along with resource group and alert properties.
azure-monitor Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md
To enable Azure Monitor to monitor all of your Azure resources, you need to both
- Configure Azure resources to generate monitoring data for Azure Monitor to collect. > [!IMPORTANT]
-> If you're new to Azure Monitor or are want to monitor a single Azure resource, start with the [Monitor Azure resources with Azure Monitor tutorial](essentials/monitor-azure-resource.md). The tutorial provides general concepts for Azure Monitor and guidance for monitoring a single Azure resource. This article provides recommendations for preparing your environment to leverage all features of Azure Monitor to monitoring your entire set of applications and resources together at scale.
+> If you're new to Azure Monitor or want to monitor a single Azure resource, start with the [Monitor Azure resources with Azure Monitor tutorial](essentials/monitor-azure-resource.md). The tutorial provides general concepts for Azure Monitor and guidance for monitoring a single Azure resource. This article provides recommendations for preparing your environment to leverage all features of Azure Monitor to monitor your entire set of applications and resources together at scale.
## Getting started workflow These articles provide detailed information about each of the main steps you'll need to do when getting started with Azure Monitor.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Service | Table | |:|:|
-| Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates)<br>[AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/AADServicePrincipalSignInLogs) |
+| Azure Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates)<br>[AADServicePrincipalSignInLogs](/azure/azure-monitor/reference/tables/AADServicePrincipalSignInLogs) |
| API Management | [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/ApiManagementGatewayLogs)<br>[ApiManagementWebSocketConnectionLogs](/azure/azure-monitor/reference/tables/ApiManagementWebSocketConnectionLogs) | | Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) | | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) |
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
The preceding query returns 10 results from the `SecurityEvent` table, in no spe
* The query starts with the table name `SecurityEvent`, which defines the scope of the query. * The pipe (|) character separates commands, so the output of the first command is the input of the next. You can add any number of piped elements.
-* Following the pipe is the `take` command, which returns a specific number of arbitrary records from the table.
+* Following the pipe is the [`take` operator](#take).
-We could run the query even without adding `| take 10`. The command would still be valid, but it could return up to 30,000 results.
+ We could run the query even without adding `| take 10`. The command would still be valid, but it could return up to 30,000 results.
+
+#### Take
+
+Use the [`take` operator](/azure/data-explorer/kusto/query/takeoperator) to view a small sample of records by returning up to the specified number of records. The selected results are arbitrary and displayed in no particular order. If you need to return results in a particular order, use the [`sort` and `top` operators](#sort-and-top).
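+
+You can also run the same KQL outside the portal. A minimal sketch with the Azure CLI follows, assuming the `log-analytics` CLI extension is installed and the workspace GUID shown is a placeholder:
+
+```azurecli
+# Run the take query against a Log Analytics workspace and return up to 10 arbitrary records.
+az monitor log-analytics query --workspace {WorkspaceCustomerID} --analytics-query "SecurityEvent | take 10"
+```
+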
### Search queries
This query searches the `SecurityEvent` table for records that contain the phras
> Search queries are ordinarily slower than table-based queries because they have to process more data. ## Sort and top
-Although `take` is useful for getting a few records, the results are selected and displayed in no particular order. To get an ordered view, you could `sort` by the preferred column:
+
+This section describes the `sort` and `top` operators and their `desc` and `asc` arguments. Although [`take`](#take) is useful for getting a few records, the results are selected arbitrarily and displayed in no particular order. To get an ordered view, use `sort` and `top`.
+
+### Desc and asc
+
+#### Desc
+
+Use the `desc` argument to sort records in descending order. Descending is the default sorting order for `sort` and `top`, so you can usually omit the `desc` argument.
+
+For example, the data returned by both of the following queries is sorted by the [TimeGenerated column](./log-standard-columns.md#timegenerated), in descending order:
+
+- ```Kusto
+ SecurityEvent
+ | sort by TimeGenerated desc
+ ```
+
+- ```Kusto
+ SecurityEvent
+ | sort by TimeGenerated
+ ```
+
+#### Asc
+
+To sort in ascending order, specify `asc`.
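For example, a sketch of the earlier `TimeGenerated` query with ascending order specified explicitly:

```Kusto
// Sort SecurityEvent records by TimeGenerated, oldest first.
SecurityEvent
| sort by TimeGenerated asc
```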
+
+### Sort
+
+Use the [`sort` operator](/azure/data-explorer/kusto/query/sort-operator) to sort the query results by the column you specify. However, `sort` doesn't limit the number of records that the query returns.
+
+For example, the following query returns all available records for the `SecurityEvent` table, up to a maximum of 30,000 records, and sorts them by the TimeGenerated column.
```Kusto
SecurityEvent
-| sort by TimeGenerated desc
+| sort by TimeGenerated
```
-The preceding query could return too many results though, and it might also take some time. The query sorts the entire `SecurityEvent` table by the `TimeGenerated` column. The Analytics portal then limits the display to only 30,000 records. This approach isn't optimal.
+The preceding query could return too many results. It might also take some time to return them. The query sorts the entire `SecurityEvent` table by the `TimeGenerated` column. The Analytics portal then limits the display to only 30,000 records. This approach isn't optimal. The best way to get only the latest records is to use the [`top` operator](#top).
+
+### Top
+
+Use the [`top` operator](/azure/data-explorer/kusto/query/topoperator) to sort the entire table on the server side and then return only the top records.
-The best way to get only the latest 10 records is to use `top`, which sorts the entire table on the server side and then returns the top records:
+For example, the following query returns the latest 10 records:
```Kusto
SecurityEvent
| top 10 by TimeGenerated
```
-Descending is the default sorting order, so you would usually omit the `desc` argument. The output looks like this example.
+The output looks like this example.
<!-- convertborder later --> :::image type="content" source="media/get-started-queries/top10.png" lightbox="media/get-started-queries/top10.png" alt-text="Screenshot that shows the top 10 records sorted in descending order." border="false"::: ## The where operator: Filter on a condition Filters, as indicated by their name, filter the data by a specific condition. Filtering is the most common way to limit query results to relevant information.
-To add a filter to a query, use the `where` operator followed by one or more conditions. For example, the following query returns only `SecurityEvent` records where `Level equals _8`:
+To add a filter to a query, use the [`where` operator](/azure/data-explorer/kusto/query/whereoperator) followed by one or more conditions. For example, the following query returns only `SecurityEvent` records where *Level* equals 8:
```Kusto
SecurityEvent
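// A sketch of the filter described in the preceding sentence; the exact
// comparison (Level == 8) is inferred from the prose, which this excerpt truncates.
| where Level == 8
```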
azure-monitor Log Analytics Workspace Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md
Previously updated : 02/07/2023 Last updated : 11/23/2023 #Customer-intent: As a Log Analytics workspace administrator, I want to know when there are latency issues in a Log Analytics workspace, so I can act to resolve the issue, contact Microsoft for support, or track whether Azure is meeting its SLA.
Azure Service Health monitors:
## View Log Analytics workspace health and set up health status alerts
-When Azure Service Health detects [average latency](../logs/data-ingestion-time.md#average-latency) in your Log Analytics workspace, the workspace resource health status is **Available**.
To view your Log Analytics workspace health and set up health status alerts:
To view your Log Analytics workspace health and set up health status alerts:
The **Resource health** screen shows:
- - **Health history**: Indicates whether Azure Service Health has detected latency issues related to the specific Log Analytics workspace. To further investigate latency issues related to your workspace, see [Investigate latency](#investigate-log-analytics-workspace-health-issues).
+ - **Health history**: Indicates whether Azure Service Health has detected latency or query execution issues in the specific Log Analytics workspace. To further investigate latency issues related to your workspace, see [Investigate latency](#investigate-log-analytics-workspace-health-issues).
- **Azure service issues**: Displayed when a known issue with an Azure service might affect latency in the Log Analytics workspace. Select the message to view details about the service issue in Azure Service Health. > [!NOTE] > - Service health notifications do not indicate that your Log Analytics workspace is necessarily affected by the known service issue. If your Log Analytics workspace resource health status is **Available**, Azure Service Health did not detect issues in your workspace. > - Resource Health excludes data types for which long ingestion latency is expected. For example, Application Insights data types that calculate the application map data and are known to add latency.+ :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency.png" alt-text="Screenshot that shows the Resource health screen for a Log Analytics workspace."::: +
+ This table describes the possible resource health status values for a Log Analytics workspace:
+
+ | Resource health status | Description |
+ |-|-|
+ |Available| [Average latency](../logs/data-ingestion-time.md#average-latency) and no query execution issues detected.|
+ |Unavailable|Higher than average latency detected.|
+ |Degraded|Query failures detected.|
+ |Unknown|Currently unable to determine Log Analytics workspace health because you haven't run queries or ingested data to this workspace recently.|
1. To set up health status alerts, you can either [enable recommended out-of-the-box alert](../alerts/alerts-overview.md#recommended-alert-rules) rules, or manually create new alert rules. - To enable the recommended alert rules:
To view Log Analytics workspace health metrics:
| - | - | | Query count | Total number of user queries in the Log Analytics workspace within the selected time range.<br>This number includes only user-initiated queries, and doesn't include queries initiated by Sentinel rules and alert-related queries. | | Query failure count | Total number of failed user queries in the Log Analytics workspace within the selected time range.<br>This number includes all queries that return 5XX response codes - except 504 *Gateway Timeout* - which indicate an error related to the application gateway or the backend server.|
- | Query success rate | Total number of successful user queries in the Log Analytics workspace within the selected time range.<br>This number includes all queries that return 2XX, 4XX, and 504 response codes; in other words, all user queries that don't result in a service error. |
+ | AvailabilityRate_Query | Percentage of successful user queries in the Log Analytics workspace within the selected time range.<br>This number includes all queries that return 2XX, 4XX, and 504 response codes; in other words, all user queries that don't result in a service error. |
## Investigate Log Analytics workspace health issues
To investigate Log Analytics workspace health issues:
- [Query](./queries.md) the data in your Log Analytics workspace to [understand which factors are contributing greater than expected latency in your workspace](../logs/data-ingestion-time.md). - [Use the `_LogOperation` function to view and set up alerts about operational issues](../logs/monitor-workspace.md) logged in your Log Analytics workspace.
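As a rough sketch of the second option, a query against the `_LogOperation` function might look like the following. The seven-day window and the `Level == "Warning"` filter are illustrative assumptions; see the linked article for the columns and values available in your workspace.

```Kusto
// Sketch: list recent operational issues recorded for this workspace.
// The time window and Level value are illustrative assumptions.
_LogOperation
| where TimeGenerated > ago(7d)
| where Level == "Warning"
| sort by TimeGenerated
```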
-
-
+
## Next steps
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
A single [Log Analytics workspace](log-analytics-workspace-overview.md) might be
> [!NOTE] > This article discusses Azure Monitor and Microsoft Sentinel because many customers need to consider both in their design. Most of the decision criteria apply to both services. If you use only one of these services, you can ignore the other in your evaluation.
+Here's a video about the fundamentals of Azure Monitor Logs and best practices and design considerations for designing your Azure Monitor Logs deployment:
+
+> [!VIDEO https://www.youtube.com/embed/pqUvZqoQV4o]
+ ## Design strategy Your design should always start with a single workspace to reduce the complexity of managing multiple workspaces and of querying data from them. There are no performance limitations from the amount of data in your workspace. Multiple services and data sources can send data to the same workspace. As you identify criteria to create more workspaces, your design should use the fewest workspaces that match your requirements.
There are two options to implement logs in a central location:
- Learn more about [designing and configuring data access in a workspace](manage-access.md). - Get [sample workspace architectures for Microsoft Sentinel](../../sentinel/sample-workspace-designs.md).
+- Here's a video on designing the proper structure for your Log Analytics workspace: [ITOps Talk: Log Analytics workspace design deep dive](/shows/it-ops-talk/ops115-log-analytics-workspace-design-deep-dive)
azure-monitor Workbooks Automate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-automate.md
In this example, the following steps facilitate the customization of an exported
1. Use the new `reserializedData` variable in place of the original `serializedData` property. 1. Deploy the new workbook resource by using the updated ARM template.
-### Limitations
-Currently, this mechanism can't be used to create workbook instances in the **Workbooks** gallery of Application Insights. We're working on addressing this limitation. In the meantime, we recommend that you use the **Troubleshooting Guides** gallery (workbookType: `tsg`) to deploy Application Insights-related workbooks.
- ## Next steps Explore how workbooks are being used to power the new [Storage insights experience](../../storage/common/storage-insights-overview.md?toc=%2fazure%2fazure-monitor%2ftoc.json).
azure-monitor Workbooks Bring Your Own Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-bring-your-own-storage.md
There are times when you might have a query or some business logic that you want
- When you save to custom storage, you can't pin individual parts of the workbook to a dashboard because the individual pins would contain protected information in the dashboard itself. When you use custom storage, you can only pin links to the workbook itself to dashboards. - After a workbook has been saved to custom storage, it will always be saved to custom storage, and this feature can't be turned off. To save elsewhere, you can use **Save As** and elect to not save the copy to custom storage.-- Workbooks in an Application Insights resource are "legacy" workbooks and don't support custom storage. The latest feature for workbooks in an Application Insights resource is the **More** selection. Legacy workbooks don't have **Subscription** options when you save them.-
- <!-- convertborder later -->
- :::image type="content" source="./media/workbooks-bring-your-own-storage/legacy-workbooks.png" lightbox="./media/workbooks-bring-your-own-storage/legacy-workbooks.png" alt-text="Screenshot that shows a legacy workbook." border="false":::
## Next steps
azure-netapp-files Azure Netapp Files Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* West US 2 * West US 3
-<a name="regions-edit-network-features"></a>The option to *[edit network features for existing volumes](configure-network-features.md#edit-network-features-option-for-existing-volumes)* is supported for the following regions:
+<a name="regions-edit-network-features"></a>The option to *[edit network features for existing volumes (preview)](configure-network-features.md#edit-network-features-option-for-existing-volumes)* is supported for the following regions:
* Australia Central * Australia Central 2
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Central India * Central US * East Asia
-* East US
+* East US*
* East US 2 * France Central * Germany North
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* Norway West * Qatar Central * South Africa North
-* South Central US
+* South Central US*
* South India * Southeast Asia * Sweden Central
Azure NetApp Files volumes are designed to be contained in a special purpose sub
* UK South * West Europe * West US
-* West US 2
+* West US 2*
* West US 3
+\* Not all volumes in these regions are available for conversion. All volumes will be available for conversion in the future.
## Considerations
azure-netapp-files Azure Netapp Files Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-videos.md
This article provides references to videos that contain in-depth discussions abo
Several videos are available to help you learn more about Azure NetApp Files:
-* [Microsoft Ignite 2019: Run your most demanding enterprise file workloads with Azure NetApp Files](https://azure.microsoft.com/resources/videos/ignite-2018-taking-on-the-most-demanding-enterprise-file-workloads-with-azure-netapp-files/) provides a brief introduction to Azure NetApp Files, including use cases and demo, and then goes deeper on the capabilities and roadmap.
+* [Microsoft Ignite 2019: Run your most demanding enterprise file workloads with Azure NetApp Files](https://www.youtube.com/watch?v=inVjDxF5Y8w) provides a brief introduction to Azure NetApp Files, highlighting use cases and demonstrating Azure NetApp Files features.
* [Azure NetApp Files talks by Kirk Ryan](https://www.youtube.com/channel/UCq1jZkyVXqMsMSIvScBE2qg/playlists) are a series of videos, tutorials, and demonstrations dedicated to Azure NetApp Files.
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
This section shows you how to set the network features option when you create a
[ ![Screenshot that shows the Volumes page displaying the network features setting.](../media/azure-netapp-files/network-features-volume-list.png)](../media/azure-netapp-files/network-features-volume-list.png#lightbox)
-## Edit network features option for existing volumes
+## <a name="edit-network-features-option-for-existing-volumes"></a> Edit network features option for existing volumes (preview)
You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same network interface card (NIC) for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible.
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
The `publish` command adds a module to a registry. The Azure container registry
After publishing the file to the registry, you can [reference it in a module](modules.md#file-in-registry).
-To use the publish command, you must have [Bicep CLI version 0.4.X or higher](./install.md). To use the `--documentationUri`/`-d` parameter, you must have [Bicep CLI version 0.14.X or higher](./install.md).
+To use the publish command, you must have [Bicep CLI version 0.14.X or higher](./install.md). To use the `--documentationUri`/`-d` parameter, you must have [Bicep CLI version 0.14.X or higher](./install.md).
To publish a module to a registry, use:
backup Backup Mabs Release Notes V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-release-notes-v3.md
Title: Release notes for Microsoft Azure Backup Server v3 description: This article provides the information about the known issues and workarounds for Microsoft Azure Backup Server (MABS) v3. Previously updated : 11/07/2023 Last updated : 12/04/2023 ms.asset: 0c4127f2-d936-48ef-b430-a9198e425d81
This article provides the known issues and workarounds for Microsoft Azure Backup Server (MABS) V3.
-## MABS V4 UR1 known issues and workarounds
+## MABS V4 UR1 Refresh known issues and workarounds
No known issues. ++
+## MABS V4 UR1 known issues and workarounds
+
+Microsoft is recalling the release of Update Rollup 1 for Microsoft Azure Backup Server V4 due to the following known issues:
+
+- Hyper-V scheduled backups take a long time to complete because each backup job triggers a consistency check.
+
+ **Error Message**: The replica of Microsoft Hyper-V RCT on `<Machine Name>` is not consistent with the protected data source. MABS has detected changes in file locations or volume configurations of protected objects since the data source was configured for protection. (ID 30135)
+
+- MABS console occasionally crashes when SMTP alerts or reports are configured.
+
+ The updated build, **Update Rollup 1 Refresh for MABS V4**, has been released and fixes these known issues.
+
+>[!Important]
+>If you installed Update Rollup 1 for MABS V4 (14.0.42.0), we recommend that you install **Update Rollup 1 Refresh (14.0.46.0)** on your MABS server and update the protection agents from *KB 5033756*.
+>
+>For any queries or additional information, contact **Microsoft Support**.
+ ## MABS V4 known issues and workarounds If you're protecting Windows Server 2012 and 2012 R2, you need to install Visual C++ redistributable 2015 manually on the protected server. You can download [Visual C++ Redistributable for Visual Studio 2015 from Official Microsoft Download Center](https://www.microsoft.com/en-in/download/details.aspx?id=48145).
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server
description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 11/23/2023 Last updated : 12/04/2023
Microsoft Azure Backup Server gives you enhanced backup capabilities to protect VMs, files and folders, workloads, and more.
+## What's new in MABS V4 Update Rollup 1 Refresh (UR1 Refresh)?
+Microsoft Azure Backup Server version 4 (MABS V4) *Update Rollup 1 Refresh* includes critical bug fixes and feature enhancements. For information about the bugs fixed and the installation instructions for MABS V4 UR1 Refresh, see [KB article 5033756](https://support.microsoft.com/home/contact?SourceApp=smcivr2).
+
+>[!Important]
+>MABS V4 UR1 Refresh supersedes MABS V4 UR1. It includes the same feature enhancements and fixes the known issues in MABS V4 UR1. [Learn more](backup-mabs-release-notes-v3.md).
+
+The following table lists the new features added in MABS V4 UR1:
+
+| Feature | Supportability |
+| | |
+| Item-level recovery for VMware VMs running Windows directly from online recovery points. | Use MARS version *2.0.9251.0* or above for this feature. |
+| Windows and Basic SMTP Authentication for MABS email reports and alerts. | This enables MABS to send reports and alerts using any vendor supporting SMTP Basic Authentication. [Learn more](/system-center/dpm/monitor-dpm?view=sc-dpm-2022&preserve-view=true#configure-email-for-dpm). <br><br> Note that if you're using Microsoft 365 SMTP with a MABS V4 private fix, re-enter the credential using Basic Authentication. |
+| Fall back to crash-consistent backups for VMware VMs. | Use a registry key for VMware VMs when backups fail with ApplicationQuiesceFault. [Learn more](backup-azure-backup-server-vmware.md#applicationquiescefault). |
+| Experience improvements for MABS backups to Azure. | |
+| List online recovery points for a data source along with the expiry time and soft-delete status. | To view the list of recovery points along with their expiration dates, right-click a data source and select **List recovery points**. |
+| Stop protection and retain data using the policy duration for immutable vaults directly from the console. | This helps you save backup costs when stopping protection for a data source backed up to an immutable vault. [Learn more](backup-azure-security-feature.md#immutability-support). |
## What's new in MABS V4 Update Rollup 1 (UR1)?
The following table lists the new features added in MABS V4 UR1:
| Feature | Supportability | | | | | Item-level recovery for VMware VMs running Windows directly from online recovery points. | Note that you need *MARS version 2.0.9251.0 or above* to use this feature. |
-| Windows and Basic SMTP Authentication for MABS email reports and alerts. | This enables MABS to send reports and alerts using any vendor supporting SMTP Basic Authentication. [Learn more](/system-center/dpm/monitor-dpm?view=sc-dpm-2022&preserve-view=true#configure-email-for-dpm). <br><br> Note that if you are using Microsoft 365 SMTP with a MABS V4 private fix, reenter the credential using Basic Authentication. |
+| Windows and Basic SMTP Authentication for MABS email reports and alerts. | This enables MABS to send reports and alerts using any vendor supporting SMTP Basic Authentication. [Learn more](/system-center/dpm/monitor-dpm?view=sc-dpm-2022&preserve-view=true#configure-email-for-dpm). <br><br> Note that if you are using Microsoft 365 SMTP with a MABS V4 private fix, re-enter the credential using Basic Authentication. |
| Fall back to crash consistent backups for VMware VMs. | Use a registry key for VMware VMs when backups fail with ApplicationQuiesceFault. [Learn more](backup-azure-backup-server-vmware.md#applicationquiescefault). | | **Experience improvements for MABS backups to Azure.** | | | List online recovery points for a data source along with the expiry time and soft-delete status. | To view the list of recovery points along with their expiration dates, right-click a data source and select **List recovery points**. |
MABS V3 UR1 includes a new parameter **[-CheckReplicaFragmentation]**. The new p
### 32-Bit protection agent deprecation
-With MABS v3 UR1, support for 32-bit protection agent is no longer supported. You won't be able to protect 32-bit workloads after upgrading the MABS v3 server to UR1. Any existing 32-bit protection agents will be in a disabled state and scheduled backups will fail with the **agent is disabled** error. If you want to retain backup data for these agents, you can stop the protection with the retain data option. Otherwise, the protection agent can be removed.
+With MABS v3 UR1, 32-bit protection agents are no longer supported. You won't be able to protect 32-bit workloads after upgrading the MABS v3 server to UR1. Any existing 32-bit protection agents will be in a disabled state and scheduled backups will fail with the **agent is disabled** error. If you want to retain the backup data for these agents, you can stop the protection with the retained data option. Otherwise, the protection agent can be removed.
>[!NOTE] >Review the [updated protection matrix](./backup-mabs-protection-matrix.md) to learn the supported workloads for protection with MABS UR 1.
bastion Native Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/native-client.md
description: Learn how to configure Bastion for native client connections.
Previously updated : 06/23/2023 Last updated : 12/04/2023
Use the following table to understand how to connect from native clients. Notice
||||| ||| | Windows native client | Windows VM | [RDP](connect-vm-native-client-windows.md) | Yes | [Upload/Download](vm-upload-download-native.md#rdp) | Yes | Yes | | | Linux VM | [SSH](connect-vm-native-client-windows.md) | Yes |No | Yes | Yes |
-| | Any VM|[az network bastion tunnel](connect-vm-native-client-windows.md) |No |[Upload](vm-upload-download-native.md#tunnel-command)| No | No |
+| | Any VM|[az network bastion tunnel](connect-vm-native-client-windows.md#connect-to-a-vmtunnel-command) |No |[Upload](vm-upload-download-native.md#tunnel-command)| No | No |
| Linux native client | Linux VM |[SSH](connect-vm-native-client-linux.md#ssh)| Yes | No | Yes | Yes | | | Windows or any VM| [az network bastion tunnel](connect-vm-native-client-linux.md) | No | [Upload](vm-upload-download-native.md#tunnel-command) | No | No | | Other native client (putty) | Any VM | [az network bastion tunnel](connect-vm-native-client-linux.md) | No | [Upload](vm-upload-download-native.md#tunnel-command) | No | No |
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
description: Learn how to deploy Bastion using the Developer SKU.
Previously updated : 10/26/2023 Last updated : 12/04/2023
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer SKU. After Bastion is deployed, you can connect to virtual machines (VM) in the virtual network via Bastion using the private IP address of the VM. The VMs you connect to don't need a public IP address, client software, agent, or a special configuration. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md) > [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
+> During preview, the Bastion Developer SKU is free of charge. Details of its usage-based pricing model will be released at general availability (GA).
[!INCLUDE [regions](../../includes/bastion-developer-sku-regions.md)]
bastion Quickstart Host Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-arm-template.md
Title: 'Quickstart: Deploy Azure Bastion to a virtual network using an ARM template' description: Learn how to deploy Azure Bastion to a virtual network by using an Azure Resource Manager template.--++ Previously updated : 06/27/2022 Last updated : 12/04/2023 #Customer intent: As someone with a networking background, I want to deploy Azure Bastion to a virtual machine by using an ARM template.
communication-services Call Automation Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/call-automation-ai.md
In this sample, we'll cover off what this sample does and what you need as pre-r
::: zone pivot="programming-language-csharp" ::: zone-end ::: zone pivot="programming-language-java" ::: zone-end
communication-services Calling Widget Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/calling-widget/calling-widget-tutorial.md
function App() {
/** * Token for local user. */
- const token = "<Enter your ACS token here>";
+ const token = "<Enter your Azure Communication Services token here>";
/** * User identifier for local user. */ const userId: CommunicationIdentifier = {
- communicationUserId: "<Enter your ACS ID here>",
+ communicationUserId: "<Enter your Azure Communication Services ID here>",
}; /**
communication-services End Of Call Survey Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/end-of-call-survey-tutorial.md
The API will return the following error messages if data validation fails or the
- \{propertyName\} lowerBound: \{rating.scale?.lowerBound\} and upperBound: \{rating.scale?.upperBound\} should be between 0 and 100. -- Please try again [ACS failed to submit survey, due to network or other error].
+- Please try again [Azure Communication Services failed to submit survey, due to network or other error].
### We will return any error codes with a message.
communication-services File Sharing Tutorial Acs Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-acs-chat.md
-# Enable file sharing using UI Library in Azure Communication Service Chat with Azure Blob storage
+# Enable file sharing using UI Library in Azure Communication Services Chat with Azure Blob storage
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-In an Azure Communication Service Chat ("ACS Chat"), we can enable file sharing between communication users. Note, Azure Communication Services Chat is different from the Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md).
+In an Azure Communication Services Chat, we can enable file sharing between communication users. Note, Azure Communication Services Chat is different from the Teams Interoperability Chat ("Interop Chat"). If you want to enable file sharing in an Interop Chat, refer to [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md).
In this tutorial, we're configuring the Azure Communication Services UI Library Chat Composite to enable file sharing. The UI Library Chat Composite provides a set of rich components and UI controls that can be used to enable file sharing. We're using Azure Blob Storage to enable the storage of the files that are shared through the chat thread.
Download errors are displayed to users in an error bar on top of the Chat Compos
## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Service resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Function Resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources).
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. You can find out more about [cleaning up Azure Communication Services resources](../quickstarts/create-communication-resource.md#clean-up-resources) and [cleaning Azure Function Resources](../../azure-functions/create-first-function-vs-code-csharp.md#clean-up-resources).
## Next steps
You may also want to:
- [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md) - [Add file sharing with UI Library in Teams Interoperability Chat](./file-sharing-tutorial-interop-chat.md)-- [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md)
+- [Add file sharing with UI Library in Azure Communication Services Chat](./file-sharing-tutorial-acs-chat.md)
- [Add inline image with UI Library in Teams Interoperability Chat](./inline-image-tutorial-interop-chat.md)
communication-services File Sharing Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/file-sharing-tutorial-interop-chat.md
[!INCLUDE [Public Preview Notice](../includes/public-preview-include.md)]
-In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Service end users and Teams users. Note, Interop Chat is different from the Azure Communication Service Chat ("ACS Chat"). If you want to enable file sharing in an Azure Communication Services Chat, refer to [Add file sharing with UI Library in Azure Communication Service Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Service end user is only able to receive file attachments from the Teams user. Please refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
+In a Teams Interoperability Chat ("Interop Chat"), we can enable file sharing between Azure Communication Services end users and Teams users. Note, Interop Chat is different from the Azure Communication Services Chat. If you want to enable file sharing in an Azure Communication Services Chat, refer to [Add file sharing with UI Library in Azure Communication Services Chat](./file-sharing-tutorial-acs-chat.md). Currently, the Azure Communication Services end user is only able to receive file attachments from the Teams user. Please refer to [UI Library Use Cases](../concepts/ui-library/ui-library-use-cases.md) to learn more.
>[!IMPORTANT] >
Access the code for this tutorial on [GitHub](https://github.com/Azure-Samples/c
## Background
-First of all, we need to understand that Teams Interop Chat has to part of a Teams meeting currently. When the Teams user creates an online meeting, a chat thread would be created and associated with the meeting. To enable the Azure Communication Service end user joining the chat and starting to send/receive messages, a meeting participant (a Teams user) would need to admit them to the call first. Otherwise, they don't have access to the chat.
+First of all, we need to understand that Teams Interop Chat currently has to be part of a Teams meeting. When the Teams user creates an online meeting, a chat thread is created and associated with the meeting. To enable the Azure Communication Services end user to join the chat and start sending and receiving messages, a meeting participant (a Teams user) needs to admit them to the call first. Otherwise, they don't have access to the chat.
-Once the Azure Communication Service end user is admitted to the call, they would be able to start to chat with other participants on the call. In this tutorial, we're checking out how inline image works in Interop chat.
+Once the Azure Communication Services end user is admitted to the call, they can start chatting with other participants on the call. In this tutorial, we're checking out how file sharing works in an Interop Chat.
## Overview
To be able to start the Composite for meeting chat, we need to pass `TeamsMeetin
Note that meeting link should look something like `https://teams.microsoft.com/l/meetup-join/19%3ameeting_XXXXXXXXXXX%40thread.v2/XXXXXXXXXXX`
+And this is all you need! There's no other setup needed to enable the Azure Communication Services end user to receive file attachments from the Teams user.
+And this is all you need! And there's no other setup needed to enable the Azure Communication Services end user to receive file attachments from the Teams user.
## Permissions
When file is shared from a Teams client, the Teams user has options to set the f
- "People with existing access" - "People you choose"
+Specifically, the UI library currently supports only "Anyone" and "People you choose" (with email address); all other permissions aren't supported. If the Teams user sends a file with unsupported permissions, the Azure Communication Services end user might be redirected to a login page or denied access when they click on the file attachment in the chat thread.
+Specifically, the UI library currently only supports "Anyone" and "People you choose" (with email address) and all other permissions aren't supported. If Teams user sent a file with unsupported permissions, the Azure Communication Services end user might be prompted to a login page or denied access when they click on the file attachment in the chat thread.
-![Teams File Permissions](./media/file-sharing-tutorial-interop-chat-0.png "Screenshot of a Teams client listing out file permissions.")
+![Screenshot of a Teams client listing out file permissions.](./media/file-sharing-tutorial-interop-chat-0.png "Screenshot of a Teams client listing out file permissions.")
Moreover, the Teams user's tenant admin might impose restrictions on file sharing, including disabling some file permissions or disabling the file sharing option altogether.
Moreover, the Teams user's tenant admin might impose restrictions on file sharin
Let's run `npm run start` then you should be able to access our sample app via `localhost:3000` like the following screenshot:
-![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a Azure Communication Services UI library.")
+![Screenshot of an Azure Communication Services UI library.](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of an Azure Communication Services UI library.")
Simply click on the chat button located in the bottom to reveal the chat panel and now if Teams user sends some files, you should see something like the following screenshot:
-![Teams sending a file](./media/file-sharing-tutorial-interop-chat-1.png "Screenshot of a Teams client sending one file.")
+![Screenshot of a Teams client sending one file.](./media/file-sharing-tutorial-interop-chat-1.png "Screenshot of a Teams client sending one file.")
-![ACS getting a file](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving one file.")
+![Screenshot of Azure Communication Services UI library receiving one file.](./media/file-sharing-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving one file.")
And now if the user clicks on the file attachment card, a new tab opens like the following, where the user can download the file:
-![File Content](./media/file-sharing-tutorial-interop-chat-3.png "Screenshot of Sharepoint webpage that shows the file content.")
+![Screenshot of Sharepoint webpage that shows the file content.](./media/file-sharing-tutorial-interop-chat-3.png "Screenshot of Sharepoint webpage that shows the file content.")
## Next steps
You may also want to:
- [Creating user access tokens](../quickstarts/identity/access-tokens.md) - [Learn about client and server architecture](../concepts/client-and-server-architecture.md) - [Learn about authentication](../concepts/authentication.md)-- [Add file sharing with UI Library in Azure Azure Communication Service end user Service Chat](./file-sharing-tutorial-acs-chat.md)
+- [Add file sharing with UI Library in Azure Communication Services Chat](./file-sharing-tutorial-acs-chat.md)
- [Add inline image with UI Library in Teams Interoperability Chat](./inline-image-tutorial-interop-chat.md)
communication-services Inline Image Tutorial Interop Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/inline-image-tutorial-interop-chat.md
And this is all you need! And there's no other setup needed to enable inline ima
Let's run `npm run start` then you should be able to access our sample app via `localhost:3000` like the following screenshot:
-![ACS UI library](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of a Azure Communication Services UI library.")
+![Screenshot of an Azure Communication Services UI library.](./media/inline-image-tutorial-interop-chat-0.png "Screenshot of an Azure Communication Services UI library.")
Simply click on the chat button located in the bottom to reveal the chat panel and now if Teams user sends an image, you should see something like the following screenshot:
-![Teams sending two images](./media/inline-image-tutorial-interop-chat-1.png "Screenshot of a Teams client sending 2 inline images.")
+!["Screenshot of a Teams client sending 2 inline images."](./media/inline-image-tutorial-interop-chat-1.png "Screenshot of a Teams client sending 2 inline images.")
-![ACS getting two images](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving 2 inline images.")
+![Screenshot of Azure Communication Services UI library receiving two inline images.](./media/inline-image-tutorial-interop-chat-2.png "Screenshot of Azure Communication Services UI library receiving 2 inline images.")
Note that in a Teams Interop Chat, we currently only support the Azure Communication Services end user receiving inline images sent by the Teams user. To learn more about what features are supported, refer to the [UI Library use cases](../concepts/ui-library/ui-library-use-cases.md)
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
Previously updated : 11/15/2023 Last updated : 11/27/2023 - template-how-to-pattern - has-azure-ad-ps-ref
To add the Project Synergy application:
1. Select **Properties**. 1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID. 1. Open PowerShell.
-1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
+1. Run the following cmdlet, replacing *`<TenantID>`* with the tenant ID you noted down in step 5.
```azurepowershell
- Connect-AzureAD -TenantId "<AADTenantID>"
+ Connect-AzureAD -TenantId "<TenantID>"
New-AzureADServicePrincipal -AppId eb63d611-525e-4a31-abd7-0cb33f679599 -DisplayName "Operator Connect" ```
The user who sets up Azure Communications Gateway needs to have the Admin user r
## Find the Object ID and Application ID for your Azure Communication Gateway resource
-Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway) and [Add the Application ID for Azure Communications Gateway to Operator Connect](#add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+Each Azure Communications Gateway resource automatically receives a [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md), which Azure Communications Gateway uses to connect to the Operator Connect environment. You need to find the Object ID and Application ID of the managed identity, so that you can connect Azure Communications Gateway to the Operator Connect or Teams Phone Mobile environment in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway) and [Add the Application IDs for Azure Communications Gateway to Operator Connect](#add-the-application-ids-for-azure-communications-gateway-to-operator-connect).
1. Sign in to the [Azure portal](https://azure.microsoft.com/). 1. In the search bar at the top of the page, search for your Communications Gateway resource.
Each Azure Communications Gateway resource automatically receives a [system-assi
Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application in [Add the Project Synergy application to your Azure tenancy](#add-the-project-synergy-application-to-your-azure-tenancy). > [!IMPORTANT]
-> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [Add the Application ID for Azure Communications Gateway to Operator Connect](#add-the-application-id-for-azure-communications-gateway-to-operator-connect).
+> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [Add the Application IDs for Azure Communications Gateway to Operator Connect](#add-the-application-ids-for-azure-communications-gateway-to-operator-connect).
Do the following steps in the tenant that contains your Project Synergy application.
Do the following steps in the tenant that contains your Project Synergy applicat
1. Select **Properties**. 1. Scroll down to the Tenant ID field. Your tenant ID is in the box. Make a note of your tenant ID. 1. Open PowerShell.
-1. Run the following cmdlet, replacing *`<AADTenantID>`* with the tenant ID you noted down in step 5.
+1. Run the following cmdlet, replacing *`<TenantID>`* with the tenant ID you noted down in step 5.
```azurepowershell
- Connect-AzureAD -TenantId "<AADTenantID>"
+ Connect-AzureAD -TenantId "<TenantID>"
``` 1. Run the following cmdlet, replacing *`<CommunicationsGatewayObjectID>`* with the Object ID you noted down in [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource). ```azurepowershell
If you don't already have an onboarding team, contact azcog-enablement@microsoft
Go to the [Operator Connect homepage](https://operatorconnect.microsoft.com/) and check that you're able to sign in.
-## Add the Application ID for Azure Communications Gateway to Operator Connect
+## Add the Application IDs for Azure Communications Gateway to Operator Connect
-You must enable the Azure Communications Gateway application within the Operator Connect or Teams Phone Mobile environment. Enabling the application allows Azure Communications Gateway to use the roles that you set up in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway).
+You must enable Azure Communications Gateway within the Operator Connect or Teams Phone Mobile environment. This process requires configuring your environment with two Application IDs:
+- The Application ID of the system-assigned managed identity that you found in [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource). This Application ID allows Azure Communications Gateway to use the roles that you set up in [Set up application roles for Azure Communications Gateway](#set-up-application-roles-for-azure-communications-gateway).
+- A standard Application ID for Azure Communications Gateway. This ID always has the value `8502a0ec-c76d-412f-836c-398018e2312b`.
-To enable the application, add the Application ID of the system-assigned managed identity representing Azure Communications Gateway to your Operator Connect or Teams Phone Mobile environment. You found this ID in [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource).
+To add the Application IDs:
1. Log into the [Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration).
-1. Add a new **Application Id**, using the Application ID that you found.
+1. Add a new **Application Id** for the Application ID that you found for the managed identity.
+1. Add a second **Application Id** for the value `8502a0ec-c76d-412f-836c-398018e2312b`.
## Register your deployment's domain name in Microsoft Entra
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
For full details of the media interworking features available in Azure Communica
Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This Azure portal feature enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
-The Number Management Portal is available as part of the optional API Bridge feature.
- For more information, see [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md). > [!TIP]
communications-gateway Manage Enterprise Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise-operator-connect.md
Previously updated : 07/17/2023 Last updated : 11/27/2023
Azure Communications Gateway's Number Management Portal enables you to manage en
The Operator Connect and Teams Phone Mobile programs don't allow you to use the Operator Connect portal for provisioning after you've launched your service in the Teams Admin Center. The Number Management Portal is a simple alternative that you can use until you've finished integrating with the Operator Connect APIs.
-> [!IMPORTANT]
-> You must have selected Azure Communications Gateway's API Bridge option to use the Number Management Portal.
- ## Prerequisites
-Confirm that you have [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] permissions for the Project Synergy enterprise application and **Reader** access to your subscription. If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
+Confirm that you have **Reader** access to your subscription and appropriate permissions for the Project Synergy enterprise application:
+
+<!-- Must be kept in sync with provision-user-roles.md - steps for understanding and configuring -->
+* To view existing configuration: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Read**
+* To make changes to consents (which represent your relationships with enterprises) and numbers: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Write**
+
+If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
-If you're assigning new numbers to an enterprise customer:
+If you're uploading new numbers for an enterprise customer:
-* You must know the numbers you need to assign (as E.164 numbers). Each number must:
+* You must know the numbers you need to upload (as E.164 numbers). Each number must:
* Contain only digits (0-9), with an optional `+` at the start. * Include the country code. * Be up to 19 characters long. * You must have completed any internal procedures for assigning numbers.
-* You need to know the following information for each range of numbers.
+* You must know the following information for each number.
-|Information for each range of numbers |Notes |
+|Information for each number |Notes |
||| |Calling profile |One of the Calling Profiles created by Microsoft for you.| |Intended usage | Individuals (calling users), applications or conference calls.|
If you're assigning new numbers to an enterprise customer:
|Location | A description of the location for emergency calls. The enterprise must have configured this location in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.| |Whether the enterprise can update the civic address or location | If you don't allow the enterprise to update the civic address or location, you must specify a civic address or location. You can specify an address or location and also allow the enterprise to update it.| |Country | The country for the number. Only required if you're uploading a North American Toll-Free number, otherwise optional.|
-|Ticket number (optional) |The ID of any ticket or other request that you want to associate with this range of numbers. Up to 64 characters. |
+|Ticket number (optional) |The ID of any ticket or other request that you want to associate with this number. Up to 64 characters. |
+
+If you're uploading multiple numbers, prepare a `.csv` file with the heading `Numbers` and one number per line (up to 10,000 numbers), as in the following example. You can use this file to upload multiple numbers at once with the same settings (for example, the same calling profile).
+
+```
+Numbers
++441632960000
++441632960001
++441632960002
++441632960003
++441632960004
+```
+ ## Go to your Communications Gateway resource
If you're assigning new numbers to an enterprise customer:
## Select an enterprise customer to manage
-When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a **consent**. This consent represents the relationship between you and the enterprise.
+When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a *consent*. This consent represents the relationship between you and the enterprise.
The Number Management Portal allows you to update the status of these consents. Finding the consent for an enterprise is also the easiest way to manage numbers for an enterprise.
-1. From the overview page for your Communications Gateway resource, select **Consents** in the sidebar.
+1. From the overview page for your Communications Gateway resource, find the **Number Management** section in the sidebar. Select **Consents**.
1. Find the enterprise that you want to manage. 1. If you need to change the status of the relationship, select **Update Relationship Status** from the menu for the enterprise. Set the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent Declined** or **Contract Terminated**, you must provide a reason. ## Manage numbers for the enterprise
-Assigning numbers to an enterprise allows IT administrators at the enterprise to allocate those numbers to their users.
+Uploading numbers for an enterprise allows IT administrators at the enterprise to allocate those numbers to their users.
1. Go to the number management page for the enterprise. * If you followed [Select an enterprise customer to manage](#select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
- * Otherwise, select **Numbers** in the sidebar and search for the enterprise using the enterprise's Microsoft Entra tenant ID.
-1. To add new numbers for an enterprise:
+ * Otherwise, find the **Number Management** section in the sidebar and select **Numbers**. Search for the enterprise using the enterprise's Microsoft Entra tenant ID.
+1. To upload new numbers for an enterprise:
1. Select **Upload numbers**. 1. Fill in the fields based on the information you determined in [Prerequisites](#prerequisites). These settings apply to all the numbers you upload in the **Telephone numbers** section.
- 1. In **Telephone numbers**, upload the numbers, as a comma-separated list.
+ 1. In **Telephone numbers**, add the numbers:
+ * If you created a `.csv` file with multiple numbers as described in [Prerequisites](#prerequisites), select **Upload CSV file** and upload the file when prompted.
+ * Otherwise, select **Manual input** and add each number individually.
1. Select **Review + upload** and **Upload**. Uploading creates an order for uploading numbers over the Operator Connect API. 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers are available to the enterprise. You might need to refresh more than once. 1. To remove numbers from an enterprise: 1. Select the numbers. 1. Select **Release numbers**.
- 1. 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed.
+ 1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed.
## View civic addresses for an enterprise
You can view civic addresses for an enterprise. The enterprise configures the de
1. Go to the civic address page for the enterprise.
   * If you followed [Select an enterprise customer to manage](#select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
- * Otherwise, select **Civic addresses** in the sidebar and search for the enterprise using the enterprise's Microsoft Entra tenant ID.
+ * Otherwise, find the **Number Management** section in the sidebar and select **Civic addresses**. Search for the enterprise using the enterprise's Microsoft Entra tenant ID.
1. View the civic addresses. You can see the address, the company name, the description and whether the address was validated when the enterprise configured the address.
1. Optionally, select an individual address to view additional information provided by the enterprise (for example, the ELIN information).
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Previously updated : 11/06/2023 Last updated : 11/27/2023
Launching Operator Connect or Teams Phone Mobile requires you to use the Operato
For more information, see [Number Management Portal for provisioning with Operator Connect APIs](interoperability-operator-connect.md#number-management-portal-for-provisioning-with-operator-connect-apis) and [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
-The Number Management Portal is available as part of the optional API Bridge feature.
- > [!TIP] > The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
communications-gateway Provision User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md
Previously updated : 06/02/2023 Last updated : 11/27/2023 # Set up user roles for Azure Communications Gateway
Your staff might need different user roles, depending on the tasks they need to
| Deploying Azure Communications Gateway |**Contributor** access to your subscription|
| Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level|
|Monitoring logs and metrics | **Reader** access to your subscription|
-|Using the Number Management Portal| [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for the Project Synergy enterprise application and **Reader** access to your subscription|
+|Using the Number Management Portal| **Reader** access to your subscription and appropriate roles for the Project Synergy enterprise application: <!-- Must be kept in sync with step below for configuring and with manage-enterprise-operator-connect.md --><br> - To view existing configuration: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Read**<br>- To configure your relationship to an enterprise (a _consent_) and numbers: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Write**|
+
+> [!TIP]
+> To allow staff to manage consents in the Number Management Portal without managing numbers, assign the **NumberManagement.Read**, **TrunkManagement.Read** and **PartnerSettings.Write** roles.
## Configure user roles
You need to use the Azure portal to configure user roles.
### Assign a user role

1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [Understand the user roles required for Azure Communications Gateway](#understand-the-user-roles-required-for-azure-communications-gateway).
-1. If you're managing access to the Number Management Portal, follow [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md) to assign [!INCLUDE [project-synergy-nmp-permissions](includes/communications-gateway-nmp-project-synergy-permissions.md)] roles for each user in the Project Synergy application.
+1. If you're managing access to the Number Management Portal, follow [Assign users and groups to an application](/entra/identity/enterprise-apps/assign-user-or-group-access-portal?pivots=portal) to assign suitable roles for each user in the Project Synergy application.
+ <!-- Must be kept in sync with step 1 and with manage-enterprise-operator-connect.md -->
+ * To view existing configuration: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Read**
+ * To make changes to consents and numbers: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Write**
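If you also want to script the subscription-level **Reader** assignment rather than use the portal, a minimal Azure CLI sketch might look like the following (the user principal name and subscription ID are placeholders); the Project Synergy application roles themselves are still assigned through the enterprise application as described above.

```azurecli
# Hypothetical values: grant Reader on the subscription so the user can open the
# Number Management Portal from the Communications Gateway resource.
az role assignment create \
  --assignee "user@example.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>"
```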
## Next steps
confidential-computing Concept Skr Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/concept-skr-attestation.md
Example
Exact details about the type of key and other associated attributes can be found [here](../key-vault/general/quick-create-cli.md).

```azurecli
-az keyvault key create --exportable true --vault-name "vault name from step 1" --kty RSA-HSM --name "keyname" --policy "jsonpolicyfromstep3 -can be a path to JSON" --protection hsm --vault-name "name of vault created from step1"
+az keyvault key create --exportable true --vault-name "vault name from step 1" --kty RSA-HSM --name "keyname" --policy "jsonpolicyfromstep3 -can be a path to JSON"
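# Optional check, a sketch reusing the same placeholder names: inspect the key you
# just created to confirm it's exportable and that the release policy is attached.
az keyvault key show --vault-name "vault name from step 1" --name "keyname"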
```

### Step 4: Application running within a TEE doing a remote attestation
confidential-computing Secret Key Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/secret-key-management.md
For example, systems can be configured so that keys are only released once code
CVMs rely on virtual Trusted Platform Modules (vTPMs). You can read more about this in [Virtual TPMs in Azure](virtual-tpms-in-azure-confidential-vm.md).
-The [Azure Managed HSM](../key-vault/managed-hsm/overview.md) offering is [built on Confidential Computing technologies] (managed-hsm/managed-hsm-technical-details.md) and can be used to enhance access control of secrets & keys for an application.
+The [Azure Managed HSM](../key-vault/managed-hsm/overview.md) offering is [built on Confidential Computing technologies](../key-vault/managed-hsm/managed-hsm-technical-details.md) and can be used to enhance access control of secrets & keys for an application.
confidential-ledger Create Blob Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-blob-managed-app.md
The **Blob Storage Digest Backed by Confidential Ledger** Managed Application ca
## Deploying the managed application
-The Managed Application can be found in the Azure Marketplace here: [Blob Storage Digests Backed by Confidential Ledger (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/azureconfidentialledger.acl-blob-storage-preview/resourceGroupId//resourceGroupLocation//dontDiscardJourney~/false/_provisioningContext~/%7B%22initialValues%22%3A%7B%22subscriptionIds%22%3A%5B%22027da7f8-2fc6-46d4-9be9-560706b60fec%22%5D%2C%22resourceGroupNames%22%3A%5B%5D%2C%22locationNames%22%3A%5B%22eastus%22%5D%7D%2C%22telemetryId%22%3A%225be042b2-6422-4ee3-9457-4d6d96064009%22%2C%22marketplaceItem%22%3A%7B%22categoryIds%22%3A%5B%5D%2C%22id%22%3A%22Microsoft.Portal%22%2C%22itemDisplayName%22%3A%22NoMarketplace%22%2C%22products%22%3A%5B%5D%2C%22version%22%3A%22%22%2C%22productsWithNoPricing%22%3A%5B%5D%2C%22publisherDisplayName%22%3A%22Microsoft.Portal%22%2C%22deploymentName%22%3A%22NoMarketplace%22%2C%22launchingContext%22%3A%7B%22telemetryId%22%3A%225be042b2-6422-4ee3-9457-4d6d96064009%22%2C%22source%22%3A%5B%5D%2C%22galleryItemId%22%3A%22%22%7D%2C%22deploymentTemplateFileUris%22%3A%7B%7D%2C%22uiMetadata%22%3Anull%7D%7D).
+The Managed Application can be found in the Azure Marketplace here: [Blob Storage Digests Backed by Confidential Ledger (preview)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azureconfidentialledger.acl-blob-storage?tab=Overview).
### Resources to be created
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Azure Cosmos DB – Unified AI Database- description: Azure Cosmos DB is a global multi-model database and ideal database for AI applications requiring speed, elasticity and availability with native support for NoSQL and relational data.
You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/
> [!TIP] > To learn more about Azure Cosmos DB, join us every Thursday at 1PM Pacific on Azure Cosmos DB Live TV. See the [Upcoming session schedule and past episodes](https://gotcosmos.com/tv). - ## Key Benefits Here are some key benefits of using Azure Cosmos DB.
cosmos-db How To Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-scale-cluster.md
You can enable or disable high availability (HA) to suit your needs. HA avoids d
In this guide, we've shown that scaling and configuring your Cosmos DB for MongoDB vCore cluster in the Azure portal is a straightforward process. The Azure portal includes the ability to adjust the cluster tier, increase storage size, and enable or disable high availability without any downtime. > [!div class="nextstepaction"]
-> [Restore a Azure Cosmos DB for MongoDB vCore cluster](how-to-restore-cluster.md)
+> [Restore an Azure Cosmos DB for MongoDB vCore cluster](how-to-restore-cluster.md)
data-factory Airflow Create Private Requirement Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-create-private-requirement-package.md
When performing the import of your folder into an Airflow IR environment, ensure
### Step 6: In the Airflow UI, you can run the DAG file created in step 1 to check whether the import is successful.
-## Next steps
+## Related content
- [What is Azure Data Factory Managed Airflow?](concept-managed-airflow.md) - [Run an existing pipeline with Airflow](tutorial-run-existing-pipeline-with-airflow.md)
data-factory Airflow Get Ip Airflow Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-get-ip-airflow-cluster.md
For more information, see the below screenshots.
- To add managed Airflow Cluster IP address into Azure SQL Database, refer to [Configure Azure Key Vault firewalls and virtual networks](/azure/azure-sql/database/firewall-configure) - To add managed Airflow Cluster IP address into Azure PostgreSQL Database, refer to [Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal](/azure/postgresql/single-server/how-to-manage-firewall-using-portal)
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Airflow Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-pricing.md
Managed Airflow supports either small (D2v4) or large (D4v4) node sizing. Small
:::image type="content" source="media/airflow-pricing/airflow-pricing.png" alt-text="Shows a screenshot of a table of pricing options for Managed Airflow configuration.":::
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Changing password for Airflow environments](password-change-airflow.md)
data-factory Airflow Sync Github Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-sync-github-repository.md
Assuming your private package has already been auto synced via git-sync, all you
:::image type="content" source="media/airflow-git-sync-repository/airflow-private-package.png" alt-text="Screenshot showing the Airflow requirements section on the Airflow environment setup dialog that appears during creation of an Airflow IR.":::
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Apply Dataops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/apply-dataops.md
You can use the main search bar from the Azure Data Factory Studio to find data
:::image type="content" lightbox="media/apply-dataops/purview-search.png" source="media/apply-dataops/purview-search.png" alt-text="Screenshot showing Purview results from a search in the Azure Data Factory Studio search bar.":::
-## Next steps
+## Related content
- [Automated publishing for CI/CD in Azure Data Factory](continuous-integration-delivery-improvements.md) - [Source control in Azure Data Factory](source-control.md)
data-factory Apply Finops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/apply-finops.md
The pipeline-level view of your data factory bill is useful to attribute overall
Another mechanism for tracking and attributing costs for your data factory resource is to use [tagging in your factory](plan-manage-costs.md). You can assign the same tag to your data factory and other Azure resources, putting them into the same category to view their consolidated billing. All SSIS (SQL Server Integration Services) IRs within the factory inherit this tag. Keep in mind that if you change your data factory tag, you need to stop and restart all SSIS IRs within the factory for them to inherit the new tag. For more details, refer to the [reconfigure SSIS IR section](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir).
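As an illustration only, a tag could be applied to the factory from the Azure CLI with something like the following sketch (the resource group, factory name, and tag values are placeholders, not taken from this article):

```azurecli
# Hypothetical names: apply a cost-attribution tag to a data factory. Stop and
# restart any SSIS IRs in the factory afterwards so they inherit the new tag.
az resource tag \
  --resource-group "my-rg" \
  --name "my-data-factory" \
  --resource-type "Microsoft.DataFactory/factories" \
  --tags costCenter=finance
```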
-## Next steps
+## Related content
- [Plan to manage costs for Azure Data Factory](plan-manage-costs.md) - [Understanding Azure Data Factory pricing through examples](pricing-concepts.md)
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-global-parameters.md
We strongly recommend using the new mechanism of including global parameters in
-## Next steps
+## Related content
* Learn about Azure Data Factory's [continuous integration and deployment process](continuous-integration-delivery-improvements.md) * Learn how to use the [control flow expression language](control-flow-expression-language-functions.md)
data-factory Author Management Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-management-hub.md
Global parameters are constants across a data factory that can be consumed by a
:::image type="content" source="media/author-global-parameters/create-global-parameter-3.png" alt-text="Create global parameters":::
-## Next steps
+## Related content
Learn how to [configure a git repository](source-control.md) to your ADF
data-factory Author Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-visually.md
Select **Feedback** to comment about features or to notify Microsoft about issue
:::image type="content" source="media/author-visually/provide-feedback.png" alt-text="Feedback":::
-## Next steps
+## Related content
To learn more about monitoring and managing pipelines, see [Monitor and manage pipelines programmatically](monitor-programmatically.md).
data-factory Azure Integration Runtime Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-integration-runtime-ip-addresses.md
Allow traffic from the IP addresses listed for the Azure Integration runtime in
Instead, we suggest using [trusted services while connecting to Azure Storage](https://techcommunity.microsoft.com/t5/azure-data-factory/data-factory-is-now-a-trusted-service-in-azure-storage-and-azure/ba-p/964993).
-## Next steps
+## Related content
* [Security considerations for data movement in Azure Data Factory](data-movement-security-considerations.md)
data-factory Azure Ssis Integration Runtime Express Virtual Network Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-express-virtual-network-injection.md
Following our guidance in the [Configure an NSG](#nsg) section above, you must i
- If you need to access Azure Files, you must open port *445* for outbound TCP traffic with *0.0.0.0/0* or your Azure Files FQDN as destination.
-## Next steps
+## Related content
- [Join Azure-SSIS IR to a virtual network via ADF UI](join-azure-ssis-integration-runtime-virtual-network-ui.md) - [Join Azure-SSIS IR to a virtual network via Azure PowerShell](join-azure-ssis-integration-runtime-virtual-network-powershell.md)
data-factory Azure Ssis Integration Runtime Package Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-package-store.md
dtutil /SQL YourFolder\YourPackage3 /ENCRYPT FILE;Z:\YourFolder\YourPackage3.dts
If you've configured Azure-SSIS IR package stores on top of Azure Files, your deployed packages will appear in them when you connect to your Azure-SSIS IR on SSMS 2019 or later versions.
-## Next steps
+## Related content
You can rerun/edit the auto-generated ADF pipelines with Execute SSIS Package activities or create new ones on ADF portal. For more information, see [Run SSIS packages as Execute SSIS Package activities in ADF pipelines](./how-to-invoke-ssis-package-ssis-activity.md).
data-factory Azure Ssis Integration Runtime Standard Virtual Network Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-standard-virtual-network-injection.md
Make sure that the resource quota for your subscription is enough for these reso
If your data source is an Azure service, please check whether you've configured it with virtual network service endpoints. If that's the case, the traffic from Azure-SSIS IR to your data source will switch to use the private IP addresses managed by Azure services and adding your own static public IP addresses to the firewall's allowlist for your data source won't take effect.
-## Next steps
+## Related content
- [Join Azure-SSIS IR to a virtual network via ADF UI](join-azure-ssis-integration-runtime-virtual-network-ui.md) - [Join Azure-SSIS IR to a virtual network via Azure PowerShell](join-azure-ssis-integration-runtime-virtual-network-powershell.md)
data-factory Azure Ssis Integration Runtime Virtual Network Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-virtual-network-configuration.md
Using Azure portal, you can grant the user creating Azure-SSIS IR the necessary
:::image type="content" source="media/join-azure-ssis-integration-runtime-virtual-network/grant-virtual-network-permissions.png" alt-text="Grant virtual network permissions":::
-## Next steps
+## Related content
- [Express virtual network injection method](azure-ssis-integration-runtime-express-virtual-network-injection.md) - [Standard virtual network injection method](azure-ssis-integration-runtime-standard-virtual-network-injection.md)
data-factory Better Understand Different Integration Runtime Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/better-understand-different-integration-runtime-charges.md
In this example, the execution time of each HDInsight activity is rounded up to
:::image type="content" source="./media/integration-runtime-pricing/self-hosted-integration-runtime-example-3.png" alt-text="Screenshot of calculation formula for Self-hosted integration runtime example 3.":::
-## Next steps
+## Related content
Now that you understand the pricing for Azure Data Factory, you can get started!
data-factory Built In Preinstalled Components Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/built-in-preinstalled-components-ssis-integration-runtime.md
This article lists all built-in and preinstalled components, such as clients, dr
| **Built-in workflow tasks** | [Execute Package Task](/sql/integration-services/control-flow/execute-package-task)<br/><br/>[Execute Process Task](/sql/integration-services/control-flow/execute-process-task)<br/><br/>[Execute SQL Server Agent Job Task](/sql/integration-services/control-flow/execute-sql-server-agent-job-task)<br/><br/>[Expression Task](/sql/integration-services/control-flow/expression-task)<br/><br/>[Message Queue Task](/sql/integration-services/control-flow/message-queue-task)<br/><br/>[Send Mail Task](/sql/integration-services/control-flow/send-mail-task)<br/><br/>[WMI Data Reader Task](/sql/integration-services/control-flow/wmi-data-reader-task)<br/><br/>[WMI Event Watcher Task](/sql/integration-services/control-flow/wmi-event-watcher-task) | | **Preinstalled tasks ([Azure Feature Pack](/sql/integration-services/azure-feature-pack-for-integration-services-ssis))** | [Azure Blob Download Task](/sql/integration-services/control-flow/azure-blob-download-task)<br/><br/>[Azure Blob Upload Task](/sql/integration-services/control-flow/azure-blob-upload-task)<br/><br/>[Azure Data Lake Analytics Task](/sql/integration-services/control-flow/azure-data-lake-analytics-task)<br/><br/>[Azure Data Lake Store File System Task](/sql/integration-services/control-flow/azure-data-lake-store-file-system-task)<br/><br/>[Azure HDInsight Create Cluster Task](/sql/integration-services/control-flow/azure-hdinsight-create-cluster-task)<br/><br/>[Azure HDInsight Delete Cluster Task](/sql/integration-services/control-flow/azure-hdinsight-delete-cluster-task)<br/><br/>[Azure HDInsight Hive Task](/sql/integration-services/control-flow/azure-hdinsight-hive-task)<br/><br/>[Azure HDInsight Pig Task](/sql/integration-services/control-flow/azure-hdinsight-pig-task)<br/><br/>[Azure SQL Azure Synapse Analytics Upload Task](/sql/integration-services/control-flow/azure-sql-dw-upload-task)<br/><br/>[Flexible File Task](/sql/integration-services/control-flow/flexible-file-task) |
-## Next steps
+## Related content
To install additional custom/Open Source/3rd party components on your SSIS IR, follow the instructions in [Customize Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md).
data-factory Change Data Capture Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/change-data-capture-troubleshoot.md
SET IDENTITY_INSERT dbo.TableName ON;
Currently, the self-hosted integration runtime isn't supported in the CDC resource. If you're trying to connect to an on-premises source, use the Azure integration runtime with a managed virtual network.
-## Next steps
+## Related content
- [Learn more about the change data capture resource](concepts-change-data-capture-resource.md) - [Set up a change data capture resource](how-to-change-data-capture-resource.md)
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ci-cd-github-troubleshoot-guide.md
Dynamic content isn't written as per expression language requirements.
* For debug run, check expressions in pipeline within current git branch. * For Triggered run, check expressions in pipeline within *Live* mode.
-## Next steps
+## Related content
For more help with troubleshooting, try the following resources:
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
You create an Azure Function linked service and use it with the [Azure Function
| function key | Access key for the Azure Function. Click on the **Manage** section for the respective function, and copy either the **Function Key** or the **Host key**. Find out more here: [Azure Functions HTTP triggers and bindings](../azure-functions/functions-bindings-http-webhook-trigger.md#authorization-keys) | yes | | | | |
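As an alternative to copying the key from the portal's **Manage** section, you could retrieve it with the Azure CLI; the following is a sketch with placeholder names (the key still needs to be stored securely, for example in the linked service or Azure Key Vault):

```azurecli
# Hypothetical names: list the host-level keys for the function app. Function-level
# keys are available via "az functionapp function keys list" instead.
az functionapp keys list \
  --resource-group "my-rg" \
  --name "my-function-app"
```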
-## Next steps
+## Related content
For a list of the supported transformation activities, see [Transform data](transform-data.md).
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
You can install any provider package by editing the airflow environment from the
* DAGs that are inside a Blob Storage in a VNet or behind a firewall are currently not supported.
* Azure Key Vault isn't supported in LinkedServices to import DAGs.
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Concepts Annotations User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-annotations-user-properties.md
You can remove some from the view if you select the Bookmark sign:
![Screenshot showing how to remove User Properties.](./media/concepts-annotations-user-properties/remove-user-properties.png "Remove User Properties")
-## Next steps
+## Related content
To learn more about monitoring, see [Visually monitor Azure Data Factory](./monitor-visually.md).
data-factory Concepts Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md
When using Azure Synapse Analytics as target, the **Staging Settings** is availa
> [!NOTE] > We always use the last published configuration when starting a CDC. For running CDCs, while your data is being processed, you will be billed 4 v-cores of General Purpose Data Flows.
-## Next steps
+## Related content
- [Learn how to set up a change data capture resource](how-to-change-data-capture-resource.md). - [Learn how to set up a change data capture resource with schema evolution](how-to-change-data-capture-resource-with-schema-evolution.md).
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
The following are the templates to use the change data capture in Azure Data Fa
- [Replicate multiple objects from SAP via SAP CDC](solution-template-replicate-multiple-objects-sap-cdc.md)
-## Next steps
+## Related content
- [Learn how to use the checkpoint key in the data flow activity](control-flow-execute-data-flow-activity.md). - [Learn about the ADF Change Data Capture resource](concepts-change-data-capture-resource.md).
data-factory Concepts Data Flow Column Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-column-pattern.md
The above example matches on all subcolumns of complex column `a`. `a` contains
* `position` is the ordinal position of columns in your data flow * `origin` is the transformation where a column originated or was last updated
-## Next steps
+## Related content
* Learn more about the mapping data flows [expression language](data-transformation-functions.md) for data transformations * Use column patterns in the [sink transformation](data-flow-sink.md) and [select transformation](data-flow-select.md) with rule-based mapping
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-debug-mode.md
Selecting a column in your data preview tab and clicking **Statistics** in the d
:::image type="content" source="media/data-flow/stats.png" alt-text="Column statistics":::
-## Next steps
+## Related content
* Once you're finished building and debugging your data flow, [execute it from a pipeline.](control-flow-execute-data-flow-activity.md) * When testing your pipeline with a data flow, use the pipeline [Debug run execution option.](iterative-development-debugging.md)
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-expression-builder.md
Data flows process timestamps down to millisecond precision. For *2018-07-31T20:00:00.2170000*, you'll
In the portal for the service, the timestamp is shown in the **current browser setting**, which can drop the 217, but when you run the data flow end to end, the 217 (the milliseconds part) is processed as well. You can use toString(myDateTimeColumn) as an expression to see full-precision data in the preview. Process datetime as datetime, rather than string, for all practical purposes.
-## Next steps
+## Related content
[Begin building data transformation expressions.](data-transformation-functions.md)
data-factory Concepts Data Flow Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-manage-graph.md
If your data flow has any join, lookup, exists, or union transformations, data f
:::image type="content" source="media/data-flow/hide-reference-nodes.png" alt-text="Hide reference nodes":::
-## Next steps
+## Related content
After completing your data flow logic, turn on [debug mode](concepts-data-flow-debug-mode.md) and test it out in a data preview.
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Mapping data flows are available in the following regions in ADF:
| West US 2 | ✓ |
| West US 3 | ✓ |
-## Next steps
+## Related content
* Learn how to create a [source transformation](data-flow-source.md). * Learn how to build your data flows in [debug mode](concepts-data-flow-debug-mode.md).
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
On the pipeline, execute data flow activity under the "Sink Properties" section
You can use an [Azure Synapse database template](../synapse-analytics/database-designer/overview-database-templates.md) when creating a pipeline. When creating a new dataflow, in the source or sink settings, select **Workspace DB**. The database dropdown lists the databases created through the database template. The Workspace DB option is only available for new data flows; it's not available when you use an existing pipeline from the Synapse studio gallery.
-## Next steps
+## Related content
- [Data flow performance overview](concepts-data-flow-performance.md) - [Optimizing sources](concepts-data-flow-performance-sources.md)
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sinks.md
When you're writing to Azure Cosmos DB, altering throughput and batch size durin
**Write throughput budget:** Use a value that is smaller than the total RUs per minute. If you have a data flow with a high number of Spark partitions, setting a throughput budget allows more balance across those partitions.
-## Next steps
+## Related content
- [Data flow performance overview](concepts-data-flow-performance.md) - [Optimizing sources](concepts-data-flow-performance-sources.md)
data-factory Concepts Data Flow Performance Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sources.md
If possible, avoid using the For-Each activity to run data flows over a set of f
ADF and Synapse datasets are shared resources in your factories and workspaces. However, when you're reading large numbers of source folders and files with delimited text and JSON sources, you can improve the performance of data flow file discovery by setting the option "User projected schema" inside the Projection | Schema options dialog. This option turns off ADF's default schema autodiscovery and greatly improves the performance of file discovery. Before setting this option, make sure to import the projection so that ADF has an existing schema for projection. This option doesn't work with schema drift.
-## Next steps
+## Related content
- [Data flow performance overview](concepts-data-flow-performance.md) - [Optimizing sinks](concepts-data-flow-performance-sinks.md)
data-factory Concepts Data Flow Performance Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-transformations.md
If your data isn't evenly partitioned after a transformation, you can use the [o
> Transformations inside your data flow (with the exception of the Sink transformation) do not modify the file and folder partitioning of data at rest. Partitioning in each transformation repartitions data inside the data frames of the temporary serverless Spark cluster that ADF manages for each of your data flow executions.
-## Next steps
+## Related content
- [Data flow performance overview](concepts-data-flow-performance.md) - [Optimizing sources](concepts-data-flow-performance-sources.md)
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance.md
If you don't require every pipeline execution of your data flow activities to fu
:::image type="content" source="media/data-flow/logging.png" alt-text="Logging level":::
-## Next steps
+## Related content
- [Optimizing sources](concepts-data-flow-performance-sources.md) - [Optimizing sinks](concepts-data-flow-performance-sinks.md)
data-factory Concepts Data Flow Schema Drift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-schema-drift.md
In the generated Derived Column transformation, each drifted column is mapped to
:::image type="content" source="media/data-flow/map-drifted-2.png" alt-text="Screenshot shows the Derived Column's Settings tab.":::
-## Next steps
+## Related content
In the [Data Flow Expression Language](data-transformation-functions.md), you'll find additional facilities for column patterns and schema drift including "byName" and "byPosition".
data-factory Concepts Datasets Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-datasets-linked-services.md
Here are some differences between datasets in Data Factory current version (and
- The policy and availability properties aren't supported in the current version. The start time for a pipeline depends on [triggers](concepts-pipeline-execution-triggers.md).
- Scoped datasets (datasets defined in a pipeline) aren't supported in the current version.
-## Next steps
+## Related content
See the following tutorial for step-by-step instructions for creating pipelines and datasets by using one of these tools or SDKs. - [Quickstart: create a data factory using .NET](quickstart-create-data-factory-dot-net.md)
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
However, if most of your data flows execute in parallel, it is not recommended t
> [!NOTE] > Time to live is not available when using the auto-resolve integration runtime (default).
-## Next steps
+## Related content
See other Data Flow articles related to performance:
data-factory Concepts Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime.md
Data Flow activities are executed on their associated Azure integration runtime.
## Integration Runtime in CI/CD

Integration runtimes don't change often and are similar across all stages in your CI/CD. Data Factory requires you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a dedicated factory just to contain the shared integration runtimes. You can then use this shared factory in all of your environments as a linked integration runtime type.
-## Next steps
+## Related content
See the following articles:
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-linked-services.md
You can find the list of supported data stores in the [connector overview](copy-
Reference [compute environments supported](compute-linked-services.md) for details about different compute environments you can connect to from your service as well as the different configurations.
-## Next steps
+## Related content
- [Learn how to use credentials from a user-assigned managed identity in a linked service](credentials.md).
data-factory Concepts Nested Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-nested-activities.md
The child pipeline would look similar to the below example.
:::image type="content" source="media/concepts-pipelines-activities/nested-activity-execute-child-pipeline.png" alt-text="Screenshot showing an example child pipeline with a ForEach loop.":::
-## Next steps
+## Related content
See the following tutorials for step-by-step instructions for creating pipelines and datasets.
data-factory Concepts Parameters Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-parameters-variables.md
After defining a pipeline variable, you can access its value during a pipeline r
![Screenshot of variable definition.](./media/pipeline-parameter-variable-definition/variable-definition.png)
-## Next steps
+## Related content
See the following tutorials for step-by-step instructions for creating pipelines with activities: - [Build a pipeline with a copy activity](quickstart-create-data-factory-powershell.md)
data-factory Concepts Pipeline Execution Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipeline-execution-triggers.md
An event-based trigger runs pipelines in response to an event. There are two fla
For more information about event-based triggers, see [Storage Event Trigger](how-to-create-event-trigger.md) and [Custom Event Trigger](how-to-create-custom-event-trigger.md).
-## Next steps
+## Related content
See the following tutorials:
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipelines-activities.md
For example, say you have a Scheduler trigger, "Trigger A," that you wish to kick
} ```
-## Next steps
+## Related content
See the following tutorials for step-by-step instructions for creating pipelines with activities: - [Build a pipeline with a copy activity](quickstart-create-data-factory-powershell.md)
data-factory Concepts Roles Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-roles-permissions.md
Here are a few examples that demonstrate what you can achieve with custom roles:
Assign the built-in **contributor** role on the data factory resource for the user. This role lets the user see the resources in the Azure portal, but the user can't access the **Publish** and **Publish All** buttons.
-## Next steps
+## Related content
- Learn more about roles in Azure - [Understand role definitions](../role-based-access-control/role-definitions.md)
data-factory Configure Azure Ssis Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
You can also adjust the database pricing tier based on [database transaction uni
## Design for high performance

Designing an SSIS package to run on Azure is different from designing a package for on-premises execution. Instead of combining multiple independent tasks in the same package, separate them into several packages for more efficient execution in the Azure-SSIS IR. Create a package execution for each package, so that they don't have to wait for each other to finish. This approach benefits from the scalability of the Azure-SSIS integration runtime and improves the overall throughput.
-## Next steps
+## Related content
Learn more about the Azure-SSIS Integration Runtime. See [Azure-SSIS Integration Runtime](concepts-integration-runtime.md#azure-ssis-integration-runtime).
data-factory Configure Bcdr Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-bcdr-azure-ssis-integration-runtime.md
If a disaster occurs and impacts your existing Azure-SSIS IR but not Azure SQL D
1. Using [Azure portal/ADF UI](./create-azure-ssis-integration-runtime-portal.md) or [Azure PowerShell](./create-azure-ssis-integration-runtime-powershell.md), create your new ADF/Azure-SSIS IR named *YourNewADF*/*YourNewAzureSSISIR*, respectively, in another region. If you use Azure portal/ADF UI, you can ignore the test connection error on **Deployment settings** page of **Integration runtime setup** pane.
-## Next steps
+## Related content
You can consider these other configuration options for your Azure-SSIS IR:
data-factory Configure Outbound Allow List Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-outbound-allow-list-azure-policy.md
To apply policies to an Azure Data Factory instance, complete the following step
- For an individual Azure Data factory: 1,000 requests / 5 minutes. Only 1,000 activity runs can be executed in a 5-minute period. Subsequent run requests fail once this limit is reached. - For a subscription: 50,000 requests / 5 minutes. Only 50,000 activity runs can be executed in a 5-minute period per subscription. Subsequent run requests fail once this limit is reached.
-## Next steps
+## Related content
Check out the following article to learn more about the Azure security baseline:
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Once you connect the data factory to a Microsoft Purview account, when you execu
Once you connect the data factory to a Microsoft Purview account, you can use the search bar at the top center of Data Factory authoring UI to search for data and perform actions. Learn more from [Discover and explore data in ADF using Microsoft Purview](how-to-discover-explore-purview-data.md).
-## Next steps
+## Related content
[Tutorial: Push Data Factory lineage data to Microsoft Purview](tutorial-push-lineage-to-purview.md)
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md
To copy data from Amazon Marketplace Web Service, set the source type in the cop
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Amazon Rds For Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-oracle.md
You are suggested to enable parallel copy with data partitioning especially when
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Amazon Rds For Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-rds-for-sql-server.md
When you copy data from/to Amazon RDS for SQL Server with [Always Encrypted](/sq
5. Create a **rule for the Windows Firewall** on the machine to allow incoming traffic through this port. 6. **Verify connection**: To connect to Amazon RDS for SQL Server by using a fully qualified name, use Amazon RDS for SQL Server Management Studio from a different machine. An example is `"<machine>.<domain>.corp.<company>.com,1433"`.
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Amazon Redshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-redshift.md
When copying data from Amazon Redshift, the following mappings are used from Ama
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md
To learn details about the properties, check [GetMetadata activity](control-flow
To learn details about the properties, check [Delete activity](delete-activity.md).
-## Next steps
+## Related content
For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Amazon Simple Storage Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-simple-storage-service.md
To learn details about the properties, check [Delete activity](delete-activity.m
] ```
-## Next steps
+## Related content
For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Appfigures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-appfigures.md
source(allowSchemaDrift: true,
entityType: 'products') ~> AppFiguresSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Asana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-asana.md
source(allowSchemaDrift: true,
entityType: 'teams') ~> AsanaSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-blob-storage.md
Azure Data Factory can get new or changed files only from Azure Blob Storage by
.
-## Next steps
+## Related content
For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Cosmos Analytical Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-analytical-store.md
In the monitoring section, you always have the chance to rerun a pipeline. When
In addition, Azure Cosmos DB analytical store now supports Change Data Capture (CDC) for Azure Cosmos DB API for NoSQL and Azure Cosmos DB API for Mongo DB (public preview). Azure Cosmos DB analytical store allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store.
-## Next steps
+## Related content
Get started with [change data capture in Azure Cosmos DB analytical store ](../cosmos-db/get-started-change-data-capture.md).
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
After copy activity execution, below BSON ObjectId is generated in sink:
} ```
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
In addition, Azure Cosmos DB analytical store now supports Change Data Capture (
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-explorer.md
IncomingStream sink(allowSchemaDrift: true,
For more information about the properties, see [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
* For a list of data stores that the copy activity supports as sources and sinks, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-storage.md
When you debug the pipeline, the **Enable change data capture** works as well. B
In the monitoring section, you always have the chance to rerun a pipeline. When you are doing so, the changes are always gotten from the checkpoint record in your selected pipeline run.
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-data-lake-store.md
When you debug the pipeline, the **Enable change data capture (Preview)** works
In the monitoring section, you always have the chance to rerun a pipeline. When you're doing so, the changes are always gotten from the checkpoint record in your selected pipeline run.
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Database For Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mariadb.md
To copy data from Azure Database for MariaDB, the following properties are suppo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md
When copying data from Azure Database for MySQL, the following mappings are used
| `varchar` |`String` |
| `year` |`Int32` |
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Database For Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-postgresql.md
IncomingStream sink(allowSchemaDrift: true,
For more information about the properties, see [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Databricks Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-databricks-delta-lake.md
The same [copy activity monitoring experience](copy-activity-monitoring.md) is p
For more information about the properties, see [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by Copy activity, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-file-storage.md
To learn details about the properties, check [Delete activity](delete-activity.m
] ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-search.md
The following table specifies whether an Azure AI Search data type is supported
Currently, other data types (for example, ComplexType) are not supported. For a full list of data types supported by Azure AI Search, see [Supported data types (Azure AI Search)](/rest/api/searchservice/supported-data-types).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-data-warehouse.md
When you copy data from or to Azure Synapse Analytics, the following mappings ar
| varbinary | Byte[] |
| varchar | String, Char[] |
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by Copy Activity, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-database.md
derivedColumn1 sink(allowSchemaDrift: true,
* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
derivedColumn1 sink(allowSchemaDrift: true,
* Only **net changes** from SQL CDC will be loaded by ADF via [cdc.fn_cdc_get_net_changes_](/sql/relational-databases/system-functions/cdc-fn-cdc-get-net-changes-capture-instance-transact-sql?source=recommendations).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-table-storage.md
When you move data to and from Azure Table, the following [mappings defined by A
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-cassandra.md
The following tables show the virtual tables that renormalize the data from the
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
To copy data from Concur, set the source type in the copy activity to **ConcurSo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
To copy data from Couchbase, set the source type in the copy activity to **Couch
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Dataworld https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dataworld.md
source(allowSchemaDrift: true,
tableId: 'MyTable') ~> DataworldSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-db2.md
When copying data from DB2, the following mappings are used from DB2 data types
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-drill.md
To copy data from Drill, set the source type in the copy activity to **DrillSour
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md
To copy data from Dynamics AX, set the **source** type in Copy Activity to **Dyn
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
IncomingStream sink(allowSchemaDrift: true,
To learn details about the properties, see [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores that the copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
To learn details about the properties, check [Delete activity.](delete-activity.
] ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md
To learn details about the properties, check [Delete activity](delete-activity.m
] ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-github.md
The following properties are supported for the GitHub linked service.
| userName | GitHub username | yes | | password | GitHub password | yes |
-## Next steps
+## Related content
Create a [source dataset](data-flow-source.md) in mapping data flow.
data-factory Connector Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-adwords.md
To copy data from Google AdWords, set the source type in the copy activity to **
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Google Bigquery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-bigquery.md
To copy data from Google BigQuery, set the source type in the copy activity to *
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
To learn details about the properties, check [Delete activity](delete-activity.m
If you were using an Amazon S3 connector to copy data from Google Cloud Storage, it's still supported as is for backward compatibility. We suggest that you use the new model mentioned earlier. The authoring UI has switched to generating the new model.
-## Next steps
+## Related content
For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Google Sheets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-sheets.md
source(allowSchemaDrift: true,
sheetName: 'Sheet1') ~> GoogleSheetsSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-greenplum.md
To copy data from Greenplum, set the source type in the copy activity to **Green
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md
To copy data from HBase, set the source type in the copy activity to **HBaseSour
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md
For information about Delete activity properties, see [Delete activity](delete-a
} ```
-## Next steps
+## Related content
For a list of data stores that are supported as sources and sinks by the Copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md
source(
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-http.md
To learn details about the properties, check [Lookup activity](control-flow-look
] ```
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hubspot.md
To copy data from HubSpot, set the source type in the copy activity to **Hubspot
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-impala.md
To copy data from Impala, set the source type in the copy activity to **ImpalaSo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-informix.md
To copy data to Informix, the following properties are supported in the copy act
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-jira.md
To copy data from Jira, set the source type in the copy activity to **JiraSource
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-magento.md
To copy data from Magento, set the source type in the copy activity to **Magento
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mariadb.md
To copy data from MariaDB, set the source type in the copy activity to **MariaDB
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-marketo.md
To copy data from Marketo, set the source type in the copy activity to **Marketo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-access.md
To copy data to Microsoft Access, the following properties are supported in the
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
sink(allowSchemaDrift: true,
```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
To achieve such schema-agnostic copy, skip the "structure" (also called *schema*
To copy data from MongoDB Atlas to a tabular sink or the reverse, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Mongodb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-legacy.md
The following tables show the virtual tables that represent the original arrays
| 2222 |0 |1 | | 2222 |1 |2 |
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md
Here are steps that help you upgrade your linked service and related queries:
| `SELECT employees.name, departments.name AS department_name FROM employees LEFT JOIN departments ON employees.department_id = departments.id;`|`db.employees.aggregate([ { $lookup: { from: "departments", localField: "department_id", foreignField: "_id", as: "department" } }, { $unwind: "$department" }, { $project: { _id: 0, name: 1, department_name: "$department.name" } } ])` |
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
When copying data from MySQL, the following mappings are used from MySQL data ty
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-netezza.md
We recommend that you enable parallel copy with data partitioning, especially when
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Odata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odata.md
Project Online requires user-based OAuth, which is not supported by Azure Data F
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md
To troubleshoot connection issues, use the **Diagnostics** tab of **Integration
4. Specify the **connection string** that is used to connect to the data store, choose the **authentication** type, and enter the **user name**, **password**, and/or **credentials**. 5. Click **Test connection** to test the connection to the data store.
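For orientation, a minimal sketch of how those same settings might appear in an ODBC linked service definition; the driver, server, credential, and integration runtime names below are placeholders, not values from the article:

```json
{
    "name": "OdbcLinkedService",
    "properties": {
        "type": "Odbc",
        "typeProperties": {
            "connectionString": "Driver={SQL Server};Server=<server>;Database=<database>;",
            "authenticationType": "Basic",
            "userName": "<user name>",
            "password": {
                "type": "SecureString",
                "value": "<password>"
            }
        },
        "connectVia": {
            "referenceName": "<self-hosted IR name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```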
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
To create a mapping data flow using the Microsoft 365 connector as a source, com
6. On the tab **Data preview** click on the **Refresh** button to fetch a sample dataset for validation.
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Oracle Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md
To learn details about the properties, check [GetMetadata activity](control-flow
To learn details about the properties, check [Delete activity](delete-activity.md).
-## Next steps
+## Related content
For a list of data stores that the Copy activity supports as sources and sinks, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-eloqua.md
To copy data from Oracle Eloqua, set the source type in the copy activity to **E
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of supported data stores in the service, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Oracle Responsys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-responsys.md
To copy data from Oracle Responsys, set the source type in the copy activity to
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-service-cloud.md
To copy data from Oracle Service Cloud, set the source type in the copy activity
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle.md
When you copy data from and to Oracle, the following interim data type mappings
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
The following file formats are supported. Refer to each article for format-based
- [Parquet format](format-parquet.md) - [XML format](format-xml.md)
-## Next steps
+## Related content
- [Copy activity](copy-activity-overview.md) - [Mapping Data Flow](concepts-data-flow-overview.md)
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-paypal.md
To copy data from PayPal, set the source type in the copy activity to **PayPalSo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md
To copy data from Phoenix, set the source type in the copy activity to **Phoenix
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
If you were using `RelationalSource` typed source, it is still supported as-is,
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-presto.md
To copy data from Presto, set the source type in the copy activity to **PrestoSo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Quickbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbase.md
source(allowSchemaDrift: true,
report: 'Report') ~> Quickbasesource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
The Copy Activity in the service cannot copy data directly from Quickbooks Deskt
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
You can use this REST connector to export REST API JSON response as-is to variou
To copy data from a REST endpoint to a tabular sink, refer to [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping).
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks in Azure Data Factory, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-marketing-cloud.md
To copy data from Salesforce Marketing Cloud, set the source type in the copy ac
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Salesforce Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-service-cloud.md
When you copy data from Salesforce Service Cloud, the following mappings are use
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce.md
When you copy data from Salesforce, the following mappings are used from Salesfo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md
To learn details about the properties, check [Lookup activity](control-flow-look
**Resolution:** Disable "SAP HANA Execution" option in DTP, reprocess the data, then try executing the copy activity again.
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md
When copying data from SAP BW, the following mappings are used from SAP BW data
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sap Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-change-data-capture.md
If **Run mode** is set to **Full on every run** or **Full on the first run, then
If partitions are equally sized, source partitioning can linearly increase the throughput of data extraction. To achieve such performance improvements, sufficient resources are required in the SAP source system, the virtual machine hosting the self-hosted integration runtime, and the Azure integration runtime.
-## Next steps
+## Related content
- [Overview and architecture of the SAP CDC capabilities](sap-change-data-capture-introduction-architecture.md) - [Replicate multiple objects from SAP via SAP CDC](solution-template-replicate-multiple-objects-sap-cdc.md)
data-factory Connector Sap Cloud For Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-cloud-for-customer.md
When copying data from SAP Cloud for Customer, the following mappings are used f
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md
When you're copying data from SAP ECC, the following mappings are used from ODat
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of the data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md
Follow the [Prerequisites](#prerequisites) to set up Self-hosted Integration Run
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md
When you're copying data from an SAP table, the following mappings are used from
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of the data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md
A ServiceNow table index can help improve query performance; refer to [Create a ta
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sftp.md
For information about Delete activity properties, see [Delete activity](delete-a
] ```
-## Next steps
+## Related content
For a list of data stores that are supported as sources and sinks by the Copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sharepoint-online-list.md
You can copy a file from SharePoint Online by using the **Web activity** to authentica
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores that Copy Activity supports as sources and sinks, see [Supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
To copy data from Shopify, set the source type in the copy activity to **Shopify
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Smartsheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-smartsheet.md
source(allowSchemaDrift: true,
entityType: 'sheets') ~> SmartsheetSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
By setting the pipeline Logging Level to None, we exclude the transmission of in
For more information about the properties, see [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by Copy activity, see [supported data stores and formats](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-spark.md
To copy data from Spark, set the source type in the copy activity to **SparkSour
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sql-server.md
derivedColumn1 sink(allowSchemaDrift: true,
5. Create a **rule for the Windows Firewall** on the machine to allow incoming traffic through this port. 6. **Verify connection**: To connect to SQL Server by using a fully qualified name, use SQL Server Management Studio from a different machine. An example is `"<machine>.<domain>.corp.<company>.com,1433"`.
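As a hedged sketch of how the fully qualified name and port 1433 carry over into the SQL Server linked service, the connection string might look like the following; the machine, domain, database, credential, and integration runtime names are placeholders:

```json
{
    "name": "SqlServerLinkedService",
    "properties": {
        "type": "SqlServer",
        "typeProperties": {
            "connectionString": "Data Source=<machine>.<domain>.corp.<company>.com,1433;Initial Catalog=<database>;Integrated Security=False;User ID=<user>;Password=<password>;"
        },
        "connectVia": {
            "referenceName": "<self-hosted IR name>",
            "type": "IntegrationRuntimeReference"
        }
    }
}
```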
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Square https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-square.md
To copy data from Square, set the source type in the copy activity to **SquareSo
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sybase.md
To learn details about the properties, check [Lookup activity](control-flow-look
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Teamdesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teamdesk.md
source(allowSchemaDrift: true,
view: 'View') ~> TeamDesksource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teradata.md
When you copy data from Teradata, the following mappings apply from Teradata's d
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Troubleshoot Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-blob-storage.md
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: For more information about connection errors in the public endpoint, see [Connection error in public endpoint](security-and-access-control-troubleshoot-guide.md#connection-error-in-public-endpoint).
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-cosmos-db.md
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
- **Recommendation**: Check your Azure Cosmos DB partition design. For more information, see [Logical partitions](../cosmos-db/partitioning-overview.md#logical-partitions).
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-explorer.md
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: For transient failures, set retries for the activity. For permanent failures, check your configuration and contact support.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-lake.md
This article provides suggestions to troubleshoot common problems with the Azure
1. If you use service principal or managed identity authentication, grant the service principal or managed identity the appropriate permissions to copy data: for the source, at least the **Storage Blob Data Reader** role; for the sink, at least the **Storage Blob Data Contributor** role. For more information, see [Copy and transform data in Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#service-principal-authentication).
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-files.md
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: To check the error details, see [Azure Files help](/rest/api/storageservices/file-service-error-codes). For further help, contact the Azure Files team.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-table-storage.md
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Double-check and fix the source columns, as necessary.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-db2.md
This article provides suggestions to troubleshoot common problems with the Azure
- **Recommendation**: Try to set "NULLID" in the `packageCollection` property.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-delimited-text.md
This article provides suggestions to troubleshoot common problems with the delim
| If the expected column count is "1" in an error message, you might have specified the wrong compression or format settings, which caused the files to be parsed incorrectly. | Check the format settings to make sure they match your source files. | | If your source is a folder, the files under the specified folder might have a different schema. | Make sure that the files in the specified folder have an identical schema. |
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Dynamics Dataverse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-dynamics-dataverse.md
This article provides suggestions to troubleshoot common problems with the Dynam
- **Recommendation**: Enable the staging and retry.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-file-system.md
This article provides suggestions to troubleshoot common problems with the file
- **Recommendation**: Using the command line from [Set up an existing self-hosted IR via local PowerShell](create-self-hosted-integration-runtime.md#set-up-an-existing-self-hosted-ir-via-local-powershell), you can allow or disallow local SHIR file system access.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
This article provides suggestions to troubleshoot common problems with the FTP,
- **Recommendation**: For more information about HTTP status code, see this [document](/troubleshoot/developer/webapps/iis/www-administration-management/http-status-code).
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Google Adwords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-google-adwords.md
This article provides suggestions to troubleshoot common problems with the Googl
2. The syntax for Google Ads query language is similar to AWQL from the AdWords API, but not identical. Refer to this [document](https://developers.google.com/google-ads/api/docs/migration/querying) for more details.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md
The errors below are general to the copy activity and could occur with any conne
- **Recommendation**: Remove the parameters in the referenced linked service to eliminate the error. Otherwise, run the pipeline without testing the connection or previewing data.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-hive.md
This article provides suggestions to troubleshoot common problems with the Hive
3. Edit the **krb5.ini** file. 4. Shut down and restart the VM and the SHIR from the machine.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-mongodb.md
This article provides suggestions to troubleshoot common problems with the Mongo
- **Resolution**: Upgrade your MongoDB linked service to the latest version. Refer to this [article](connector-mongodb.md#upgrade-the-mongodb-linked-service).
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-oracle.md
This article provides suggestions to troubleshoot common problems with the Oracl
To learn the byte sequence in the result, see [How are dates stored in Oracle?](https://stackoverflow.com/questions/13568193/how-are-dates-stored-in-oracle).
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Orc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-orc.md
This article provides suggestions to troubleshoot common problems with the ORC f
- **Recommendation**: Check the ticks value and avoid using the datetime value '0001-01-01 00:00:00'.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-parquet.md
This article provides suggestions to troubleshoot common problems with the Parqu
- **Resolution**: Try to generate smaller files (size < 1G) with a limitation of 1000 rows per file.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-postgresql.md
This article provides suggestions to troubleshoot common problems with the Azure
- **Cause**: No partition column name is provided, and it couldn't be determined automatically.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-rest.md
This article provides suggestions to troubleshoot common problems with the REST
Tools like **Postman** and **Fiddler** are recommended for the preceding case.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-sap.md
This article provides suggestions to troubleshoot common problems with the SAP T
```
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-sharepoint-online-list.md
You need to enable ACS to acquire the access token. Take the following steps:
1. Use ACS to get the access token.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-snowflake.md
The copy activity fails with the following error when using Snowflake as sink:<b
- Direct copy: Make sure to grant access permission to Snowflake in the other source/sink. Currently, only Azure Blob Storage that uses shared access signature authentication is supported as source or sink. When you generate the shared access signature, make sure to set the allowed permissions and IP addresses to Snowflake in the Azure Blob Storage. For more information, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token). - Staged copy: The staging Azure Blob Storage linked service must use shared access signature authentication. When you generate the shared access signature, make sure to set the allowed permissions and IP addresses to Snowflake in the staging Azure Blob Storage. For more information, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
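As a rough illustration of the staged-copy pattern described above, the copy activity might reference a SAS-authenticated Blob Storage linked service in its staging settings along these lines; the activity, linked service, and path names are placeholders:

```json
{
    "name": "CopyToSnowflake",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "<source type>" },
        "sink": { "type": "SnowflakeSink" },
        "enableStaging": true,
        "stagingSettings": {
            "linkedServiceName": {
                "referenceName": "<Blob Storage linked service using SAS authentication>",
                "type": "LinkedServiceReference"
            },
            "path": "<container>/<staging path>"
        }
    }
}
```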
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
This article provides suggestions to troubleshoot common problems with the Azure
2. Otherwise, enable public network access by setting the **Public network access** option to **Selected networks** on the Azure SQL Database **Networking** settings page.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Troubleshoot Xml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-xml.md
This article provides suggestions to troubleshoot common problems with the XML f
- **Recommendation**: Correct the XML file to make it well formed.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Connector Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-twilio.md
source(allowSchemaDrift: true,
from: '+17755425856') ~> TwilioSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md
To copy data from Vertica, set the source type in the copy activity to **Vertica
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-web-table.md
If you are using Excel 2013, use [Microsoft Power Query for Excel](https://www.m
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md
The following tables can only be queried with a complete schema:
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of supported data stores by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Zendesk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zendesk.md
source(allowSchemaDrift: true,
entity: 'tickets') ~> ZendeskSource ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zoho.md
To copy data from Zoho, set the source type in the copy activity to **ZohoSource
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Continuous Integration Delivery Automate Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md
The data factory team has provided a [sample pre- and post-deployment script](co
>[!WARNING] >Make sure to use **PowerShell Core** in the ADO task to run the script.
-## Next steps
+## Related content
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Manually promote a Resource Manager template to each environment](continuous-integration-delivery-manual-promotion.md)
data-factory Continuous Integration Delivery Automate Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-github-actions.md
Let's test the setup by making some changes in the development Data Factory in
4. You can also navigate to the target Data Factory instance to which you deployed changes to and make sure it reflects the latest changes.
-## Next steps
+## Related content
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Manually promote a Resource Manager template to each environment](continuous-integration-delivery-manual-promotion.md)
data-factory Continuous Integration Delivery Hotfix Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-hotfix-environment.md
See the video below for an in-depth video tutorial on how to hot-fix your environmen
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4I7fi]
-## Next steps
+## Related content
- [Automated publishing for continuous integration and delivery](continuous-integration-delivery-improvements.md) - [Continuous integration and delivery overview](continuous-integration-delivery.md)
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
> [!NOTE] > The generated artifacts already contain pre- and post-deployment scripts for the triggers, so it isn't necessary to add them manually. However, when deploying, you still need to reference the [documentation on stopping and starting triggers](continuous-integration-delivery-sample-script.md#script-execution-and-parameters) to execute the provided script.
-## Next steps
+## Related content
Learn more information about continuous integration and delivery in Data Factory: [Continuous integration and delivery in Azure Data Factory](continuous-integration-delivery.md).
data-factory Continuous Integration Delivery Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-linked-templates.md
If you don't have Git configured, you can access the linked templates via **Expo
When deploying your resources, you specify that the deployment is either an incremental update or a complete update. The difference between these two modes is how Resource Manager handles existing resources in the resource group that aren't in the template. Review [Deployment Modes](../azure-resource-manager/templates/deployment-modes.md).
-## Next steps
+## Related content
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Continuous Integration Delivery Manual Promotion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-manual-promotion.md
Use the steps below to promote a Resource Manager template to each environment f
:::image type="content" source="media/continuous-integration-delivery/continuous-integration-image5.png" alt-text="Settings section":::
-## Next steps
+## Related content
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
The following example shows how to add a single value to the default parameteriz
} ```
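For orientation only, here is a minimal sketch of the kind of entry that can be added to `arm-template-parameters-definition.json`; the property names (`waitTimeInSeconds`, `headers`) are illustrative assumptions, not taken from the truncated example above:

```json
{
    "Microsoft.DataFactory/factories/pipelines": {
        "properties": {
            "activities": [
                {
                    "typeProperties": {
                        "waitTimeInSeconds": "-::int",
                        "headers": "=::object"
                    }
                }
            ]
        }
    }
}
```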
-## Next steps
+## Related content
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
The following YAML code executes a script that can be used to stop triggers befo
workingDirectory: ../ ```
-## Next steps
+## Related content
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
If you're using Git integration with your data factory and have a CI/CD pipeline
**No action is required unless you're using 'PartialArmTemplates'. If you are, switch to any supported mechanism for deployments using the 'ARMTemplateForFactory.json' or 'linkedTemplates' files.**
-## Next steps
+## Related content
- [Continuous deployment improvements](continuous-integration-delivery-improvements.md#continuous-deployment-improvements) - [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-append-variable-activity.md
Type | Activity Type is AppendVariable | Yes
Value | String literal or expression object value used to append to the specified variable | Yes VariableName | Name of the variable that is modified by the activity; the variable must be of type 'Array' | Yes
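For illustration, a minimal Append Variable activity definition might look like the following sketch; the activity name, variable name, and appended value are hypothetical:

```json
{
    "name": "AppendProcessedFile",
    "type": "AppendVariable",
    "typeProperties": {
        "variableName": "processedFiles",
        "value": {
            "value": "@item().name",
            "type": "Expression"
        }
    }
}
```

Here `processedFiles` is assumed to be a pipeline variable of type 'Array', and the activity is assumed to run inside a ForEach loop so that `@item()` resolves.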
-## Next steps
+## Related content
Learn about a related control flow activity: - [Set Variable Activity](control-flow-set-variable-activity.md)
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-azure-function-activity.md
Learn more about Durable Functions in [this article](../azure-functions/durable/
You can find a sample that uses an Azure Function to extract the content of a tar file [here](https://github.com/Azure/Azure-DataFactory/tree/master/SamplesV2/UntarAzureFilesWithAzureFunction).
-## Next steps
+## Related content
Learn more about supported activities in [Pipelines and activities](concepts-pipelines-activities.md).
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
To get the number of rows read from a source named 'source1' that was used in th
> [!NOTE] > If a sink has zero rows written, it won't show up in metrics. Existence can be verified using the `contains` function. For example, `contains(activity('dataflowActivity').output.runStatus.metrics, 'sink1')` checks whether any rows were written to sink1.
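As a sketch of how that check might be used, assuming a Data Flow activity named 'dataflowActivity' and a sink named 'sink1' as in the example above, an If Condition activity could branch on the result:

```json
{
    "name": "IfSink1WroteRows",
    "type": "IfCondition",
    "typeProperties": {
        "expression": {
            "value": "@contains(activity('dataflowActivity').output.runStatus.metrics, 'sink1')",
            "type": "Expression"
        },
        "ifTrueActivities": [],
        "ifFalseActivities": []
    }
}
```

Populate `ifTrueActivities` and `ifFalseActivities` with whatever downstream handling you need.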
-## Next steps
+## Related content
See supported control flow activities:
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-pipeline-activity.md
The master pipeline forwards these values to the invoked pipeline as shown in th
> [!WARNING] > The Execute Pipeline activity passes an array parameter as a string to the child pipeline. This is because the payload is passed from the parent pipeline to the child as a string. We can see it when we check the input passed to the child pipeline. Please check this [section](./data-factory-troubleshoot-guide.md#execute-pipeline-passes-array-parameter-as-string-to-the-child-pipeline) for more details.
-## Next steps
+## Related content
See other supported control flow activities: - [For Each Activity](control-flow-for-each-activity.md)
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
And returns this result: `"Paris"`
> [!NOTE] > You can add comments to data flow expressions, but not to pipeline expressions.
-## Next steps
+## Related content
For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
data-factory Control Flow Fail Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-fail-activity.md
The dynamic content in both `message` and `errorCode` can't be interpreted. | "F
\* This situation shouldn't occur if the pipeline is developed with the web user interface (UI) of Data Factory.
-## Next steps
+## Related content
See other supported control flow activities, including:
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-filter-activity.md
In this example, the pipeline has two activities: **Filter** and **ForEach**. Th
} ```
-## Next steps
+## Related content
See other supported control flow activities: - [If Condition Activity](control-flow-if-condition-activity.md)
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-for-each-activity.md
Here are some limitations of the ForEach activity and suggested workarounds.
| SetVariable can't be used inside a ForEach activity that runs in parallel, because variables are global to the whole pipeline; they aren't scoped to a ForEach or any other activity. | Consider using a sequential ForEach, or use Execute Pipeline inside ForEach (with the variable/parameter handled in the child pipeline), as sketched below.| | | |
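A minimal sketch of the sequential workaround, assuming a pipeline array parameter named 'items' and a pipeline variable named 'currentItem':

```json
{
    "name": "ForEachItemSequential",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": true,
        "items": {
            "value": "@pipeline().parameters.items",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "SetCurrentItem",
                "type": "SetVariable",
                "typeProperties": {
                    "variableName": "currentItem",
                    "value": {
                        "value": "@item()",
                        "type": "Expression"
                    }
                }
            }
        ]
    }
}
```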
-## Next steps
+## Related content
See other supported control flow activities: - [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md)
data-factory Control Flow Get Metadata Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-get-metadata-activity.md
The Get Metadata results are shown in the activity output. Following are two sam
} ```
-## Next steps
+## Related content
Learn about other supported control flow activities: - [Execute Pipeline activity](control-flow-execute-pipeline-activity.md)
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-if-condition-activity.md
Write-Host "\nActivity 'Error' section:" -foregroundcolor "Yellow"
$result.Error -join "`r`n" ```
-## Next steps
+## Related content
See other supported control flow activities: - [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md)
data-factory Control Flow Lookup Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-lookup-activity.md
Here are some limitations of the Lookup activity and suggested workarounds.
| The Lookup activity has a maximum of 5,000 rows and a maximum size of 4 MB. | Design a two-level pipeline where the outer pipeline iterates over an inner pipeline, which retrieves data that doesn't exceed the maximum rows or size (see the sketch below). | | | |
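A sketch of the inner call from the outer pipeline's iteration, assuming a child pipeline named 'InnerLookupPipeline' with a 'partitionId' parameter (both names hypothetical):

```json
{
    "name": "RunInnerLookupPipeline",
    "type": "ExecutePipeline",
    "typeProperties": {
        "pipeline": {
            "referenceName": "InnerLookupPipeline",
            "type": "PipelineReference"
        },
        "waitOnCompletion": true,
        "parameters": {
            "partitionId": {
                "value": "@item()",
                "type": "Expression"
            }
        }
    }
}
```

Each inner run then performs a Lookup over a slice of data that stays under the row and size limits.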
-## Next steps
+## Related content
See other control flow activities supported by Azure Data Factory and Synapse pipelines: - [Execute Pipeline activity](control-flow-execute-pipeline-activity.md)
data-factory Control Flow Power Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-power-query-activity.md
You have the option to sink your output to multiple destinations. Click on the p
In the Mapping tab, you can configure column mapping from the output of your Power Query activity to the target schema of your chosen sink. Read more about column mapping from the [data flow sink mapping documentation](data-flow-sink.md#field-mapping).
-## Next steps
+## Related content
Learn more about data wrangling concepts using [Power Query in Azure Data Factory](wrangling-tutorial.md)
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md
A common scenario involving variables is to use a variable as an iterator within
Variables are scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity.
-## Next steps
+## Related content
Learn about another related control flow activity: - [Append Variable Activity](control-flow-append-variable-activity.md)
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-switch-activity.md
Write-Host "\nActivity 'Error' section:" -foregroundcolor "Yellow"
$result.Error -join "`r`n" ```
-## Next steps
+## Related content
See other control flow activities supported by Data Factory:
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-system-variables.md
These system variables can be referenced anywhere in the trigger JSON for trigge
| @triggerBody().event.data._keyName_ | The data field in a custom event is a free-form JSON blob, which customers can use to send messages and data. Use data._keyName_ to reference each field. For example, @triggerBody().event.data.callback returns the value for the _callback_ field stored under _data_. | | @trigger().startTime | Time at which the trigger fired to invoke the pipeline run. |
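As a hedged sketch, a custom event trigger definition could map such fields to pipeline parameters; the trigger name, pipeline name, parameter name, event type, and Event Grid topic scope below are all hypothetical:

```json
{
    "name": "CustomEventTrigger1",
    "properties": {
        "type": "CustomEventsTrigger",
        "typeProperties": {
            "scope": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.EventGrid/topics/<topicName>",
            "events": ["MyCustomEventType"]
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "ProcessEventPipeline",
                    "type": "PipelineReference"
                },
                "parameters": {
                    "callbackUrl": "@triggerBody().event.data.callback"
                }
            }
        ]
    }
}
```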
-## Next steps
+## Related content
* For information about how these variables are used in expressions, see [Expression language & functions](control-flow-expression-language-functions.md). * To use trigger scope system variables in pipeline, see [Reference trigger metadata in pipeline](how-to-use-trigger-parameterization.md)
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
while ($True) {
} ```
-## Next steps
+## Related content
See other supported control flow activities: - [If Condition Activity](control-flow-if-condition-activity.md)
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-validation-activity.md
To use a Validation activity in a pipeline, complete the following steps:
|minimumSize | Minimum size of a file in bytes. If no value is specified, the default value is 0 bytes. | Integer | No |
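Putting those properties together, a minimal Validation activity might look like this sketch; the dataset name and the timeout, sleep, and size thresholds are illustrative:

```json
{
    "name": "WaitForInputFile",
    "type": "Validation",
    "typeProperties": {
        "dataset": {
            "referenceName": "InputFileDataset",
            "type": "DatasetReference"
        },
        "timeout": "0.00:10:00",
        "sleep": 30,
        "minimumSize": 1024
    }
}
```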
-## Next steps
+## Related content
See other supported control flow activities: - [If Condition Activity](control-flow-if-condition-activity.md)
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-wait-activity.md
In this example, the pipeline has two activities: **Until** and **Wait**. The Wa
```
-## Next steps
+## Related content
See other supported control flow activities: - [If Condition Activity](control-flow-if-condition-activity.md)
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
public HttpResponseMessage Execute(JObject payload)
```
-## Next steps
+## Related content
See other supported control flow activities: - [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md)
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-webhook-activity.md
When you use the **Report status on callback** property, you must add the follow
} ```
-## Next steps
+## Related content
See the following supported control flow activities:
data-factory Copy Activity Data Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-data-consistency.md
From the log file above, you can see sample1.csv has been skipped because it fai
-## Next steps
+## Related content
See the other Copy Activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-fault-tolerance.md
data1, data2, data3, "UserErrorInvalidDataValue", "Column 'Prop_2' contains an i
data4, data5, data6, "2627", "Violation of PRIMARY KEY constraint 'PK_tblintstrdatetimewithpk'. Cannot insert duplicate key in object 'dbo.tblintstrdatetimewithpk'. The duplicate key value is (data4)." ```
-## Next steps
+## Related content
See the other copy activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-log.md
select top 1 OperationItem, CopyDuration=DATEDIFF(SECOND, min(TIMESTAMP), max(TI
```
-## Next steps
+## Related content
See the other Copy Activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-monitoring.md
Copy activity execution details and performance characteristics are also returne
} ```
-## Next steps
+## Related content
See the other Copy Activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
When you move data from source to destination store, copy activity provides an o
## Session log You can log your copied file names, which can help you further verify that the data is not only successfully copied from the source to the destination store, but also consistent between the source and destination stores, by reviewing the copy activity session logs. See [Session log in copy activity](copy-activity-log.md) for details.
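As a rough sketch, session logging is configured on the copy activity through its log settings; the linked service name, log path, and the binary source and sink shown here are assumptions for illustration:

```json
{
    "name": "CopyWithSessionLog",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "BinarySource" },
        "sink": { "type": "BinarySink" },
        "logSettings": {
            "enableCopyActivityLog": true,
            "copyActivityLogSettings": {
                "logLevel": "Warning",
                "enableReliableLogging": false
            },
            "logLocationSettings": {
                "linkedServiceName": {
                    "referenceName": "LogStorageLinkedService",
                    "type": "LinkedServiceReference"
                },
                "path": "copylogs/session"
            }
        }
    }
}
```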
-## Next steps
+## Related content
See the following quickstarts, tutorials, and samples: - [Copy data from one location to another location in the same Azure Blob storage account](quickstart-create-data-factory-dot-net.md)
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-features.md
You're charged based on two steps: copy duration and copy type.
* When you use staging during a cloud copy, which is copying data from a cloud data store to another cloud data store, with both stages running on the Azure integration runtime, you're charged the [sum of copy duration for step 1 and step 2] x [cloud copy unit price]. * When you use staging during a hybrid copy, which is copying data from an on-premises data store to a cloud data store, with one stage running on a self-hosted integration runtime, you're charged [hybrid copy duration] x [hybrid copy unit price] + [cloud copy duration] x [cloud copy unit price].
-## Next steps
+## Related content
See the other copy activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-troubleshooting.md
Here are performance monitoring and tuning references for some of the supported data stores:
* SQL Server: [Monitor and tune for performance](/sql/relational-databases/performance/monitor-and-tune-for-performance). * On-premises file server: [Performance tuning for file servers](/previous-versions//dn567661(v=vs.85)).
-## Next steps
+## Related content
See the other copy activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance.md
You can set the `parallelCopies` property to indicate the parallelism you want t
A data copy operation can send the data _directly_ to the sink data store. Alternatively, you can choose to use Blob storage as an _interim staging_ store. [Learn more](copy-activity-performance-features.md#staged-copy).
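For illustration, here's a copy activity sketch that combines explicit parallelism with an interim staging store; the store types, unit counts, and staging path are assumptions rather than recommendations:

```json
{
    "name": "CopyWithStaging",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "SqlServerSource" },
        "sink": { "type": "SqlDWSink" },
        "parallelCopies": 8,
        "dataIntegrationUnits": 16,
        "enableStaging": true,
        "stagingSettings": {
            "linkedServiceName": {
                "referenceName": "StagingBlobStorage",
                "type": "LinkedServiceReference"
            },
            "path": "stagingcontainer/stagingpath"
        }
    }
}
```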
-## Next steps
+## Related content
See the other copy activity articles:
data-factory Copy Activity Preserve Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-preserve-metadata.md
Here's an example of copy activity JSON configuration (see `preserve`):
] ```
-## Next steps
+## Related content
See the other Copy Activity articles:
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
Configure the schema-mapping rule as the following copy activity JSON sample:
} ```
-## Next steps
+## Related content
See the other Copy Activity articles: - [Copy activity overview](copy-activity-overview.md)
data-factory Copy Clone Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-clone-data-factory.md
Here are some of the circumstances in which you may find it useful to copy or cl
1. For security reasons, the generated Resource Manager template won't contain any secret information, for example passwords for linked services. Hence, you need to provide the credentials as deployment parameters. If manually entering credentials isn't desirable for your setup, consider retrieving the connection strings and passwords from Azure Key Vault instead. [See more](store-credentials-in-key-vault.md)
-## Next steps
+## Related content
Review the guidance for creating a data factory in the Azure portal in [Create a data factory by using the Azure Data Factory UI](quickstart-create-data-factory-portal.md).
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool-metadata-driven.md
This pipeline will copy objects from one group. The objects belonging to this gr
- OPENJSON is used in the SQL scripts generated by the Copy Data tool. If you're using SQL Server to host the control table, it must be SQL Server 2016 (13.x) or later to support the OPENJSON function.
-## Next steps
+## Related content
Try these tutorials that use the Copy Data tool: - [Quickstart: Create a data factory using the Copy Data tool](quickstart-hello-world-copy-data-tool.md)
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool.md
A one-time copy operation enables data movement from a source to a destination o
:::image type="content" source="./media/copy-data-tool/scheduling-options.png" alt-text="Scheduling options":::
-## Next steps
+## Related content
Try these tutorials that use the Copy Data tool: - [Quickstart: Create a data factory using the Copy Data tool](quickstart-hello-world-copy-data-tool.md)
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-integration-runtime.md
Once an Azure IR is created, you can reference it in your Linked Service definit
```
-## Next steps
+## Related content
See the following articles on how to create other types of integration runtimes: - [Create self-hosted integration runtime](create-self-hosted-integration-runtime.md)
data-factory Create Azure Ssis Integration Runtime Deploy Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-deploy-packages.md
For more information, see [Deploy SSIS projects/packages](/sql/integration-servi
In both cases, you can also run your deployed packages on Azure-SSIS IR by using the Execute SSIS Package activity in Data Factory pipelines. For more information, see [Invoke SSIS package execution as a first-class Data Factory activity](./how-to-invoke-ssis-package-ssis-activity.md).
-## Next steps
+## Related content
- [Learn how to provision an Azure-SSIS IR using the Azure portal](create-azure-ssis-integration-runtime-portal.md). - [Learn how to provision an Azure-SSIS IR using Azure PowerShell](create-azure-ssis-integration-runtime-powershell.md).
data-factory Create Azure Ssis Integration Runtime Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-portal.md
On the **Connections** pane of **Manage** hub, switch to the **Integration runti
1. For the remaining steps to set up an Azure-SSIS IR, see the [Provision an Azure SSIS integration runtime](#provision-an-azure-ssis-integration-runtime) section.
-## Next steps
+## Related content
- [Create an Azure-SSIS IR via Azure PowerShell](create-azure-ssis-integration-runtime-powershell.md). - [Create an Azure-SSIS IR via Azure Resource Manager template](create-azure-ssis-integration-runtime-resource-manager-template.md).
data-factory Create Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-powershell.md
write-host("##### Completed #####")
write-host("If any cmdlet is unsuccessful, please consider using -Debug option for diagnostics.") ```
-## Next steps
+## Related content
- [Create an Azure-SSIS IR via Azure portal](create-azure-ssis-integration-runtime-portal.md). - [Create an Azure-SSIS IR via Azure Resource Manager template](create-azure-ssis-integration-runtime-resource-manager-template.md).
data-factory Create Azure Ssis Integration Runtime Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-resource-manager-template.md
Following are steps to create an Azure-SSIS integration runtime with an Azure Re
> When you provision an Azure-SSIS IR, Access Redistributable and Azure Feature Pack for SSIS are also installed. These components provide connectivity to Excel files, Access files, and various Azure data sources, in addition to the data sources that built-in components already support. For more information about built-in/preinstalled components, see [Built-in/preinstalled components on Azure-SSIS IR](./built-in-preinstalled-components-ssis-integration-runtime.md). For more information about additional components that you can install, see [Custom setups for Azure-SSIS IR](./how-to-configure-azure-ssis-ir-custom-setup.md).
-## Next steps
+## Related content
- [Learn how to provision an Azure-SSIS IR using the Azure portal](create-azure-ssis-integration-runtime-portal.md). - [Learn how to provision an Azure-SSIS IR using Azure PowerShell](create-azure-ssis-integration-runtime-powershell.md).
data-factory Create Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime.md
The following table compares certain features of an Azure SQL Database server an
| | | |
-## Next steps
+## Related content
- [Learn how to provision an Azure-SSIS IR using the Azure portal](create-azure-ssis-integration-runtime-portal.md). - [Learn how to provision an Azure-SSIS IR using Azure PowerShell](create-azure-ssis-integration-runtime-powershell.md).
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-self-hosted-integration-runtime.md
When installing a self-hosted integration runtime, consider the following:
- Share across multiple data sources - Share across multiple data factories
-## Next steps
+## Related content
For step-by-step instructions, see [Tutorial: Copy on-premises data to cloud](tutorial-hybrid-copy-powershell.md).
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Remove-AzDataFactoryV2IntegrationRuntime `
> This feature is available only in Data Factory V2.
-### Next steps
+### Related content
- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md).
data-factory Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/credentials.md
Below are the generic steps for using a **user-assigned managed identity** in th
> You can use [SDK](/dotnet/api/microsoft.azure.management.synapse?preserve-view=true&view=azure-dotnet-preview)/ [PowerShell](/powershell/module/az.synapse/?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&view=azps-9.1.0&preserve-view=true)/ [REST APIs](/rest/api/synapse/) for the above actions. An example of creating a user-assigned managed identity and assigning it permissions to a resource with Bicep/ARM is available in [this example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.datafactory/data-factory-get-started). > Linked services with user-assigned managed identity are currently not supported in Synapse Spark.
-## Next steps
+## Related content
- [Managed identity](data-factory-service-identity.md)
data-factory Data Access Strategies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-access-strategies.md
For more information about supported network security mechanisms on data stores
| Azure IaaS | SQL Server, Oracle, etc. | Yes | - | | On-premises IaaS | SQL Server, Oracle, etc. | Yes | - |
-## Next steps
+## Related content
For more information, see the following related articles: * [Supported data stores](./copy-activity-overview.md#supported-data-stores-and-formats)
data-factory Data Factory Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md
You're unable to access each PaaS resource when both sides are exposed to Privat
For example, customer A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, customer B can access the portal of data factory A from virtual network B over the public network. But when customer B creates a private endpoint against data factory B in virtual network B, customer B can no longer access data factory A over the public network from virtual network B.
-## Next steps
+## Related content
- [Create a data factory by using the Azure Data Factory UI](quickstart-create-data-factory-portal.md) - [Introduction to Azure Data Factory](introduction.md)
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-service-identity.md
You can create, delete, and manage user-assigned managed identities in Microsoft Entra ID.
In order to use a user-assigned managed identity, you must first [create credentials](credentials.md) in your service instance for the UAMI.
-## Next steps
+## Related content
- [Create credentials](credentials.md).
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-troubleshoot-guide.md
Then our pipeline will succeed. And we can see in the input box that the paramet
:::image type="content" source="media/data-factory-troubleshoot-guide/input-type-array.png" alt-text="Screenshot showing input type array.":::
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Data Factory Tutorials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-tutorials.md
Below is a list of tutorials to help explain and walk through a series of Data F
[Microsoft Purview](turorial-push-lineage-to-purview.md)
-## Next steps
+## Related content
Learn more about Data Factory [pipelines](concepts-pipelines-activities.md) and [data flows](concepts-data-flow-overview.md).
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-ux-troubleshoot-guide.md
The source of the error message is JSON file that describes the pipeline. It hap
The solution is to fix the JSON files first and then reopen the pipeline in the authoring tool.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Data Flow Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate-functions.md
The following functions are only available in aggregate, pivot, unpivot, and win
| [varianceSampleIf](data-flow-expressions-usage.md#varianceSampleIf) | Based on a criteria, gets the unbiased variance of a column. | |||
-## Next steps
+## Related content
- List of all [array functions](data-flow-array-functions.md). - List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
data-factory Data Flow Aggregate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-aggregate.md
MoviesYear aggregate(groupBy(year),
avgrating = avg(toInteger(Rating))) ~> AvgComedyRatingByYear ```
-## Next steps
+## Related content
* Define window-based aggregation using the [Window transformation](data-flow-window.md)
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-alter-row.md
SpecifyUpsertConditions alterRow(insertIf(alterRowCondition == 'insert'),
deleteIf(alterRowCondition == 'delete')) ~> AlterRow ```
-## Next steps
+## Related content
After the Alter Row transformation, you may want to [sink your data into a destination data store](data-flow-sink.md).
data-factory Data Flow Array Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-array-functions.md
Array functions perform transformations on data structures that are arrays. Thes
| [union](data-flow-expressions-usage.md#union) | Returns a union set of distinct items from 2 arrays.| |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
data-factory Data Flow Assert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-assert.md
source1, source2 assert(expectTrue(CountryRegion == 'United States', false, 'non
```
-## Next steps
+## Related content
* Use the [Select transformation](data-flow-select.md) to select and validate columns. * Use the [Derived column transformation](data-flow-derived-column.md) to transform column values.
data-factory Data Flow Cached Lookup Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cached-lookup-functions.md
The following functions are only available when using a cached lookup when you'v
| [outputs](data-flow-expressions-usage.md#outputs) | Returns the entire output row set of the results of the cache sink| |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow Cast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-cast.md
To modify the data type for columns in your data flow, add columns to "Cast sett
), errors: true) ~> <castTransformationName> ```
-## Next steps
+## Related content
Modify existing columns and new columns using the [derived column transformation](data-flow-derived-column.md).
data-factory Data Flow Conditional Split https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conditional-split.md
CleanData
) ~> SplitByYear@(moviesBefore1960, moviesAfter1980, AllOtherMovies) ```
-## Next steps
+## Related content
Common data flow transformations used with conditional split are the [join transformation](data-flow-join.md), [lookup transformation](data-flow-lookup.md), and the [select transformation](data-flow-select.md)
data-factory Data Flow Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conversion-functions.md
Conversion functions are used to convert data and test for data types
| [toUTC](data-flow-expressions-usage.md#toUTC) | Converts the timestamp to UTC. You can pass an optional time zone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It defaults to the current time zone. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. | |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-create.md
You can also add data flows directly to your workspace without using a template.
-## Next steps
+## Related content
* [Tutorial: Transform data using mapping data flows](tutorial-data-flow.md) * Begin building your data transformation with a [source transformation](data-flow-source.md).
data-factory Data Flow Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-date-time-functions.md
In Data Factory and Synapse pipelines, use date and time functions to express da
| [year](data-flow-expressions-usage.md#year) | Gets the year value of a date. | |||
-## Next steps
+## Related content
- [Aggregate functions](data-flow-aggregate-functions.md) - [Array functions](data-flow-array-functions.md)
data-factory Data Flow Derived Column https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-derived-column.md
MoviesYear derive(
) ~> CleanData ```
-## Next steps
+## Related content
- Learn more about the [Mapping Data Flow expression language](data-transformation-functions.md).
data-factory Data Flow Exists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-exists.md
NameNorm2, TypeConversions
) ~> checkForChanges ```
-## Next steps
+## Related content
Similar transformations are [Lookup](data-flow-lookup.md) and [Join](data-flow-join.md).
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expression-functions.md
In Data Factory and Synapse pipelines, use the expression language of the mappin
| [xor](data-flow-expressions-usage.md#xor) | Logical XOR operator. Same as ^ operator. | |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
___
Gets the year value of a date. * ``year(toDate('2012-8-8')) -> 2012``
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow External Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-external-call.md
ExternalCall1 sink(allowSchemaDrift: true,
saveOrder: 1) ~> sink1 ```
-## Next steps
+## Related content
* Use the [Flatten transformation](data-flow-flatten.md) to pivot rows to columns. * Use the [Derived column transformation](data-flow-derived-column.md) to transform rows.
data-factory Data Flow Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-filter.md
CleanData
```
-## Next steps
+## Related content
Filter out columns with the [select transformation](data-flow-select.md)
data-factory Data Flow Flatten https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-flatten.md
source foldDown(unroll(goods.orders.shipped.orderItems, goods.orders),
skipDuplicateMapOutputs: false) ```
-## Next steps
+## Related content
* Use the [Pivot transformation](data-flow-pivot.md) to pivot rows to columns. * Use the [Unpivot transformation](data-flow-unpivot.md) to pivot columns to rows.
data-factory Data Flow Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-join.md
LeftStream, RightStream
)~> JoiningColumns ```
-## Next steps
+## Related content
After joining data, create a [derived column](data-flow-derived-column.md) and [sink](data-flow-sink.md) your data to a destination data store.
data-factory Data Flow Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-lookup.md
SQLProducts, DimProd lookup(ProductID == ProductKey,
broadcast: 'auto')~> LookupKeys ```
-## Next steps
+## Related content
* The [join](data-flow-join.md) and [exists](data-flow-exists.md) transformations both take in multiple stream inputs * Use a [conditional split transformation](data-flow-conditional-split.md) with ```isMatch()``` to split rows on matching and non-matching values
data-factory Data Flow Map Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-map-functions.md
The following articles provide details about map functions supported by Azure Da
| [reassociate](data-flow-expressions-usage.md#reassociate) | Transforms a map by associating the keys to new values. It takes a mapping function where you can address the item as #key and current value as #value. | |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow Metafunctions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-metafunctions.md
Metafunctions primarily function on metadata in your data flow
| [unhex](data-flow-expressions-usage.md#unhex) | Unhexes a binary value from its string representation. This can be used with sha2, md5 to convert from string to binary representation| |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow New Branch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-new-branch.md
In the below example, the data flow is reading taxi trip data. Output aggregated
> [!NOTE] > When clicking the plus (+) to add transformations to your graph, you will only see the New Branch option when there are subsequent transformation blocks. This is because New Branch creates a reference to the existing stream and requires further upstream processing to operate on. If you do not see the New Branch option, add a Derived Column or other transformation first, then return to the previous block and you will see New Branch as an option.
-## Next steps
+## Related content
After branching, you may want to use the [data flow transformations](data-flow-transformation-overview.md)
data-factory Data Flow Parse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-parse.md
parse(csv = csvString ? (id as integer,
documentForm: 'documentPerLine') ~> ParseCsv ```
-## Next steps
+## Related content
* Use the [Flatten transformation](data-flow-flatten.md) to pivot rows to columns. * Use the [Derived column transformation](data-flow-derived-column.md) to transform rows.
data-factory Data Flow Pivot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-pivot.md
BasketballPlayerStats pivot(groupBy(Tm),
```
-## Next steps
+## Related content
Try the [unpivot transformation](data-flow-unpivot.md) to turn column values into row values.
data-factory Data Flow Rank https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-rank.md
PruneColumns
) ~> RankByPoints ```
-## Next steps
+## Related content
Filter rows based upon the rank values using the [filter transformation](data-flow-filter.md).
data-factory Data Flow Reserved Capacity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-reserved-capacity-overview.md
You can cancel, exchange, or refund reservations with certain limitations. For m
If you have questions or need help, [create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-## Next steps
+## Related content
To learn more about Azure Reservations, see the following articles:
data-factory Data Flow Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-script.md
DerivedColumn1 window(over(dummy),
```size(array(columns()))```
-## Next steps
+## Related content
Explore Data Flows by starting with the [data flows overview article](concepts-data-flow-overview.md)
data-factory Data Flow Select https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-select.md
DerivedColumn1 select(mapColumn(
skipDuplicateMapOutputs: true) ~> Select1 ```
-## Next steps
+## Related content
* After using Select to rename, reorder, and alias columns, use the [Sink transformation](data-flow-sink.md) to land your data into a data store.
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
sink(input(
errorHandlingOption: 'stopOnFirstError') ~> sink1 ```
-## Next steps
+## Related content
Now that you've created your data flow, add a [data flow activity to your pipeline](concepts-data-flow-overview.md).
data-factory Data Flow Sort https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sort.md
BasketballStats sort(desc(PTS, true),
asc(Age, true)) ~> Sort1 ```
-## Next steps
+## Related content
After sorting, you may want to use the [Aggregate Transformation](data-flow-aggregate.md)
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
If you're reading from an Azure SQL Database source, custom **Source** partition
For more information on optimization within mapping data flow, see the [Optimize tab](concepts-data-flow-overview.md#optimize).
-## Next steps
+## Related content
Begin building your data flow with a [derived-column transformation](data-flow-derived-column.md) and a [select transformation](data-flow-select.md).
data-factory Data Flow Stringify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-stringify.md
stringify(mydata = body.properties.periods ? string,
format: 'json') ~> Stringify1 ```
-## Next steps
+## Related content
* Use the [Flatten transformation](data-flow-flatten.md) to pivot rows to columns. * Use the [Parse transformation](data-flow-parse.md) to convert complex embedded types to separate columns.
data-factory Data Flow Surrogate Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-surrogate-key.md
AggregateDayStats
) ~> SurrogateKey1 ```
-## Next steps
+## Related content
These examples use the [Join](data-flow-join.md) and [Derived Column](data-flow-derived-column.md) transformations.
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-connector-format.md
For the Snowflake VARIANT, it can only accept the data flow value that is struct
alter table tablename rename column newcolumnname to "details"; ```
-## Next steps
+## Related content
For more help with troubleshooting, see these resources: * [Troubleshoot mapping data flows in Azure Data Factory](data-flow-troubleshoot-guide.md)
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
You may encounter the following issues before the improvement, but after the imp
After the improvement, the parsed column result should be:<br/> `A "" (empty string) B "" (empty string)`<br/>
-## Next steps
+## Related content
For more help with troubleshooting, see these resources:
data-factory Data Flow Understand Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-understand-reservation-charges.md
To understand and view the application of your Azure Reservations in billing usa
If you have questions or need help, [create a support request](https://go.microsoft.com/fwlink/?linkid=2083458).
-## Next steps
+## Related content
To learn more about Azure Reservations, see the following article:
data-factory Data Flow Union https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-union.md
If you choose "union by position", each column value will drop into the original
:::image type="content" source="media/data-flow/unionoutput.png" alt-text="Union output":::
-## Next steps
+## Related content
Explore similar transformations including [Join](data-flow-join.md) and [Exists](data-flow-exists.md).
data-factory Data Flow Unpivot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-unpivot.md
Setting the Column Arrangement to "Normal" will group together all of the new un
The final unpivoted data result set shows the column totals now unpivoted into separate row values.
-## Next steps
+## Related content
Use the [Pivot transformation](data-flow-pivot.md) to pivot rows to columns.
data-factory Data Flow Window Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-window-functions.md
The following functions are only available in window transformations.
| [rowNumber](data-flow-expressions-usage.md#rowNumber) | Assigns a sequential row numbering for rows in a window starting with 1. | |||
-## Next steps
+## Related content
- List of all [aggregate functions](data-flow-aggregate-functions.md). - List of all [array functions](data-flow-array-functions.md).
data-factory Data Flow Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-window.md
Lastly, use the Expression Builder to define the aggregations you wish to use wi
The full list of aggregation and analytical functions available for you to use in the Data Flow Expression Language via the Expression Builder are listed in [Data transformation expressions in mapping data flow](data-transformation-functions.md).
-## Next steps
+## Related content
If you are looking for a simple group-by aggregation, use the [Aggregate transformation](data-flow-aggregate.md)
data-factory Data Migration Guidance Hdfs Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-hdfs-azure-storage.md
Here's the estimated price based on our assumptions:
- [Copy new and changed files based on LastModifiedDate](./tutorial-incremental-copy-lastmodified-copy-data-tool.md) - [Data Factory pricing page](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/)
-## Next steps
+## Related content
- [Copy files from multiple containers by using Azure Data Factory](solution-template-copy-files-multiple-containers.md)
data-factory Data Migration Guidance Netezza Azure Sqldw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-netezza-azure-sqldw.md
For more information, see the following articles and guides:
- [Copy data incrementally from multiple tables](./tutorial-incremental-copy-multiple-tables-portal.md) - [Azure Data Factory pricing page](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/)
-## Next steps
+## Related content
- [Copy files from multiple containers by using Azure Data Factory](solution-template-copy-files-multiple-containers.md)
data-factory Data Migration Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-overview.md
This table helps you determine whether you can meet your intended migration wind
> By using online migration, you can achieve both historical data loading and incremental feeds end-to-end through a single tool. Through this approach, your data can be kept synchronized between the existing store and the new store during the entire migration window. This means you can rebuild your ETL logic on the new store with refreshed data.
-## Next steps
+## Related content
- [Migrate data from AWS S3 to Azure](data-migration-guidance-s3-azure-storage.md) - [Migrate data from an on-premises Hadoop cluster to Azure](data-migration-guidance-hdfs-azure-storage.md)
data-factory Data Migration Guidance S3 Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-migration-guidance-s3-azure-storage.md
Here's the estimated price based on the above assumptions:
Here's the [template](solution-template-migration-s3-azure.md) to start with to migrate petabytes of data consisting of hundreds of millions of files from Amazon S3 to Azure Data Lake Storage Gen2.
-## Next steps
+## Related content
- [Copy files from multiple containers with Azure Data Factory](solution-template-copy-files-multiple-containers.md)
data-factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-movement-security-considerations.md
Yes. More details [here](https://azure.microsoft.com/blog/sharing-a-self-hosted-
The self-hosted integration runtime makes HTTP-based connections to access the internet. Outbound port 443 must be open for the self-hosted integration runtime to make this connection. Open inbound port 8060 only at the machine level (not at the corporate firewall level) for the credential manager application. If Azure SQL Database or Azure Synapse Analytics is used as the source or the destination, you need to open port 1433 as well. For more information, see the [Firewall configurations and allow list setting up for IP addresses](#firewall-configurations-and-allow-list-setting-up-for-ip-addresses) section.
-## Next steps
+## Related content
For information about Azure Data Factory Copy Activity performance, see [Copy Activity performance and tuning guide](copy-activity-performance.md).
data-factory Data Transformation Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-transformation-functions.md
The following articles provide details about expressions and functions supported
For details about the usage of each function in a comprehensive alphabetical list, refer to [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
-## Next steps
+## Related content
[Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Deactivate Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deactivate-activity.md
Deactivation is a powerful tool for pipeline developers. It allows developers to
An inactive activity never actually runs. This means the activity won't have an error field, or its typical output fields. Any references to missing fields may throw errors downstream.
-## Next steps
+## Related content
Learn more about Azure Data Factory and Synapse pipelines.
data-factory Delete Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/delete-activity.md
You can also get the template to move files from [here](solution-template-move-f
- When using the file attribute filter (modifiedDatetimeStart and modifiedDatetimeEnd) in the Delete activity to select files to be deleted, make sure to also set "wildcardFileName": "*" in the Delete activity, as in the sketch below.
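A minimal sketch of that combination, with a hypothetical dataset name and illustrative datetime bounds:

```json
{
    "name": "DeleteFilteredFiles",
    "type": "Delete",
    "typeProperties": {
        "dataset": {
            "referenceName": "SourceFolderDataset",
            "type": "DatasetReference"
        },
        "storeSettings": {
            "type": "AzureBlobStorageReadSettings",
            "recursive": true,
            "wildcardFileName": "*",
            "modifiedDatetimeStart": "2023-01-01T00:00:00Z",
            "modifiedDatetimeEnd": "2023-06-30T00:00:00Z"
        },
        "enableLogging": false
    }
}
```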
-## Next steps
+## Related content
Learn more about moving files in Azure Data Factory and Synapse pipelines.
data-factory Deploy Linked Arm Templates With Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deploy-linked-arm-templates-with-vsts.md
The scenario we walk through here is to deploy a VNet with a Network Security Group
1. Save the release pipeline and trigger a release.
-## Next steps
+## Related content
- [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Enable Azure Key Vault For Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-azure-key-vault-for-managed-airflow.md
Follow these steps to enable Azure Key Vault as the secret backend for your
:::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png" alt-text="Screenshot showing the configuration of secrets in Azure Key Vault." lightbox="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png":::
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-customer-managed-key.md
The following settings will be added to the ARM template. These properties can be pa
> [!NOTE] > Adding the encryption setting to the ARM templates adds a factory-level setting that will override other factory level settings, such as git configurations, in other environments. If you have these settings enabled in an elevated environment such as UAT or PROD, please refer to [Global Parameters in CI/CD](author-global-parameters.md#cicd).
-## Next steps
+## Related content
Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
data-factory Encrypt Credentials Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/encrypt-credentials-self-hosted-integration-runtime.md
Now, use the output JSON file from the previous command containing the encrypted
Set-AzDataFactoryV2LinkedService -DataFactoryName $dataFactoryName -ResourceGroupName $ResourceGroupName -Name "EncryptedSqlServerLinkedService" -DefinitionFile ".\encryptedSqlServerLinkedService.json" ```
-## Next steps
+## Related content
For information about security considerations for data movement, see [Data movement security considerations](data-movement-security-considerations.md).
data-factory Format Avro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-avro.md
Avro [complex data types](https://avro.apache.org/docs/current/spec.html#schema_
### Data flows When working with Avro files in data flows, you can read and write complex data types, but be sure to clear the physical schema from the dataset first. In data flows, you can set your logical projection and derive columns that are complex structures, then auto-map those fields to an Avro file.
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Lookup activity](control-flow-lookup-activity.md)
data-factory Format Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-binary.md
The following properties are supported in the copy activity **sink** section:
| type | The type property of the copy activity sink must be set to **BinarySink**. | Yes | | storeSettings | A group of properties on how to write data to a data store. Each file-based connector has its own supported write settings under `storeSettings`. **See details in connector article -> Copy activity properties section**. | No |
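For example, a copy activity moving binary files as-is might declare the sink like this sketch; the read and write settings types depend on the connectors involved and are assumed here:

```json
{
    "name": "CopyBinaryFiles",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "BinarySource",
            "storeSettings": {
                "type": "AzureBlobStorageReadSettings",
                "recursive": true
            }
        },
        "sink": {
            "type": "BinarySink",
            "storeSettings": {
                "type": "AzureBlobFSWriteSettings"
            }
        }
    }
}
```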
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [GetMetadata activity](control-flow-get-metadata-activity.md)
data-factory Format Common Data Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-common-data-model.md
CDMSource sink(allowSchemaDrift: true,
```
-## Next steps
+## Related content
Create a [source transformation](data-flow-source.md) in mapping data flow.
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delimited-text.md
Here are some common connectors and formats related to the delimited text format
- [JSON format](format-json.md) - [Parquet format](format-parquet.md)
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Mapping data flow](concepts-data-flow-overview.md)
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
In Settings tab, you find three more options to optimize delta sink transformati
When you write to a delta sink, there's a known limitation where the number of rows written won't show up in the monitoring output.
-## Next steps
+## Related content
* Create a [source transformation](data-flow-source.md) in mapping data flow. * Create a [sink transformation](data-flow-sink.md) in mapping data flow.
data-factory Format Excel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-excel.md
The Excel connector does not support streaming read for the Copy activity and mu
- Use a dataflow activity to move the large Excel file into another data store. Dataflow supports streaming read for Excel and can move/transfer large files quickly. - Manually convert the large Excel file to CSV format, then use a Copy activity to move the file.
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Lookup activity](control-flow-lookup-activity.md)
data-factory Format Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-json.md
Here are some common connectors and formats related to the JSON format:
- [OData connector](connector-odata.md) - [Parquet format](format-parquet.md)
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Mapping data flow](concepts-data-flow-overview.md)
data-factory Format Orc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-orc.md
For copy running on Self-hosted IR with ORC file serialization/deserialization,
Example: set the variable `_JAVA_OPTIONS` with the value `-Xms256m -Xmx16g`. The flag `Xms` specifies the initial memory allocation pool for a Java Virtual Machine (JVM), while `Xmx` specifies the maximum memory allocation pool. This means that the JVM starts with `Xms` amount of memory and can use at most `Xmx` amount of memory. By default, the service uses a minimum of 64 MB and a maximum of 1 GB.
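As a hedged illustration (assuming a Windows host and local administrator rights; the service restart step is environment-specific), one way to set the variable machine-wide on the Self-hosted IR host:

```powershell
# Sketch: set _JAVA_OPTIONS at machine scope so the JVM used for ORC
# serialization/deserialization gets a larger heap.
[System.Environment]::SetEnvironmentVariable(
    "_JAVA_OPTIONS",
    "-Xms256m -Xmx16g",
    [System.EnvironmentVariableTarget]::Machine)

# Afterwards, restart the self-hosted integration runtime service (or reboot the
# host) so the new environment variable is picked up.
```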
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Lookup activity](control-flow-lookup-activity.md)
data-factory Format Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-parquet.md
ParquetSource sink(
Parquet complex data types (for example, MAP, LIST, STRUCT) are currently supported only in data flows, not in the Copy activity. To use complex types in data flows, don't import the file schema in the dataset; leave the schema blank in the dataset. Then, in the Source transformation, import the projection.
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Mapping data flow](concepts-data-flow-overview.md)
data-factory Format Xml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-xml.md
Note the following when using XML as source.
- If an XML element has both a simple text value and attributes/child elements, the simple text value is parsed as the value of a "value column" with the built-in field name `_value_`. It also inherits the namespace of the element, if applicable.
-## Next steps
+## Related content
- [Copy activity overview](copy-activity-overview.md) - [Mapping data flow](concepts-data-flow-overview.md)
data-factory Get Started With Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/get-started-with-managed-airflow.md
Mitigation: Sign in into the Airflow UI and see if there are any DAG parsing err
:::image type="content" source="media/how-does-managed-airflow-work/import-dag-issues.png" alt-text="Screenshot shows import dag issues.":::
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
If you're using Airflow version 1.x, delete DAGs that are deployed on any Airflo
> [!NOTE] > This is the current experience during the Public Preview, and we will be improving this experience. 
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory How To Change Data Capture Resource With Schema Evolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource-with-schema-evolution.md
Confirm that the new column **PersonalEmail** appears in the Delta sink. You now
:::image type="content" source="media/adf-cdc/change-data-capture-resource-128.png" alt-text="Screenshot of a Delta file with a schema change." lightbox="media/adf-cdc/change-data-capture-resource-128.png":::
-## Next steps
+## Related content
* [Learn more about the CDC resource](concepts-change-data-capture-resource.md)
data-factory How To Change Data Capture Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-change-data-capture-resource.md
Before you begin the procedures in this article, make sure that you have these r
:::image type="content" source="media/adf-cdc/change-data-capture-resource-92.png" alt-text="Screenshot of a detailed breakdown of each mapping in a change data capture artifact." lightbox="media/adf-cdc/change-data-capture-resource-92.png":::
-## Next steps
+## Related content
* [Learn more about the CDC resource](concepts-change-data-capture-resource.md)
data-factory How To Clean Up Ssisdb Logs With Elastic Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-clean-up-ssisdb-logs-with-elastic-jobs.md
SELECT * FROM jobs.job_executions WHERE is_active = 1
ORDER BY start_time DESC ```
-## Next steps
+## Related content
To manage and monitor your Azure-SSIS IR, see the following articles.
data-factory How To Configure Azure Ssis Ir Custom Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-custom-setup.md
To view and reuse some samples of standard custom setups, complete the following
1. After your standard custom setup finishes and your Azure-SSIS IR starts, you can find all custom setup logs in the *main.cmd.log* folder of your blob container. They include the standard output of *main.cmd* and other execution logs.
-## Next steps
+## Related content
- [Set up the Enterprise Edition of Azure-SSIS IR](how-to-configure-azure-ssis-ir-enterprise-edition.md) - [Develop paid or licensed components for Azure-SSIS IR](how-to-develop-azure-ssis-ir-licensed-components.md)
data-factory How To Configure Azure Ssis Ir Enterprise Edition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-azure-ssis-ir-enterprise-edition.md
Some of these features require you to install additional components to customize
-ResourceGroupName $MyResourceGroupName ```
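For reference, a hedged sketch of the full cmdlet call that this fragment belongs to (the variable names are placeholders; `-Edition "Enterprise"` is the setting that enables the Enterprise Edition features):

```powershell
# Sketch: switch an existing Azure-SSIS IR to Enterprise Edition.
# $MyDataFactoryName, $MyAzureSsisIrName, and $MyResourceGroupName are placeholders.
Set-AzDataFactoryV2IntegrationRuntime -DataFactoryName $MyDataFactoryName `
    -Name $MyAzureSsisIrName `
    -ResourceGroupName $MyResourceGroupName `
    -Edition "Enterprise"
```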
-## Next steps
+## Related content
- [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md)
data-factory How To Configure Shir For Log Analytics Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-configure-shir-for-log-analytics-collection.md
Perf
| summarize Value=max(CounterValue) by CounterName, TimeStamps=TimeGenerated ```
-## Next Steps
+## Related content
- [Review integration runtime concepts in Azure Data Factory.](concepts-integration-runtime.md) - Learn how to [create a self-hosted integration runtime in the Azure portal.](create-self-hosted-integration-runtime.md)
data-factory How To Create Custom Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-custom-event-trigger.md
Specifically, you need `Microsoft.EventGrid/EventSubscriptions/Write` permission
- When authoring in the data factory (in the development environment, for instance), the signed-in Azure account needs to have the above permission - When publishing through [CI/CD](continuous-integration-delivery.md), the account used to publish the ARM template into the testing or production factory needs to have the above permission.
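As a hedged illustration, one way to grant that permission is to assign a built-in role that includes `Microsoft.EventGrid/EventSubscriptions/Write` at the custom topic scope (the account, role choice, and scope below are placeholder assumptions):

```powershell
# Sketch: assign a built-in role that includes
# Microsoft.EventGrid/EventSubscriptions/Write on the Event Grid custom topic.
# The sign-in name, subscription, resource group, and topic name are placeholders.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "EventGrid EventSubscription Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<custom-topic>"
```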
-## Next steps
+## Related content
* Get detailed information about [trigger execution](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). * Learn how to [reference trigger metadata in pipeline runs](how-to-use-trigger-parameterization.md).
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md
There are three noticeable call outs in the workflow related to Event triggering
* That said, if you have a Copy or other activity inside the pipeline to process the data in the storage account, the service makes direct contact with the storage account by using the credentials stored in the linked service. Ensure that the linked service is set up appropriately. * However, if you make no reference to the storage account in the pipeline, you don't need to grant the service permission to access the storage account
-## Next steps
+## Related content
* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). * To learn how to reference trigger metadata in pipelines, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
The examples assume that the **interval** value is 1, and that the **frequency**
| `{"minutes":[0,15,30,45], "monthlyOccurrences":[{"day":"friday", "occurrence":-1}]}` | Run every 15 minutes on the last Friday of the month. | | `{"minutes":[15,45], "hours":[5,17], "monthlyOccurrences":[{"day":"wednesday", "occurrence":3}]}` | Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM on the third Wednesday of every month. |
-## Next steps
+## Related content
- For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). - To learn how to reference trigger metadata in pipelines, see [Reference Trigger Metadata in Pipeline Runs](how-to-use-trigger-parameterization.md)
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
This section shows you how to use Azure CLI to create, start, and monitor a trig
To monitor trigger runs and pipeline runs in the Azure portal, see [Monitor pipeline runs](quickstart-create-data-factory-resource-manager-template.md#monitor-the-pipeline).
-## Next steps
+## Related content
* For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json). * [Create a tumbling window trigger dependency](tumbling-window-trigger-dependency.md).
data-factory How To Data Flow Dedupe Nulls Snippets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-dedupe-nulls-snippets.md
By using code snippets in mapping data flows, you can easily perform common task
You have now created a working data flow with generic deduping and null checks by taking existing code snippets from the Data Flow Script library and adding them into your existing design.
-## Next steps
+## Related content
* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
data-factory How To Data Flow Error Rows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-data-flow-error-rows.md
This video walks through an example of setting up error row handling logic in yo
:::image type="content" source="media/data-flow/error-row-3.png" alt-text="complete data flow with error rows":::
-## Next steps
+## Related content
* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
data-factory How To Develop Azure Ssis Ir Licensed Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-develop-azure-ssis-ir-licensed-components.md
Here's an example from our partner, [Aecorsoft](https://www.aecorsoft.com/blog/2
You can find a list of ISV partners who have adapted their components and extensions for the Azure-SSIS IR at the end of this blog post - [Enterprise Edition, Custom Setup, and 3rd Party Extensibility for SSIS in ADF](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Enterprise-Edition-Custom-Setup-and-3rd-Party-Extensibility-for/ba-p/388360).
-## Next steps
+## Related content
- [Custom setup for the Azure-SSIS integration runtime](how-to-configure-azure-ssis-ir-custom-setup.md)
data-factory How To Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-expression-language-functions.md
Please follow [Mapping data flow with parameters](./parameters-data-flow.md) for
Please follow [Metadata driven pipeline with parameters](./how-to-use-trigger-parameterization.md) to learn more about how to use parameters to design metadata driven pipelines. This is a popular use case for parameters.
-## Next steps
+## Related content
For a list of system variables you can use in expressions, see [System variables](control-flow-system-variables.md).
data-factory How To Fixed Width https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-fixed-width.md
By using mapping data flows in Microsoft Azure Data Factory, you can transform d
The fixed-width data is now split into columns of four characters each, assigned to Col1, Col2, Col3, Col4, and so on. Based on the preceding example, the data is split into four columns.
-## Next steps
+## Related content
* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md).
data-factory How To Invoke Ssis Package Azure Enabled Dtexec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-azure-enabled-dtexec.md
Invoking AzureDTExec offers similar options as invoking dtexec. For more informa
> [!NOTE] > Invoking AzureDTExec with new values for its options generates a new pipeline, except for the option **/De[crypt]**.
-## Next steps
+## Related content
After unique pipelines with the Execute SSIS Package activity in them are generated and run when you invoke AzureDTExec, they can be monitored on the Data Factory portal. You can also assign Data Factory triggers to them if you want to orchestrate/schedule them using Data Factory. For more information, see [Run SSIS packages as Data Factory activities](./how-to-invoke-ssis-package-ssis-activity.md).
data-factory How To Invoke Ssis Package Managed Instance Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-managed-instance-agent.md
To cancel package execution from a SQL Managed Instance Agent job, take the foll
1. Stop the corresponding operation based on **executionId**.
-## Next steps
+## Related content
You can also schedule SSIS packages by using Azure Data Factory. For step-by-step instructions, see [Azure Data Factory event trigger](how-to-create-event-trigger.md).
data-factory How To Invoke Ssis Package Ssdt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssdt.md
After starting your package execution, we'll format and display its logs in the
- The Azure-enabled SSDT supports only commercial/global cloud regions and doesn't support governmental/national cloud regions for now.
-## Next steps
+## Related content
Once you're satisfied with running your packages in Azure from SSDT, you can deploy and run them as Execute SSIS Package activities in ADF pipelines, see [Running SSIS packages as Execute SSIS Package activities in ADF pipelines](./how-to-invoke-ssis-package-ssis-activity.md).
data-factory How To Invoke Ssis Package Ssis Activity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity-powershell.md
In the previous step, you ran the pipeline on demand. You can also create a sche
select * from catalog.executions ```
-## Next steps
+## Related content
- [Run an SSIS package with the Execute SSIS Package activity in the Azure Data Factory Studio portal](how-to-invoke-ssis-package-ssis-activity.md) - [Modernize and extend your ETL/ELT workflows with SSIS activities in Azure Data Factory pipelines](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Modernize-and-Extend-Your-ETL-ELT-Workflows-with-SSIS-Activities/ba-p/388370)
data-factory How To Invoke Ssis Package Ssis Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-ssis-activity.md
In this step, you trigger a pipeline run.
You can also create a scheduled trigger for your pipeline so that the pipeline runs on a schedule, such as hourly or daily. For an example, see [Create a data factory - Data Factory UI](quickstart-create-data-factory-portal.md#trigger-the-pipeline-on-a-schedule).
-## Next steps
+## Related content
- [Run an SSIS package with the Execute SSIS Package activity in Azure Data Factory with PowerShell](how-to-invoke-ssis-package-ssis-activity-powershell.md) - [Modernize and extend your ETL/ELT workflows with SSIS activities in Azure Data Factory pipelines](https://techcommunity.microsoft.com/t5/SQL-Server-Integration-Services/Modernize-and-Extend-Your-ETL-ELT-Workflows-with-SSIS-Activities/ba-p/388370)
data-factory How To Invoke Ssis Package Stored Procedure Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-invoke-ssis-package-stored-procedure-activity.md
In the previous step, you invoked the pipeline on-demand. You can also create a
```
-## Next steps
+## Related content
You can also monitor the pipeline using the Azure portal. For step-by-step instructions, see [Monitor the pipeline](quickstart-create-data-factory-resource-manager-template.md#monitor-the-pipeline).
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
To apply changes, select a **Regional format** and make sure to hit the **Apply*
> [!NOTE] > Applying regional format changes will discard any unsaved changes in your data factory.
-## Next steps
+## Related content
- [Manage the ADF preview experience](how-to-manage-studio-preview-exp.md) - [Introduction to Azure Data Factory](introduction.md) - [Build a pipeline with a copy activity](quickstart-create-data-factory-powershell.md)
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
We want to hear from you! If you see this pop-up, please let us know your though
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-19.png" alt-text="Screenshot of the feedback survey where user can select between one and five stars.":::
-## Next steps
+## Related content
- [What's New in Azure Data Factory](whats-new.md) - [How to manage Azure Data Factory Settings](how-to-manage-settings.md)
data-factory How To Migrate Ssis Job Ssms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-migrate-ssis-job-ssms.md
The feature described in this article requires SQL Server Management Studio vers
1. Migrate, then check results. :::image type="content" source="media/how-to-migrate-ssis-job-ssms/step5.png" alt-text="Screenshot shows the Migration Result page, which displays the progress of the migration.":::
-## Next steps
+## Related content
[Run and monitor pipeline](how-to-invoke-ssis-package-ssis-activity.md)
data-factory How To Run Self Hosted Integration Runtime In Windows Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-run-self-hosted-integration-runtime-in-windows-container.md
Currently we don't support the below features when running the Self-Hosted Integ
There is a known issue when hosting an Azure Data Factory self-hosted integration runtime in Azure App Service. Azure App Service creates a new container instead of reusing the existing container after a restart, which can cause a self-hosted integration runtime node leak problem.
-### Next steps
+### Related content
- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md). - Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
data-factory How To Schedule Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-schedule-azure-ssis-integration-runtime.md
In the previous section, you created an Azure Automation runbook that can either
6. When you finish testing, disable your schedules by editing them. Select **Schedules** on the left menu, select **Start IR daily/Stop IR daily**, and then select **No** for **Enabled**.
-## Next steps
+## Related content
See the following blog post:
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-email.md
make your messages dynamic. For example:  
The above expressions return the relevant error messages from a Copy activity failure, which can then be redirected to your Web activity that sends the email. Refer to the [Copy activity output properties](copy-activity-monitoring.md) article for more details.
-## Next steps
+## Related content
[How to send Teams notifications from a pipeline](how-to-send-notifications-to-teams.md)
data-factory How To Send Notifications To Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-send-notifications-to-teams.md
The above expressions will return the relevant error messages from a failure, wh
We also encourage you to review the Microsoft Teams supported [notification payload schema](https://adaptivecards.io/explorer/AdaptiveCard.html) and further customize the above template to your needs.
-## Next steps
+## Related content
[How to send email from a pipeline](how-to-send-email.md)
data-factory How To Sqldb To Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-sqldb-to-cosmosdb.md
The resulting Azure Cosmos DB container will embed the inner query into a single
If everything looks good, you're now ready to create a new pipeline, add this data flow activity to that pipeline, and execute it. You can execute it from debug or from a triggered run. After a few minutes, you should have a new denormalized container of orders called "orders" in your Azure Cosmos DB database.
-## Next steps
+## Related content
* Build the rest of your data flow logic by using mapping data flows [transformations](concepts-data-flow-overview.md). * [Download the completed pipeline template](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/SQL%20Orders%20to%20CosmosDB.zip) for this tutorial and import the template into your factory.
data-factory How To Use Azure Key Vault Secrets Pipeline Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-azure-key-vault-secrets-pipeline-activities.md
This feature relies on the data factory managed identity. Learn how it works fr
:::image type="content" source="media/how-to-use-azure-key-vault-secrets-pipeline-activities/usewebactivity.png" alt-text="Code expression":::
-## Next steps
+## Related content
To learn how to use Azure Key Vault to store credentials for data stores and computes, see [Store credentials in Azure Key Vault](./store-credentials-in-key-vault.md)
data-factory How To Use Sql Managed Instance With Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-sql-managed-instance-with-ir.md
The SSISDB log retention policy is defined by the following properties in [catalog.catalog
To remove SSISDB logs that are outside the retention window set by the administrator, you can trigger the stored procedure `[internal].[cleanup_server_retention_window_exclusive]`. Optionally, you can schedule a SQL Managed Instance Agent job to trigger the stored procedure.
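As a hedged example of triggering the cleanup manually (the server name and authentication details are placeholders; it assumes the SqlServer PowerShell module and a login with the required SSISDB permissions):

```powershell
# Sketch: manually run SSISDB log cleanup outside the configured retention window.
# Server, user, and password values are placeholders; adjust authentication to
# match your environment.
Invoke-Sqlcmd -ServerInstance "<your-managed-instance>.database.windows.net" `
    -Database "SSISDB" `
    -Username "<admin-login>" `
    -Password "<password>" `
    -Query "EXEC internal.cleanup_server_retention_window_exclusive"
```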
-## Next steps
+## Related content
- [Execute SSIS packages by Azure SQL Managed Instance Agent job](how-to-invoke-ssis-package-managed-instance-agent.md) - [Set up Business continuity and disaster recovery (BCDR)](configure-bcdr-azure-ssis-integration-runtime.md)
data-factory How To Use Trigger Parameterization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-use-trigger-parameterization.md
Under **pipelines** section, assign parameter values in **parameters** section.
To use the values in the pipeline, use the parameters _@pipeline().parameters.parameterName_, __not__ the system variables, in pipeline definitions.
-## Next steps
+## Related content
For detailed information about triggers, see [Pipeline execution and triggers](concepts-pipeline-execution-triggers.md#trigger-execution-with-json).
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md
Control flow is an orchestration of pipeline activities that includes chaining a
### Variables Variables can be used inside of pipelines to store temporary values and can also be used in conjunction with parameters to enable passing values between pipelines, data flows, and other activities.
-## Next steps
+## Related content
Here are important next step documents to explore: - [Dataset and linked services](concepts-datasets-linked-services.md)
data-factory Iterative Development Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/iterative-development-debugging.md
Using the activity runtime will create a new cluster using the settings specifie
:::image type="content" source="media/iterative-development-debugging/iterative-development-dataflow.png" alt-text="Running a pipeline with a dataflow":::
-## Next steps
+## Related content
After testing your changes, promote them to higher environments using [continuous integration and deployment](continuous-integration-delivery.md).
data-factory Join Azure Ssis Integration Runtime Virtual Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-powershell.md
Start-AzDataFactoryV2IntegrationRuntime -ResourceGroupName $ResourceGroupName `
If you use the express or standard virtual network injection method, this command takes about 5 or 20-30 minutes to finish, respectively.
-## Next steps
+## Related content
- [Configure a virtual network to inject Azure-SSIS IR](azure-ssis-integration-runtime-virtual-network-configuration.md) - [Express virtual network injection method](azure-ssis-integration-runtime-express-virtual-network-injection.md)
data-factory Join Azure Ssis Integration Runtime Virtual Network Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network-ui.md
After you've configured an Azure Resource Manager/classic virtual network, you c
1. Start your Azure-SSIS IR by selecting the **Start** button in the **Actions** column for your Azure-SSIS IR. It takes about 5 or 20-30 minutes to start an Azure-SSIS IR that joins a virtual network with the express or standard injection method, respectively.
-## Next steps
+## Related content
- [Configure a virtual network to inject Azure-SSIS IR](azure-ssis-integration-runtime-virtual-network-configuration.md) - [Express virtual network injection method](azure-ssis-integration-runtime-express-virtual-network-injection.md)
data-factory Join Azure Ssis Integration Runtime Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/join-azure-ssis-integration-runtime-virtual-network.md
If your SSIS packages access other cloud data stores/resources that allow only s
In all cases, the virtual network can only be deployed through the Azure Resource Manager deployment model.
-## Next steps
+## Related content
- [Configure a virtual network to inject Azure-SSIS IR](azure-ssis-integration-runtime-virtual-network-configuration.md) - [Express virtual network injection method](azure-ssis-integration-runtime-express-virtual-network-injection.md)
data-factory Kubernetes Secret Pull Image From Private Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/kubernetes-secret-pull-image-from-private-container-registry.md
Provide the required field **Secret name**, select **Private registry auth** for
Once you provide the required fields, select **Apply** to add the secret.
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
You can also enable [fault tolerance](copy-activity-fault-tolerance.md) in copy
In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md) supports service principal and managed identity for Azure resource authentications. The [Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md) supports account key, service principal, and managed identity for Azure resource authentications. To enable Data Factory to navigate and copy all of the files and access control lists (ACLs), grant the account sufficiently high permissions to access, read, or write all files and to set ACLs if you choose to. Grant the account a super-user or owner role during the migration period, and remove the elevated permissions once the migration is completed.
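As a hedged sketch of the Gen2 side of that guidance (the role choice, identity, and scope are assumptions; Data Lake Storage Gen1 permissions are managed through its own ACLs), you could assign the data factory's managed identity an owner-level data role for the migration window:

```powershell
# Sketch: grant the data factory's managed identity Storage Blob Data Owner on the
# Gen2 storage account for the duration of the migration, then remove it afterwards.
# The object ID and scope are placeholders.
New-AzRoleAssignment -ObjectId "<data-factory-managed-identity-object-id>" `
    -RoleDefinitionName "Storage Blob Data Owner" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<gen2-account>"
```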
-## Next steps
+## Related content
> [!div class="nextstepaction"] > [Copy activity overview](copy-activity-overview.md)
data-factory Load Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2.md
This article shows you how to use the Data Factory Copy Data tool to load data f
11. Verify that the data is copied into your Data Lake Storage Gen2 account.
-## Next steps
+## Related content
* [Copy activity overview](copy-activity-overview.md) * [Azure Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md)
data-factory Load Azure Data Lake Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-store.md
This article shows you how to use the Data Factory Copy Data tool to _load data
:::image type="content" source="./media/load-data-into-azure-data-lake-store/adls-copy-result.png" alt-text="Verify Data Lake Storage Gen1 output":::
-## Next steps
+## Related content
Advance to the following article to learn about Data Lake Storage Gen1 support:
data-factory Load Azure Sql Data Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-sql-data-warehouse.md
This article shows you how to use the Copy Data tool to _load data from Azure SQ
:::image type="content" source="./media/load-azure-sql-data-warehouse/monitor-activity-run-details-2.png" alt-text="Monitor activity run details second":::
-## Next steps
+## Related content
Advance to the following article to learn about Azure Synapse Analytics support:
data-factory Load Office 365 Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-office-365-data.md
Once the consent is provided, data extraction will continue and, after some time
Now go to the destination Azure Blob Storage and verify that Microsoft 365 (Office 365) data has been extracted in Binary format.
-## Next steps
+## Related content
Advance to the following article to learn about Azure Synapse Analytics support:
data-factory Load Sap Bw Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-sap-bw-data.md
To set the status of the delta DTP to **Fetched**, you can use the following opt
*No Data Transfer; Delta Status in Source: Fetched*
-## Next steps
+## Related content
Learn about SAP BW Open Hub connector support:
data-factory Manage Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/manage-azure-ssis-integration-runtime.md
After you provision and start an instance of Azure-SSIS integration runtime, you
Remove-AzResourceGroup -Name $ResourceGroupName -Force ```
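For day-to-day cost management, a hedged sketch of stopping and starting the Azure-SSIS IR with the Az.DataFactory cmdlets (variable values are placeholders):

```powershell
# Sketch: stop the Azure-SSIS IR while it's idle to save cost, and start it again
# before running SSIS packages. Variable values are placeholders.
Stop-AzDataFactoryV2IntegrationRuntime -DataFactoryName $DataFactoryName `
    -Name $AzureSsisIrName -ResourceGroupName $ResourceGroupName -Force

Start-AzDataFactoryV2IntegrationRuntime -DataFactoryName $DataFactoryName `
    -Name $AzureSsisIrName -ResourceGroupName $ResourceGroupName
```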
-## Next steps
+## Related content
For more information about Azure-SSIS runtime, see the following topics: - [Azure-SSIS Integration Runtime](concepts-integration-runtime.md#azure-ssis-integration-runtime). This article provides conceptual information about integration runtimes in general including the Azure-SSIS IR.
data-factory Managed Virtual Network Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/managed-virtual-network-private-endpoint.md
You're unable to access each PaaS resource when both sides are exposed to Privat
For example, you have a managed private endpoint for storage account A. You can also access storage account B through the public network in the same managed virtual network. But when storage account B has a private endpoint connection from another managed virtual network or a customer virtual network, you can't access storage account B in your managed virtual network through the public network.
-## Next steps
+## Related content
See the following tutorials:
data-factory Memory Optimized Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/memory-optimized-compute.md
Data flow activities in Azure Data Factory and Azure Synapse support the [Comput
If your data flow has many joins and lookups, you may want to use a memory optimized cluster. These more memory-intensive operations benefit particularly from additional memory, and any out-of-memory errors encountered with the default compute type are minimized. **Memory optimized** clusters do incur the highest cost per core, but might avoid pipeline failures for memory-intensive operations. If you experience any out-of-memory errors when executing data flows, switch to a memory optimized Azure IR configuration.
-## Next steps
+## Related content
[Data Flow type properties](control-flow-execute-data-flow-activity.md#type-properties)
data-factory Monitor Configure Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-configure-diagnostics.md
Create or add diagnostic settings for your data factory.
After a few moments, the new setting appears in your list of settings for this data factory. Diagnostic logs are streamed to that workspace as soon as new event data is generated. Up to 15 minutes might elapse between when an event is emitted and when it appears in Log Analytics.
-## Next steps
+## Related content
[Set up diagnostics logs via the Azure Monitor REST API](monitor-logs-rest.md)
data-factory Monitor Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-integration-runtime.md
See the following articles to learn more about Azure-SSIS integration runtime:
- [Manage an Azure-SSIS IR](manage-azure-ssis-integration-runtime.md). This article shows you how to start, stop, or delete your Azure-SSIS IR. It also shows you how to scale it out by adding more nodes. - [Join an Azure-SSIS IR to a virtual network](join-azure-ssis-integration-runtime-virtual-network.md). This article provides instructions on joining your Azure-SSIS IR to a virtual network.
-## Next steps
+## Related content
See the following articles for monitoring pipelines in different ways: - [Quickstart: create a data factory](quickstart-create-data-factory-dot-net.md).
data-factory Monitor Logs Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-logs-rest.md
https://management.azure.com/{resource-id}/providers/microsoft.insights/diagnost
``` For more information, see [Diagnostic settings](/rest/api/monitor/diagnosticsettings).
-## Next steps
+## Related content
[Monitor SSIS operations with Azure Monitor](monitor-ssis.md)
data-factory Monitor Managed Virtual Network Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-managed-virtual-network-integration-runtime.md
By implementing either of these solutions, you can enhance the performance of yo
:::image type="content" source="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png" alt-text="Screenshot of an intermittent activity scenario for an integration runtime within a managed virtual network." lightbox="media\monitor-managed-virtual-network-integration-runtime\monitor-managed-virtual-network-integration-runtime-intermittent-activity.png":::
-## Next steps
+## Related content
Advance to the following article to learn about managed virtual networks and managed private endpoints: [Azure Data Factory managed virtual network](managed-virtual-network-private-endpoint.md).
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Sign in to the Azure portal, and select **Monitor** > **Alerts** to create alert
:::image type="content" source="media/monitor-using-azure-monitor/alerts_image12.png" lightbox="media/monitor-using-azure-monitor/alerts_image12.png" alt-text="Screenshot that shows defining an action group.":::
-## Next steps
+## Related content
[Configure diagnostics settings and workspace](monitor-configure-diagnostics.md)
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md
For a complete walk-through of creating and monitoring a pipeline using PowerShe
For complete documentation on PowerShell cmdlets, see [Data Factory PowerShell cmdlet reference](/powershell/module/az.datafactory).
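As an illustrative sketch (resource names are placeholders), you can query recent pipeline runs with the Az.DataFactory monitoring cmdlets:

```powershell
# Sketch: list pipeline runs updated in the last 24 hours and show a summary.
# Resource group and factory names are placeholders.
$runs = Get-AzDataFactoryV2PipelineRun -ResourceGroupName "<resource group>" `
    -DataFactoryName "<data factory>" `
    -LastUpdatedAfter (Get-Date).AddDays(-1) `
    -LastUpdatedBefore (Get-Date)

$runs | Select-Object PipelineName, RunId, Status, RunStart, RunEnd
```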
-## Next steps
+## Related content
See [Monitor pipelines using Azure Monitor](monitor-using-azure-monitor.md) article to learn about using Azure Monitor to monitor Data Factory pipelines.
data-factory Monitor Schema Logs Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-schema-logs-events.md
Log Analytics inherits the schema from Azure Monitor with the following exceptio
| $.properties.SystemParameters | SystemParameters | Dynamic | | $.properties.Tags | Tags | Dynamic |
-## Next steps
+## Related content
[Monitor programmatically using SDKs](monitor-programmatically.md)
data-factory Monitor Shir In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-shir-in-azure.md
Performance counters in Windows and Linux provide insight into the performance o
When a deployment requires a more in-depth level of analysis or has reached a certain scale, it becomes impractical to log on locally to each Self Hosted Integration Runtime host. Therefore, we recommend using Azure Monitor and Azure Log Analytics specifically to collect that data and enable single-pane-of-glass monitoring for your Self Hosted Integration Runtimes. See the article on [Configuring the SHIR for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md) for instructions on how to instrument your Self Hosted Integration Runtimes for Azure Monitor.
-## Next Steps
+## Related content
- [How to configure SHIR for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md) - [Review integration runtime concepts in Azure Data Factory.](concepts-integration-runtime.md)
data-factory Monitor Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-ssis.md
When querying SSIS package execution logs in Log Analytics, you can join them u
:::image type="content" source="media/data-factory-monitor-oms/log-analytics-query2.png" alt-text="Querying SSIS package execution logs on Log Analytics":::
-## Next steps
+## Related content
[Schema of logs and events](monitor-schema-logs-events.md)
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-using-azure-monitor.md
Data Factory stores pipeline-run data for only 45 days. Use Azure Monitor if you
- You want to monitor across data factories. You can route data from multiple data factories to a single Monitor workspace. * **Partner Solution:** Diagnostic logs can be sent to partner solutions through integration. For potential partner integrations, see the [partner integration overview](../partner-solutions/overview.md). You can also use a storage account or event-hub namespace that isn't in the subscription of the resource that emits logs. The user who configures the setting must have appropriate Azure role-based access control (Azure RBAC) access to both subscriptions.
-## Next steps
+## Related content
- [Azure Data Factory metrics and alerts](monitor-metrics-alerts.md) - [Monitor and manage pipelines programmatically](monitor-programmatically.md)
data-factory Monitor Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-visually.md
For a seven-minute introduction and demonstration of this feature, watch the fol
:::image type="content" source="media/monitor-visually/create-alert-rule.png" alt-text="Screenshot of options for creating an alert rule.":::
-## Next steps
+## Related content
To learn about monitoring and managing pipelines, see the [Monitor and manage pipelines programmatically](./monitor-programmatically.md) article.
data-factory Naming Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/naming-rules.md
The following table provides naming rules for Data Factory artifacts.
| Resource Group |Unique across Microsoft Azure. Names are case-insensitive. | For more info, see [Azure naming rules and restrictions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#resource-naming). | | Pipeline parameters & variable |Unique within the pipeline. Names are case-insensitive. | <ul><li>Validation of parameter names and variable names is limited to uniqueness for backward compatibility reasons.</li><li>When you use parameters or variables to reference entity names, for example a linked service name, the entity naming rules apply.</li><li>A good practice is to follow data flow transformation naming rules to name your pipeline parameters and variables.</li></ul> |
-## Next steps
+## Related content
Learn how to create data factories by following step-by-step instructions in [Quickstart: create a data factory](quickstart-create-data-factory-powershell.md) article.
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameters-data-flow.md
For example, if you wanted to map a string column based upon a parameter `column
> [!NOTE] > In data flow expressions, string interpolation (substituting variables inside of the string) isn't supported. Instead, concatenate the expression into string values. For example, `'string part 1' + $variable + 'string part 2'`
-## Next steps
+## Related content
* [Execute data flow activity](control-flow-execute-data-flow-activity.md) * [Control flow expressions](control-flow-expression-language-functions.md)
data-factory Password Change Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/password-change-airflow.md
We recommend using **Microsoft Entra ID** authentication in Managed Airflow envi
:::image type="content" source="media/password-change-airflow/password-change-airflow.png" alt-text="Screenshot showing how to change an Airflow password in the integration runtime settings.":::
-## Next steps
+## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Failure type is user configuration issue. String of parameters, instead of Array
In the **Execute Pipeline** activity, pass the pipeline parameter as *@createArray('a','b')*, for example, if you want to pass the parameters 'a' and 'b'. If you want to pass numbers, use *@createArray(1,2,3)*, for example. Use the createArray function to force parameters to be passed as an array.
-## Next steps
+## Related content
For more troubleshooting help, try these resources:
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
Budgets can be created with filters for specific resources or services in Azure
You can also [export your cost data](../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you or others need to do further data analysis on costs. For example, finance teams can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets.
-## Next steps
+## Related content
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
the link in the **Version** column to view the source on the
[!INCLUDE [azure-policy-reference-rp-datafactory](../../includes/policy/reference/byrp/microsoft.datafactory.md)]
-## Next steps
+## Related content
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
The prices used in the following examples are hypothetical and don't intend to i
- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
-## Next steps
+## Related content
Now that you understand the pricing for Azure Data Factory, you can get started! - [Create a data factory by using the Azure Data Factory UI](quickstart-create-data-factory-portal.md)
data-factory Pricing Examples Copy Transform Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-azure-databricks.md
To accomplish the scenario, you need to create a pipeline with the following ite
:::image type="content" source="media/pricing-concepts/scenario-2-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for a copy data and transform with Azure Databricks scenario." lightbox="media/pricing-concepts/scenario-2-pricing-calculator.png":::
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
data-factory Pricing Examples Copy Transform Dynamic Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-copy-transform-dynamic-parameters.md
To accomplish the scenario, you need to create a pipeline with the following ite
:::image type="content" source="media/pricing-concepts/scenario-3-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for a copy data and transform with dynamic parameters scenario." lightbox="media/pricing-concepts/scenario-3-pricing-calculator.png":::
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
data-factory Pricing Examples Data Integration Managed Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-data-integration-managed-vnet.md
To accomplish the scenario, you need to create two pipelines with the following
:::image type="content" source="media/pricing-concepts/scenario-5-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for data integration with Managed VNET." lightbox="media/pricing-concepts/scenario-5-pricing-calculator.png":::
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
data-factory Pricing Examples Get Delta Data From Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-get-delta-data-from-sap-ecc.md
Assuming every time it requires 15 minutes to complete the job, the cost estimat
:::image type="content" source="media/pricing-concepts/scenario-6-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for getting delta data from SAP ECC via SAP CDC in mapping data flows." lightbox="media/pricing-concepts/scenario-6-pricing-calculator.png":::
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
data-factory Pricing Examples Mapping Data Flow Debug Workday https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-mapping-data-flow-debug-workday.md
A data factory engineer is responsible for designing, building, and testing mapp
**8 (hours) x 8 (compute-optimized cores) x $0.193 = $12.35**
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
data-factory Pricing Examples S3 To Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-s3-to-blob.md
To accomplish the scenario, you need to create a pipeline with the following ite
:::image type="content" source="media/pricing-concepts/scenario-1-pricing-calculator.png" alt-text="Screenshot of the pricing calculator configured for an hourly pipeline run." lightbox="media/pricing-concepts/scenario-1-pricing-calculator.png":::
-## Next steps
+## Related content
- [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md) - [Pricing example: Copy data and transform with dynamic parameters hourly for 30 days](pricing-examples-copy-transform-dynamic-parameters.md)
data-factory Pricing Examples Ssis On Azure Ssis Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-ssis-on-azure-ssis-integration-runtime.md
In the above example, if you keep your Azure-SSIS IR running for 2 hours, using
To manage your Azure-SSIS IR running cost, you can scale down your VM size, scale in your cluster size, or bring your own SQL Server license via the Azure Hybrid Benefit (AHB) option, which offers significant savings; see [Azure-SSIS IR pricing](https://azure.microsoft.com/pricing/details/data-factory/ssis/). You can also start and stop your Azure-SSIS IR whenever convenient, on demand, or just in time to process your SSIS workloads; see [Reconfigure Azure-SSIS IR](manage-azure-ssis-integration-runtime.md#to-reconfigure-an-azure-ssis-ir) and [Schedule Azure-SSIS IR](how-to-schedule-azure-ssis-integration-runtime.md).
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
To accomplish the scenario, you need to create a pipeline with the following ite
:::image type="content" source="media/pricing-concepts/scenario-4-pricing-calculator.png" alt-text="Screenshot of the data flow section of the pricing calculator configured to transform data in a blob store with mapping data flows." lightbox="media/pricing-concepts/scenario-4-pricing-calculator.png":::
-## Next steps
+## Related content
- [Pricing example: Copy data from AWS S3 to Azure Blob storage hourly for 30 days](pricing-examples-s3-to-blob.md) - [Pricing example: Copy data and transform with Azure Databricks hourly for 30 days](pricing-examples-copy-transform-azure-databricks.md)
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-azure-cli.md
In this quickstart, you created the following JSON files:
Delete them by using standard Bash commands.
-## Next steps
+## Related content
- [Pipelines and activities in Azure Data Factory](concepts-pipelines-activities.md) - [Linked services in Azure Data Factory](concepts-linked-services.md)
data-factory Quickstart Create Data Factory Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-bicep.md
You can also use the Azure portal to delete the resource group.
1. Select **Delete resource group**. 1. A tab will appear. Enter the resource group name and select **Delete**.
-## Next steps
+## Related content
In this quickstart, you created an Azure Data Factory using Bicep and validated the deployment. To learn more about Azure Data Factory and Bicep, continue on to the articles below.
data-factory Quickstart Create Data Factory Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-dot-net.md
Console.WriteLine("Deleting the data factory");
client.Factories.Delete(resourceGroup, dataFactoryName); ```
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in an Azure blob storage. Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
data-factory Quickstart Create Data Factory Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-powershell.md
$RunId = Invoke-AzDataFactoryV2Pipeline `
[!INCLUDE [data-factory-quickstart-verify-output-cleanup.md](includes/data-factory-quickstart-verify-output-cleanup.md)]
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in an Azure blob storage. Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
data-factory Quickstart Create Data Factory Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-python.md
To delete the data factory, add the following code to the program:
adf_client.factories.delete(rg_name, df_name) ```
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in an Azure blob storage. Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
If you want to delete just the data factory, and not the entire resource group,
Remove-AzDataFactoryV2 -Name $dataFactoryName -ResourceGroupName $resourceGroupName ```
-## Next steps
+## Related content
In this quickstart, you created an Azure Data Factory using an ARM template and validated the deployment. To learn more about Azure Data Factory and Azure Resource Manager, continue on to the articles below.
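For the cleanup step excerpted in this entry, a minimal sketch of the two options, with placeholder names and assuming a signed-in Az session: remove only the data factory, or remove the whole resource group.

```powershell
# Placeholder names; adjust to your own deployment.
$resourceGroupName = "exampleRG"
$dataFactoryName   = "exampleDataFactory"

# Remove only the data factory and keep the rest of the resource group.
Remove-AzDataFactoryV2 -Name $dataFactoryName -ResourceGroupName $resourceGroupName -Force

# Or remove the entire resource group, including every resource it contains.
# Remove-AzResourceGroup -Name $resourceGroupName -Force
```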
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
Run the following command to delete only the data factory:
Remove-AzDataFactoryV2 -Name "<NameOfYourDataFactory>" -ResourceGroupName "<NameOfResourceGroup>" ```
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in an Azure blob storage. Go through the [tutorials](tutorial-copy-data-dot-net.md) to learn about using Data Factory in more scenarios.
data-factory Quickstart Create Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory.md
A quick creation experience provided in the Azure Data Factory Studio to enable
> [!NOTE] > If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again.
-## Next steps
+## Related content
Learn how to use Azure Data Factory to copy data from one location to another with the [Hello World - How to copy data](quickstart-hello-world-copy-data-tool.md) tutorial. Learn how to [create a data flow with Azure Data Factory](data-flow-create.md).
data-factory Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md
All of the resources referenced above will be created in the new resource group,
You can clean up all the resources you created in this quickstart in either of two ways. You can [delete the entire Azure resource group](../azure-resource-manager/management/delete-resource-group.md), which includes all the resources created in it. Or if you want to keep some resources intact, browse to the resource group and delete only the specific resources you want, keeping the others. For example, if you are using this template to create a data factory for use in another tutorial, you can delete the other resources but keep only the data factory.
-## Next steps
+## Related content
In this quickstart, you created an Azure Data Factory containing a pipeline with a copy activity. To learn more about Azure Data Factory, continue on to the article and Learn module below.
data-factory Quickstart Hello World Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-hello-world-copy-data-tool.md
The steps below will walk you through how to easily copy data with the copy data
1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **Activity name** column for more details about copy operation. For details about the properties, see [Copy Activity overview](copy-activity-overview.md).
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in Azure Blob storage. To learn about using Data Factory in more scenarios, go through the [tutorials](tutorial-copy-data-portal.md).
data-factory Quickstart Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-learn-modules.md
This module introduces you to the details of Azure Data Factory's ingestion meth
:::image type="content" source="media/quickstart-learn-modules/petabyte-scale-ingestion.png" alt-text="Screenshot showing the Petabyte-scale ingestion with Azure Data Factory module start page.":::
-## Next steps
+## Related content
- [Quickstart: Get started with Azure Data Factory](quickstart-get-started.md) - [Quickstart: Create data factory using UI](quickstart-create-data-factory-portal.md)
data-factory Sap Change Data Capture Debug Shir Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-debug-shir-logs.md
After you've uploaded and sent your self-hosted integration runtime logs, contac
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-diagnostics-report-id.png" alt-text="Screenshot of the self-hosted integration runtime's diagnostic log confirmation, with Report ID and Timestamp highlighted.":::
-## Next steps
+## Related content
[SAP CDC (Change Data Capture) Connector](connector-sap-change-data-capture.md)
data-factory Sap Change Data Capture Introduction Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-introduction-architecture.md
In this process, the SAP data sources are *providers*. The providers run on SAP
Because ODP completely decouples providers from subscribers, any SAP documentation that offers provider configurations is applicable to Data Factory as a subscriber. For more information about ODP, see [Introduction to operational data provisioning](https://wiki.scn.sap.com/wiki/display/BI/Introduction+to+Operational+Data+Provisioning).
-## Next steps
+## Related content
[Prerequisites and setup for the SAP CDC solution](sap-change-data-capture-prerequisites-configuration.md)
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
In the subscription, a list of requests corresponds to mapping data flow runs in
Based on the timestamp in the first row, find the line that corresponds to the mapping data flow run you want to analyze. If the number of rows shown equals the number of rows read by the mapping data flow, you've verified that Data Factory has read and transferred the data as provided by the SAP system. In this scenario, we recommend that you consult with the team that's responsible for your SAP system.
-## Next steps
+## Related content
Learn more about [SAP connectors](industry-sap-connectors.md).
data-factory Sap Change Data Capture Prepare Linked Service Source Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prepare-linked-service-source-dataset.md
To set up an SAP CDC linked service:
To set up a mapping data flow using the SAP CDC dataset as a source, follow [Transform data with the SAP CDC connector](connector-sap-change-data-capture.md#transform-data-with-the-sap-cdc-connector)
-## Next steps
+## Related content
[Debug the SAP CDC connector by sending self-hosted integration runtime logs](sap-change-data-capture-debug-shir-logs.md)
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
The following SAP support notes resolve known issues on SAP systems:
- [3038236 - To resolve CDS view extractions that fail to populate ODQ](https://launchpad.support.sap.com/#/notes/3038236) - [3076927 - To remove unsupported callbacks when extracting from SAP BW or BW/4HANA](https://launchpad.support.sap.com/#/notes/3076927)
-## Next steps
+## Related content
[Set up a self-hosted integration runtime for your SAP CDC solution](sap-change-data-capture-shir-preparation.md)
data-factory Sap Change Data Capture Shir Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-shir-preparation.md
yyy.yyy.yyy.yyy sapbw01
zzz.zzz.zzz.zzz sapnw01 ```
-## Next steps
+## Related content
[Set up an SAP CDC linked service and source dataset](sap-change-data-capture-prepare-linked-service-source-dataset.md)
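The host-file entries excerpted in this entry map the SAP servers' IP addresses to host names on the self-hosted integration runtime machine. A minimal sketch for appending such entries from an elevated PowerShell session follows; the addresses and host names are placeholders:

```powershell
# Run in an elevated PowerShell session on the self-hosted integration runtime machine.
$hostsFile = "$env:windir\System32\drivers\etc\hosts"

# Placeholder addresses and host names; use your SAP system's values.
$entries = @(
    "10.0.0.11 sapbw01",
    "10.0.0.12 sapnw01"
)

# Append only the entries that aren't already present.
$existing = Get-Content -Path $hostsFile
foreach ($entry in $entries) {
    if ($existing -notcontains $entry) {
        Add-Content -Path $hostsFile -Value $entry
    }
}
```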
data-factory Scenario Dataflow Process Data Aml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-dataflow-process-data-aml-models.md
Let's look back at the entire pipeline logic.
:::image type="content" source="./media/scenario-dataflow-process-data-aml-models/entire-pipeline.png" alt-text="Screenshot that shows the logic of the entire pipeline.":::
-## Next steps
+## Related content
Build the rest of your data flow logic by using mapping data flow [transformations](concepts-data-flow-overview.md).
data-factory Scenario Ssis Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scenario-ssis-migration-overview.md
It is also a practical way to use [SSIS DevOps Tools](/sql/integration-services/
- [Configure the Azure-SSIS Integration Runtime for high performance](configure-azure-ssis-integration-runtime-performance.md) - [How to start and stop Azure-SSIS Integration Runtime on a schedule](how-to-schedule-azure-ssis-integration-runtime.md)
-## Next steps
+## Related content
- [Validate SSIS packages deployed to Azure](/sql/integration-services/lift-shift/ssis-azure-validate-packages) - [Run SSIS packages deployed in Azure](/sql/integration-services/lift-shift/ssis-azure-run-packages)
data-factory Bulk Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/bulk-copy-powershell.md
This script uses the following commands:
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
-## Next steps
+## Related content
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
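The PowerShell sample scripts in this group share a create-then-clean-up skeleton around the commands listed above. A minimal sketch of that skeleton, with placeholder names and assuming a signed-in Az session:

```powershell
# Placeholder names and region; data factory names must be globally unique.
$resourceGroupName = "ADFScriptRG"
$dataFactoryName   = "ADFScriptFactory$(Get-Random)"
$location          = "EastUS"

# Create the resource group that holds the factory and related resources.
New-AzResourceGroup -Name $resourceGroupName -Location $location

# Create the data factory itself.
Set-AzDataFactoryV2 -ResourceGroupName $resourceGroupName -Name $dataFactoryName -Location $location

# The scripts' cleanup step removes everything at once:
# Remove-AzResourceGroup -Name $resourceGroupName
```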
data-factory Copy Azure Blob Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/copy-azure-blob-powershell.md
This script uses the following commands:
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
-## Next steps
+## Related content
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
data-factory Deploy Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/deploy-azure-ssis-integration-runtime-powershell.md
This script uses the following commands:
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
-## Next steps
+## Related content
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
data-factory Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/hybrid-copy-powershell.md
This script uses the following commands:
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
-## Next steps
+## Related content
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
data-factory Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/incremental-copy-powershell.md
This script uses the following commands:
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
-## Next steps
+## Related content
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
data-factory Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/transform-data-spark-powershell.md
This script uses the following commands:
| [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Deletes a resource group including all nested resources. | |||
-## Next steps
+## Related content
For more information on the Azure PowerShell, see [Azure PowerShell documentation](/powershell/).
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/security-and-access-control-troubleshoot-guide.md
You might notice other data factories (on different tenants) as you're attemptin
The self-hosted IR can't be shared across tenants.
-## Next steps
+## Related content
For more help with troubleshooting, try the following resources:
data-factory Self Hosted Integration Runtime Auto Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-auto-update.md
If you have multiple nodes, and for some reasons that some of them aren't auto-u
## Self-hosted Integration Runtime Expire Notification If you want to manually control which version of the self-hosted integration runtime to use, you can disable auto-update and install the version manually. Each version of the self-hosted integration runtime expires in one year. The expiration message is shown in the ADF portal and in the self-hosted integration runtime client **90 days** before expiration.
-## Next steps
+## Related content
- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md). - Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
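To see which runtime version and auto-update setting a self-hosted integration runtime is currently using, one option is the detailed status from PowerShell. A minimal sketch, with placeholder names and assuming the Az.DataFactory module:

```powershell
# Placeholder names; adjust to your own deployment.
$resourceGroupName      = "exampleRG"
$dataFactoryName        = "exampleDataFactory"
$integrationRuntimeName = "exampleSelfHostedIR"

# The detailed status includes the installed version and auto-update state of the nodes.
Get-AzDataFactoryV2IntegrationRuntime `
    -ResourceGroupName $resourceGroupName `
    -DataFactoryName $dataFactoryName `
    -Name $integrationRuntimeName `
    -Status | Format-List
```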
data-factory Self Hosted Integration Runtime Diagnostic Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-diagnostic-tool.md
The execution result and detailed log messages are generated as an HTML report. You
:::image type="content" source="./media/self-hosted-integration-runtime-diagnostic-tool/diagnostic-report.png" alt-text="Screenshot that shows the diagnostic result report.":::
-## Next steps
+## Related content
- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md).
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
If you need to access data stores that have been configured to use only the stro
- Changing variable values of type object in on-premises staging tasks won't be reflected in other tasks. - *ParameterMapping* in OLEDB Source is currently unsupported. As a workaround, please use *SQL Command From Variable* as the *AccessMode* and use *Expression* to insert your variables/parameters in a SQL command. As an illustration, see the *ParameterMappingSample.dtsx* package that can be found in the *SelfHostedIRProxy/Limitations* folder of our public preview blob container. Using Azure Storage Explorer, you can connect to our public preview blob container by entering the above SAS URI.
-## Next steps
+## Related content
After you've configured your self-hosted IR as a proxy for your Azure-SSIS IR, you can deploy and run your packages to access data and or run any SQL statements/processes on premises as Execute SSIS Package activities in Data Factory pipelines. To learn how, see [Run SSIS packages as Execute SSIS Package activities in Data Factory pipelines](./how-to-invoke-ssis-package-ssis-activity.md). See also our blogs: [Run Any SQL Anywhere in 3 Easy Steps with SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-sql-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2457244) and [Run Any Process Anywhere in 3 Easy Steps with SSIS in Azure Data Factory](https://techcommunity.microsoft.com/t5/sql-server-integration-services/run-any-process-anywhere-in-3-easy-steps-with-ssis-in-azure-data/ba-p/2962609).
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
How to determine whether you're affected:
If it isn't in the trusted root CA, [download it here](http://cacerts.digicert.com/DigiCertGlobalRootG2.crt).
-## Next steps
+## Related content
For more help with troubleshooting, try the following resources:
data-factory Solution Template Bulk Copy From Files To Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-bulk-copy-from-files-to-database.md
The template defines the following two parameters:
:::image type="content" source="media/solution-template-bulk-copy-from-files-to-database/run-succeeded.png" alt-text="Review the result":::
-## Next steps
+## Related content
- [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Bulk Copy With Control Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-bulk-copy-with-control-table.md
The last three parameters, which define the path in your destination store are o
:::image type="content" source="mediB_with_ControlTable9.png" alt-text="Screenshot showing the Polybase setting.":::
-## Next steps
+## Related content
- [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Copy Files Multiple Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-files-multiple-containers.md
If you want to copy multiple containers under root folders between storage store
:::image type="content" source="media/solution-template-copy-files-multiple-containers/copy-files-multiple-containers-image-6.png" alt-text="Review the result":::
-## Next steps
+## Related content
- [Bulk copy from a database by using a control table with Azure Data Factory](solution-template-bulk-copy-with-control-table.md)
data-factory Solution Template Copy New Files Last Modified Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-copy-new-files-last-modified-date.md
The template defines six parameters:
:::image type="content" source="media/solution-template-copy-new-files-last-modified-date/copy-new-files-last-modified-date-15.png" alt-text="Screenshot that shows the results that return when the pipeline is triggered.":::
-## Next steps
+## Related content
- [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-databricks-notebook.md
In the new pipeline, most settings are configured automatically with default val
> For correlating with Data Factory pipeline runs, this example appends the pipeline run ID from the data factory to the output folder. This helps keep track of files generated by each run. > :::image type="content" source="media/solution-template-Databricks-notebook/verify-data-files.png" alt-text="Appended pipeline run ID":::
-## Next steps
+## Related content
- [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Delta Copy With Control Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-delta-copy-with-control-table.md
The template defines the following parameters:
:::image type="content" source="mediB_with_ControlTable15.png" alt-text="Screenshot showing where to configure Polybase.":::
-## Next steps
+## Related content
- [Bulk copy from a database by using a control table with Azure Data Factory](solution-template-bulk-copy-with-control-table.md) - [Copy files from multiple containers with Azure Data Factory](solution-template-copy-files-multiple-containers.md)
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
This template defines 4 parameters:
:::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-7.png" alt-text="Screenshot of the results that return when the pipeline is triggered.":::
-## Next steps
+## Related content
- [What's New in Azure Data Factory](whats-new.md) - [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Migration S3 Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-migration-s3-azure.md
The template contains two parameters:
:::image type="content" source="media/solution-template-migration-s3-azure/delta-migration-s3-azure-6.png" alt-text="Screenshot that shows the results from the control table after you run the query.":::
-## Next steps
+## Related content
- [Copy files from multiple containers](solution-template-copy-files-multiple-containers.md) - [Move files](solution-template-move-files.md)
data-factory Solution Template Move Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-move-files.md
The template defines four parameters:
:::image type="content" source="media/solution-template-move-files/move-files6.png" alt-text="Screenshot showing the result of the pipeline run.":::
-## Next steps
+## Related content
- [Copy new and changed files by LastModifiedDate with Azure Data Factory](solution-template-copy-new-files-lastmodifieddate.md)
data-factory Solution Template Pii Detection And Masking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-pii-detection-and-masking.md
This template defines 3 parameters:
:::image type="content" source="media/solution-template-pii-detection-and-masking/pii-detection-and-masking-10.png" alt-text="Screenshot of the results that return after the pipeline is triggered.":::
-## Next steps
+## Related content
- [What's New in Azure Data Factory](whats-new.md) - [Introduction to Azure Data Factory](introduction.md)
data-factory Solution Template Replicate Multiple Objects Sap Cdc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-replicate-multiple-objects-sap-cdc.md
A sample control file is as below:
:::image type="content" source="media/solution-template-replicate-multiple-objects-sap-cdc/sap-cdc-template-pipeline.png" alt-text="Screenshot of SAP CDC pipeline.":::
-## Next steps
+## Related content
- [Azure Data Factory SAP CDC](sap-change-data-capture-introduction-architecture.md) - [SAP CDC advanced topics](sap-change-data-capture-advanced-topics.md)
data-factory Solution Template Synapse Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-synapse-notebook.md
Review the configurations of your pipeline and make any necessary changes.
:::image type="content" source="media/solution-template-synapse-notebook/fail-activity.png" alt-text="Fail pipeline":::
-## Next steps
+## Related content
- [Overview of templates](solution-templates-introduction.md)
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
After you remove the association with the current repo, you can configure your G
> [!IMPORTANT] > Removing Git configuration from a data factory doesn't delete anything from the repository. The factory will contain all published resources. You can continue to edit the factory directly against the service.
-## Next steps
+## Related content
* To learn more about monitoring and managing pipelines, see [Monitor and manage pipelines programmatically](monitor-programmatically.md). * To implement continuous integration and deployment, see [Continuous integration and delivery (CI/CD) in Azure Data Factory](continuous-integration-delivery.md).
data-factory Ssis Azure Connect With Windows Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-connect-with-windows-auth.md
To access a file share in Azure Files from packages running in Azure, do the fol
catalog.set_execution_credential @domain = N'Azure', @user = N'<storage-account-name>', @password = N'<storage-account-key>' ```
-## Next steps
+## Related content
- Deploy your packages. For more info, see [Deploy an SSIS project to Azure with SSMS](/sql/integration-services/ssis-quickstart-deploy-ssms). - Run your packages. For more info, see [Run SSIS packages in Azure with SSMS](/sql/integration-services/ssis-quickstart-run-ssms).
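The `catalog.set_execution_credential` call excerpted in this entry takes the storage account name as `@user` and an account key as `@password`. A minimal PowerShell sketch for looking up those values, with placeholder names and assuming the Az.Storage module:

```powershell
# Placeholder names; adjust to your own deployment.
$resourceGroupName  = "exampleRG"
$storageAccountName = "examplestorageacct"

# @domain is the literal 'Azure', @user is the account name,
# and @password is one of the account keys.
$storageAccountKey = (Get-AzStorageAccountKey `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName)[0].Value
```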
data-factory Ssis Azure Files File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-azure-files-file-shares.md
To use **Azure Files** when you lift and shift packages that use local file syst
3. Update local file paths in your packages to UNC paths pointing to Azure Files. For example, update `C:\abc.txt` to `\\<storage-account-name>.file.core.windows.net\<share-name>\abc.txt`.
-## Next steps
+## Related content
- Deploy your packages. For more info, see [Deploy an SSIS project to Azure with SSMS](/sql/integration-services/ssis-quickstart-deploy-ssms). - Run your packages. For more info, see [Run SSIS packages in Azure with SSMS](/sql/integration-services/ssis-quickstart-run-ssms).
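After rewriting local paths to UNC paths as in step 3 of this entry, it can help to confirm that the share is reachable from the machine that runs the packages. A minimal sketch with placeholder names, assuming the machine already has credentials for the file share available (for example, cached with cmdkey):

```powershell
# Placeholder values; replace with your storage account and share names.
$storageAccountName = "examplestorageacct"
$shareName          = "exampleshare"

$uncPath = "\\$storageAccountName.file.core.windows.net\$shareName\abc.txt"

# Returns $true if the file is reachable with the credentials available on this machine.
Test-Path -Path $uncPath
```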
data-factory Ssis Integration Runtime Diagnose Connectivity Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/ssis-integration-runtime-diagnose-connectivity-faq.md
Use the following sections to learn about the most common errors that occur when
- **Potential cause**: Transient network issue. - **Recommendation**: Check whether the server or firewall network is stable.
-## Next steps
+## Related content
- [Migrate SSIS jobs with SSMS](how-to-migrate-ssis-job-ssms.md) - [Run SSIS packages in Azure with SSDT](how-to-invoke-ssis-package-ssdt.md)
data-factory Store Credentials In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/store-credentials-in-key-vault.md
Select **Azure Key Vault** for secret fields while creating the connection to yo
} ```
-## Next steps
+## Related content
For a list of data stores supported as sources and sinks by the copy activity in Azure Data Factory, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Supported File Formats And Compression Codecs Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs-legacy.md
You can see a sample that uses an Azure function to [extract the contents of a t
You can also build this functionality using a custom dotnet activity. Further information is available [here](./transform-data-using-dotnet-custom-activity.md)
-## Next steps
+## Related content
Learn the latest supported file formats and compressions from [Supported file formats and compressions](supported-file-formats-and-compression-codecs.md).
data-factory Supported File Formats And Compression Codecs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/supported-file-formats-and-compression-codecs.md
In addition, you can also parse or generate files of a given format. For example
* Copy data in Gzip compressed-text (CSV) format from Azure Blob storage and write it to Azure SQL Database. * Many more activities that require serialization/deserialization or compression/decompression.
-## Next steps
+## Related content
See the other Copy Activity articles:
data-factory Transform Data Databricks Jar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-databricks-jar.md
For more information, see the [Databricks documentation](/azure/databricks/dev-t
As an example, to copy a JAR to dbfs: `dbfs cp SparkPi-assembly-0.1.jar dbfs:/docs/sparkpi.jar`
-## Next steps
+## Related content
For an eleven-minute introduction and demonstration of this feature, watch the [video](/Shows/Azure-Friday/Execute-Jars-and-Python-scripts-on-Azure-Databricks-using-Data-Factory/player).
data-factory Transform Data Machine Learning Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-machine-learning-service.md
continueOnStepFailure | Whether to continue execution of other steps in the Mach
> [!NOTE] > To populate the dropdown items in Machine Learning pipeline name and ID, the user needs to have permission to list ML pipelines. The UI calls AzureMLService APIs directly using the logged in user's credentials.
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [Execute Data Flow activity](control-flow-execute-data-flow-activity.md)
data-factory Transform Data Using Custom Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-custom-activity.md
See [Automatically scale compute nodes in an Azure Batch pool](../batch/batch-au
If the pool is using the default [autoScaleEvaluationInterval](/rest/api/batchservice/pool/enableautoscale), the Batch service could take 15-30 minutes to prepare the VM before running the custom activity. If the pool is using a different autoScaleEvaluationInterval, the Batch service could take autoScaleEvaluationInterval + 10 minutes.
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Data Lake Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-data-lake-analytics.md
It is possible to use dynamic parameters instead. For example:
In this case, input files are still picked up from the /datalake/input folder and output files are generated in the /datalake/output folder. The file names are dynamic, based on the window start time that is passed in when the pipeline is triggered.
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [Hive activity](transform-data-using-hadoop-hive.md)
data-factory Transform Data Using Databricks Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-databricks-notebook.md
You can click on the **Job name** and navigate to see further details. On succes
:::image type="content" source="media/transform-data-using-databricks-notebook/databricks-output.png" alt-text="Screenshot showing how to view the run details and output.":::
-## Next steps
+## Related content
The pipeline in this sample triggers a Databricks Notebook activity and passes a parameter to it. You learned how to:
data-factory Transform Data Using Hadoop Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-hive.md
To use an HDInsight Hive activity for Azure Data Lake Analytics in a pipeline, c
>[!NOTE] >The default value for queryTimeout is 120 minutes.
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Hadoop Map Reduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-map-reduce.md
You can use the HDInsight MapReduce Activity to run any MapReduce jar file on an
``` You can specify any arguments for the MapReduce program in the **arguments** section. At runtime, you see a few extra arguments (for example: mapreduce.job.tags) from the MapReduce framework. To differentiate your arguments from the MapReduce arguments, consider using both option and value as arguments as shown in the following example (-s,--input,--output etc., are options immediately followed by their values).
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Hadoop Pig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-pig.md
To use an HDInsight Pig activity to a pipeline, complete the following steps:
| arguments | Specifies an array of arguments for a Hadoop job. The arguments are passed as command-line arguments to each task. | No | | defines | Specify parameters as key/value pairs for referencing within the Pig script. | No |
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Hadoop Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-hadoop-streaming.md
To use an HDInsight Streaming activity to a pipeline, complete the following ste
| arguments | Specifies an array of arguments for a Hadoop job. The arguments are passed as command-line arguments to each task. | No | | defines | Specify parameters as key/value pairs for referencing within the Hive script. | No |
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-machine-learning.md
Let's look at a scenario for using Web service parameters. You have a deployed M
After you are done with retraining, update the scoring web service (predictive experiment exposed as a web service) with the newly trained model by using the **ML Studio (classic) Update Resource Activity**. See [Updating models using Update Resource Activity](update-machine-learning-models.md) article for details.
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Logging options:
> [!NOTE] > **Billing** - The Script activity will be [billed](https://azure.microsoft.com/pricing/details/data-factory/data-pipeline/) as **Pipeline activities**.
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-spark.md
SparkJob2
files ```
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data Using Stored Procedure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-stored-procedure.md
The data type you specify for the parameter is the internal service type that ma
- [Oracle data type mapping](connector-oracle.md#data-type-mapping-for-oracle) - [SQL Server data type mapping](connector-sql-server.md#data-type-mapping-for-sql-server)
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL Activity](transform-data-using-data-lake-analytics.md)
data-factory Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data.md
You create a linked service for the compute environment and then use the linked
See [Compute Linked Services](compute-linked-services.md) article to learn about supported compute services.
-## Next steps
+## Related content
See the following tutorial for an example of using a transformation activity: [Tutorial: transform data using Spark](tutorial-transform-data-spark-powershell.md)
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md
Transparent boxes show the dependency windows for each down stream-dependent tri
To rerun a window in Gantt chart view, select the solid color box for the window, and an action panel pops up with details and rerun options.
-## Next steps
+## Related content
* Review [How to create a tumbling window trigger](how-to-create-tumbling-window-trigger.md)
data-factory Tutorial Bulk Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy-portal.md
Here are the steps to create the pipeline:
1. Confirm that the data was copied to the target Azure Synapse Analytics you used in this tutorial.
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
data-factory Tutorial Bulk Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-bulk-copy.md
This pipeline performs two steps:
3. Connect to your sink Azure Synapse Analytics and confirm that data has been copied from Azure SQL Database properly.
-## Next steps
+## Related content
You performed the following steps in this tutorial:
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow-portal.md
In this step, you create a pipeline with one Copy activity and two Web activitie
:::image type="content" source="./media/tutorial-control-flow-portal/activity-run-error.png" alt-text="Activity run error":::
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
data-factory Tutorial Control Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow.md
Checking copy activity run details...
Press any key to exit... ```
-## Next steps
+## Related content
You did the following tasks in this tutorial:
data-factory Tutorial Copy Data Dot Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-dot-net.md
Checking copy activity run details...
Press any key to exit... ```
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in an Azure blob storage. You learned how to:
data-factory Tutorial Copy Data Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-copy-data-portal.md
In this schedule, you create a schedule trigger for the pipeline. The trigger ru
1. Verify that two rows per minute (for each pipeline run) are inserted into the **emp** table until the specified end time.
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in Blob storage. You learned how to: > [!div class="checklist"]
data-factory Tutorial Data Flow Adventure Works Retail Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-adventure-works-retail-template.md
If the pipeline fails to run successfully, there's a few main things to check fo
* Data flow sources. If you used different column or table names than what were provided in the example schema, you'll need to step through the data flows to verify that the columns are mapped correctly. * Data flow sink. The schema and data format configurations on the target database will need to match the data flow template. As above, if any changes were made, those items will need to be aligned.
-## Next steps
+## Related content
* Learn more about [mapping data flows](concepts-data-flow-overview.md). * Learn more about [pipeline templates](solution-templates-introduction.md)
data-factory Tutorial Data Flow Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-delta-lake.md
You will generate two data flows in this tutorial. The first data flow is a simp
### Download completed sample [Here is a sample solution for the Delta pipeline with a data flow for update/delete rows in the lake:](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/DeltaPipeline.zip)
-## Next steps
+## Related content
Learn more about the [data flow expression language](data-transformation-functions.md).
data-factory Tutorial Data Flow Dynamic Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-dynamic-columns.md
Now that you've stored the configuration file contents in memory, you can dynami
:::image type="content" source="media/data-flow/dynacols-2.png" alt-text="Source 2":::
-## Next steps
+## Related content
* The completed pipeline from this tutorial can be downloaded from [here](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/DynaColsPipe.zip) * Learn more about [data flow sinks](data-flow-sink.md).
data-factory Tutorial Data Flow Write To Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow-write-to-lake.md
The techniques listed in the above tutorials are good use cases for creating fol
1. Pick the column that you wish to use for generating file names. 1. To manipulate the data values, or if you need to generate synthetic values for file names, use the Derived Column transformation to create the values you wish to use in your file names.
-## Next steps
+## Related content
Learn more about [data flow sinks](data-flow-sink.md).
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-data-flow.md
You can debug a pipeline before you publish it. In this step, you're going to tr
If you followed this tutorial correctly, you should have written 83 rows and 2 columns into your sink folder. You can verify the data is correct by checking your blob storage.
-## Next steps
+## Related content
The pipeline in this tutorial runs a data flow that aggregates the average rating of comedies from 1910 to 2000 and writes the data to ADLS. You learned how to:
data-factory Tutorial Deploy Ssis Packages Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure-powershell.md
For more SSIS documentation, see:
- [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth) - [Schedule package executions in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages)
-## Next steps
+## Related content
In this tutorial, you learned how to:
data-factory Tutorial Deploy Ssis Packages Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-packages-azure.md
See also the following SSIS documentation:
- [Schedule package executions in Azure](/sql/integration-services/lift-shift/ssis-azure-schedule-packages) - [Connect to on-premises data sources with Windows authentication](/sql/integration-services/lift-shift/ssis-azure-connect-with-windows-auth)
-## Next steps
+## Related content
To learn about customizing your Azure-SSIS integration runtime, advance to the following article:
data-factory Tutorial Deploy Ssis Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-deploy-ssis-virtual-network.md
After you've configured a virtual network, you can join your Azure-SSIS IR to th
1. Start your Azure-SSIS IR by selecting the **Start** button in the **Actions** column for your Azure-SSIS IR. It takes about 5 minutes to start an Azure-SSIS IR that joins a virtual network with the express injection method.
-## Next steps
+## Related content
- [Configure a virtual network to inject Azure-SSIS IR](azure-ssis-integration-runtime-virtual-network-configuration.md) - [Express virtual network injection method](azure-ssis-integration-runtime-express-virtual-network-injection.md)
data-factory Tutorial Hybrid Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-data-tool.md
You use the name and key of your storage account in this tutorial. To get the na
:::image type="content" source="./media/tutorial-hybrid-copy-data-tool/author-tab.png" alt-text="Screenshot that shows the Author tab.":::
-## Next steps
+## Related content
The pipeline in this sample copies data from a SQL Server database to Blob storage. You learned how to: > [!div class="checklist"]
data-factory Tutorial Hybrid Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-portal.md
Select **Add Trigger** on the toolbar for the pipeline, and then select **Trigge
The pipeline automatically creates the output folder named *fromonprem* in the `adftutorial` blob container. Confirm that you see the *[pipeline().RunId].txt* file in the output folder.
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another in Blob storage. You learned how to: > [!div class="checklist"]
data-factory Tutorial Hybrid Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-hybrid-copy-powershell.md
The pipeline automatically creates the output folder named *fromonprem* in the `
:::image type="content" source="media/tutorial-hybrid-copy-powershell/fromonprem-file.png" alt-text="Output file":::
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another in Azure Blob storage. You learned how to: > [!div class="checklist"]
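To confirm that the *fromonprem* output folder mentioned in this entry was written to the `adftutorial` container, one option is to list the blobs from PowerShell. A minimal sketch, with placeholder account details and assuming the Az.Storage module:

```powershell
# Placeholder account details; adjust to your own storage account.
$storageAccountName = "examplestorageacct"
$storageAccountKey  = "<storage-account-key>"

$context = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey

# List the files the pipeline wrote under the fromonprem folder.
Get-AzStorageBlob -Container "adftutorial" -Prefix "fromonprem/" -Context $context |
    Select-Object Name, Length, LastModified
```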
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
You see the second file in the `customers/incremental/YYYY/MM/DD` folder of the
:::image type="content" source="media/tutorial-incremental-copy-change-data-capture-feature-portal/incremental-copy-pipeline-run.png" alt-text="Output file from incremental copy":::
-## Next steps
+## Related content
Advance to the following tutorial to learn about copying new and changed files only based on their LastModifiedDate: > [!div class="nextstepaction"]
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
PersonID Name Age SYS_CHANGE_VERSION SYS_CHANGE_OPERATION
6 new 50 1 I ```
-## Next steps
+## Related content
Advance to the following tutorial to learn about copying only new and changed files, based on `LastModifiedDate`:
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
PersonID Name Age SYS_CHANGE_VERSION SYS_CHANGE_OPERATION
```
-## Next steps
+## Related content
Advance to the following tutorial to learn about copying new and changed files only based on their LastModifiedDate: > [!div class="nextstepaction"]
data-factory Tutorial Incremental Copy Lastmodified Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-lastmodified-copy-data-tool.md
Prepare your Blob storage for the tutorial by completing these steps:
:::image type="content" source="./media/tutorial-incremental-copy-lastmodified-copy-data-tool/monitor-pipeline-runs8.png" alt-text="Scan files by using Azure Storage Explorer":::
-## Next steps
+## Related content
Go to the following tutorial to learn how to transform data by using an Apache Spark cluster on Azure: > [!div class="nextstepaction"]
data-factory Tutorial Incremental Copy Multiple Tables Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md
project_table 2017-10-01 00:00:00.000
Notice that the watermark values for both tables were updated.
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
data-factory Tutorial Incremental Copy Multiple Tables Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-powershell.md
project_table 2017-10-01 00:00:00.000
Notice that the watermark values for both tables were updated.
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
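To check the updated watermark values shown in this entry outside the portal, one option is a quick query from PowerShell. A minimal sketch, with placeholder connection details, assuming the SqlServer module and that the watermark table is named `watermarktable` as in the tutorial's setup:

```powershell
# Placeholder server, database, and credentials; the table name follows the tutorial's setup.
Invoke-Sqlcmd `
    -ServerInstance "exampleserver.database.windows.net" `
    -Database "exampledb" `
    -Username "sqladmin" `
    -Password "<password>" `
    -Query "SELECT TableName, WatermarkValue FROM watermarktable"
```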
data-factory Tutorial Incremental Copy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-overview.md
You can copy new files only, where files or folders has already been time partit
For step-by-step instructions, see the following tutorial: <br/> - [Incrementally copy new files based on time partitioned folder or file name from Azure Blob storage to Azure Blob storage](tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md)
-## Next steps
+## Related content
Advance to the following tutorial: > [!div class="nextstepaction"]
data-factory Tutorial Incremental Copy Partitioned File Name Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-partitioned-file-name-copy-data-tool.md
Prepare your Blob storage for the tutorial by performing these steps.
12. Select the new **DeltaCopyFromBlobPipeline** link for the second pipeline run when it appears, and do the same to review details. You will see that the source file (file2.txt) has been copied from **source/2021/07/15/07/** to **destination/2021/07/15/07/** with the same file name. You can also verify this by using Azure Storage Explorer (https://storageexplorer.com/) to scan the files in the **destination** container.
-## Next steps
+## Related content
Advance to the following tutorial to learn about transforming data by using a Spark cluster on Azure: > [!div class="nextstepaction"]
data-factory Tutorial Incremental Copy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-portal.md
PersonID | Name | LastModifytime
| data_source_table | 2017-09-07 09:01:00.000 | ```
-## Next steps
+## Related content
You performed the following steps in this tutorial:
data-factory Tutorial Incremental Copy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-powershell.md
In this tutorial, you create a pipeline with two Lookup activities, one Copy act
data_source_table | 2017-09-07 09:01:00.000
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
data-factory Tutorial Managed Virtual Network Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-migrate.md
For Azure Data Factory, you can move existing Azure integration runtime directly
## Azure Synapse Analytics For Azure Synapse Analytics, the Azure integration runtime can't be moved directly in an existing workspace. You need to create a new workspace with a managed workspace virtual network. In the new workspace, the Azure integration runtime is in a managed virtual network, and you can reference it in the linked service.
-## Next steps
+## Related content
Advance to the following tutorial to learn about managed virtual network:
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
data factory from the resources list.
Go to the backend server VM and confirm that telnet to the SQL Server works: `telnet <FQDN> 1433`.
-## Next steps
+## Related content
Advance to the following tutorial to learn about accessing Microsoft Azure SQL Managed Instance from Data Factory Managed VNet using Private Endpoint:
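If the telnet client isn't installed on the backend server VM, `Test-NetConnection` is an equivalent reachability check for the SQL Server port; a minimal sketch with a placeholder FQDN:

```powershell
# Placeholder FQDN; checks TCP connectivity to the default SQL Server port.
Test-NetConnection -ComputerName "sqlserver01.contoso.local" -Port 1433
```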
data-factory Tutorial Managed Virtual Network Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-sql-managed-instance.md
link service list.
:::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-mi-3.png" alt-text="Screenshot that shows the SQL MI linked service creation page.":::
-## Next steps
+## Related content
Advance to the following tutorial to learn about accessing on premises SQL Server from Data Factory Managed VNET using Private Endpoint:
data-factory Tutorial Operationalize Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-operationalize-pipelines.md
We understand some pipelines will naturally take more time to finish than others
Follow the steps to set up [Data Factory Alerts](monitor-metrics-alerts.md#data-factory-alerts) on the metric. Your engineers will get notified to intervene and take steps to meet the SLAs, through emails or SMSs.
-## Next steps
+## Related content
[Data Factory metrics and alerts](monitor-metrics-alerts.md)
data-factory Tutorial Pipeline Failure Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-failure-error-handling.md
You can add multiple activities for error handling.
-## Next steps
+## Related content
[Data Factory metrics and alerts](monitor-metrics-alerts.md)
data-factory Tutorial Pipeline Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-return-value.md
You may have multiple Set Pipeline Return value activities in a pipeline. Howeve
To avoid the missing-key situation in the calling pipeline described above, we encourage you to use the same list of keys for all branches in the child pipeline. Consider using the _null_ type for keys that don't have values in a specific branch.
-## Next steps
+## Related content
Learn about another related control flow activity: - [Set Variable Activity](control-flow-set-variable-activity.md) - [Append Variable Activity](control-flow-append-variable-activity.md)
data-factory Tutorial Push Lineage To Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-push-lineage-to-purview.md
On the activity asset, click the Lineage tab, you can see all the lineage inform
> [!NOTE] > For the lineage of Execute SSIS Package activity, we only support source and destination. The lineage for transformation is not supported yet.
-## Next steps
+## Related content
[Catalog lineage user guide](../purview/catalog-lineage-user-guide.md)
data-factory Tutorial Run Existing Pipeline With Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-run-existing-pipeline-with-airflow.md
Data Factory pipelines provide 100+ data source connectors that provide scalable
:::image type="content" source="media/tutorial-run-existing-pipeline-with-airflow/airflow-environment.png" alt-text="Screenshot showing the data factory management tab with the Airflow section selected.":::
-## Next steps
+## Related content
- [Managed Airflow pricing](airflow-pricing.md) - [Changing password for Managed Airflow environments](password-change-airflow.md)
data-factory Tutorial Transform Data Hive Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network-portal.md
Note the following points:
:::image type="content" source="./media/tutorial-transform-data-using-hive-in-vnet-portal/output-file.png" alt-text="Output file":::
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
data-factory Tutorial Transform Data Hive Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network.md
Set-AzDataFactoryV2Pipeline -DataFactoryName $dataFactoryName -ResourceGroupName
246 en-US SCH-i500 District Of Columbia ```
-## Next steps
+## Related content
You performed the following steps in this tutorial: > [!div class="checklist"]
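For the `Set-AzDataFactoryV2Pipeline` step excerpted in this entry, a minimal sketch of deploying a pipeline from a local JSON definition; the names and file path are placeholders, and it assumes a signed-in Az session:

```powershell
# Placeholder names and path; adjust to your own deployment.
$resourceGroupName = "exampleRG"
$dataFactoryName   = "exampleDataFactory"

# Create or update the pipeline from the JSON definition authored in the tutorial.
Set-AzDataFactoryV2Pipeline `
    -DataFactoryName $dataFactoryName `
    -ResourceGroupName $resourceGroupName `
    -Name "MyHivePipeline" `
    -DefinitionFile ".\MyHivePipeline.json"

# The tutorial then triggers the pipeline with Invoke-AzDataFactoryV2Pipeline.
```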
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-portal.md
The file should have each word from the input text file and the number of times
(u'file', 1) ```
-## Next steps
+## Related content
The pipeline in this sample transforms data by using a Spark activity and an on-demand HDInsight linked service. You learned how to: > [!div class="checklist"]
data-factory Tutorial Transform Data Spark Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-powershell.md
You have authored linked service and pipeline definitions in JSON files. Now, le
4. Confirm that a folder named `outputfiles` is created in the `spark` folder of adftutorial container with the output from the spark program.
-## Next steps
+## Related content
The pipeline in this sample copies data from one location to another location in an Azure blob storage. You learned how to: > [!div class="checklist"]
data-factory Update Machine Learning Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/update-machine-learning-models.md
The pipeline has two activities: **AzureMLBatchExecution** and **AzureMLUpdateRe
} } ```
-## Next steps
+## Related content
See the following articles that explain how to transform data in other ways: * [U-SQL activity](transform-data-using-data-lake-analytics.md)
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-functions.md
To set the date/time format when using Power Query ADF, please follow these sets
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWNdQg]
-## Next steps
+## Related content
Learn how to [create a data wrangling Power Query in ADF](wrangling-tutorial.md).
data-factory Wrangling Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-overview.md
Currently not all Power Query M functions are supported for data wrangling despi
For more information on supported transformations, see [Power Query data wrangling functions](wrangling-functions.md).
-## Next steps
+## Related content
Learn how to [create a data wrangling Power Query mash-up](wrangling-tutorial.md).
data-factory Wrangling Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-tutorial.md
Go to the **Monitor** tab to visualize the output of a triggered Power Query act
:::image type="content" source="media/wrangling-data-flow/tutorial2.png" alt-text="Screenshot that shows the output of a triggered wrangling Power Query activity run.":::
-## Next steps
+## Related content
Learn how to [create a mapping data flow](tutorial-data-flow.md).
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
In some cases, it isn't desirable and traceability to a single tile source is pr
Data Manager for Agriculture uses WGS84 (EPSG: 4326), a flat coordinate system, whereas Sentinel-2 imagery is presented in UTM, a ground projection system that approximates the round earth. Translating between a flat image and a round earth involves an approximation. The translation is most accurate at the equator (10 m^2), and the error margin increases as the point in question moves away from the equator toward the poles.
-For consistency, our data manager uses the following formula at 10-m base for all Sentinel-2 calls:
+For consistency, our data manager uses the following formula at 10 m^2 base for all Sentinel-2 calls:
+
$$ Latitude = \frac{10\ m}{111320} $$
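A minimal Python sketch of the meters-to-degrees conversion this formula implies follows. The cos(latitude) scaling for longitude is standard geodesy and an assumption on my part, not something stated in the formula above.

```python
import math

METERS_PER_DEGREE_LATITUDE = 111320  # approximate value used in the formula above

def meters_to_degrees(meters: float, latitude_degrees: float) -> tuple[float, float]:
    """Convert a ground distance in meters to approximate degree offsets.

    Latitude uses the fixed 1 degree ~= 111,320 m approximation; longitude
    additionally shrinks by cos(latitude), a standard approximation that grows
    less accurate toward the poles.
    """
    delta_lat = meters / METERS_PER_DEGREE_LATITUDE
    delta_lon = meters / (METERS_PER_DEGREE_LATITUDE * math.cos(math.radians(latitude_degrees)))
    return delta_lat, delta_lon

# The 10 m Sentinel-2 base at the equator vs. a mid-latitude field:
print(meters_to_degrees(10, 0))    # (~8.98e-05, ~8.98e-05)
print(meters_to_degrees(10, 52))   # latitude delta unchanged, longitude delta larger
```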
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Containers provides security alerts on the cluster level
| **Suspicious external access to an Azure storage account with overly permissive SAS token (Preview)**<br>Storage.Blob_AccountSas.InternalSasUsedExternally | The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. This type of access is considered suspicious because the SAS token is typically only used in internal networks (from private IP addresses). <br>The activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium | | **Suspicious external operation to an Azure storage account with overly permissive SAS token (Preview)**<br>Storage.Blob_AccountSas.UnusualOperationFromExternalIp | The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. The access is considered suspicious because operations invoked outside your network (not from private IP addresses) with this SAS token are typically used for a specific set of Read/Write/Delete operations, but other operations occurred, which makes this access suspicious. <br>This activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium | | **Unusual SAS token was used to access an Azure storage account from a public IP address (Preview)**<br>Storage.Blob_AccountSas.UnusualExternalAccess | The alert indicates that someone with an external (public) IP address has accessed the storage account using an account SAS token. The access is highly unusual and considered suspicious, as access to the storage account using SAS tokens typically comes only from internal (private) IP addresses. <br>It's possible that a SAS token was leaked or generated by a malicious actor either from within your organization or externally to gain access to this storage account. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Low |
-| **Malicious file uploaded to storage account**<br>Storage.Blob_AM.MalwareFound | The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | LateralMovement | High |
+| **Malicious file uploaded to storage account**<br>Storage.Blob_AM.MalwareFound | The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | Lateral Movement | High |
+| **Malicious blob was downloaded from a storage account (Preview)**<br>Storage.Blob_MalwareDownload | The alert indicates that a malicious blob was downloaded from a storage account. Potential causes may include malware that was uploaded to the storage account and not removed or quarantined, thereby enabling a threat actor to download it, or an unintentional download of the malware by legitimate users or applications. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | Lateral Movement | High, if Eicar - low |
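The common thread in several of the SAS-token alerts above is an account SAS that is broader and longer-lived than it needs to be. As a hedged sketch using the `azure-storage-blob` Python SDK (the account name and key are placeholders), a narrowly scoped, short-lived account SAS can be generated like this:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    ResourceTypes,
    generate_account_sas,
)

# Placeholders: substitute your own storage account name and key.
sas_token = generate_account_sas(
    account_name="<storage-account-name>",
    account_key="<storage-account-key>",
    resource_types=ResourceTypes(object=True),       # objects only, not service/container level
    permission=AccountSasPermissions(read=True),     # read-only; no write, delete, or list
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # short expiration window
)
print(sas_token)
```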
## <a name="alerts-azurecosmos"></a>Alerts for Azure Cosmos DB
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
``` > [!NOTE]
- > The artifactName 'CodeAnalysisLogs' is required for integration with Defender for Cloud.
+ > The artifactName 'CodeAnalysisLogs' is required for integration with Defender for Cloud. For additional tool configuration options, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)
1. To commit the pipeline, select **Save and run**.
defender-for-cloud Defender For Apis Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-prepare.md
Review the requirements on this page before setting up [Microsoft Defender for A
## Cloud and region support
-Defender for APIs is in public preview in the Azure commercial cloud, in these regions:
+Defender for APIs is available in the Azure commercial cloud, in these regions:
- Asia (Southeast Asia, EastAsia) - Australia (Australia East, Australia Southeast, Australia Central, Australia Central 2) - Brazil (Brazil South, Brazil Southeast)
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Microsoft Security DevOps uses the following Open Source tools:
name: alerts path: ${{ steps.msdo.outputs.sarifFile }} ```
+ > [!NOTE]
+ > For additional tool configuration options, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)
- For additional configuration options, see [the Microsoft Security DevOps wiki](https://github.com/microsoft/security-devops-action/wiki)
1. Select **Start commit**
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 11/27/2023 Last updated : 12/04/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## December 2023
+
+| Date | Update |
+|--|--|
+| December 4 | [Defender for Storage alert released for preview: malicious blob was downloaded from a storage account](#defender-for-storage-alert-released-for-preview-malicious-blob-was-downloaded-from-a-storage-account)
+
+### Defender for Storage alert released for preview: malicious blob was downloaded from a storage account
+
+December 4, 2023
+
+The following alert is being released for preview:
+
+|Alert (alert type)|Description|MITRE tactics|Severity|
+|-|-|-|-|
+| **Malicious blob was downloaded from a storage account (Preview)**<br>Storage.Blob_MalwareDownload | The alert indicates that a malicious blob was downloaded from a storage account. Potential causes may include malware that was uploaded to the storage account and not removed or quarantined, thereby enabling a threat actor to download it, or an unintentional download of the malware by legitimate users or applications. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | Lateral Movement | High, if Eicar - low |
+
+See the [extension-based alerts in Defender for Storage](alerts-reference.md#alerts-azurestorage).
+
+For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md).
+ ## November 2023 | Date | Update |
deployment-environments Quickstart Create Access Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-access-environments.md
Previously updated : 04/25/2023 Last updated : 12/01/2023
-# Quickstart: Create and access Azure Deployment Environments by using the developer portal
+# Quickstart: Create and access an environment in Azure Deployment Environments
This quickstart shows you how to create and access an [environment](concept-environments-key-concepts.md#environments) in an existing Azure Deployment Environments project.
-In this quickstart, you learn how to:
-
-> [!div class="checklist"]
->
-> - Create an environment
-> - Access an environment
- ## Prerequisites - [Create and configure a dev center](quickstart-create-and-configure-devcenter.md).
An environment in Azure Deployment Environments is a collection of Azure resourc
[!INCLUDE [note-deployment-environments-user](includes/note-deployment-environments-user.md)] 1. Sign in to the [developer portal](https://devportal.microsoft.com).+ 1. From the **New** menu at the top left, select **New environment**.
- :::image type="content" source="media/quickstart-create-access-environments/new-environment.png" alt-text="Screenshot showing the new menu with new environment highlighted.":::
+ :::image type="content" source="media/quickstart-create-access-environments/dev-new-environment.png" alt-text="Screenshot showing the new menu with new environment highlighted." lightbox="media/quickstart-create-access-environments/dev-new-environment-expanded.png":::
-1. In the Add an environment pane, select the following information:
+1. In the **Add an environment** pane, select the following information:
|Field |Value | ||| |Name | Enter a descriptive name for your environment. |
- |Project | Select the project you want to create the environment in. If you have access to more than one project, you see a list of the available projects. |
- |Type | Select the environment type you want to create. If you have access to more than one environment type, you see a list of the available types. |
+ |Project | Select the project you want to create the environment in. If you have access to more than one project, you see a list of available projects. |
+ |Type | Select the environment type you want to create. If you have access to more than one environment type, you see a list of available types. |
|Environment definitions | Select the environment definition you want to use to create the environment. You see a list of the environment definitions available from the catalogs associated with your dev center. |
- :::image type="content" source="media/quickstart-create-access-environments/add-environment.png" alt-text="Screenshot showing add environment pane.":::
+ :::image type="content" source="media/quickstart-create-access-environments/dev-add-environment.png" alt-text="Screenshot showing add environment pane." lightbox="media/quickstart-create-access-environments/dev-add-environment-expanded.png":::
- If your environment is configured to accept parameters, you're able to enter them on a separate pane. In this example, you don't need to specify any parameters.
+ If your environment is configured to accept parameters, you can enter them on a separate pane. In this example, you don't need to specify any parameters.
1. Select **Create**. You see your environment in the developer portal immediately, with an indicator that shows creation in progress.
-
+ ## Access an environment You can access and manage your environments in the Azure Deployment Environments developer portal. 1. Sign in to the [developer portal](https://devportal.microsoft.com).
-1. You're able to view all of your existing environments. To access the specific resources created as part of an Environment, select the **Environment Resources** link.
+1. You can view all of your existing environments. To access the specific resources created as part of an environment, select the **Environment Resources** link.
+
+ :::image type="content" source="media/quickstart-create-access-environments/dev-environment-resources.png" alt-text="Screenshot showing an environment card, with the environment resources link highlighted." lightbox="media/quickstart-create-access-environments/dev-environment-resources-expanded.png":::
- :::image type="content" source="media/quickstart-create-access-environments/environment-resources.png" alt-text="Screenshot showing an environment card, with the environment resources link highlighted.":::
+1. You can view the resources in your environment listed in the Azure portal.
-1. You're able to view the resources in your environment listed in the Azure portal.
- :::image type="content" source="media/quickstart-create-access-environments/azure-portal-view-of-environment.png" alt-text="Screenshot showing Azure portal list of environment resources.":::
+ :::image type="content" source="media/quickstart-create-access-environments/azure-portal-view-of-environment.png" alt-text="Screenshot showing Azure portal list of environment resources." lightbox="media/quickstart-create-access-environments/azure-portal-view-of-environment.png":::
- Creating an environment automatically creates a resource group that stores the environment's resources. The resource group name follows the pattern {projectName}-{environmentName}. You can view the resource group in the Azure portal.
+ Creating an environment automatically creates a resource group that stores the environment's resources. The resource group name follows the pattern `{projectName}-{environmentName}`. You can view the resource group in the Azure portal.
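A tiny illustration of that naming pattern (the project and environment names here are hypothetical):

```python
# Hypothetical names; the pattern itself is the {projectName}-{environmentName}
# convention described above.
project_name = "contoso-web"
environment_name = "dev-sandbox"
resource_group_name = f"{project_name}-{environment_name}"
print(resource_group_name)  # contoso-web-dev-sandbox
```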
## Next steps -- Learn how to [add and configure a catalog](how-to-configure-catalog.md).-- Learn how to [add and configure an environment definition](configure-environment-definition.md).
+- [Add and configure a catalog](how-to-configure-catalog.md)
+- [Add and configure an environment definition](configure-environment-definition.md)
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Title: Create and configure a dev center
+ Title: Create and configure a dev center for Azure Deployment Environments
-description: Learn how to configure a dev center in Azure Deployment Environments. You create a dev center, attach an identity, attach a catalog, and create environment types.
+description: Learn how to configure a dev center, attach an identity, and attach a catalog in Azure Deployment Environments.
Previously updated : 10/23/2023 Last updated : 12/01/2023 # Quickstart: Create and configure a dev center for Azure Deployment Environments
-In this quickstart, you'll set up all the resources in Azure Deployment Environments to enable development teams to self-service deployment environments for their applications. Learn how to create and configure a dev center, add a catalog to the dev center, and define an environment type.
+In this quickstart, you set up all the resources in Azure Deployment Environments to enable self-service deployment environments for development teams. Learn how to create and configure a dev center, add a catalog to the dev center, and define an environment type.
-A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md).
+A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications. To learn more about the components of Azure Deployment Environments, see [Key concepts for Azure Deployment Environments](concept-environments-key-concepts.md).
-The following diagram shows the steps you perform in this quickstart to configure a dev center for Azure Deployment Environments in the Azure portal.
+The following diagram shows the steps to configure a dev center for Azure Deployment Environments in the Azure portal.
-You need to perform the steps in both quickstarts before you can create a deployment environment.
+You need to perform the steps in this quickstart and then [create a project](quickstart-create-and-configure-projects.md) before you can [create a deployment environment](quickstart-create-access-environments.md).
## Prerequisites
You need to perform the steps in both quickstarts before you can create a deploy
- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner). - An [Azure DevOps repository](https://azure.microsoft.com/products/devops/repos/) repository that contains IaC templates. You can use the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments) that contains samples created and maintained by the Azure Deployment Environments team. - In your Azure DevOps organization, [create a project](/azure/devops/repos/get-started/sign-up-invite-teammates?view=azure-devops&branch=main&preserve-view=true) to store your repository.
- - Import the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments)
+ - Import the [Deployment Environments sample catalog](https://github.com/azure/deployment-environments) into the project you created.
## Create a dev center+ To create and configure a dev center in Azure Deployment Environments by using the Azure portal: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Search for **Azure Deployment Environments**, and then select the service in the results. 1. In **Dev centers**, select **Create**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-add-devcenter.png" alt-text="Screenshot that shows how to create a dev center in Azure Deployment Environments.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-add-devcenter.png" alt-text="Screenshot that shows how to create a dev center in Azure Deployment Environments." lightbox="media/quickstart-create-and-configure-devcenter/deployment-environments-add-devcenter.png":::
1. In **Create a dev center**, on the **Basics** tab, select or enter the following information:
To create and configure a dev center in Azure Deployment Environments by using t
|**Location**|Select the location or region where you want to create the dev center.| 1. Select **Review + Create**.+ 1. On the **Review** tab, wait for deployment validation, and then select **Create**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-devcenter-review.png" alt-text="Screenshot that shows the Review tab of a dev center to validate the deployment details.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-devcenter.png" alt-text="Screenshot that shows the Review tab of a dev center to validate the deployment details." lightbox="media/quickstart-create-and-configure-devcenter/create-devcenter-expanded.png":::
1. You can check the progress of the deployment in your Azure portal notifications.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/azure-notifications.png" alt-text="Screenshot that shows portal notifications to confirm the creation of a dev center.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/azure-notifications.png" alt-text="Screenshot that shows portal notifications to confirm the creation of a dev center." lightbox="media/quickstart-create-and-configure-devcenter/azure-notifications.png":::
1. When the creation of the dev center is complete, select **Go to resource**. 1. In **Dev centers**, verify that the dev center appears.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the dev centers overview, to confirm that the dev center was created." lightbox="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png":::
## Configure a managed identity for the dev center
In this quickstart, you configure a system-assigned managed identity for your de
To attach a system-assigned managed identity to your dev center:
-1. In Dev centers, select your dev center.
-1. In the left menu under Settings, select **Identity**.
-1. Under **System assigned**, set **Status** to **On**, and then select **Save**.
+1. In **Dev centers**, select your dev center.
+1. In the left menu under **Settings**, select **Identity**.
+1. Under **System assigned**, set **Status** to **On**, and then select **Save**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity-on.png" alt-text="Screenshot that shows a system-assigned managed identity.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity-on.png" alt-text="Screenshot that shows a system-assigned managed identity." lightbox="media/quickstart-create-and-configure-devcenter/system-assigned-managed-identity-on.png":::
-1. In the **Enable system assigned managed identity** dialog, select **Yes**.
+1. In the **Enable system assigned managed identity** dialog, select **Yes**. It might take a while for the rest of the fields to appear.
### Assign roles for the dev center managed identity
-The managed identity that represents your dev center requires access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the Azure DevOps repo that stores your catalog.
+The managed identity that represents your dev center requires access to the subscription where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types), and to the Azure DevOps repository that stores your catalog.
-1. Navigate to your dev center.
-1. On the left menu under Settings, select **Identity**.
-1. Under System assigned > Permissions, select **Azure role assignments**.
+1. Navigate to your dev center.
+1. On the left menu under **Settings**, select **Identity**.
+1. Under **System assigned** > **Permissions**, select **Azure role assignments**.
- :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/system-assigned-managed-identity.png" alt-text="Screenshot that shows a system-assigned managed identity with Role assignments highlighted." lightbox="media/quickstart-create-configure-projects/system-assigned-managed-identity.png":::
1. To give Contributor access to the subscription, select **Add role assignment (Preview)**, enter or select the following information, and then select **Save**:
-
+ |Name |Value | ||-| |**Scope**|Subscription|
The managed identity that represents your dev center requires access to the subs
|**Role**|User Access Administrator| ### Assign permissions in Azure DevOps for the dev center managed identity+ You must give the dev center managed identity permissions to the repository in Azure DevOps.
-1. Sign in to your [Azure DevOps organization](https://dev.azure.com).
+1. Sign in to your [Azure DevOps organization](https://dev.azure.com).
+
+ > [!NOTE]
+ > Your Azure DevOps organization must be in the same directory as the Azure subscription that contains your dev center.
1. Select **Organization settings**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-settings.png" alt-text="Screenshot showing the Azure DevOps organization page, with Organization Settings highlighted.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-settings.png" alt-text="Screenshot showing the Azure DevOps organization page, with Organization Settings highlighted." lightbox="media/quickstart-create-and-configure-devcenter/devops-organization-settings.png":::
1. On the **Overview** page, select **Users**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-overview.png" alt-text="Screenshot showing the Organization overview page, with Users highlighted.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-organization-overview.png" alt-text="Screenshot showing the Organization overview page, with Users highlighted." lightbox="media/quickstart-create-and-configure-devcenter/devops-organization-overview.png":::
1. On the **Users** page, select **Add users**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user.png" alt-text="Screenshot showing the Users page, with Add user highlighted.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user.png" alt-text="Screenshot showing the Users page, with Add user highlighted." lightbox="media/quickstart-create-and-configure-devcenter/devops-add-user.png":::
1. Complete **Add new users** by entering or selecting the following information, and then select **Add**: |Name |Value | ||-|
- |**Users or Service Principals**|Enter the name of your dev center. </br> When you use a system assigned managed account, specify the name of the dev center, not the Object ID of the Managed Account. When you use a user assigned managed account, use the name of the managed account. |
+ |**Users or Service Principals**|Enter the name of your dev center. <br><br> When you use a system-assigned managed account, specify the name of the dev center, not the Object ID of the managed account. When you use a user-assigned managed account, use the name of the managed account. |
|**Access level**|Select **Basic**.|
- |**Add to projects**|Select the project that contains your repository.|
+ |**Add to projects**|Select the project you created in the prerequisites section that contains your repository.|
|**Azure DevOps Groups**|Select **Project Readers**.| |**Send email invites (to Users only)**|Clear the checkbox.|
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/devops-add-user-blade.png" alt-text="Screenshot showing Add users, with example entries and Add highlighted.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-user-blade.png" alt-text="Screenshot showing Add users, with example entries and Add highlighted." lightbox="media/quickstart-create-and-configure-devcenter/add-user-blade-expanded.png":::
## Add a catalog to the dev center
-Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
+
+In Azure Deployment Environments, you can attach Azure DevOps repositories or GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
In this quickstart, you attach an Azure DevOps repository. ### Add a catalog to your dev center
-1. Navigate to your dev center.
+
+1. Go back to the Azure portal and navigate to your dev center.
1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/catalogs-page.png" alt-text="Screenshot that shows the Catalogs pane.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/catalogs-page.png" alt-text="Screenshot that shows the Catalogs pane." lightbox="media/quickstart-create-and-configure-devcenter/catalogs-page.png":::
1. In **Add catalog**, enter the following information, and then select **Add**:
In this quickstart, you attach an Azure DevOps repository.
| **Branch** | Select the branch. | | **Folder path** | Deployment Environments retrieves a list of folders in your branch. Select the folder that stores your IaC templates. |
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-catalog-to-devcenter.png" alt-text="Screenshot showing the add catalog pane with examples entries and Add highlighted.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/add-catalog.png" alt-text="Screenshot showing the add catalog pane with examples entries and Add highlighted." lightbox="media/quickstart-create-and-configure-devcenter/add-catalog-expanded.png":::
1. In **Catalogs** for the dev center, verify that your catalog appears. If the connection is successful, **Status** is **Connected**. Connecting to a catalog can take a few minutes the first time.
Use an environment type to help you define the different types of environments y
|**Name**|Enter a name for the environment type.| |**Tags**|Enter a tag name and a tag value.|
- :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-environment-type.png" alt-text="Screenshot that shows the Create environment type pane.":::
+ :::image type="content" source="media/quickstart-create-and-configure-devcenter/create-environment-type.png" alt-text="Screenshot that shows the Create environment type pane." lightbox="media/quickstart-create-and-configure-devcenter/create-environment-type.png":::
-1. Confirm that the environment type is added by checking your Azure portal notifications.
+1. Confirm that the environment type was added by checking your Azure portal notifications.
An environment type that you add to your dev center is available in each project in the dev center, but environment types aren't enabled by default. When you enable an environment type at the project level, the environment type determines the managed identity and subscription that are used to deploy environments.
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Title: Create and configure a project
+ Title: Create and configure an Azure Deployment Environments project
description: Learn how to create a project in Azure Deployment Environments and associate the project with a dev center.
Previously updated : 09/06/2023 Last updated : 12/01/2023 # Quickstart: Create and configure an Azure Deployment Environments project
-This quickstart shows you how to create a project in Azure Deployment Environments, and associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments to deploy their applications.
+This quickstart shows you how to create a project in Azure Deployment Environments, then associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). After you complete this quickstart, developers can use the developer portal to create environments to deploy their applications.
-The following diagram shows the steps you perform in this quickstart to configure a project associated with a dev center for Deployment Environments in the Azure portal.
+The following diagram shows the steps to configure a project associated with a dev center for Deployment Environments in the Azure portal.
First, you create a project. Next, you assign the dev center managed identity the Contributor and User Access Administrator roles on the subscription. Then, you configure the project by creating a project environment type. Finally, you give the development team access to the project by assigning the [Deployment Environments User](how-to-configure-deployment-environments-user.md) role on the project.
-You need to perform the steps in both quickstarts before you can create a deployment environment.
- ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor)
+- Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).
- An Azure Deployment Environments dev center with a catalog attached. If you don't have a dev center with a catalog, see [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md). ## Create a project
In Azure Deployment Environments, a project represents a team or business functi
To create an Azure Deployment Environments project in your dev center:
-1. In the [Azure portal](https://portal.azure.com/), go to Azure Deployment Environments.
+1. In the [Azure portal](https://portal.azure.com), go to Azure Deployment Environments.
1. In the left menu under **Configure**, select **Projects**.
To create an Azure Deployment Environments project in your dev center:
|Name |Value | |-|--|
- |**Subscription** |Select the subscription in which you want to create the project. |
+ |**Subscription** |Select the subscription in which you want to create the project. |
|**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group. | |**Dev center**|Select a dev center to associate with this project. All settings for the dev center apply to the project. | |**Name**|Enter a name for the project. |
To create an Azure Deployment Environments project in your dev center:
1. On the **Review + Create** tab, wait for deployment validation, and then select **Create**.
- :::image type="content" source="media/quickstart-create-configure-projects/create-project.png" alt-text="Screenshot that shows selecting the create project basics tab.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/review-create-project.png" alt-text="Screenshot that shows selecting the create project basics tab." lightbox="media/quickstart-create-configure-projects/review-create-project-expanded.png":::
1. Confirm that the project was successfully created by checking your Azure portal notifications. Then, select **Go to resource**. 1. Confirm that you see the project overview pane.
- :::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane." lightbox="media/quickstart-create-configure-projects/created-project.png":::
## Create a project environment type
To configure a project, add a [project environment type](how-to-configure-projec
1. In the left menu under **Environment configuration**, select **Environment types**, and then select **Add**.
- :::image type="content" source="media/quickstart-create-configure-projects/add-environment-types.png" alt-text="Screenshot that shows the Environment types pane.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/add-environment-types.png" alt-text="Screenshot that shows the Environment types pane." lightbox="media/quickstart-create-configure-projects/add-environment-types.png":::
1. In **Add environment type to \<project-name\>**, enter or select the following information:
To configure a project, add a [project environment type](how-to-configure-projec
|**Permissions on environment resources** > **Additional access** | Select the users or Microsoft Entra groups to assign to specific roles on the environment resources.| |**Tags** | Enter a tag name and a tag value. These tags are applied on all resources that are created as part of the environment.|
- :::image type="content" source="./media/quickstart-create-configure-projects/add-project-environment-type-page.png" alt-text="Screenshot that shows adding details in the Add project environment type pane.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/add-project-environment-type.png" alt-text="Screenshot that shows adding details in the Add project environment type pane." lightbox="media/quickstart-create-configure-projects/add-project-environment-type-expanded.png":::
> [!NOTE]
-> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Contributor and the User Access Admistrator roles](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
+> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Contributor and the User Access Administrator roles](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
## Give access to the development team
-Before developers can create environments based on the environment types in a project, you must provide access for them through a role assignment at the level of the project. The Deployment Environments User role enables users to create, manage and delete their own environments. You must have sufficient permissions to a project before you can add users to it.
+Before developers can create environments based on the environment types in a project, you must provide access for them through a role assignment at the level of the project. The Deployment Environments User role enables users to create, manage, and delete their own environments. You must have sufficient permissions to a project before you can add users to it.
1. In the Azure portal, go to your project.
Before developers can create environments based on the environment types in a pr
1. Select **Add** > **Add role assignment**. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-
+ | Setting | Value | | | | | **Role** | Select **[Deployment Environments User](how-to-configure-deployment-environments-user.md)**. | | **Assign access to** | Select **User, group, or service principal**. | | **Members** | Select the users or groups you want to have access to the project. |
- :::image type="content" source="media/quickstart-create-configure-projects/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane.":::
+ :::image type="content" source="media/quickstart-create-configure-projects/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment pane." lightbox="media/quickstart-create-configure-projects/add-role-assignment.png":::
[!INCLUDE [note-deployment-environments-user](includes/note-deployment-environments-user.md)]
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Title: "Known issues, limitations, and troubleshooting"
-description: Known issues, limitations and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio
+description: Known issues, limitations, and troubleshooting guide for Azure SQL Migration extension for Azure Data Studio
Previously updated : 04/21/2023+ Last updated : 11/30/2023
# Known issues, limitations, and troubleshooting
-Known issues and troubleshooting steps associated with the Azure SQL Migration extension for Azure Data Studio.
+This article provides a list of known issues and troubleshooting steps associated with the Azure SQL Migration extension for Azure Data Studio.
-> [!IMPORTANT]
+> [!IMPORTANT]
> The latest version of Integration Runtime (5.28.8488) prevents access to a network file share on a local host. This security measure leads to failures when performing migrations to Azure SQL using DMS. Ensure you run the Integration Runtime on a different machine from the one hosting the network share.
-## Error code: 2007 - CutoverFailedOrCancelled
+## Error code: 2007 - CutoverFailedOrCancelled
-- **Message**: `Cutover failed or cancelled for database <DatabaseName>. Error details: The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' is not <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.`
+- **Message**: `Cutover failed or cancelled for database <DatabaseName>. Error details: The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' is not <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.`
-- **Cause**: The error might occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.
+- **Cause**: The error can occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.
-- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there might be network-related issues and lags that are causing this error. Wait for the process to be completed.
+- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there can be network-related issues and lags that are causing this error. Wait for the process to be completed.
## Error code: 2009 - MigrationRestoreFailed
Known issues and troubleshooting steps associated with the Azure SQL Migration e
- **Cause**: Before migrating data, you need to migrate the certificate of the source SQL Server instance from a database that is protected by Transparent Data Encryption (TDE) to the target Azure SQL Managed Instance or SQL Server on Azure Virtual Machine. -- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](/azure/dms/tutorial-transparent-data-encryption-migration-ads). -
+- **Recommendation**: Migrate the TDE certificate to the target instance and retry the process. For more information about migrating TDE-enabled databases, see [Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio](/azure/dms/tutorial-transparent-data-encryption-migration-ads).
- **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3169 The database was backed up on a server running version %ls. That version is incompatible with this server, which is running version %ls. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.` - **Cause**: Unable to restore a SQL Server backup to an earlier version of SQL Server than the version at which the backup was created. -- **Recommendation**: See [Issues that affect database restoration between different SQL Server versions](/support/sql/admin/backup-restore-operations) for troubleshooting steps. -
+- **Recommendation**: See [Issues that affect database restoration between different SQL Server versions](/support/sql/admin/backup-restore-operations) for troubleshooting steps.
- **Message**: `Migration for Database <DatabaseName> failed with error 'The managed instance has reached its storage limit. The storage usage for the managed instance can't exceed 32768 MBs.` - **Cause**: The Azure SQL Managed Instance has reached its resource limits. -- **Recommendation**: For more information about storage limits, see [Overview of Azure SQL Managed Instance resource limits](/azure/azure-sql/managed-instance/resource-limits). -
+- **Recommendation**: For more information about storage limits, see [Overview of Azure SQL Managed Instance resource limits](/azure/azure-sql/managed-instance/resource-limits).
- **Message**: `Migration for Database <DatabaseName> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3634 The operating system returned the error '1450(Insufficient system resources exist to complete the requested service.)` - **Cause**: One of the symptoms listed in [OS errors 1450 and 665 are reported for database files during DBCC CHECKDB or Database Snapshot Creation](/support/sql/admin/1450-and-665-errors-running-dbcc-checkdb#symptoms) can be the cause. -- **Recommendation**: See [OS errors 1450 and 665 are reported for database files during DBCC CHECKDB or Database Snapshot Creation](/support/sql/admin/1450-and-665-errors-running-dbcc-checkdb#symptoms) for troubleshooting steps. -
+- **Recommendation**: See [OS errors 1450 and 665 are reported for database files during DBCC CHECKDB or Database Snapshot Creation](/support/sql/admin/1450-and-665-errors-running-dbcc-checkdb#symptoms) for troubleshooting steps.
- **Message**: `The restore plan is broken because firstLsn <First LSN> of log backup <URL of backup in Azure Storage container>' isn't <= lastLsn <last LSN> of Full backup <URL of backup in Azure Storage container>'. Restore to point in time.` -- **Cause**: The error might occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.--- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there might be network related issues and lags that are causing this error. Wait for the process to complete.
+- **Cause**: The error can occur due to the backups being placed incorrectly in the Azure Storage container. If the backups are placed in the network file share, this error could also occur due to network connectivity issues.
+- **Recommendation**: Ensure the database backups in your Azure Storage container are correct. If you're using network file share, there can be network related issues and lags that are causing this error. Wait for the process to complete.
- **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3234 Logical file <Name> isn't part of database <Database GUID>. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally.'.`
Known issues and troubleshooting steps associated with the Azure SQL Migration e
- **Recommendation**: Run RESTORE FILELISTONLY to check the logical file names in your backup. For more information about RESTORE FILELISTONLY, see [RESTORE Statements - FILELISTONLY (Transact-SQL)](/sql/t-sql/statements/restore-statements-filelistonly-transact-sql). - - **Message**: `Migration for Database <Database Name> failed with error 'Azure SQL target resource failed to connect to storage account. Make sure the target SQL VNet is allowed under the Azure Storage firewall rules.'` - **Cause**: Azure Storage firewall isn't configured to allow access to Azure SQL target.
Known issues and troubleshooting steps associated with the Azure SQL Migration e
- **Recommendation**: If migrating multiple databases to **Azure SQL Managed Instance** using the same Azure Blob Storage container, you must place backup files for different databases in separate folders inside the container. For more information about LRS, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations). -- **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 12824 The sp_configure value 'contained database authentication' must be set to 1 in order to restore a contained database. You may need to use RECONFIGURE to set the value_in_use.
-RESTORE DATABASE is terminating abnormally.`
+- **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 12824 The sp_configure value 'contained database authentication' must be set to 1 in order to restore a contained database. You may need to use RECONFIGURE to set the value_in_use. RESTORE DATABASE is terminating abnormally.`
- **Cause**: The source database is a contained database. A specific configuration is needed to enable restoring a contained database. For more information about contained databases, see [Contained Database Users](/sql/relational-databases/security/contained-database-users-making-your-database-portable). - **Recommendation**: Run the following query connected to the source SQL Server in the context of the specific database before starting the migration. Then, attempt the migration of the contained database again.
-```sql
Enable "contained database authentication"
-EXEC sp_configure 'contained', 1;
-RECONFIGURE;
-```
-> [!NOTE]
-> For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
+ ```sql
+ -- Enable "contained database authentication"
+ EXEC sp_configure 'contained', 1;
+ RECONFIGURE;
+ ```
+
+ > [!NOTE]
+ > For more information on general troubleshooting steps for Azure SQL Managed Instance errors, see [Known issues with Azure SQL Managed Instance](/azure/azure-sql/managed-instance/doc-changes-updates-known-issues)
## Error code: 2012 - TestConnectionFailed
RECONFIGURE;
- **Cause**: Connection to the Self-Hosted Integration Runtime has failed. -- **Recommendation**: See [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors. -
+- **Recommendation**: See [Troubleshoot Self-Hosted Integration Runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
## Error code: 2014 - IntegrationRuntimeIsNotOnline
RECONFIGURE;
- **Cause**: The Self-Hosted Integration Runtime isn't online. -- **Recommendation**: Make sure the Self-hosted Integration Runtime is registered and online. To perform the registration, you can use scripts from [Automating self-hosted integration runtime installation using local PowerShell scripts](../data-factory/self-hosted-integration-runtime-automation-scripts.md). Also, see [Troubleshoot self-hosted integration runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors. -
+- **Recommendation**: Make sure the Self-hosted Integration Runtime is registered and online. To perform the registration, you can use scripts from [Automating self-hosted integration runtime installation using local PowerShell scripts](../data-factory/self-hosted-integration-runtime-automation-scripts.md). Also, see [Troubleshoot self-hosted integration runtime](../data-factory/self-hosted-integration-runtime-troubleshoot-guide.md) for general troubleshooting steps for Integration Runtime connectivity errors.
## Error code: 2030 - AzureSQLManagedInstanceNotReady
- **Cause**: Azure SQL Managed Instance not in ready state. -- **Recommendation**: Wait until the Azure SQL Managed Instance has finished deploying and is ready, then retry the process. -
+- **Recommendation**: Wait until the Azure SQL Managed Instance has finished deploying and is ready, then retry the process.
## Error code: 2033 - SqlDataCopyFailed
- **Cause**: ADF pipeline for data movement failed. -- **Recommendation**: Check the MigrationStatusDetails page for more detailed error information. -
+- **Recommendation**: Check the MigrationStatusDetails page for more detailed error information.
## Error code: 2038 - MigrationCompletedDuringCancel
- **Cause**: A cancellation request was received, but the migration was completed successfully before the cancellation was completed. -- **Recommendation**: No action required migration succeeded. -
+- **Recommendation**: No action required. Migration succeeded.
## Error code: 2039 - MigrationRetryNotAllowed
- **Cause**: A retry request was received when the migration wasn't in a state allowing retrying. -- **Recommendation**: No action required migration is ongoing or completed. -
+- **Recommendation**: No action required. Migration is ongoing or completed.
## Error code: 2040 - MigrationTimeoutWaitingForRetry
- **Cause**: Migration was idle in a failed, but retriable, state for 8 hours and was automatically canceled. -- **Recommendation**: No action is required; the migration was canceled. -
+- **Recommendation**: No action is required; the migration was canceled.
## Error code: 2041 - DataCopyCompletedDuringCancel
- **Cause**: Cancel request was received, and the data copy was completed successfully, but the target database schema hasn't been returned to its original state. -- **Recommendation**: If desired, the target database can be returned to its original state by running the first query and all of the returned queries, then running the second query and doing the same. -
-```sql
-SELECT [ROLLBACK] FROM [dbo].[__migration_status]
-WHERE STEP in (3,4,6);
+- **Recommendation**: If desired, the target database can be returned to its original state by running the first query and all of the returned queries, then running the second query and doing the same.
-SELECT [ROLLBACK] FROM [dbo].[__migration_status]
-WHERE STEP in (5,7,8) ORDER BY STEP DESC;
-```
+ ```sql
+ SELECT [ROLLBACK] FROM [dbo].[__migration_status]
+ WHERE STEP in (3,4,6);
+ SELECT [ROLLBACK] FROM [dbo].[__migration_status]
+ WHERE STEP in (5,7,8) ORDER BY STEP DESC;
+ ```
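Each row returned by these queries contains a T-SQL statement that must itself be executed. As a convenience, a sketch like the following runs the first query and executes every statement it returns; the same pattern can then be repeated with the second query's filter. It assumes the `[dbo].[__migration_status]` table shown above exists in the target database:

```sql
-- Execute every rollback statement returned for steps 3, 4, and 6
DECLARE @stmt nvarchar(max);
DECLARE rollback_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT [ROLLBACK] FROM [dbo].[__migration_status]
    WHERE STEP IN (3, 4, 6);
OPEN rollback_cursor;
FETCH NEXT FROM rollback_cursor INTO @stmt;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @stmt;
    FETCH NEXT FROM rollback_cursor INTO @stmt;
END
CLOSE rollback_cursor;
DEALLOCATE rollback_cursor;
-- Repeat the same pattern with: WHERE STEP IN (5, 7, 8) ORDER BY STEP DESC
```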
-## Error code: 2042 - PreCopyStepsCompletedDuringCancel
+## Error code: 2042 - PreCopyStepsCompletedDuringCancel
- **Message**: `Pre Copy steps finished successfully before canceling completed. Target database Foreign keys and temporal tables have been altered. Schema migration may be required again for future migrations. Target server: <Target Server>, Target database: <Target Database>.` - **Cause**: Cancel request was received and the steps to prepare the target database for copy were completed successfully. The target database schema hasn't been returned to its original state. -- **Recommendation**: If desired, target database can be returned to its original state by running the following query and all of the returned queries.
+- **Recommendation**: If desired, the target database can be returned to its original state by running the following query and all of the returned queries.
-```sql
-SELECT [ROLLBACK] FROM [dbo].[__migration_status]
-WHERE STEP in (3,4,6);
-```
+ ```sql
+ SELECT [ROLLBACK] FROM [dbo].[__migration_status]
+ WHERE STEP in (3,4,6);
+ ```
## Error code: 2043 - CreateContainerFailed
- **Cause**: The request failed due to an underlying issue such as network connectivity, a DNS failure, a server certificate validation, or a timeout. -- **Recommendation**: For more troubleshooting steps, see [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108). -
+- **Recommendation**: For more troubleshooting steps, see [Troubleshoot Azure Data Factory and Synapse pipelines](../data-factory/data-factory-troubleshoot-guide.md#error-code-2108).
## Error code: 2049 - FileShareTestConnectionFailed
- **Cause**: The network share where the database backups are stored is in the same machine as the self-hosted Integration Runtime (SHIR). -- **Recommendation**: The latest version of Integration Runtime (**5.28.8488**) prevents access to a network file share on a local host. Ensure you run Integration Runtime on a different machine than the network share hosting. If hosting the self-hosted Integration Runtime and the network share on different machines isn't possible with your current migration setup, you can use the option to opt out using ```DisableLocalFolderPathValidation```.
- > [!NOTE]
- > For more information, see [Set up an existing self-hosted IR via local PowerShell](../data-factory/create-self-hosted-integration-runtime.md#set-up-an-existing-self-hosted-ir-via-local-powershell). Use the disabling option with discretion as this is less secure.
+- **Recommendation**: The latest version of Integration Runtime (**5.28.8488**) prevents access to a network file share on a local host. Ensure that you run the Integration Runtime on a different machine from the one that hosts the network share. If hosting the self-hosted Integration Runtime and the network share on different machines isn't possible with your current migration setup, you can opt out of the validation by using `DisableLocalFolderPathValidation`.
+ > [!NOTE]
+ > For more information, see [Set up an existing self-hosted IR via local PowerShell](../data-factory/create-self-hosted-integration-runtime.md#set-up-an-existing-self-hosted-ir-via-local-powershell). Use the disabling option with discretion as this is less secure.
## Error code: 2056 - SqlInfoValidationFailed -- **Message**: CollationMismatch: `Source database collation <CollationOptionSource> is not the same as the target database <CollationOptionTarget>. Source database: <SourceDatabaseName> Target database: <TargetDatabaseName>.`
+- **Message**: `CollationMismatch: Source database collation <CollationOptionSource> is not the same as the target database <CollationOptionTarget>. Source database: <SourceDatabaseName> Target database: <TargetDatabaseName>.`
- **Cause**: The source database collation isn't the same as the target database's collation. - **Recommendation**: Make sure to change the target Azure SQL Database collation to match the source SQL Server database. Azure SQL Database uses the `SQL_Latin1_General_CP1_CI_AS` collation by default. If your source SQL Server database uses a different collation, you might need to re-create the target database or select a different target database whose collation matches. For more information, see [Collation and Unicode support](/sql/relational-databases/collations/collation-and-unicode-support). --- **Message**: DatabaseSizeMoreThanMax: No tables were found in the target Azure SQL Database. Check if schema migration was completed beforehand.
+- **Message**: `DatabaseSizeMoreThanMax: No tables were found in the target Azure SQL Database. Check if schema migration was completed beforehand.`
- **Cause**: The selected tables for the migration don't exist in the target Azure SQL Database. - **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension) -- **Message**: DatabaseSizeMoreThanMax: `The source database size <Source Database Size> exceeds the maximum allowed size of the target database <Target Database Size>. Check if the target database has enough space.`
+- **Message**: `DatabaseSizeMoreThanMax: The source database size <Source Database Size> exceeds the maximum allowed size of the target database <Target Database Size>. Check if the target database has enough space.`
- **Cause**: The target database doesn't have enough space. - **Recommendation**: Make sure the target database schema was created before starting the migration. For more information on how to deploy the target database schema, see [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension). -- **Message**: NoTablesFound: `Some of the source tables don't exist in the target database. Missing tables: <TableList>`.
+- **Message**: `NoTablesFound: Some of the source tables don't exist in the target database. Missing tables: <TableList>`.
- **Cause**: The selected tables for the migration don't exist in the target Azure SQL Database. - **Recommendation**: Check if the selected tables exist in the target Azure SQL Database. If this migration is called from a PowerShell script, check if the table list parameter includes the correct table names and is passed into the migration.
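To see which tables are actually present in the target Azure SQL Database, and compare them against the tables selected for the migration, a query like the following can help (a sketch; run it against the target database):

```sql
-- List all user tables currently present in the target database
SELECT SCHEMA_NAME(t.schema_id) AS schema_name, t.name AS table_name
FROM sys.tables AS t
ORDER BY schema_name, table_name;
```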
-
-- **Message**: SqlVersionOutOfRange: `Source instance version is lower than 2008, which is not supported to migrate. Source instance: <InstanceName>`.
+- **Message**: `SqlVersionOutOfRange: Source instance version is lower than 2008, which is not supported to migrate. Source instance: <InstanceName>`.
- **Cause**: Azure Database Migration Service doesn't support migrating from SQL Server instances lower than 2008. -- **Recommendation**: Upgrade your source SQL Server instance to a newer version of SQL Server. For more information, see [Upgrade SQL Server](/sql/database-engine/install-windows/upgrade-sql-server)-
+- **Recommendation**: Upgrade your source SQL Server instance to a newer version of SQL Server. For more information, see [Upgrade SQL Server](/sql/database-engine/install-windows/upgrade-sql-server).
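To verify the source instance version before retrying the migration, you can run a quick check like this on the source SQL Server (a minimal sketch):

```sql
-- SQL Server 2008 corresponds to product version 10.x; anything lower isn't supported for migration
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition') AS Edition;
```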
-- **Message**: TableMappingMismatch: `Some of the source tables don't exist in the target database. Missing tables: <TableList>`.
+- **Message**: `TableMappingMismatch: Some of the source tables don't exist in the target database. Missing tables: <TableList>`.
- **Cause**: The selected tables for the migration don't exist in the target Azure SQL Database.
## Error code: 2060 - SqlSchemaCopyFailed -- **Message**:` The SELECT permission was denied on the object 'sql_logins', database 'master', schema 'sys'.`
+- **Message**: `The SELECT permission was denied on the object 'sql_logins', database 'master', schema 'sys'.`
-- **Cause**: The account customers use to connect Azure SQL Database lacks the permission to access sys.sql_logins table.
+- **Cause**: The account customers use to connect Azure SQL Database lacks the permission to access `sys.sql_logins` table.
- **Recommendation**: There are two ways to mitigate the issue:
-1. Add 'sysadmin' role to the account, which grant the admin permission.
-2. If customers cannot use admin account or cannot grant admin permission to the account, they can create a user in master and grant dbmanager and loginmanager permission to the user. For example,
-```sql
Run the script in the master
-create user testuser from login testlogin;
-exec sp_addRoleMember 'dbmanager', 'testuser'
-exec sp_addRoleMember 'loginmanager', 'testuser'
-```
+   1. Add the 'sysadmin' role to the account, which grants admin permissions.
-- **Message**:` Failed to get service token from ADF service.`
+   1. If customers can't use an admin account or can't grant admin permissions to the account, they can create a user in the master database and grant the **dbmanager** and **loginmanager** roles to the user. For example:
-- **Cause**: The customer's SHIR fails to connect data factory.
+ ```sql
+ -- Run the script in the master
+ CREATE USER testuser FROM LOGIN testlogin;
+ EXEC sp_addRoleMember 'dbmanager', 'testuser';
+ EXEC sp_addRoleMember 'loginmanager', 'testuser';
+ ```
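After granting the roles, you can confirm the membership from the `master` database with a query along these lines (a sketch; `testuser` matches the example above):

```sql
-- Verify that testuser is a member of the dbmanager and loginmanager roles in master
SELECT r.name AS role_name, m.name AS member_name
FROM sys.database_role_members AS rm
JOIN sys.database_principals AS r ON rm.role_principal_id = r.principal_id
JOIN sys.database_principals AS m ON rm.member_principal_id = m.principal_id
WHERE m.name = N'testuser';
```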
-- **Recommendation**: This is sample doc how to solve it: [Integration runtime Unable to connect to Data Factory](https://learn.microsoft.com/answers/questions/139976/integration-runtime-unable-to-connect-to-data-fact)
+- **Message**: `Failed to get service token from ADF service.`
+- **Cause**: The customer's SHIR fails to connect to the data factory.
+- **Recommendation**: For steps to resolve this issue, see [Integration runtime Unable to connect to Data Factory](/answers/questions/139976/integration-runtime-unable-to-connect-to-data-fact).
-- **Message**:` IR Nodes are offline.`
+- **Message**: `IR Nodes are offline.`
- **Cause**: The network might have been interrupted during migration, causing the IR nodes to go offline. - **Recommendation**: Make sure that the machine where the SHIR is installed is turned on.
+- **Message**: `Deployed failure: {0}. Object element: {1}.`
-- **Message**:` Deployed failure: {0}. Object element: {1}.`
+- **Cause**: This is the most common error customers might encounter. It means that the object can't be deployed to the target because it's unsupported on the target.
-- **Cause**: This is the most common error customers might encounter. It means that the object cannot be deployed to the target because it is unsupported on the target.
+- **Recommendation**: Check the assessment results ([Assessment rules](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules)). The following assessment issues might cause the schema migration to fail:
-- **Recommendation**: Customers need to check the assessment results ([Assessment rules](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql)). This is the list of assessment issues that might fail the schema migration:
-
-[BUIK INSERT](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#BulkInsert)
+ [BULK INSERT](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#BulkInsert)
-[COMPUTE clause](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#ComputeClause)
+ [COMPUTE clause](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#ComputeClause)
-[Cryptographic provider](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#CryptographicProvider)
+ [Cryptographic provider](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#CryptographicProvider)
-[Cross database references](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#CrossDatabaseReferences)
+ [Cross database references](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#CrossDatabaseReferences)
-[Database principal alias](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#DatabasePrincipalAlias)
+ [Database principal alias](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#DatabasePrincipalAlias)
-[DISABLE_DEF_CNST_CHK option](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#DisableDefCNSTCHK)
+ [DISABLE_DEF_CNST_CHK option](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#DisableDefCNSTCHK)
-[FASTFIRSTROW hint](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#FastFirstRowHint)
+ [FASTFIRSTROW hint](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#FastFirstRowHint)
-[FILESTREAM](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#FileStream)
+ [FILESTREAM](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#FileStream)
-[MS DTC](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#MSDTCTransactSQL)
+ [MS DTC](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#MSDTCTransactSQL)
-[OPENROWSET (bulk)](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#OpenRowsetWithNonBlobDataSourceBulk)
+ [OPENROWSET (bulk)](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#OpenRowsetWithNonBlobDataSourceBulk)
-[OPENROWSET (provider)](https://learn.microsoft.com/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules?view=azuresql#OpenRowsetWithSQLAndNonSQLProvider)
+ [OPENROWSET (provider)](/azure/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules#OpenRowsetWithSQLAndNonSQLProvider)
-Note: To view error detail, Open Microsoft Integration runtime configurtion manager > Diagnostics > logging > view logs.
-It will open the Event viewer > Application and Service logs > Connectors - Integration runtime and now filter for errors.
+ > [!NOTE]
+ > To view error details, open the Microsoft Integration Runtime Configuration Manager and navigate to **Diagnostics > Logging > View logs**. In Event Viewer, navigate to **Application and Service logs > Connectors - Integration runtime**, and filter for errors.
-- **Message**: Deployed failure: Index cannot be created on computed column '{0}' of table '{1}' because the underlying object '{2}' has a different owner. Object element: {3}.
-
- ` Sample Generated Script:: IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[Sales].[Customer]') AND name = N'AK_Customer_AccountNumber') CREATE UNIQUE NONCLUSTERED INDEX [AK_Customer_AccountNumber] ON [Sales].[Customer] ( [AccountNumber] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) `
+- **Message**: `Deployed failure: Index cannot be created on computed column '{0}' of table '{1}' because the underlying object '{2}' has a different owner. Object element: {3}.`
-- **Cause**: All function references in the computed column must have the same owner as the table.
+ Sample Generated Script: `IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[Sales].[Customer]') AND name = N'AK_Customer_AccountNumber') CREATE UNIQUE NONCLUSTERED INDEX [AK_Customer_AccountNumber] ON [Sales].[Customer] ( [AccountNumber] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)`
-- **Recommendation**: Check the doc [Ownership Requirement](https://learn.microsoft.com/sql/relational-databases/indexes/indexes-on-computed-columns?view=sql-server-ver16#ownership-requirements).
+- **Cause**: All function references in the computed column must have the same owner as the table.
+- **Recommendation**: See [Ownership Requirements](/sql/relational-databases/indexes/indexes-on-computed-columns#ownership-requirements).
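To compare the owner of the table with the owner of the function referenced by the computed column, and to align them if they differ, something like the following can be used (a sketch; `[Sales].[Customer]` comes from the sample script above, while `[dbo].[ufnMyFunction]` is a hypothetical function name):

```sql
-- Compare the owners of the table and the function referenced by the computed column
SELECT o.name, o.type_desc,
       USER_NAME(OBJECTPROPERTY(o.object_id, 'OwnerId')) AS owner_name
FROM sys.objects AS o
WHERE o.object_id IN (OBJECT_ID(N'[Sales].[Customer]'), OBJECT_ID(N'[dbo].[ufnMyFunction]'));

-- If the owners differ, transfer ownership of the function so it matches the table owner
ALTER AUTHORIZATION ON OBJECT::[dbo].[ufnMyFunction] TO [dbo];
```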
## Error code: Ext_RestoreSettingsError -- **Message**: Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden.; The remote server returned an error: (403) Forbidden
+- **Message**: `Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden.; The remote server returned an error: (403) Forbidden`
- **Cause**: The Azure SQL target is unable to connect to blob storage. - **Recommendation**: Confirm that target network settings allow access to blob storage. For example, if you're migrating to a SQL Server on Azure VM target, ensure that outbound connections on the Virtual Machine aren't being blocked. --- **Message**: Failed to create restore job. Unable to read blobs in storage container, exception: The remote name couldn't be resolved.
+- **Message**: `Failed to create restore job. Unable to read blobs in storage container, exception: The remote name could not be resolved.`
- **Cause**: The Azure SQL target is unable to connect to blob storage. - **Recommendation**: Confirm that target network settings allow access to blob storage. For example, if you're migrating to SQL Server on Azure VM, ensure that outbound connections on the VM aren't being blocked. - - **Message**: `Migration for Database <Database Name> failed with error 'Migration cannot be completed because provided backup file name <Backup File Name> should be the last restore backup file <Last Restore Backup File Name>'`. - **Cause**: The most recent backup wasn't specified in the backup settings. - **Recommendation**: Specify the most recent backup file name in backup settings and retry the operation. - - **Message**: `Operation failed: errorCode: Ext_RestoreSettingsError, message: RestoreId: 1111111-aaaa-bbbb-cccc-dddddddd, OperationId: 2222222-aaaa-bbbb-cccc-dddddddd, Detail: Unable to read blobs in storage container, exception: Unable to connect to the remote server;Unable to connect to the remote server;A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 11.111.11.111:443.` - **Cause**: This error can occur for storage accounts with either a public network or a private endpoint configuration. It's also possible that you have an on-premises DNS server that controls hybrid network routing and DHCP. Unless you allow the Azure IP addresses configured in your DNS server, your SQL Server on Azure VM target can't resolve the remote storage blob endpoint. - **Recommendation**: To debug this issue, try pinging your Azure Blob Storage URL from your SQL Server on Azure VM target and confirm whether you have a connectivity problem. To solve this issue, allow the Azure IP addresses configured in your DNS server. For more information, see [Troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity).
-## Azure SQL Database limitations
+## Azure SQL Database limitations
-Migrating to Azure SQL Database by using the Azure SQL extension for Azure Data Studio has the following limitations:
+Migrating to Azure SQL Database by using the Azure SQL extension for Azure Data Studio has the following limitations:
[!INCLUDE [sql-db-limitations](includes/sql-database-limitations.md)]
-## Azure SQL Managed Instance limitations
+## Azure SQL Managed Instance limitations
-Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
+Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azure Data Studio has the following limitations:
[!INCLUDE [sql-mi-limitations](includes/sql-managed-instance-limitations.md)]
-## SQL Server on Azure VMs limitations
+## SQL Server on Azure VMs limitations
-Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
+Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure Data Studio has the following limitations:
[!INCLUDE [sql-vm-limitations](includes/sql-virtual-machines-limitations.md)]
-## Azure Data Studio Limitations
+## Azure Data Studio limitations
-### Failed to start Sql Migration Service: Error: Request error:
+### Failed to start Sql Migration Service: Error: Request error:
- **Message**: `Error at ClientRequest.<anonymous> (c:\Users\MyUser\.azuredatastudio\extensions\microsoft.sql-migration-1.4.2\dist\main.js:2:7448) at ClientRequest.emit (node:events:538:35) at TLSSocket.socketOnEnd (node:_http_client:466:9) at TLSSocket.emit (node:events:538:35) at endReadableNT (node:internal/streams/readable:1345:12) at process.processTicksAndRejections (node:internal/process/task_queues:83:21)`-- **Cause**: This issue occurs when Azure Data Studio isn't able to download the MigrationService package from https://github.com/microsoft/sqltoolsservice/releases. The download failure can be due to disconnected network work or unresolved proxy settings. +
+- **Cause**: This issue occurs when Azure Data Studio isn't able to download the MigrationService package from https://github.com/microsoft/sqltoolsservice/releases. The download failure can be due to a disconnected network or unresolved proxy settings.
- **Recommendation**: To resolve this issue, download the package manually. Follow the mitigation steps outlined in this link: https://github.com/microsoft/azuredatastudio/issues/22558#issuecomment-1496307891
-## Next steps
+## Related content
-- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension)-- For more information on known limitations with Log Replay Service, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations)-- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
+- [Azure SQL migration extension for Azure Data Studio](/azure-data-studio/extensions/azure-sql-migration-extension)
+- [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations)
+- [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
dns Dns Private Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-records.md
Previously updated : 10/12/2023 Last updated : 12/04/2023
The DNS standards permit a single TXT record to contain multiple strings, each o
When calling the Azure DNS REST API, you need to specify each TXT string separately. When you use the Azure portal, PowerShell, or CLI interfaces, you should specify a single string per record. This string is automatically divided into 255-character segments if necessary.
-The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure private DNS supports a total string length of up to 1024 characters in each TXT record set (across all records combined).
+The multiple strings in a DNS record shouldn't be confused with the multiple TXT records in a TXT record set. A TXT record set can contain multiple records, *each of which* can contain multiple strings. Azure DNS supports a total string length of up to 4096 characters`*` in each TXT record set (across all records combined).
+
+`*` 4096-character support is currently only available in the Azure public cloud. National clouds are limited to 1024 characters until the 4096-character support rollout is complete.
## Tags and metadata
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Please note that different groups and associated user entitlements need to be se
The entitlements service enables three use cases for authorization: -- **Data groups** used for data authorization (for example, data.welldb.viewers, data.welldb.owners)-- **Service groups** used for service authorization (for example, service.storage.user, service.storage.admin)-- **User groups** used for hierarchical grouping of user and service identities (for example, users.datalake.viewers, users.datalake.editors)-
-Some user, data, and service groups are created by default when a data partition is provisioned. Details of these groups and their hierarchy scope is in [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
+1. **Data groups** are used to enable authorization for data.
+ 1. Some examples are data.welldb.viewers and data.welldb.owners.
+ 2. The data groups are added in the ACL of individual data records to enable viewer and owner access of the data.
+ 3. Individual users who are part of the data groups are authorized to view or own the data depending on the scope of the data group.
+2. **Service groups** are used to enable authorization for services.
+ 1. Some examples are service.storage.user and service.storage.admin.
+   2. The service groups are predefined when OSDU services are provisioned in each data partition of an Azure Data Manager for Energy instance.
+ 3. These groups enable viewer, editor, and admin access to call the OSDU APIs corresponding to the OSDU services.
+3. **User groups** are used for hierarchical grouping of user and service groups.
+ 1. Some examples are users.datalake.viewers and users.datalake.editors.
+   2. Some user groups are created by default when a data partition is provisioned. Details of these groups and their hierarchy scope are in [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
+
+Individual users can be added to a `user group`. The `user group` is then added to a `data group`, and the `data group` is added to the ACL of the data record. This structure provides a layer of abstraction: individual users don't need to be added one by one to the data group; they can instead be added to the `user group`. This approach helps scale membership management in OSDU.
## Group naming
event-grid Communication Services Telephony Sms Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-telephony-sms-events.md
This section contains an example of what that data would look like for each even
"MessageId": "Incoming_20200918002745d29ebbea-3341-4466-9690-0a03af35228e", "From": "15555555555", "To": "15555555555",
- "Message": "Great to connect with ACS events",
+ "Message": "Great to connect with Azure Communication Services events",
"ReceivedTimestamp": "2020-09-18T00:27:45.32Z" }, "eventType": "Microsoft.Communication.SMSReceived",
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
This section contains an example of what that data would look like for each even
``` ## Limitations
-Aside from `IncomingCall`, Calling events are only available for ACS VoIP users. PSTN, bots, echo bot and Teams users events are excluded.
-No calling events will be available for ACS - Teams meeting interop call.
+Aside from `IncomingCall`, Calling events are only available for Azure Communication Services VoIP users. PSTN, bot, echo bot, and Teams user events are excluded.
+No calling events are available for an Azure Communication Services - Teams meeting interop call.
-`IncomingCall` events have support for ACS VoIP users and PSTN numbers. For more details on which scenarios can trigger `IncomingCall` events, see the following [Incoming call concepts](../communication-services/concepts/call-automation/incoming-call-notification.md) documentation.
+`IncomingCall` events have support for Azure Communication Services VoIP users and PSTN numbers. For more details on which scenarios can trigger `IncomingCall` events, see the following [Incoming call concepts](../communication-services/concepts/call-automation/incoming-call-notification.md) documentation.
## Next steps See the following tutorial: [Quickstart: Handle voice and video calling events](../communication-services/quickstarts/voice-video-calling/handle-calling-events.md).
event-hubs Event Hubs Ip Filtering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md
From API version **2021-06-01-preview onwards**, the default value of the `defau
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
-For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/controlplane-preview/private-endpoint-connections/create-or-update).
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/namespaces/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/private-endpoint-connections/create-or-update).
> [!NOTE] > None of the above settings bypass validation of claims via SAS or Microsoft Entra authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings.
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
description: 'This quickstart shows you how to stream data into and from Azure E
Last updated 02/07/2023 -+
event-hubs Event Hubs Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-service-endpoints.md
From API version **2021-06-01-preview onwards**, the default value of the `defau
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
-For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/controlplane-preview/private-endpoint-connections/create-or-update).
+For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/namespaces/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/private-endpoint-connections/create-or-update).
> [!NOTE] > None of the above settings bypass validation of claims via SAS or Microsoft Entra authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings.
event-hubs Schema Registry Client Side Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-client-side-enforcement.md
description: This article provides information on using schemas in a schema regi
Last updated 04/26/2023 -+ # Client-side schema enforcement
event-hubs Schema Registry Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-concepts.md
description: This article explains concepts for Azure Schema Registry in Azure E
Last updated 04/26/2023 -+ # Schema Registry in Azure Event Hubs
event-hubs Schema Registry Dotnet Send Receive Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-dotnet-send-receive-quickstart.md
Last updated 04/26/2023
ms.devlang: csharp -+ # Validate using an Avro schema when streaming events using Event Hubs .NET SDKs (AMQP)
event-hubs Schema Registry Kafka Java Send Receive Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-kafka-java-send-receive-quickstart.md
Last updated 04/26/2023
ms.devlang: java -+ # Validate schemas for Apache Kafka applications using Avro (Java)
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
description: This quickstart describes how to create an Azure Front Door Standar
Previously updated : 07/08/2022 Last updated : 12/04/2023
In this quickstart, you'll create a Front Door Standard/Premium, an App Service,
Multiple Azure resources are defined in the Bicep file:
-* [**Microsoft.Network/frontDoors**](/azure/templates/microsoft.network/frontDoors)
+* [**Microsoft.Cdn/profiles**](/azure/templates/microsoft.cdn/profiles) (Azure Front Door Standard/Premium profile)
* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms) (App service plan to host web apps) * [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites) (Web app origin servicing request for Front Door)
frontdoor Front Door Route Matching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-route-matching.md
Previously updated : 06/01/2023 Last updated : 12/04/2023 zone_pivot_groups: front-door-tiers
After Front Door determines the specific frontend host and filters for possible
::: zone pivot="front-door-standard-premium" >[!NOTE]
-> * Any paths without a wildcard are considered to be exact-match paths. If a path ends in a `/`, this is considered an exact match.
+> The wildcard character `*` is only valid for paths that don't have any other characters after it. Additionally, the wildcard character `*` must be preceded by a slash `/`. Paths without a wildcard are considered to be exact-match paths. A path that ends in a slash `/` is also an exact-match path. Ensure that your paths follow these rules to avoid any errors.
::: zone-end
healthcare-apis Dicom Extended Query Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-extended-query-tags-overview.md
The following VR types are supported:
> [!NOTE] > Sequential tags, which are tags under a tag of type Sequence of Items (SQ), are currently not supported. > You can add up to 128 extended query tags.
+> We do not index extended query tags if the value is null or empty.
#### Responses
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
The following DICOM elements are required to be present in every DICOM file atte
* `PatientID` > [!NOTE]
-> All UIDs must be between 1 and 64 characters long, and only contain alpha numeric characters or the following special characters: `.`, `-`. `PatientID` is validated based on its `LO` `VR` type.
+> All UIDs must be between 1 and 64 characters long, and only contain alphanumeric characters or the following special characters: `.`, `-`. `PatientID` continues to be a required tag, and its value can be null in the input. `PatientID` is validated based on its `LO` `VR` type.
Each file stored must have a unique combination of `StudyInstanceUID`, `SeriesInstanceUID`, and `SopInstanceUID`. The warning code `45070` is returned if a file with the same identifiers already exists.
We support searching the following attributes and search types.
| `ManufacturerModelName` | | X | X | X | X | | | `SOPInstanceUID` | | | X | | X | X |
+> [!NOTE]
+> We do not support searching using an empty string for any attribute.
+ #### Search matching We support the following matching types.
We support searching on these attributes:
|`ProcedureStepState`| |`StudyInstanceUID`|
+> [!NOTE]
+> We do not support searching using an empty string for any attribute.
+ ##### Search Matching We support these matching types:
healthcare-apis Export Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-files.md
POST /export HTTP/1.1
Accept: */* Content-Type: application/json {
- "sources": {
+ "source": {
"type": "identifiers", "settings": { "values": [
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
You must provide following information to execute a full backup:
- Storage account blob storage container - Storage container SAS token with permissions `crdw` (if storage account is not behind a private endpoint) -
-### Prerequisites if the storage account is behind a private endpoint (preview):
+#### Prerequisites if the storage account is behind a private endpoint (preview):
-1. Ensure you have the latest CLI version installed.
+1. Ensure you have the Azure CLI version 2.54.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
2. Create a user assigned managed identity. 3. Create a storage account (or use an existing storage account). 4. Enable Trusted service bypass on the storage account in the "Networking" tab, under "Exceptions."
You must provide following information to execute a full backup:
az keyvault update-hsm --hsm-name mhsmdemo2 -g mhsmrgname --mi-user-assigned "/subscriptions/subid/resourcegroups/mhsmrgname/providers/Microsoft.ManagedIdentity/userAssignedIdentities/userassignedidentitynamefromstep2" ``` + ## Full backup Backup is a long-running operation but immediately returns a Job ID. You can check the status of the backup process using this Job ID. The backup process creates a folder inside the designated container with the following naming pattern **`mhsm-{HSM_NAME}-{YYYY}{MM}{DD}{HH}{mm}{SS}`**, where HSM_NAME is the name of the managed HSM being backed up and YYYY, MM, DD, HH, mm, and SS are the year, month, day, hour, minutes, and seconds (in UTC) when the backup command was received.
load-balancer Ipv6 Configure Standard Load Balancer Template Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-configure-standard-load-balancer-template-json.md
Previously updated : 03/31/2020 Last updated : 12/04/2023
load-balancer Ipv6 Dual Stack Standard Internal Load Balancer Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-dual-stack-standard-internal-load-balancer-powershell.md
Previously updated : 6/27/2023 Last updated : 06/27/2023
load-balancer Load Balancer Multivip Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multivip-overview.md
Previously updated : 09/19/2022 Last updated : 12/04/2023
The following table shows the complete mapping in the load balancer:
The destination of the inbound flow is now the frontend IP address on the loopback interface in the VM. Each rule must produce a flow with a unique combination of destination IP address and destination port. Port reuse is possible on the same VM by varying the destination IP address to the frontend IP address of the flow. Your service is exposed to the load balancer by binding it to the frontendΓÇÖs IP address and port of the respective loopback interface.
-You'll notice the destination port doesn't change in the example. In floating IP scenarios, Azure Load Balancer also supports defining a load balancing rule to change the backend destination port and to make it different from the frontend destination port.
+You notice the destination port doesn't change in the example. In floating IP scenarios, Azure Load Balancer also supports defining a load balancing rule to change the backend destination port and to make it different from the frontend destination port.
The Floating IP rule type is the foundation of several load balancer configuration patterns. One example that is currently available is the [Configure one or more Always On availability group listeners](/azure/azure-sql/virtual-machines/windows/availability-group-listener-powershell-configure) configuration. Over time, we'll document more of these scenarios.
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
Previously updated : 11/16/2022 Last updated : 12/04/2023 ms.devlang: azurecli
This article describes the monitoring data generated by Load Balancer. Load Bala
## Load balancer insights
-Some services in Azure have a special focused pre-built monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
+Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
Load Balancer insights provide:
For a list of the tables used by Azure Monitor Logs and queryable by Log Analyti
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks
-If you're creating or running an application, which run on Load Balancer [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) may offer other types of alerts.
+If you're creating or running an application that runs on Load Balancer, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) offers other types of alerts.
The following table lists common and recommended alert rules for Load Balancer.
load-balancer Tutorial Load Balancer Standard Public Zonal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-standard-public-zonal-portal.md
# Customer intent: As an IT administrator, I want to create a load balancer that load balances incoming internet traffic to virtual machines within a specific zone in a region. Previously updated : 12/05/2022 Last updated : 12/04/2023
For more information about availability zones and a standard load balancer, see
Sign in to the [Azure portal](https://portal.azure.com).
-## Create the virtual network
-In this section, you'll create a virtual network and subnet.
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-2. In **Virtual networks**, select **+ Create**.
-
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **CreateZonalLBTutorial-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **(Europe) West Europe** |
-
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-5. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-6. Select **+ Add subnet**.
-
-7. On the **Add subnet** page, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-8. Select **Add**.
-
-9. Select the **Security** tab.
-
-10. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
-
-11. Select the **Review + create** tab or select the **Review + create** button.
-
-12. Select **Create**.
-
-> [!IMPORTANT]
-
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-
->
-
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateZonalLBTutorial-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Availability zone | Select **1**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In **Outbound IP**, for **Public IP addresses**, select **Create a new public IP address**.
-
-6. On the **Add a public IP address** page, for **Name**, enter **myNATGatewayIP**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-
-9. On the **Subnet** page, for **Virtual network**, select **myVNet** from the dropdown.
-
-10. For **Subnet name**, select **myBackendSubnet**.
-
-11. Select the **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-
-12. Select **Create**.
-
-## Create load balancer
-
-In this section, you'll create a zonal load balancer that load balances virtual machines.
-
-During the creation of the load balancer, you'll configure:
-
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. In the **Load balancer** page, select **Create**.
-
-3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreateZonalLBTutorial-rg**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(Europe) West Europe**. |
- | SKU | Leave the default **Standard**. |
- | Type | Select **Public**. |
- | Tier | Leave the default **Regional**. |
-
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-
-6. For **Name**, type **LoadBalancerFrontend**.
-
-7. For **IP version**, select either **IPv4** or **IPv6**.
-
- > [!NOTE]
- > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
-
-8. For **IP type**, select **IP address**.
-
- > [!NOTE]
- > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
-
-9. For **Public IP address**, select **Create new**.
-
-10. On the **Add a public IP address** page, for **Name**, enter **myPublicIP**.
-
-11. For **Availability zone**, select **1** from the dropdown, then click **OK** to close the **Add a public IP address** page.
-
- > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-
-12. If you see **Routing preference** settings, leave the default of **Microsoft Network** for **Routing preference**.
-
-13. Select **OK**.
-
-14. Select **Add**.
-
-15. At the bottom of the page, select **Next: Backend pools**.
-
-16. On the **Backend pools** page, select **+ Add a backend pool**.
-
-17. On the **Add backend pool** page, for **Name**, type **myBackendPool**.
-
-18. For **Virtual network**, select **myVNet** from the dropdown.
-
-19. For **Backend Pool Configuration**, select either **NIC** or **IP Address**.
-
-20. Select **Save**.
-
-21. At the bottom of the page, select the **Next: Inbound rules** button.
-
-22. On the **Inbound rules** page, for **Load balancing rule**, select **+ Add a load balancing rule**.
-
-23. On the **Add load balancing rule** page, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Backend pool | Select **myBackendPool**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | TCP reset | Select **Enabled**. |
- | Floating IP | Select **Disabled**. |
- | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-
-24. Select **Add**.
-
-25. At the bottom of the page, select the **Review + create** button.
-
-26. Select **Create**.
-
- > [!NOTE]
- > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
- > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
-
-## Create virtual machines
-
-In this section, you'll create three VMs (**myVM1**, **myVM2**, and **myVM3**) in one zone (**Zone 1**).
-
-These VMs are added to the backend pool of the load balancer that was created earlier.
-
-1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreateZonalLBTutorial-rg** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM1** |
- | Region | Select **(Europe) West Europe** |
- | Availability Options | Select **Availability zone** |
- | Availability zone | Select **1** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** |
- | Azure Spot instance | Leave the default of unchecked. |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None** |
-
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the Networking tab, select or enter:
-
- | Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
- | **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
- | **Load balancing settings** |
- | Load-balancing options | Select **Azure load balancing** |
- | Select a load balancer | Select **myLoadBalancer** |
- | Select a backend pool | Select **myBackendPool** |
-
-7. Select **Review + create**.
-
-8. Review the settings, and then select **Create**.
-
-9. Follow steps 1 to 8 to create two more VMs with the following values, keeping all other settings the same as **myVM1**:
-
- | Setting | VM 2| VM 3|
- | - | -- ||
- | Name | **myVM2** |**myVM3**|
- | Availability zone | **1** |**1**|
- | Network security group | Select the existing **myNSG**| Select the existing **myNSG**|
[!INCLUDE [ephemeral-ip-note.md](../../includes/ephemeral-ip-note.md)]
-## Install IIS
-
-1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM1** that is located in the **CreateZonalLBTutorial-rg** resource group.
-
-2. On the **Overview** page, select **Connect**, then **Bastion**.
-
-3. Select **Use Bastion**.
-
-4. Enter the username and password entered during VM creation.
-
-5. Select **Connect**.
-
-6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
-
-7. In the PowerShell Window, run the following commands to:
-
- * Install the IIS server
- * Remove the default iisstart.htm file
- * Add a new iisstart.htm file that displays the name of the VM:
-
- ```powershell
- # Install IIS server role
- Install-WindowsFeature -name Web-Server -IncludeManagementTools
-
- # Remove default htm file
- Remove-Item C:\inetpub\wwwroot\iisstart.htm
-
- # Add a new htm file that displays server name
- Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
- ```
-
-8. Close the Bastion session with **myVM1**.
-
-9. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2** and **myVM3**.
-
-## Test the load balancer
-
-1. In the search box at the top of the page, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. Click the load balancer you created, **myLoadBalancer**. On the **Frontend IP configuration** page for your load balancer, locate the public **IP address**.
-
-3. Copy the public IP address, and then paste it into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.
-
- :::image type="content" source="./media/tutorial-load-balancer-standard-zonal-portal/load-balancer-test.png" alt-text="Screenshot of load balancer test":::
-
-## Clean up resources
-When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **CreateZonalLBTutorial-rg** that contains the resources and then select **Delete**.
## Next steps
machine-learning Concept Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-foundation-models.md
Support|Supported by Microsoft and covered by [Azure Machine Learning SLA](https
## Learn more Learn [how to use Foundation Models in Azure Machine Learning](./how-to-use-foundation-models.md) for fine-tuning, evaluation and deployment using Azure Machine Learning studio UI or code based methods.
-* Explore the [model catalog in Azure Machine Learning studio](https://ml.azure.com/model/catalog). You need a [Azure Machine Learning workspace](./quickstart-create-resources.md) to explore the catalog.
+* Explore the [model catalog in Azure Machine Learning studio](https://ml.azure.com/model/catalog). You need an [Azure Machine Learning workspace](./quickstart-create-resources.md) to explore the catalog.
* [Evaluate, fine-tune and deploy models](./how-to-use-foundation-models.md) curated by Azure Machine Learning.
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Prebuilt Docker container images for inference are used when deploying a model w
> [!IMPORTANT] > The list provided below includes only **currently supported** inference docker images by Azure Machine Learning.
+* All the docker images run as a non-root user.
+* We recommend using the `latest` tag for docker images. Prebuilt docker images for inference are published to the Microsoft Container Registry (MCR). To query the list of available tags, follow the [instructions on the GitHub repository](https://github.com/microsoft/ContainerRegistry#browsing-mcr-content).
+* If you want to use a specific tag for any inference docker image, we support tags from `latest` back to tags that are up to *6 months* older than `latest`.
+
+**Inference minimal base images**
+
+Framework version | CPU/GPU | Pre-installed packages | MCR Path
+ | | | |
+NA | CPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cpu-inference:latest`
+NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cuda11.0.3-gpu-inference:latest`
+NA | CPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cpu-inference:latest`
+NA | GPU | NA | `mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cuda11.6.2-gpu-inference:latest`
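If it helps to see the image reference in context, here is a minimal sketch (not part of the original article) of registering one of the MCR images above as a custom environment with the Azure ML Python SDK v2; the workspace placeholders and conda file name are illustrative assumptions.

```python
# Hedged sketch: reference a prebuilt inference image from MCR as an Azure ML environment (SDK v2).
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Environment
from azure.identity import DefaultAzureCredential

# Placeholder workspace details -- substitute your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

env = Environment(
    name="prebuilt-inference-cpu",  # illustrative name
    image="mcr.microsoft.com/azureml/minimal-ubuntu20.04-py38-cpu-inference:latest",
    conda_file="conda.yaml",  # assumed file listing your scoring dependencies
)
ml_client.environments.create_or_update(env)
```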
## How to use inference prebuilt docker images?
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
To allow the installation of R packages, allow __outbound__ traffic to `cloud.r-
## Scenario: Using compute cluster or compute instance with a public IP
+> [!IMPORTANT]
+> A compute instance or compute cluster without a public IP does not need inbound traffic from Azure Batch management and Azure Machine Learning services. However, if you have multiple computes and some of them use a public IP address, you will need to allow this traffic.
+
+When using Azure Machine Learning __compute instance__ or __compute cluster__ (_with a public IP address_), allow inbound traffic from the Azure Machine Learning service. A compute instance or compute cluster _with no public IP_ (preview) __doesn't__ require this inbound communication. A Network Security Group allowing this traffic is dynamically created for you; however, you may also need to create user-defined routes (UDR) if you have a firewall. When creating a UDR for this traffic, you can use either **IP Addresses** or **service tags** to route the traffic.
+
+# [IP Address routes](#tab/ipaddress)
+
+For the Azure Machine Learning service, you must add the IP address of both the __primary__ and __secondary__ regions. To find the secondary region, see the [Cross-region replication in Azure](/azure/availability-zones/cross-region-replication-azure). For example, if your Azure Machine Learning service is in East US 2, the secondary region is Central US.
+
+To get a list of IP addresses of the Azure Machine Learning service, download the [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) and search the file for `AzureMachineLearning.<region>`, where `<region>` is your Azure region.
+
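As an illustration of that lookup, the following sketch (not from the original article) filters the downloaded Service Tags file for the `AzureMachineLearning.<region>` entry; the file name is a placeholder and the `values`/`properties.addressPrefixes` layout is assumed to match the published Service Tags schema.

```python
# Hedged helper: list the address prefixes for AzureMachineLearning in a given region.
import json

def aml_prefixes(path: str, region: str) -> list:
    """Return the address prefixes for the AzureMachineLearning.<region> service tag."""
    with open(path) as f:
        tags = json.load(f)
    target = f"AzureMachineLearning.{region}"
    for entry in tags.get("values", []):
        if entry.get("name") == target:
            return entry["properties"]["addressPrefixes"]
    return []

# Example usage with placeholder file name and region.
print(aml_prefixes("ServiceTags_Public.json", "EastUS2"))
```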
+> [!IMPORTANT]
+> The IP addresses may change over time.
+
+When creating the UDR, set the __Next hop type__ to __Internet__. This routes the inbound communication from Azure past your firewall and directly to the public load balancers of the compute instance and compute cluster. The UDR is required because compute instances and compute clusters receive random public IP addresses at creation, so you can't register those IPs with your firewall ahead of time to allow the inbound traffic from Azure. The following image shows an example IP address based UDR in the Azure portal:
++
+# [Service tag routes](#tab/servicetag)
+
+Create user-defined routes for the `AzureMachineLearning` service tag.
+
+The following command demonstrates adding a route for this service tag:
+
+```azurecli
+az network route-table route create -g MyResourceGroup --route-table-name MyRouteTable -n AzureMLRoute --address-prefix AzureMachineLearning --next-hop-type Internet
+```
+++
+For information on configuring UDR, see [Route network traffic with a routing table](/azure/virtual-network/tutorial-create-route-table-portal).
## Scenario: Firewall between Azure Machine Learning and Azure Storage endpoints
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `m
#### Supported model architectures - HuggingFace and MMDetection (preview)
-With the new backend that runs on [Azure Machine Learning pipelines](concept-ml-pipelines.md), you can additionally use any image classification model from the [HuggingFace Hub](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers) which is part of the transformers library (such as microsoft/beit-base-patch16-224), as well as any object detection or instance segmentation model from the [MMDetection Version 2.28.2 Model Zoo](https://mmdetection.readthedocs.io/en/v2.28.2/model_zoo.html) (such as atss_r50_fpn_1x_coco).
+With the new backend that runs on [Azure Machine Learning pipelines](concept-ml-pipelines.md), you can additionally use any image classification model from the [HuggingFace Hub](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers) which is part of the transformers library (such as microsoft/beit-base-patch16-224), as well as any object detection or instance segmentation model from the [MMDetection Version 3.1.0 Model Zoo](https://mmdetection.readthedocs.io/en/v3.1.0/model_zoo.html) (such as `atss_r50_fpn_1x_coco`).
-In addition to supporting any model from HuggingFace Transfomers and MMDetection 2.28.2, we also offer a list of curated models from these libraries in the azureml-staging registry. These curated models have been tested thoroughly and use default hyperparameters selected from extensive benchmarking to ensure effective training. The table below summarizes these curated models.
+In addition to supporting any model from HuggingFace Transfomers and MMDetection 3.1.0, we also offer a list of curated models from these libraries in the azureml registry. These curated models have been tested thoroughly and use default hyperparameters selected from extensive benchmarking to ensure effective training. The table below summarizes these curated models.
Task | model architectures | String literal syntax |-|-
-Image classification<br> (multi-class and multi-label)| **BEiT** <br> **ViT** <br> **DeiT** <br> **SwinV2]** | [`microsoft/beit-base-patch16-224-pt22k-ft22k`](https://ml.azure.com/registries/azureml/models/microsoft-beit-base-patch16-224-pt22k-ft22k/version/5)<br> [`google/vit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/google-vit-base-patch16-224/version/5)<br> [`facebook/deit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/facebook-deit-base-patch16-224/version/5)<br> [`microsoft/swinv2-base-patch4-window12-192-22k`](https://ml.azure.com/registries/azureml/models/microsoft-swinv2-base-patch4-window12-192-22k/version/5)
-Object Detection | **Sparse R-CNN** <br> **Deformable DETR** <br> **VFNet** <br> **YOLOF** <br> **Swin** | [`sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco`](https://ml.azure.com/registries/azureml/models/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco/version/3)<br> [`sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco`](https://ml.azure.com/registries/azureml/models/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco/version/3) <br> [`deformable_detr_twostage_refine_r50_16x2_50e_coco`](https://ml.azure.com/registries/azureml/models/deformable_detr_twostage_refine_r50_16x2_50e_coco/version/3) <br> [`vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco`](https://ml.azure.com/registries/azureml/models/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco/version/3) <br> [`vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco`](https://ml.azure.com/registries/azureml/models/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco/version/3) <br> [`yolof_r50_c5_8x8_1x_coco`](https://ml.azure.com/registries/azureml/models/yolof_r50_c5_8x8_1x_coco/version/3)
-Instance Segmentation | **Swin** | [`mask_rcnn_swin-t-p4-w7_fpn_1x_coco`](https://ml.azure.com/registries/azureml/models/mask_rcnn_swin-t-p4-w7_fpn_1x_coco/version/3)
+Image classification<br> (multi-class and multi-label)| **BEiT** <br> **ViT** <br> **DeiT** <br> **SwinV2** | [`microsoft/beit-base-patch16-224-pt22k-ft22k`](https://ml.azure.com/registries/azureml/models/microsoft-beit-base-patch16-224-pt22k-ft22k/version/5)<br> [`google/vit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/google-vit-base-patch16-224/version/5)<br> [`facebook/deit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/facebook-deit-base-patch16-224/version/5)<br> [`microsoft/swinv2-base-patch4-window12-192-22k`](https://ml.azure.com/registries/azureml/models/microsoft-swinv2-base-patch4-window12-192-22k/version/5)
+Object Detection | **Sparse R-CNN** <br> **Deformable DETR** <br> **VFNet** <br> **YOLOF** <br> **Swin** | [`mmd-3x-sparse-rcnn_r50_fpn_300-proposals_crop-ms-480-800-3x_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-sparse-rcnn_r50_fpn_300-proposals_crop-ms-480-800-3x_coco/version/8)<br> [`mmd-3x-sparse-rcnn_r101_fpn_300-proposals_crop-ms-480-800-3x_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-sparse-rcnn_r101_fpn_300-proposals_crop-ms-480-800-3x_coco/version/8) <br> [`mmd-3x-deformable-detr_refine_twostage_r50_16xb2-50e_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-deformable-detr_refine_twostage_r50_16xb2-50e_coco/version/8) <br> [`mmd-3x-vfnet_r50-mdconv-c3-c5_fpn_ms-2x_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-vfnet_r50-mdconv-c3-c5_fpn_ms-2x_coco/version/8) <br> [`mmd-3x-vfnet_x101-64x4d-mdconv-c3-c5_fpn_ms-2x_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-vfnet_x101-64x4d-mdconv-c3-c5_fpn_ms-2x_coco/version/8) <br> [`mmd-3x-yolof_r50_c5_8x8_1x_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-yolof_r50_c5_8x8_1x_coco/version/8)
+Instance Segmentation | **Swin** | [`mmd-3x-mask-rcnn_swin-t-p4-w7_fpn_1x_coco`](https://ml.azure.com/registries/azureml/models/mmd-3x-mask-rcnn_swin-t-p4-w7_fpn_1x_coco/version/8)
We constantly update the list of curated models. You can get the most up-to-date list of the curated models for a given task using the Python SDK: ``` credential = DefaultAzureCredential()
-ml_client = MLClient(credential, registry_name="azureml-staging")
+ml_client = MLClient(credential, registry_name="azureml")
models = ml_client.models.list()
classification_models = []
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
rule_scale_in = ScaleRule(
metric_resource_uri = deployment.id, time_grain = datetime.timedelta(minutes = 1), statistic = "Average",
- operator = "GreaterThan",
+ operator = "LessThan",
time_aggregation = "Last", time_window = datetime.timedelta(minutes = 5),
- threshold = 70
+ threshold = 30
), scale_action = ScaleAction( direction = "Decrease", type = "ChangeCount",
- value = 2,
+ value = 1,
cooldown = datetime.timedelta(hours = 1) ) )
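For completeness, here is a hedged sketch of the complementary scale-out rule that mirrors the structure of the scale-in rule shown above; the metric name, thresholds, and resource ID placeholder are illustrative assumptions rather than values from the article.

```python
# Hedged sketch: a scale-out rule built with the same azure-mgmt-monitor classes as above.
import datetime
from azure.mgmt.monitor.models import MetricTrigger, ScaleRule, ScaleAction

rule_scale_out = ScaleRule(
    metric_trigger=MetricTrigger(
        metric_name="CpuUtilizationPercentage",          # assumed deployment metric name
        metric_resource_uri="<deployment-resource-id>",  # the deployment.id used elsewhere in the article
        time_grain=datetime.timedelta(minutes=1),
        statistic="Average",
        operator="GreaterThan",
        time_aggregation="Last",
        time_window=datetime.timedelta(minutes=5),
        threshold=70,
    ),
    scale_action=ScaleAction(
        direction="Increase",   # add instances when CPU is high
        type="ChangeCount",
        value=1,
        cooldown=datetime.timedelta(hours=1),
    ),
)
```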
To learn more about autoscale with Azure Monitor, see the following articles:
- [Understand autoscale settings](../azure-monitor/autoscale/autoscale-understanding-settings.md) - [Overview of common autoscale patterns](../azure-monitor/autoscale/autoscale-common-scale-patterns.md) - [Best practices for autoscale](../azure-monitor/autoscale/autoscale-best-practices.md)-- [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md)
+- [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md)
machine-learning How To Configure Network Isolation With V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md
ws.update(v1_legacy_mode=False)
The Azure CLI [extension v1 for machine learning](./v1/reference-azure-machine-learning-cli.md) provides the [az ml workspace update](/cli/azure/ml(v1)/workspace#az-ml(v1)-workspace-update) command. To disable the parameter for a workspace, add the parameter `--v1-legacy-mode False`. > [!IMPORTANT]
-> The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). Use the `az version` command to view version information.
+> The `v1-legacy-mode` parameter is only available in version 1.41.0 or newer of the Azure CLI extension for machine learning v1 (`azure-cli-ml`). The parameter is __not__ available in the v2 (`ml`) extension. Use the `az version` command to view version information, including the extension and version that is installed.
```azurecli az ml workspace update -g <myresourcegroup> -n <myworkspace> --v1-legacy-mode False
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Machine Learning also includes features for monitoring and auditing:
* Job artifacts, such as code snapshots, logs, and other outputs. * Lineage between jobs and assets, such as containers, data, and compute resources.
+If you use Apache Airflow, the [airflow-provider-azure-machinelearning](https://github.com/Azure/airflow-provider-azure-machinelearning) package enables you to submit workflows to Azure Machine Learning from Apache Airflow.
+ ## Next steps Start using Azure Machine Learning:
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
When you register a model, we upload the model to the cloud (in your workspace's
The following examples demonstrate how to register a model.
+> [!IMPORTANT]
+> You should use only models that you create or obtain from a trusted source. You should treat serialized models as code, because security vulnerabilities have been discovered in a number of popular formats. Also, models might be intentionally trained with malicious intent to provide biased or inaccurate output.
# [Azure CLI](#tab/azcli)
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-select-algorithms.md
A common question is "Which machine learning algorithm should I use?" The al
>This article applies to classic prebuilt components and not compatible with CLI v2 and SDK v2. ## Business scenarios and the Machine Learning Algorithm Cheat Sheet
-The [Azure Machine Learning Algorithm Cheat Sheet](./algorithm-cheat-sheet.md?WT.mc_id=docs-article-lazzeri) helps you with the first consideration: **What you want to do with your data**? On the Machine Learning Algorithm Cheat Sheet, look for task you want to do, and then find a [Azure Machine Learning designer](./concept-designer.md?WT.mc_id=docs-article-lazzeri) algorithm for the predictive analytics solution.
+The [Azure Machine Learning Algorithm Cheat Sheet](./algorithm-cheat-sheet.md?WT.mc_id=docs-article-lazzeri) helps you with the first consideration: **What do you want to do with your data?** On the Machine Learning Algorithm Cheat Sheet, look for the task you want to do, and then find an [Azure Machine Learning designer](./concept-designer.md?WT.mc_id=docs-article-lazzeri) algorithm for the predictive analytics solution.
Machine Learning designer provides a comprehensive portfolio of algorithms, such as [Multiclass Decision Forest](../algorithm-module-reference/multiclass-decision-forest.md?WT.mc_id=docs-article-lazzeri), [Recommendation systems](../algorithm-module-reference/evaluate-recommender.md?WT.mc_id=docs-article-lazzeri), [Neural Network Regression](../algorithm-module-reference/neural-network-regression.md?WT.mc_id=docs-article-lazzeri), [Multiclass Neural Network](../algorithm-module-reference/multiclass-neural-network.md?WT.mc_id=docs-article-lazzeri), and [K-Means Clustering](../algorithm-module-reference/k-means-clustering.md?WT.mc_id=docs-article-lazzeri). Each algorithm is designed to address a different type of machine learning problem. See the [Machine Learning designer algorithm and component reference](../component-reference/component-reference.md?WT.mc_id=docs-article-lazzeri) for a complete list along with documentation about how each algorithm works and how to tune parameters to optimize the algorithm.
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-set-up-training-targets.md
Select the compute target where your training script will run on. If no compute
The example code in this article assumes that you have already created a compute target `my_compute_target` from the "Prerequisites" section.
->[!Note]
->Azure Databricks is not supported as a compute target for model training. You can use Azure Databricks for data preparation and deployment tasks.
-
+>[!NOTE]
+> - Azure Databricks is not supported as a compute target for model training. You can use Azure Databricks for data preparation and deployment tasks.
+> - To create and attach a compute target for training on an Azure Arc-enabled Kubernetes cluster, see [Configure Azure Arc-enabled Machine Learning](../how-to-attach-kubernetes-anywhere.md).
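Because the article assumes `my_compute_target` already exists, here is a minimal, hedged sketch (SDK v1) of provisioning such a cluster; the VM size and cluster name are illustrative assumptions.

```python
# Hedged sketch: provision the compute target the article's examples expect (Azure ML SDK v1).
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # assumes a config.json for your workspace is present

# Small CPU cluster; adjust the VM size and node counts for your workload.
config = AmlCompute.provisioning_configuration(vm_size="STANDARD_DS3_V2", min_nodes=0, max_nodes=2)
my_compute_target = ComputeTarget.create(ws, "cpu-cluster", config)
my_compute_target.wait_for_completion(show_output=True)
```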
## Create an environment Azure Machine Learning [environments](../concept-environments.md) are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, Docker image, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker).
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/migrate-overview.md
To migrate to Azure Machine Learning, we recommend the following approach:
>[!NOTE] > The **designer** feature in Azure Machine Learning provides a similar drag-and-drop experience to Studio (classic). However, Azure Machine Learning also provides robust [code-first workflows](../concept-model-management-and-deployment.md) as an alternative. This migration series focuses on the designer, since it's most similar to the Studio (classic) experience.
- [!INCLUDE [aml-compare-classic](../includes/machine-learning-compare-classic-aml.md)]
+ The following table summarizes the key differences between ML Studio (classic) and Azure Machine Learning.
+
+ | Feature | ML Studio (classic) | Azure Machine Learning |
+ || | |
+ | Drag and drop interface | Classic experience | Updated experience - [Azure Machine Learning designer](../concept-designer.md)|
+ | Code SDKs | Not supported | Fully integrated with [Azure Machine Learning Python](/python/api/overview/azure/ml/) and [R](https://github.com/Azure/azureml-sdk-for-r) SDKs |
+ | Experiment | Scalable (10-GB training data limit) | Scale with compute target |
+ | Training compute targets | Proprietary compute target, CPU support only | Wide range of customizable [training compute targets](../concept-compute-target.md#training-compute-targets). Includes GPU and CPU support |
+ | Deployment compute targets | Proprietary web service format, not customizable | Wide range of customizable [deployment compute targets](../concept-compute-target.md#compute-targets-for-inference). Includes GPU and CPU support |
+ | ML Pipeline | Not supported | Build flexible, modular [pipelines](../concept-ml-pipelines.md) to automate workflows |
+ | MLOps | Basic model management and deployment; CPU only deployments | Entity versioning (model, data, workflows), workflow automation, integration with CICD tooling, CPU and GPU deployments [and more](../concept-model-management-and-deployment.md) |
+ | Model format | Proprietary format, Studio (classic) only | Multiple supported formats depending on training job type |
+ | Automated model training and hyperparameter tuning | Not supported | [Supported](../concept-automated-ml.md). Code-first and no-code options. |
+ | Data drift detection | Not supported | [Supported](../v1/how-to-monitor-datasets.md) |
+ | Data labeling projects | Not supported | [Supported](../how-to-create-image-labeling-projects.md) |
+ | Role-Based Access Control (RBAC) | Only contributor and owner role | [Flexible role definition and RBAC control](../how-to-assign-roles.md) |
+ | AI Gallery | Supported ([https://gallery.azure.ai/](https://gallery.azure.ai/)) | Unsupported <br><br> Learn with [sample Python SDK notebooks](https://github.com/Azure/MachineLearningNotebooks). |
3. Verify that your critical Studio (classic) modules are supported in Azure Machine Learning designer. For more information, see the [Studio (classic) and designer component-mapping](#studio-classic-and-designer-component-mapping) table below.
machine-learning Reference Azure Machine Learning Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md
The following commands demonstrate how to register a trained model, and then dep
## Inference configuration schema
+The entries in the `inferenceconfig.json` document map to the parameters for the [InferenceConfig](/python/api/azureml-core/azureml.core.model.inferenceconfig) class. The following table describes the mapping between entities in the JSON document and the parameters for the method:
+
+| JSON entity | Method parameter | Description |
+| -- | -- | -- |
+| `entryScript` | `entry_script` | Path to a local file that contains the code to run for the image. |
+| `sourceDirectory` | `source_directory` | Optional. Path to the folder that contains all files needed to create the image, which makes it easy to access any files within this folder or subfolder. You can upload an entire folder from your local machine as dependencies for the Webservice. Note: your entry_script, conda_file, and extra_docker_file_steps paths are relative to the source_directory path. |
+| `environment` | `environment` | Optional. Azure Machine Learning [environment](/python/api/azureml-core/azureml.core.environment.environment).|
+
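For reference only, this is a hedged sketch of the equivalent SDK v1 object the table maps to; the environment name is an illustrative assumption.

```python
# Hedged sketch: the Python InferenceConfig that corresponds to the inferenceconfig.json fields above.
from azureml.core import Workspace, Environment
from azureml.core.model import InferenceConfig

ws = Workspace.from_config()  # assumes a config.json for your workspace is present
env = Environment.get(workspace=ws, name="my-deploy-env")  # illustrative environment name

inference_config = InferenceConfig(
    entry_script="score.py",     # maps to entryScript
    source_directory=".",        # maps to sourceDirectory
    environment=env,             # maps to environment
)
```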
+You can include full specifications of an Azure Machine Learning [environment](/python/api/azureml-core/azureml.core.environment.environment) in the inference configuration file. If this environment doesn't exist in your workspace, Azure Machine Learning will create it. Otherwise, Azure Machine Learning will update the environment if necessary. The following JSON is an example:
+
+```json
+{
+ "entryScript": "score.py",
+ "environment": {
+ "docker": {
+ "arguments": [],
+ "baseDockerfile": null,
+ "baseImage": "mcr.microsoft.com/azureml/intelmpi2018.3-ubuntu18.04",
+ "enabled": false,
+ "sharedVolumes": true,
+ "shmSize": null
+ },
+ "environmentVariables": {
+ "EXAMPLE_ENV_VAR": "EXAMPLE_VALUE"
+ },
+ "name": "my-deploy-env",
+ "python": {
+ "baseCondaEnvironment": null,
+ "condaDependencies": {
+ "channels": [
+ "conda-forge"
+ ],
+ "dependencies": [
+ "python=3.7",
+ {
+ "pip": [
+ "azureml-defaults",
+ "azureml-telemetry",
+ "scikit-learn==0.22.1",
+ "inference-schema[numpy-support]"
+ ]
+ }
+ ],
+ "name": "project_environment"
+ },
+ "condaDependenciesFile": null,
+ "interpreterPath": "python",
+ "userManagedDependencies": false
+ },
+ "version": "1"
+ }
+}
+```
+
+You can also use an existing Azure Machine Learning [environment](/python/api/azureml-core/azureml.core.environment.environment) in separate CLI parameters and remove the "environment" key from the inference configuration file. Use `-e` for the environment name, and `--ev` for the environment version. If you don't specify `--ev`, the latest version is used. Here is an example of an inference configuration file:
+
+```json
+{
+ "entryScript": "score.py",
+ "sourceDirectory": null
+}
+```
+
+The following command demonstrates how to deploy a model using the previous inference configuration file (named myInferenceConfig.json).
+
+It also uses the latest version of an existing Azure Machine Learning [environment](/python/api/azureml-core/azureml.core.environment.environment) (named AzureML-Minimal).
+
+```azurecli-interactive
+az ml model deploy -m mymodel:1 --ic myInferenceConfig.json -e AzureML-Minimal --dc deploymentconfig.json
+```
<a id="deploymentconfig"></a>
mysql How To Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/how-to-migrate-single-flexible-minimum-downtime.md
To configure Data in replication, perform the following steps:
- If SSL enforcement is enabled, then:
- i. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [here](https://www.digicert.com/CACerts/BaltimoreCyberTrustRoot.crt.pem).
+ i. Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [here](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem).
 ii. Open the file in Notepad and paste the contents into the section "PLACE YOUR PUBLIC KEY CERTIFICATE'S CONTEXT HERE".
mysql How To Troubleshoot High Cpu Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-troubleshoot-high-cpu-utilization.md
For example, consider a sudden surge in connections that initiates surge of data
Besides capturing metrics, it's important to also trace the workload to understand if one or more queries are causing the spike in CPU utilization.
+## High CPU causes
+
+CPU spikes can occur for various reasons, primarily due to spikes in connections and poorly written SQL queries, or a combination of both:
+
+#### Spike in connections
+
+An increase in connections can lead to an increase in threads, which in turn can cause a rise in CPU usage as the server has to manage these connections along with their queries and resources. To troubleshoot a spike in connections, check the [Total Connections](./../flexible-server/concepts-monitoring.md#list-of-metrics) metric and refer to the next section for more details about these connections. You can use the performance_schema to identify the hosts and users currently connected to the server with the following commands:
+
+Current connected hosts
+```sql
+ SELECT HOST, CURRENT_CONNECTIONS FROM performance_schema.hosts
+ WHERE CURRENT_CONNECTIONS > 0
+ AND HOST NOT IN ('NULL','localhost');
+```
+
+Current connected users
+```sql
+ SELECT USER, CURRENT_CONNECTIONS FROM performance_schema.users
+ WHERE CURRENT_CONNECTIONS > 0
+ AND USER NOT IN ('NULL','azure_superuser');
+```
+
+#### Poorly written SQL queries
+
+Queries that are expensive to execute and scan a large number of rows without an index, or those that perform temporary sorts along with other inefficient plans, can lead to CPU spikes. While some queries may execute quickly in a single session, they can cause CPU spikes when run in multiple sessions. Therefore, it's crucial to always run EXPLAIN on the queries you capture from [show processlist](https://dev.mysql.com/doc/refman/5.7/en/show-processlist.html) and ensure their execution plans are efficient. Ensure that they scan a minimal number of rows by using filters or a WHERE clause, utilize indexes, and avoid large temporary sorts and other inefficient execution plans. For more information about execution plans, see [EXPLAIN Output Format](https://dev.mysql.com/doc/refman/5.7/en/explain-output.html).
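If it's useful to check plans programmatically, here is a hedged illustration (not from the original article) using mysql-connector-python; the connection details and the sample query and table are placeholders.

```python
# Hedged sketch: run EXPLAIN on a suspect query and inspect the plan for full scans or missing indexes.
import mysql.connector

conn = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",  # placeholder server
    user="<admin-user>",
    password="<password>",
    database="<database>",
)
cursor = conn.cursor()
cursor.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")  # placeholder query
for row in cursor.fetchall():
    print(row)  # look at the 'type', 'key', and 'rows' columns of the plan
cursor.close()
conn.close()
```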
+ ## Capturing details of the current workload The SHOW (FULL) PROCESSLIST command displays a list of all user sessions currently connected to the Azure Database for MySQL server. It also provides details about the current state and activity of each session.
open-datasets Dataset Oj Sales Simulated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-oj-sales-simulated.md
View the original dataset description or download the dataset.
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureNotebooks&package=azureml-opendatasets&registryId=sample-oj-sales-simulated -->
-> [!TIP]
-> **[Download the notebook instead](https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureNotebooks&package=azureml-opendatasets&registryId=sample-oj-sales-simulated)**.
- ```python from azureml.core.workspace import Workspace ws = Workspace.from_config()
datastore.upload(src_dir = oj_sales_path,
We need to define the path of the data to create the [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset). + ```python from azureml.core.dataset import Dataset
input_ds = Dataset.File.from_files(path=path_on_datastore, validate=False)
We want to register the dataset to our workspace so we can call it as an input into our Pipeline for forecasting. + ```python registered_ds = input_ds.register(ws, ds_name, create_new_version=True) named_ds = registered_ds.as_named_input(ds_name)
named_ds = registered_ds.as_named_input(ds_name)
<!-- nbend --> + ### Azure Databricks
named_ds = registered_ds.as_named_input(ds_name)
<!-- nbstart https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureDatabricks&package=azureml-opendatasets&registryId=sample-oj-sales-simulated -->
-> [!TIP]
-> **[Download the notebook instead](https://opendatasets-api.azure.com/discoveryapi/OpenDataset/DownloadNotebook?serviceType=AzureDatabricks&package=azureml-opendatasets&registryId=sample-oj-sales-simulated)**.
- ``` # This is a package in preview. # You need to pip install azureml-opendatasets in Databricks cluster. https://learn.microsoft.com/azure/data-explorer/connect-from-databricks#install-the-python-library-on-your-azure-databricks-cluster
if sys.platform == 'linux':
<!-- nbend --> + ## Next steps
operator-nexus Howto Platform Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-platform-prerequisites.md
you'll first need to create a Network Fabric Controller and then a Cluster Manag
- Set up Resource Groups to place and group resources in a logical manner that will be created for Operator Nexus platform. - Establish ExpressRoute connectivity from your WAN to an Azure Region
+- To enable Microsoft Defender for Endpoint for on-premises bare metal machines (BMMs), you must have selected a Defender for Servers plan in your Operator Nexus subscription prior to deployment. Additional information is available [here](./howto-set-up-defender-for-cloud-security.md).
## On your premises prerequisites
orbital About Ground Stations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/about-ground-stations.md
In addition, we support public satellites for downlink-only operations that util
## Partner ground stations
-Azure Orbital Ground Station offers a common data plane and API to access all antenna in the global network. An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station is required to onboard with a partner. Once you have the proper contract(s) and regulatory approval(s) in place, your subscription is approved to access partner ground station sites by the Azure Orbital Ground Station team. Learn how to [request authorization of a spacecraft](register-spacecraft.md#request-authorization-of-the-new-spacecraft-resource) and [configure a contact profile](concepts-contact-profile.md#configuring-a-contact-profile-for-applicable-partner-ground-stations) for partner ground stations.
+Azure Orbital Ground Station offers a common data plane and API to access all antenna in the global network. An active contract with the partner network(s) you wish to integrate with Azure Orbital Ground Station is required to onboard with a partner. Once you have the proper contract(s) and regulatory approval(s) in place, your subscription is approved to access partner ground station sites by the Azure Orbital Ground Station team. Learn how to [request authorization of a spacecraft](register-spacecraft.md#request-authorization-of-the-new-spacecraft-resource) and [configure a contact profile](concepts-contact-profile.md#configure-a-contact-profile-for-applicable-partner-ground-stations) for partner ground stations.
## Next steps
orbital Concepts Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md
#Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using Azure Orbital Ground Station.
-# Ground station contact profile
+# Ground station contact profile resource
The contact profile resource stores pass requirements such as links and endpoint details. Use this resource along with the spacecraft resource during contact scheduling to view and schedule available passes.
-You can create many contact profiles to represent different types of passes depending on your mission operations. For example, you can create a contact profile for a command and control pass or a contact profile for a downlink-only pass.
+You can create many contact profiles to represent different types of passes depending on your mission operations. For example, you can create a contact profile for a command and control pass or a contact profile for a downlink-only pass. These resources are mutable and don't undergo an authorization process like spacecraft resources do. One contact profile can be used with many spacecraft resources.
-These resources are mutable and don't undergo an authorization process like the spacecraft resources do. One contact profile can be used with many spacecraft resources.
-
-See [how to configure a contact profile](contact-profile.md) for a full list of parameters.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Subnet that is created in the relevant VNET and resource group. See [prepare network for Azure Orbital Ground Station integration](prepare-network.md).-
-## Creating a contact profile
-
-Follow these steps to [create a contact profile](contact-profile.md).
-
-## Adjusting pass parameters
-
-Specify a minimum pass time to ensure passes are a certain duration. Specify a minimum elevation to ensure passes are above a certain elevation.
-
-The minimum pass time and minimum elevation parameters are used by Azure Orbital Ground Station during the contact scheduling. Avoid changing these on a pass-by-pass basis and instead create multiple contact profiles if you require flexibility.
-
-## Understanding links and channels
+## Understand links and channels
A whole band, unique in direction and polarity, is called a link. Channels, which are children under links, specify the center frequency, bandwidth, and endpoints. Typically there's only one channel per link, but some applications require multiple channels per link.
-You can specify EIRP and G/T requirements for each link. EIRP applies to uplinks and G/T applies to downlinks. You can provide a name for each link and channel to keep track of these properties. Each channel has a modem associated with it. Follow the steps in [how to setup software modem](modem-chain.md) to understand the options.
-
-Refer to the example below to understand how to specify an RHCP channel and an LHCP channel if your mission requires dual-polarization on downlink. To find this information about your contact profile, navigate to the contact profile resource overview and click 'JSON view.'
+You can specify EIRP and G/T requirements for each link. EIRP applies to uplinks and G/T applies to downlinks. You can provide a name for each link and channel to keep track of these properties. Each channel has a modem associated with it. Follow the steps in [how to set up a software modem](modem-chain.md) to understand the options.
+
+## Contact profile parameters
+
+| **Parameter** | **Description** |
+| | |
+| **Pass parameters** | |
+| Minimum viable contact duration | The minimum duration of a contact in ISO 8601 format. Acts as a prerequisite to show available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't be in the list of available options. Avoid changing on a pass-by-pass basis and instead create multiple contact profiles if you require flexibility. |
+| Minimum elevation | The minimum elevation of a contact, after acquisition of signal (AOS), in decimal degrees. Acts as a prerequisite to show available time slots to communicate with your spacecraft. Using a higher value might reduce the duration of the contact. Avoid changing on a pass-by-pass basis and instead create multiple contact profiles if you require flexibility. |
+| Auto track configuration | The frequency band to be used for autotracking during the contact (X band, S band, or Disabled). |
+| Event Hubs Namespace and Instance | The Event Hubs namespace/instance to send telemetry data of your contacts. |
+| **Network Configuration** | |
+| Virtual Network | The virtual network used for a contact. This VNET must be in the same region as the contact profile. |
+| Subnet | The subnet used for a contact. This subnet must be within the above VNET, be delegated to the Microsoft.Orbital service, and have a minimum address prefix of size /24. |
+| Third-party configuration | Mission configuration and provider name associated with a partner ground network. |
+| **Links** | |
+| Direction | Direction of the link (uplink or downlink). |
+| Gain/Temperature | Required gain to noise temperature in dB/K. |
+| EIRP in dBW | Required effective isotropic radiated power in dBW. |
+| Polarization | Link polarization (RHCP, LHCP, Dual, or Linear Vertical). |
+| **Channels** | |
+| Center Frequency | The channel center frequency in MHz. |
+| Bandwidth | The channel bandwidth in MHz. |
+| Endpoint | The name, IP address, port, and protocol of the data delivery endpoint. |
+| Demodulation Configuration | Copy of the modem configuration file such as Kratos QRadio or Kratos QuantumRx. Only valid for downlink directions. If provided, the modem connects to the customer endpoint and sends demodulated data instead of a VITA.49 stream. |
+| Modulation Configuration | Copy of the modem configuration file such as Kratos QRadio. Only valid for uplink directions. If provided, the modem connects to the customer endpoint and accepts commands from the customer instead of a VITA.49 stream. |
+
+## Example of dual-polarization downlink contact profile
+
+Refer to the example below to understand how to specify an RHCP channel and an LHCP channel if your mission requires dual-polarization on downlink. To find this information about your contact profile, navigate to the contact profile resource overview in Azure portal and click 'JSON view.'
```json {
Refer to the example below to understand how to specify an RHCP channel and an L
} } ```
+## Create a contact profile
-## Modifying or deleting a contact profile
+Follow these instructions to create a contact profile [via the Azure portal](contact-profile.md) or [via the Azure Orbital Ground Station API](/rest/api/orbital/azureorbitalgroundstation/contact-profiles/create-or-update/).
-You can modify or delete the contact profile via the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/).
+## Modify or delete a contact profile
-In the Azure portal, navigate to the contact profile resource.
+To modify or delete a contact profile via the [Azure portal](https://aka.ms/orbital/portal), navigate to the contact profile resource.
- To modify minimum viable contact duration, minimum elevation, auto tracking, or events hubs telemetry, click 'Overview' on the left panel then click 'Edit properties.' - To edit links and channels, click 'Links' under 'Configurations' on the left panel then click 'Edit link' on the desired link. - To edit third-party configurations, click 'Third-Party Configurations' under 'Configurations' on the left panel then click 'Edit' on the desired configuration. - To delete a contact profile, click 'Overview' on the left panel then click 'Delete.'
-## Configuring a contact profile for applicable partner ground stations
+You can also use the Azure Orbital Ground Station API to [modify](/rest/api/orbital/azureorbitalgroundstation/contact-profiles/create-or-update) or [delete](/rest/api/orbital/azureorbitalgroundstation/contact-profiles/delete) a contact profile.
+
+## Configure a contact profile for applicable partner ground stations
-After onboarding with a partner ground station network, you receive a name that identifies your configuration file. When [creating your contact profile](contact-profile.md#create-a-contact-profile-resource), add this configuration name to your link in the 'Third-Party Configuration" parameter. This links your contact profile to the partner network.
+After onboarding with a partner ground station network, you receive a name that identifies your configuration file. When [creating your contact profile](contact-profile.md), add this configuration name to your link in the 'Third-Party Configuration' parameter. This links your contact profile to the partner network.
## Next steps
orbital Concepts Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact.md
# Ground station contact resource
-A contact occurs when the spacecraft passes over a specified ground station. You can find available passes and schedule contacts for your spacecraft through the Azure Orbital Ground Station platform. A contact and ground station pass mean the same thing.
+To establish connectivity with your spacecraft, schedule and execute a contact on a ground station. A contact, sometimes called a ground station 'pass,' can only occur when the spacecraft passes over a specified ground station while orbiting. You can find available contact opportunities and schedule contacts for your spacecraft through the Azure Orbital Ground Station [API](/rest/api/orbital/) or [Azure portal](https://aka.ms/orbital/portal).
-When you schedule a contact for a spacecraft, a contact resource is created under your spacecraft resource in your resource group. The contact is only associated with that particular spacecraft and can't be transferred to another spacecraft, resource group, or region.
+Contacts are scheduled for a particular combination of a [spacecraft](spacecraft-object.md) and [contact profile](concepts-contact-profile.md). When you schedule a contact for a spacecraft, a contact resource is created under your spacecraft resource in your Azure resource group. The contact is only associated with that particular spacecraft and can't be transferred to another spacecraft, resource group, or region.
## Contact parameters
The contact resource contains the start time and end time of the pass and other
The RX and TX start/end times might differ depending on the individual station masks. Billing meters are engaged between the Reservation Start Time and Reservation End Time.
-## Create a contact
+## Schedule a contact
-In order to create a contact, you must have the following prerequisites:
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An [authorized](register-spacecraft.md) spacecraft resource.-- A [contact profile](contact-profile.md) with links in accordance with the spacecraft resource above.-
-Contacts are created on a per-pass and per-site basis. If you already know the pass timings for your spacecraft and desired ground station, you can directly proceed to schedule the pass with these times. The service will succeed in creating the contact resource if the window is available and fail if the window is unavailable.
-
-If you don't know your spacecraft's pass timings or which ground station sites are available, you can use the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/) to determine those details. Query the available passes and use the results to schedule your passes accordingly.
-
-| Method | List available contacts | Schedule contacts | Notes |
-|-|-|-|-|
-|Portal| Yes | Yes | Custom pass timings aren't supported. You must use the results from the query. |
-|API | Yes | Yes | Custom pass timings are supported. |
-
-See [how-to schedule a contact](schedule-contact.md) for instructions to use the Azure portal. See [API documentation](/rest/api/orbital/) for instructions to use the Azure Orbital Ground Station API.
+Use the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/) to [create a contact resource](schedule-contact.md) for your spacecraft resource.
## Cancel a scheduled contact
-In order to cancel a scheduled contact, you must delete the contact resource. You must have the following prerequisites:
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An [authorized](register-spacecraft.md) spacecraft resource.-- A [contact profile](contact-profile.md) with links in accordance with the spacecraft resource above.-- A [scheduled contact](schedule-contact.md).
+In order to cancel a scheduled contact, you must delete the contact resource.
+To delete a contact resource via the [Azure portal](https://aka.ms/orbital/portal):
1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
-2. In the **Spacecraft** page, select the name of the spacecraft for the scheduled contact.
-3. Select **Contacts** from the left menu bar in the spacecraftΓÇÖs overview page.
+2. In the **Spacecraft** page, click the spacecraft associated with the scheduled contact.
+3. Click **Contacts** from the left menu bar in the spacecraft's overview page.
:::image type="content" source="media/orbital-eos-delete-contact.png" alt-text="Select a scheduled contact" lightbox="media/orbital-eos-delete-contact.png":::
-4. Select the name of the contact to be deleted
-5. Select **Delete** from the top bar of the contact's configuration view
+4. Click the contact to be deleted.
+5. Click **Delete** from the top bar of the contact's configuration view.
:::image type="content" source="media/orbital-eos-contact-config-view.png" alt-text="Delete a scheduled contact" lightbox="media/orbital-eos-contact-config-view.png"::: 6. The scheduled contact will be canceled once the contact entry is deleted.
+Alternatively, use the Contacts REST Operation Group to [delete a contact](/rest/api/orbital/azureorbitalgroundstation/contacts/delete/) with the Azure Orbital Ground Station API.
+ ## Next steps - [Schedule a contact](schedule-contact.md)-- [Update the Spacecraft TLE](update-tle.md)
+- [Update the spacecraft TLE](update-tle.md)
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
# Configure a contact profile
-Learn how to configure a [contact profile](concepts-contact-profile.md) with Azure Orbital Ground Station to save and reuse contact configurations. To schedule a contact, you must have a contact profile resource and satellite resource.
+Learn how to create a [contact profile](concepts-contact-profile.md) with Azure Orbital Ground Station to save and reuse contact configurations. To schedule a contact, you must have a contact profile resource and [spacecraft resource](spacecraft-object.md).
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Contributor permissions at the subscription level.-- To collect telemetry during the contact, [create an event hub](receive-real-time-telemetry.md). [Learn more about Azure Event Hubs](../event-hubs/event-hubs-about.md).
+- A delegated subnet that is created in the relevant VNET and resource group. See [prepare network for Azure Orbital Ground Station integration](prepare-network.md).
- An IP address (private or public) for data retrieval/delivery. Learn how to [create a VM and use its private IP](../virtual-machines/windows/quick-create-portal.md).
+- To collect telemetry during the contact, [create an event hub](receive-real-time-telemetry.md). [Learn more about Azure Event Hubs](../event-hubs/event-hubs-about.md).
-## Sign in to Azure
-
-Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
-
-## Create a contact profile resource
+## Azure portal method
-1. In the Azure portal search box, enter **Contact Profiles**. Select **Contact Profiles** in the search results. Alternatively, navigate to the Azure Orbital service and click **Contact profiles** in the left column.
-2. In the **Contact Profiles** page, click **Create**.
-3. In **Create Contact Profile Resource**, enter or select the following information in the **Basics** tab:
+1. Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
+2. In the Azure portal search box, enter **Contact Profiles**. Select **Contact Profiles** in the search results. Alternatively, navigate to the Azure Orbital service and click **Contact profiles** in the left column.
+3. In the **Contact Profiles** page, click **Create**.
+4. In **Create Contact Profile Resource**, enter or select the following information in the **Basics** tab:
| **Field** | **Value** | | | |
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
:::image type="content" source="media/orbital-eos-contact-profile.png" alt-text="Screenshot of the contact profile basics page." lightbox="media/orbital-eos-contact-profile.png":::
-4. Click **Next**. In the **Links** pane, click **Add new Link**.
-5. In the **Add Link** page, enter or select the following information per link direction:
+5. Click **Next**. In the **Links** pane, click **Add new Link**.
+6. In the **Add Link** page, enter or select the following information per link direction:
| **Field** | **Value** | | | |
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
:::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Screenshot of the contact profile links pane." lightbox="media/orbital-eos-contact-link.png":::
-6. Click **Add Channel**. In the **Add Channel** pane, enter or select the following information per channel:
+7. Click **Add Channel**. In the **Add Channel** pane, enter or select the following information per channel:
| **Field** | **Value** | | | |
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
| **Modulation Configuration** (_uplink only_) | Refer to [configure the RF chain](modem-chain.md) for options. | | **Encoding Configuration** (_uplink only_)| If applicable, paste your encoding configuration. |
-7. Click **Submit** to add the channel. After adding all channels, click **Submit** to add the link.
-8. If a mission requires third-party providers, click the **Third-Party Configuration** tab.
+8. Click **Submit** to add the channel. After adding all channels, click **Submit** to add the link.
+9. If a mission requires third-party providers, click the **Third-Party Configuration** tab.
> [!NOTE] > Mission configurations are agreed upon with partner network providers. Contacts can only be successfully scheduled with the partners if the contact profile contains the appropriate mission configuration.
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
After a successful deployment, the contact profile is added to your resource group.
+## API method
+
+Use the Contact Profiles REST Operation Group to [create a contact profile](/rest/api/orbital/azureorbitalgroundstation/contact-profiles/create-or-update/) in the Azure Orbital Ground Station API.
+ ## Next steps - [Receive real-time antenna telemetry](receive-real-time-telemetry.md)
orbital Mission Phases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/mission-phases.md
Azure Orbital Ground Station provides easy and secure access to communication pr
## Pre-launch -- Initiate ground station licensing ahead of launch to ensure you can communicate with your spacecraft.
+- [Initiate ground station licensing](initiate-licensing.md) ahead of launch to ensure you can communicate with your spacecraft.
- [Create and authorize a spacecraft](register-spacecraft.md) resource for your satellite.-- [Configure a contact profile](contact-profile.md) with links and channels. - [Prepare your network](prepare-network.md) to send and receive data between the spacecraft and Azure Orbital Ground Station.
+- [Configure a contact profile](contact-profile.md) with links and channels.
- [Add a modem configuration file](modem-chain.md) to the contact profile.-- Prepare for launch with RF compatibility testing and enrollment in Launch Window Scheduling (in preview).
+- [Prepare for launch](prepare-for-launch.md) with RF compatibility testing and enrollment in Launch Window Scheduling (in preview).
## Launch and nominal operations
+- [Schedule contacts](schedule-contact.md) with your spacecraft.
- Keep the [spacecraft TLE](update-tle.md) up to date.-- [Receive real-time telemetry](receive-real-time-telemetry.md) from the contact.
+- [Receive real-time antenna telemetry](receive-real-time-telemetry.md) from ground station passes.
- [Use sample queries](resource-graph-samples.md) for Azure Resource Graph.
partner-solutions Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage-access.md
+
+ Title: Use Confluent Access Management in the Azure portal
+
+description: This article describes how to use Confluent Access Management in the Azure portal to add, delete and manage users.
+
+subservice: confluent
+ Last updated : 11/29/2023
+# CustomerIntent: As an organization admin, I want to manage user permissions in Apache Kafka on Confluent Cloud so that I can add, delete and manage users.
++
+# How to manage user permissions in a Confluent organization
+
+User access management is a feature that enables the organization admin to add, view and remove users and roles inside a Confluent organization. By managing user permissions, you can ensure that only authorized users can access and perform actions on your Confluent Cloud resources.
+
+This guide provides step-by-step instructions for managing users and roles in Apache Kafka on Confluent Cloud - An Azure Native ISV Service, via the Azure portal.
+
+The following actions are supported:
+
+* Adding a user to a Confluent organization.
+* Viewing a user's role permissions in a Confluent organization.
+* Adding or removing role permissions assigned to a user in a Confluent organization.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
+* An existing Confluent organization.
+* Required permissions: Azure subscription Owner, or subscription Contributor with at least the AccountAdmin role in the Confluent organization.
+
+## Add a user to a Confluent organization
+
+Follow the steps below to add a user to a Confluent organization.
+
+1. Open your Confluent organization in the Azure portal and select **Confluent Account and Access** from the left menu. The page shows a list of users who currently belong to this Confluent organization. The same list is visible under **Accounts & access** in the Confluent portal.
+
+ :::image type="content" source="media/manage-access/account-and-access.png" alt-text="Screenshot of the Azure platform showing the Confluent Account and Access menu.":::
+
+ > [!TIP]
+ > If you get the error "You do not have enough permissions on the organization to perform this operation", make sure that you have the required permissions. You must be a subscription Owner or Contributor.
+
+1. Select **Add User**. A new pane opens with a list of users who belong to your tenant.
+
+ :::image type="content" source="media/manage-access/add-user.png" alt-text="Screenshot of the Azure platform showing the Add user option.":::
+
+1. Select the user you want to add and select **Add User**.
+
+    :::image type="content" source="media/manage-access/select-user-to-add.png" alt-text="Screenshot of the Azure platform showing choosing a user to add.":::
+
+1. A notification indicates that the user has been added. The newly added user is listed in the Confluent Account and Access page and in the Confluent portal.
+
+## View a user's permissions
+
+Review permissions assigned to a user in their Confluent resource.
+
+1. The Confluent Account and Access page shows the list of users in your current Confluent organization. Select **Manage Permissions** at the right end of the row for the user whose permissions you want to see.
+1. A pane opens and shows the user's permissions. A newly added user doesn't have any permissions in this Confluent organization yet. Optionally, select the chevron next to the organization to expand the permissions view across all environments and clusters.
+
+    :::image type="content" source="media/manage-access/view-roles.png" alt-text="Screenshot of the Azure platform showing roles attributed to a user.":::
+
+## Assign a permission to a user
+
+Give the new user some permissions in your Confluent organization.
+
+1. In your Confluent organization, select **Confluent Account and Access** from the left menu, then select **Manage Permissions** at the right end of the row for the user you want to assign a permission to.
+1. Select **Add Role** to get a list of role permissions available.
+
+    :::image type="content" source="media/manage-access/add-role.png" alt-text="Screenshot of the Azure platform showing the Add Role option.":::
+
+1. Check the list of role permissions available, select the one you want to assign to the user, then select **Add Role**.
+
+    :::image type="content" source="media/manage-access/select-role.png" alt-text="Screenshot of the Azure platform showing how to assign a role to a user.":::
+
+1. A notification indicates that the new user role has been added. The list of assigned roles for the user is updated with the newly added role.
+
+## Remove a user's permissions
+
+Remove a permission assigned to a user in the Confluent organization.
+
+1. In your Confluent organization, select **Confluent Account and Access** from the left menu, then select **Manage Permissions** at the right end of the row for the user whose permission you want to remove.
+1. In **Manage Permissions**, select **Remove Role**.
+
+    :::image type="content" source="media/manage-access/remove-role.png" alt-text="Screenshot of the Azure platform showing selecting a Confluent organization role to remove.":::
+
+1. Under **Enter Role Name to be removed**, enter the name of the role you want to remove. Optionally select the copy icon next to the name of the role to copy and then paste it in the text box. Select **Remove Role**.
+
+    :::image type="content" source="media/manage-access/confirm-role-removal.png" alt-text="Screenshot of the Azure platform showing confirmation of Confluent organization role removal.":::
+
+1. The role is removed and you see the refreshed roles.
+
+## Related content
+
+* For help with troubleshooting, see [Troubleshooting Apache Kafka on Confluent Cloud solutions](troubleshoot.md).
+* If you need to contact support, see [Get support for Confluent Cloud resource](get-support.md).
+* To learn more about managing Confluent Cloud, go to [Manage the Confluent Cloud resource](manage.md).
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
You're billed for prorated usage up to the time of cluster deletion. After your
## Next steps -- For help with troubleshooting, see [Troubleshooting Apache Kafka on Confluent Cloud solutions](troubleshoot.md).--- If you need to contact support, see [Get support for Confluent Cloud resource](get-support.md).--- Get started with Apache Kafka on Confluent Cloud - Azure Native ISV Service on-
- > [!div class="nextstepaction"]
- > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Confluent%2Forganizations)
-
- > [!div class="nextstepaction"]
- > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/confluentinc.confluent-cloud-azure-prod?tab=Overview)
+* For help with troubleshooting, see [Troubleshooting Apache Kafka on Confluent Cloud solutions](troubleshoot.md).
+* If you need to contact support, see [Get support for Confluent Cloud resource](get-support.md).
+* To learn about managing user permissions, go to [How to manage user permissions in a Confluent organization](manage-access.md).
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
To choose a pricing tier, use the following table as a starting point:
| General Purpose | Most business workloads that require balanced compute and memory with scalable I/O throughput. Examples include servers for hosting web and mobile apps and other enterprise applications. | | Memory Optimized | High-performance database workloads that require in-memory performance for faster transaction processing and higher concurrency. Examples include servers for processing real-time data and high-performance transactional or analytical apps. |
-After you create a server for the compute tier, you can change the number of vCores (up or down) and the storage size (up) in seconds. You also can independently adjust the backup retention period up or down. For more information, see the [Scaling resources](#scale-resources) section.
+After you create a server for the compute tier, you can change the number of vCores (up or down) and the storage size (up) in seconds. You also can independently adjust the backup retention period up or down. For more information, see the [Scaling resources](./concepts-scaling-resources.md) page.
## Compute tiers, vCores, and server types
The minimum and maximum IOPS are determined by the selected compute size. To lea
Learn how to [scale up or down IOPS](how-to-scale-compute-storage-portal.md). -
-## Scale resources
-
-After you create your server, you can independently change the vCores, the compute tier, the amount of storage, and the backup retention period. You can scale the number of vCores up or down. You can scale the backup retention period up or down from 7 to 35 days. The storage size can only be increased. You can scale the resources through the Azure portal or the Azure CLI.
-
-> [!NOTE]
-> After you increase the storage size, you can't go back to a smaller storage size.
-
-When you change the number of vCores or the compute tier, the server is restarted for the new server type to take effect. During the moment when the system switches over to the new server, no new connections can be established, and all uncommitted transactions are rolled back.
--
-The time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restart typically takes a minute or less but it can be higher and can take several minutes, depending on transactional activity at the time of the restart. Scaling the storage does not require a server restart in most cases.
-
-To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
-
-Changing the backup retention period is an online operation.
-
-## Near-zero downtime scaling
-
-Near Zero Downtime Scaling is a feature designed to minimize downtime when modifying storage and compute tiers. If you modify the number of vCores or change the compute tier, the server undergoes a restart to apply the new configuration. During this transition to the new server, no new connections can be established. This process with regular scaling could take anywhere from 2 to 10 minutes. However, with the new Near Zero Downtime Scaling feature this duration has been reduced to less than 30 seconds. This significant decrease in downtime greatly improves the overall availability of your flexible server workloads.
-
-Near Zero Downtime Feature is enabled across all public regions and **no customer action is required** to use this capability. This feature works by deploying a new virtual machine (VM) with the updated configuration. Once the new VM is ready, it seamlessly transitions, shutting down the old server and replacing it with the updated VM, ensuring minimal downtime. Importantly, this feature doesn't add any additional cost and you won't be charged for the new server. Instead you're billed for the new updated server once the scaling process is complete. This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both (HA) and non-HA servers.
-
-> [!NOTE]
-> Near Zero Downtime Scaling process is the default operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near zero downtime scaling.
-
-#### Pre-requisites
-- You should allow all inbound/outbound connections between the IPs in the delegated subnet. If this is not enabled near downtime scaling process will not work and scaling will occur through the standard scaling process which results in more downtime.
-
-#### Limitations
--- Near Zero Downtime Scaling will not work if there are regional capacity constraints or quota limits on customer subscriptions.--- Near Zero Downtime Scaling doesn't work for replica server but supports the source server. For replica server it will automatically go through regular scaling process.--- Near Zero Downtime Scaling will not work if a VNET injected Server with delegated subnet does not have sufficient usable IP addresses. If you have a standalone server, one additional IP address is necessary, and for a HA-enabled server, two extra IP addresses are required.-- ## Price For the most up-to-date pricing information, see the [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/) page. The [Azure portal](https://portal.azure.com/#create/Microsoft.PostgreSQLServer) shows the monthly cost on the **Pricing tier** tab, based on the options that you select.
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-best-practices.md
Title: Query Store best practices in Azure Database for PostgreSQL - Flex Server description: This article describes best practices for Query Store in Azure Database for PostgreSQL - Flex Server.--++
Last updated 7/1/2023
This article outlines best practices for using Query Store in Azure Database for PostgreSQL. ## Set the optimal query capture mode+ Let Query Store capture the data that matters to you. |**pg_qs.query_capture_mode** | **Scenario**| ||| |_All_ |Analyze your workload thoroughly in terms of all queries and their execution frequencies and other statistics. Identify new queries in your workload. Detect if ad hoc queries are used to identify opportunities for user or auto parameterization. _All_ comes with an increased resource consumption cost. |
-|_Top_ |Focus your attention on top queries - those issued by clients.
-|_None_ |If set to None, Query Store will not capture any new queries. You've already captured a query set and time window that you want to investigate and you want to eliminate the distractions that other queries may introduce. _None_ is suitable for testing and bench-marking environments. _None_ should be used with caution as you might miss the opportunity to track and optimize important new queries. |
+|_Top_ |Focus your attention on top queries - those issued by clients.
+|_None_ |If set to None, Query Store won't capture any new queries. You've already captured a query set and time window that you want to investigate and you want to eliminate the distractions that other queries may introduce. _None_ is suitable for testing and bench-marking environments. _None_ should be used with caution as you might miss the opportunity to track and optimize important new queries. |
> [!NOTE]
Let Query Store capture the data that matters to you.
## Keep the data you need
-The **pg_qs.retention_period_in_days** parameter specifies in days the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store is configured to retain the data for 7 days. Avoid keeping historical data you do not plan to use. Increase the value if you need to keep data longer.
+
+The **pg_qs.retention_period_in_days** parameter specifies, in days, the data retention period for Query Store. Older query and statistics data is deleted. By default, Query Store retains data for seven days. Avoid keeping historical data you don't plan to use, and increase the value if you need to keep data longer.
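
To spot-check these settings from a client session instead of the portal, you can read them with `SHOW`. This is a quick sketch; it assumes the `pg_qs` parameters are visible to the role you connect with, and the values themselves are changed through server parameters rather than SQL:

```sql
-- Inspect the current Query Store capture mode and retention period.
-- Change these values through the server parameters page or the CLI, not with ALTER SYSTEM.
SHOW pg_qs.query_capture_mode;
SHOW pg_qs.retention_period_in_days;
```
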
## Next steps+ - Learn how to get or set parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md) or the [Azure CLI](howto-configure-server-parameters-using-cli.md).
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-scenarios.md
Title: Query Store scenarios - Azure Database for PostgreSQL - Flex Server description: This article describes some scenarios for Query Store in Azure Database for PostgreSQL - Flex Server.--++ Last updated : 11/30/2023 Previously updated : 7/1/2023 + # Usage scenarios for Query Store - Flexible Server [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can use Query Store in a wide variety of scenarios in which tracking and maintaining predictable workload performance is critical. Consider the following examples:
-- Identifying and tuning top expensive queries -- A/B testing -- Keeping performance stable during upgrades -- Identifying and improving ad hoc workloads
+You can use Query Store in a wide variety of scenarios in which tracking and maintaining predictable workload performance is critical. Consider the following examples:
+- Identifying and tuning top expensive queries
+- A/B testing
+- Keeping performance stable during upgrades
+- Identifying and improving improvised workloads
+
+## Identify and tune expensive queries
+
+### Identify longest running queries
-## Identify and tune expensive queries
+Use Query Store views on the azure_sys database of your server to quickly identify the longest running queries. These queries typically tend to consume the most resources. Optimizing your longest running queries can improve performance by freeing up resources used by other queries running on your system.
-### Identify longest running queries
-Use Query Store views on the azure_sys database of your server to quickly identify the longest running queries. These queries typically tend to consume the most resources. Optimizing your longest running queries can improve performance by freeing up resources used by other queries running on your system.
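
As a starting point, a query along the following lines surfaces the statements with the highest average duration. It's a sketch against the azure_sys database, using columns from query_store.qs_view as documented for that view; adapt the ordering or add a time filter to suit your workload:

```sql
-- Top 10 queries by average execution time across all captured time buckets
SELECT query_id,
       left(query_sql_text, 80) AS query_snippet,
       SUM(calls) AS total_calls,
       round(SUM(total_time)::numeric, 2) AS total_time_ms,
       round((SUM(total_time) / NULLIF(SUM(calls), 0))::numeric, 2) AS avg_time_ms
FROM query_store.qs_view
GROUP BY query_id, query_sql_text
ORDER BY avg_time_ms DESC NULLS LAST
LIMIT 10;
```
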
+### Target queries with performance deltas
-### Target queries with performance deltas
Query Store slices the performance data into time windows, so you can track a query's performance over time. This helps you identify exactly which queries contribute to an increase in overall time spent. As a result, you can do targeted troubleshooting of your workload.
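
For example, a sketch like the following lists one query's statistics per 15-minute bucket so you can see when its mean duration started to climb. The query_id value is a placeholder; substitute one taken from query_store.qs_view:

```sql
-- Duration trend for a single query across time buckets (replace the placeholder query_id)
SELECT start_time, end_time, calls, round(mean_time::numeric, 2) AS mean_time_ms
FROM query_store.qs_view
WHERE query_id = 1234567890   -- placeholder query_id
ORDER BY start_time DESC
LIMIT 12;
```
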
-### Tuning expensive queries
-When you identify a query with suboptimal performance, the action you take depends on the nature of the problem:
+### Tune expensive queries
+
+When you identify a query with suboptimal performance, the action you take depends on the nature of the problem:
- Make sure that the statistics are up-to-date for the underlying tables used by the query.-- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce the use of dynamic SQL. Implement optimal logic when reading data like applying data filtering on database side, not on application side. --
-## A/B testing
-Use Query Store to compare workload performance before and after an application change you plan to introduce or before and after migration. Example scenarios for using Query Store to assess the impact of changes to workload performance:
-- Migration between PostgreSQL versions. -- Rolling out a new version of an application. -- Adding additional resources to the server. -- Creating missing indexes on tables referenced by expensive queries. -- Migration from Single Server to Flex Server.
-
-In any of these scenarios, apply the following workflow:
-1. Run your workload with Query Store before the planned change to generate a performance baseline.
-2. Apply application change(s) at the controlled moment in time.
-3. Continue running the workload long enough to generate performance image of the system after the change.
-4. Compare results from before and after the change.
-5. Decide whether to keep the change or rollback.
--
-## Identify and improve ad hoc workloads
-Some workloads do not have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized with a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption is not critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which is not optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
-
-If you are in control of the application code, you may consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can be also be improved without application changes by forcing query parameterization for the entire database (all queries) or for the individual query templates with the same query hash.
-
-## Next steps
-- Learn more about the [best practices for using Query Store](concepts-query-store-best-practices.md)
+- Consider rewriting expensive queries. For example, take advantage of query parameterization and reduce the use of dynamic SQL. When reading data, implement optimal logic, such as filtering data on the database side rather than on the application side.
+
+## A/B testing
+
+Use Query Store to compare workload performance before and after an application change you plan to introduce or before and after migration. Example scenarios for using Query Store to assess the impact of changes to workload performance:
+- Migration between PostgreSQL versions.
+- Rolling out a new version of an application.
+- Adding additional resources to the server.
+- Creating missing indexes on tables referenced by expensive queries.
+- Migration from Single Server to Flex Server.
+
+In any of these scenarios, apply the following workflow:
+1. Run your workload with Query Store before the planned change to generate a performance baseline.
+1. Apply application change(s) at the controlled moment in time.
+1. Continue running the workload long enough to generate a performance picture of the system after the change.
+1. Compare results from before and after the change.
+1. Decide whether to keep the change or roll it back.
+
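One way to perform the comparison in step 4 is a single query over query_store.qs_view that splits the captured time buckets at the moment of the change. The timestamp below is a placeholder for your own change time; treat this as a sketch rather than an official sample:

```sql
-- Per-query total runtime before and after a change deployed at the placeholder timestamp
SELECT query_id,
       SUM(total_time) FILTER (WHERE start_time <  TIMESTAMP '2023-12-01 00:00') AS before_ms,
       SUM(total_time) FILTER (WHERE start_time >= TIMESTAMP '2023-12-01 00:00') AS after_ms
FROM query_store.qs_view
GROUP BY query_id
ORDER BY after_ms DESC NULLS LAST
LIMIT 20;
```
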
+## Identify and improve improvised workloads
+
+Some workloads don't have dominant queries that you can tune to improve overall application performance. Those workloads are typically characterized with a relatively large number of unique queries, each of them consuming a portion of system resources. Each unique query is executed infrequently, so individually their runtime consumption isn't critical. On the other hand, given that the application is generating new queries all the time, a significant portion of system resources is spent on query compilation, which isn't optimal. Usually, this situation happens if your application generates queries (instead of using stored procedures or parameterized queries) or if it relies on object-relational mapping frameworks that generate queries by default.
+
+If you're in control of the application code, you might consider rewriting the data access layer to use stored procedures or parameterized queries. However, this situation can also be improved without application changes by forcing query parameterization for the entire database (all queries) or for individual query templates with the same query hash.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [best practices for using Query Store](concepts-query-store-best-practices.md)
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
Title: Query Store - Azure Database for PostgreSQL - Flexible Server description: This article describes the Query Store feature in Azure Database for PostgreSQL - Flexible Server.--++ Last updated : 11/30/2023 Previously updated : 9/1/2023 # Monitor Performance with Query Store
Last updated 9/1/2023
The Query Store feature in Azure Database for PostgreSQL provides a way to track query performance over time. Query Store simplifies performance-troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Query Store automatically captures a history of queries and runtime statistics, and it retains them for your review. It slices the data by time so that you can see temporal usage patterns. Data for all users, databases and queries is stored in a database named **azure_sys** in the Azure Database for PostgreSQL instance.
-> [!IMPORTANT]
+> [!IMPORTANT]
> Do not modify the **azure_sys** database or its schema. Doing so will prevent Query Store and related performance features from functioning correctly.
-## Enabling Query Store
-Query Store is an opt-in feature, so it isn't enabled by default on a server. Query store is enabled or disabled globally for all databases on a given server and cannot be turned on or off per database.
-> [!IMPORTANT]
-> Do not enable Query Store on Burstable pricing tier as it would cause performance impact.
+## Enable Query Store
+
+Query Store is an opt-in feature, so it isn't enabled by default on a server. Query store is enabled or disabled globally for all databases on a given server and can't be turned on or off per database.
+> [!IMPORTANT]
+> Do not enable Query Store on Burstable pricing tier as it would cause performance impact.
### Enable Query Store+ 1. Sign in to the Azure portal and select your Azure Database for PostgreSQL server.
-2. Select **Server Parameters** in the **Settings** section of the menu.
-3. Search for the `pg_qs.query_capture_mode` parameter.
-4. Set the value to `TOP` or `ALL` and **Save**.
+1. Select **Server Parameters** in the **Settings** section of the menu.
+1. Search for the `pg_qs.query_capture_mode` parameter.
+1. Set the value to `TOP` or `ALL` and **Save**.
Allow up to 20 minutes for the first batch of data to persist in the azure_sys database. ### Enable Query Store Wait Sampling+ 1. Search for the `pgms_wait_sampling.query_capture_mode` parameter.
-2. Set the value to `ALL` and **Save**.
+1. Set the value to `ALL` and **Save**.
## Information in Query Store+ Query Store has two stores: - A runtime stats store for persisting the query execution statistics information. - A wait stats store for persisting wait statistics information.
Common scenarios for using Query Store include:
- Comparing the average execution time of a query across time windows to see large deltas - Identifying longest running queries in the past few hours - Identifying top N queries that are waiting on resources-- Understanding wait nature for a particular query
+- Understanding the nature of waits for a particular query
To minimize space usage, the runtime execution statistics in the runtime stats store are aggregated over a fixed, configurable time window. The information in these stores can be queried using views.+ ## Access Query Store information+ Query Store data is stored in the azure_sys database on your Postgres server. The following query returns information about queries in Query Store:
-```sql
+```sql
SELECT * FROM query_store.qs_view;
```

Or this query for wait stats:

```sql
SELECT * FROM query_store.pgms_wait_sampling_view;
```
-## Finding wait queries
+
+## Find wait queries
Wait event types combine different wait events into buckets by similarity. Query Store provides the wait event type, specific wait event name, and the query in question. Being able to correlate this wait information with the query runtime statistics means you can gain a deeper understanding of what contributes to query performance characteristics. Here are some examples of how you can gain more insights into your workload using the wait statistics in Query Store: | **Observation** | **Action** |
-|||
+| | |
|High Lock waits | Check the query texts for the affected queries and identify the target entities. Look in Query Store for other queries modifying the same entity, which is executed frequently and/or have high duration. After identifying these queries, consider changing the application logic to improve concurrency, or use a less restrictive isolation level.
-| High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, in order to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries.|
-| High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries.|
+| High Buffer IO waits | Find the queries with a high number of physical reads in Query Store. If they match the queries with high IO waits, consider introducing an index on the underlying entity, in order to do seeks instead of scans. This would minimize the IO overhead of the queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations for this server that would optimize the queries. |
+| High Memory waits | Find the top memory consuming queries in Query Store. These queries are probably delaying further progress of the affected queries. Check the **Performance Recommendations** for your server in the portal to see if there are index recommendations that would optimize these queries. |
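
Before drilling into individual queries, it can help to see which wait buckets dominate overall. A sketch against the azure_sys database, using the columns documented for query_store.pgms_wait_sampling_view later in this article:

```sql
-- Queries with the most sampled waits, broken down by wait event type and event
SELECT query_id, event_type, event, SUM(calls) AS wait_samples
FROM query_store.pgms_wait_sampling_view
GROUP BY query_id, event_type, event
ORDER BY wait_samples DESC
LIMIT 20;
```
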
## Configuration options+ When Query Store is enabled it saves data in 15-minute aggregation windows, up to 500 distinct queries per window. The following options are available for configuring Query Store parameters.
-| **Parameter** | **Description** | **Default** | **Range**|
-|||||
+| **Parameter** | **Description** | **Default** | **Range** |
+| | | | |
| pg_qs.query_capture_mode | Sets which statements are tracked. | none | none, top, all |
-| pg_qs.store_query_plans | Turns saving query plans on or off for pg_qs | off | on, off |
-| pg_qs.max_plan_size | Sets the maximum number of bytes that will be saved for query plan text for pg_qs; longer plans will be truncated. | 7500 | 100 - 10k |
+| pg_qs.store_query_plans | Turns saving query plans on or off for pg_qs | off | on, off |
+| pg_qs.max_plan_size | Sets the maximum number of bytes that will be saved for query plan text for pg_qs; longer plans will be truncated. | 7500 | 100 - 10k |
| pg_qs.max_query_text_length | Sets the maximum query length that can be saved. Longer queries will be truncated. | 6000 | 100 - 10K | | pg_qs.retention_period_in_days | Sets the retention period. | 7 | 1 - 30 | | pg_qs.index_generation_interval | Sets the index recommendation generating frequency for all databases when query store enabled. | 15 | 15 - 10080 | | pg_qs.track_utility | Sets whether utility commands are tracked | on | on, off |
-
-The following options apply specifically to wait statistics.
-| **Parameter** | **Description** | **Default** | **Range**|
-|||||
-| pgms_wait_sampling.query_capture_mode | Sets which statements are tracked for wait stats. | none | none, all|
+The following options apply specifically to wait statistics.
+
+| **Parameter** | **Description** | **Default** | **Range** |
+| | | | |
+| pgms_wait_sampling.query_capture_mode | Sets which statements are tracked for wait stats. | none | none, all |
| Pgms_wait_sampling.history_period | Set the frequency, in milliseconds, at which wait events are sampled. | 100 | 1-600000 |
-> [!NOTE]
+> [!NOTE]
> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect. Use the [Azure portal](howto-configure-server-parameters-using-portal.md) to get or set a different value for a parameter. ## Views and functions+ View and manage Query Store using the following views and functions. Anyone in the PostgreSQL public role can use these views to see the data in Query Store. These views are only available in the **azure_sys** database.
-Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they will have the same queryId.
+Queries are normalized by looking at their structure after removing literals and constants. If two queries are identical except for literal values, they'll have the same queryId.
+ ### query_store.qs_view
-This view returns all the data in Query Store. There is one row for each distinct database ID, user ID, and query ID.
-
-|**Name** |**Type** | **References** | **Description**|
-|||||
-|runtime_stats_entry_id |bigint | | ID from the runtime_stats_entries table|
-|user_id |oid |pg_authid.oid |OID of user who executed the statement|
-|db_id |oid |pg_database.oid |OID of database in which the statement was executed|
-|query_id |bigint || Internal hash code, computed from the statement's parse tree|
-|query_sql_text |varchar(10000) || Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. The default query text length is 6000 and can be modified using query store parameter `pg_qs.max_query_text_length`.|
-|plan_id |bigint | |ID of the plan corresponding to this query|
-|start_time |timestamp || Queries are aggregated by time buckets - the time span of a bucket is 15 minutes by default. This is the start time corresponding to the time bucket for this entry.|
-|end_time |timestamp || End time corresponding to the time bucket for this entry.|
-|calls |bigint || Number of times the query executed|
-|total_time |double precision || Total query execution time, in milliseconds|
-|min_time |double precision || Minimum query execution time, in milliseconds|
-|max_time |double precision || Maximum query execution time, in milliseconds|
-|mean_time |double precision || Mean query execution time, in milliseconds|
-|stddev_time| double precision || Standard deviation of the query execution time, in milliseconds |
-|rows |bigint || Total number of rows retrieved or affected by the statement|
-|shared_blks_hit| bigint || Total number of shared block cache hits by the statement|
-|shared_blks_read| bigint || Total number of shared blocks read by the statement|
-|shared_blks_dirtied| bigint || Total number of shared blocks dirtied by the statement |
-|shared_blks_written| bigint || Total number of shared blocks written by the statement|
-|local_blks_hit| bigint || Total number of local block cache hits by the statement|
-|local_blks_read| bigint || Total number of local blocks read by the statement|
-|local_blks_dirtied| bigint || Total number of local blocks dirtied by the statement|
-|local_blks_written| bigint || Total number of local blocks written by the statement|
-|temp_blks_read |bigint || Total number of temp blocks read by the statement|
-|temp_blks_written| bigint || Total number of temp blocks written by the statement|
-|blk_read_time |double precision || Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
-|blk_write_time |double precision || Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero)|
+
+This view returns all the data in Query Store. There's one row for each distinct database ID, user ID, and query ID.
+
+| **Name** | **Type** | **References** | **Description** |
+| | | | |
+| runtime_stats_entry_id | bigint | | ID from the runtime_stats_entries table |
+| user_id | oid | pg_authid.oid | OID of user who executed the statement |
+| db_id | oid | pg_database.oid | OID of database in which the statement was executed |
+| query_id | bigint | | Internal hash code, computed from the statement's parse tree |
+| query_sql_text | varchar(10000) | | Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. The default query text length is 6000 and can be modified using query store parameter `pg_qs.max_query_text_length`. |
+| plan_id | bigint | | ID of the plan corresponding to this query |
+| start_time | timestamp | | Queries are aggregated by time buckets - the time span of a bucket is 15 minutes by default. This is the start time corresponding to the time bucket for this entry. |
+| end_time | timestamp | | End time corresponding to the time bucket for this entry. |
+| calls | bigint | | Number of times the query executed |
+| total_time | double precision | | Total query execution time, in milliseconds |
+| min_time | double precision | | Minimum query execution time, in milliseconds |
+| max_time | double precision | | Maximum query execution time, in milliseconds |
+| mean_time | double precision | | Mean query execution time, in milliseconds |
+| stddev_time | double precision | | Standard deviation of the query execution time, in milliseconds |
+| rows | bigint | | Total number of rows retrieved or affected by the statement |
+| shared_blks_hit | bigint | | Total number of shared block cache hits by the statement |
+| shared_blks_read | bigint | | Total number of shared blocks read by the statement |
+| shared_blks_dirtied | bigint | | Total number of shared blocks dirtied by the statement |
+| shared_blks_written | bigint | | Total number of shared blocks written by the statement |
+| local_blks_hit | bigint | | Total number of local block cache hits by the statement |
+| local_blks_read | bigint | | Total number of local blocks read by the statement |
+| local_blks_dirtied | bigint | | Total number of local blocks dirtied by the statement |
+| local_blks_written | bigint | | Total number of local blocks written by the statement |
+| temp_blks_read | bigint | | Total number of temp blocks read by the statement |
+| temp_blks_written | bigint | | Total number of temp blocks written by the statement |
+| blk_read_time | double precision | | Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) |
+| blk_write_time | double precision | | Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) |
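
As one way to act on the High Buffer IO guidance earlier in this article, the shared_blks_read column of this view can rank queries by physical reads. A sketch:

```sql
-- Queries reading the most shared blocks: candidates for new or better indexes
SELECT query_id, SUM(shared_blks_read) AS blocks_read, SUM(calls) AS calls
FROM query_store.qs_view
GROUP BY query_id
ORDER BY blocks_read DESC
LIMIT 10;
```
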
### query_store.query_texts_view
-This view returns query text data in Query Store. There is one row for each distinct query_text.
+
+This view returns query text data in Query Store. There's one row for each distinct query_text.
| **Name** | **Type** | **Description** |
-|--|--|--|
+|--| -- | -- |
| query_text_id | bigint | ID for the query_texts table | | query_sql_text | Varchar(10000) | Text of a representative statement. Different queries with the same structure are clustered together; this text is the text for the first of the queries in the cluster. | ### query_store.pgms_wait_sampling_view
-This view returns wait events data in Query Store. There is one row for each distinct database ID, user ID, query ID, and event.
+
+This view returns wait events data in Query Store. There's one row for each distinct database ID, user ID, query ID, and event.
| **Name** | **Type** | **References** | **Description** |
-|--|--|--|--|
+| -- |--| -- |--|
| user_id | oid | pg_authid.oid | OID of user who executed the statement | | db_id | oid | pg_database.oid | OID of database in which the statement was executed | | query_id | bigint | | Internal hash code, computed from the statement's parse tree | | event_type | text | | The type of event for which the backend is waiting | | event | text | | The wait event name if backend is currently waiting | | calls | Integer | | Number of the same event captured |+ ### query_store.query_plans_view
-This view returns the query plan that was used to execute a query. There is one row per each distinct database ID, and query ID. This will only store query plans for non-utility queries.
-|**plan_id**|**db_id**|**query_id**|**plan_text**|
-|--|--|--|--|
+This view returns the query plan that was used to execute a query. There's one row per each distinct database ID, and query ID. This will only store query plans for nonutility queries.
+
+| **Name** | **Type** | **References** | **Description** |
+| -- |--| -- |--|
| plan_id | bigint | | The hash value from the query_text |
| db_id | oid | pg_database.oid | OID of database in which the statement was executed |
| query_id | bigint | | Internal hash code, computed from the statement's parse tree |
| plan_text | varchar(10000) | | Execution plan of the statement given costs=false, buffers=false, and format=false. This is the same output given by EXPLAIN. |

### Functions

`qs_reset` discards all statistics gathered so far by Query Store. This function can only be executed by the server admin role.
-`staging_data_reset` discards all statistics gathered in memory by Query Store (that is, the data in memory that has not been flushed yet to the database). This function can only be executed by the server admin role.
+`staging_data_reset` discards all statistics gathered in memory by Query Store (that is, the data in memory that hasn't been flushed yet to the database). This function can only be executed by the server admin role.
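
A usage sketch for these functions follows. The query_store schema qualification is an assumption based on how the views above are exposed; drop or adjust it if your server exposes the functions differently. Both calls require the server admin role:

```sql
-- Discard all persisted Query Store statistics (schema qualification assumed, see note above)
SELECT query_store.qs_reset();

-- Discard only the in-memory statistics that haven't been flushed to the database yet
SELECT query_store.staging_data_reset();
```
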
## Limitations and known issues-- If a PostgreSQL server has the parameter `default_transaction_read_only` on, Query Store will not capture any data.
-## Next steps
-- Learn more about [scenarios where Query Store can be especially helpful](concepts-query-store-scenarios.md).-- Learn more about [best practices for using Query Store](concepts-query-store-best-practices.md).
+- If a PostgreSQL server has the parameter `default_transaction_read_only` on, Query Store won't capture any data.
+
+## Related content
+
+- [scenarios where Query Store can be especially helpful](concepts-query-store-scenarios.md)
+- [best practices for using Query Store](concepts-query-store-best-practices.md)
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
Previously updated : 12/01/2023 Last updated : 12/04/2023 # Scaling Resources in Azure Database for PostgreSQL - Flexible Server
You can scale **vertically** by adding more resources to the Flexible server ins
You scale **horizontally** by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate flexible server instance without affecting the performance and availability of the primary instance.
-When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. During this time the system switches over to the new server type, no new connections can be established, and all uncommitted transactions are rolled back. The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restarts typically takes a minute or less but it can be higher and can take several minutes, depending on transactional activity at the time of the restart. Scaling the storage doesn't require a server restart in most cases. Similarly, backup retention period changes are an online operation. To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
+When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. While the system switches over to the new server type, no new connections can be established, and all uncommitted transactions are rolled back. The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restarts typically take a minute or less, but they can take several minutes depending on transactional activity at the time of the restart.
+
+If your application is sensitive to the loss of in-flight transactions that can occur during compute scaling, we recommend implementing a transaction [retry pattern](../single-server/concepts-connectivity.md#handling-transient-errors).
+
+Scaling the storage doesn't require a server restart in most cases. Similarly, backup retention period changes are an online operation. To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
## Near-zero downtime scaling
When updating your Flexible server in scaling scenarios, we create a new copy of
> [!NOTE] > Near-zero downtime scaling process is the _default_ operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime compared to the near-zero downtime scaling.
-#### Prerequisites
-- In order for near-zero downtime scaling to work, you should enable all inbound/outbound connections between the IPs in the delegated subnet. If these aren't enabled near zero downtime scaling process will not work and scaling will occur through the standard scaling workflow.
-
#### Limitations
+- In order for near-zero downtime scaling to work, you should enable all [inbound/outbound connections between the IPs in the delegated subnet when using VNET integrated networking](../flexible-server/concepts-networking-private.md#virtual-network-concepts). If these aren't enabled, the near-zero downtime scaling process won't work and scaling will occur through the standard scaling workflow.
- Near-zero Downtime Scaling won't work if there are regional capacity constraints or quota limits on customer subscriptions. - Near-zero Downtime Scaling doesn't work for replica server but supports the primary server. For replica server it will automatically go through regular scaling process.-- Near-zero Downtime Scaling won't work if a virtual network injected Server with delegated subnet doesn't have sufficient usable IP addresses. If you have a standalone server, one extra IP address is necessary, and for a HA-enabled server, two extra IP addresses are required.
+- Near-zero Downtime Scaling won't work if a [virtual network injected Server with delegated subnet](../flexible-server/concepts-networking-private.md#virtual-network-concepts) doesn't have sufficient usable IP addresses. If you have a standalone server, one extra IP address is necessary, and for a HA-enabled server, two extra IP addresses are required.
## Related content -- [create a PostgreSQL server in the portal](how-to-manage-server-portal.md)-- [service limits](concepts-limits.md)
+- [create a PostgreSQL server in the portal](how-to-manage-server-portal.md).
quotas How To Guide Monitoring Alerting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/how-to-guide-monitoring-alerting.md
The simplest way to create a quota alert is to use the Azure portal. Follow thes
| **Fields** | **Description** | |:--|:--|
- | Alert rule name | The alert rule name must be distinct and can't be duplicated, even across different resource groups. |
- | Alert me when the usage % reaches | Adjust the slider to select your desired usage percentage for triggering alerts. For example, at the default 80%, you receive an alert when your quota reaches 80% capacity.|
- | Severity | Select the severity of the alert when the ruleΓÇÖs condition is met.|
- | [Frequency of evaluation](../azure-monitor/alerts/alerts-overview.md#stateful-alerts) | Choose how **often** the alert rule should **run**, by selecting 5, 10, or 15 minutes. If the frequency is smaller than the aggregation granularity, the frequency of evaluation results in sliding window evaluation. |
- | [Resource group](../azure-resource-manager/management/manage-resource-groups-portal.md) | Select a resource group similar to other quotas in your subscription, or create a new resource group. |
- | [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?tabs=azure-portal) | A workspace within the subscription that is being monitored and is used as the scope for rule execution. Select from the dropdown or create a new workspace. If you create a new workspace, use it for all alerts in your subscription. |
- | [Managed identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp) | Select from the dropdown, or create a new managed identity. This managed identity must have **Reader** access to the subscription (to read usage data) and to the selected Log Analytics workspace (to read the log alerts). |
- | Notify me by | Select one or more of the three check boxes, depending on your notification preferences. |
- | [Use an existing action group](../azure-monitor/alerts/action-groups.md) | Check the box to use an existing action group. An action group **invokes** a defined set of **notifications** and actions when an alert is triggered. You can create an action group to automatically increase the quota whenever possible. |
- | [Dimensions](../azure-monitor/alerts/alerts-types.md#dimensions-in-log-alert-rules) | Options for selecting **multiple Quotas** and **regions** within a single alert rule. Adding dimensions is a cost-effective approach compared to creating a new alert for each quota or region.|
- | [Estimated cost](https://azure.microsoft.com/pricing/details/monitor/) |The estimated cost is automatically calculated, based on running this **new alert rule** against your quota. Each alert creation costs $0.50 USD, and each additional dimension adds $0.05 USD to the cost. |
-
+ | Alert Rule Name | Alert rule name must be **distinct** and can't be duplicated, even across different resource groups |
+ | Alert me when the usage % reaches | **Adjust** the slider to select your desired usage percentage for **triggering** alerts. For example, at the default 80%, you receive an alert when your quota reaches 80% capacity.|
+ | Severity | Select the **severity** of the alert when the **rule's condition** is met.|
+ | [Frequency of evaluation](../azure-monitor/alerts/alerts-overview.md#stateful-alerts) | Choose how **often** the alert rule should **run**, by selecting 5, 10, or 15 minutes. If the frequency is smaller than the aggregation granularity, frequency of evaluation results in sliding window evaluation. |
+ | [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md) | Resource Group is a collection of resources that share the same lifecycles, permissions, and policies. Select a resource group similar to other quotas in your subscription, or create a new resource group. |
+ | [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?tabs=azure-portal) | A workspace within the subscription that is being **monitored** and is used as the **scope for rule execution**. Select from the dropdown or create a new workspace. If you create a new workspace, use it for all alerts in your subscription. |
+ | [Managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) | Select from the dropdown, or select **Create New**. The managed identity must have **read permissions** on the subscription (to read usage data from Azure Resource Graph) and on the chosen Log Analytics workspace (to read the log alerts). |
+ | Notify me by | There are three notification methods. Select one or all three check boxes, depending on your notification preference. |
+ | [Use an existing action group](../azure-monitor/alerts/action-groups.md) | Check the box to use an existing action group. An action group **invokes** a defined set of **notifications** and actions when an alert is triggered. You can create an action group to automatically increase the quota whenever possible. |
+ | [Dimensions](../azure-monitor/alerts/alerts-types.md#monitor-the-same-condition-on-multiple-resources-using-splitting-by-dimensions-1) | Here are the options for selecting **multiple Quotas** and **regions** within a single alert rule. Adding dimensions is a cost-effective approach compared to creating a new alert for each quota or region.|
+ | [Estimated cost](https://azure.microsoft.com/pricing/details/monitor/) |The estimated cost is automatically calculated based on running this **new alert rule** against your quota. See [Azure Monitor cost and usage](../azure-monitor/cost-usage.md) for more information. |
+
> [!TIP] > Within the same subscription, we advise using the same **Resource group**, **Log Analytics workspace,** and **Managed identity** values for all alert rules.
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 10/30/2023 Last updated : 11/30/2023
The following table provides a brief description of each built-in role. Click th
> | [Disk Snapshot Contributor](#disk-snapshot-contributor) | Provides permission to backup vault to manage disk snapshots. | 7efff54f-a5b4-42b5-a1c5-5411624893ce | > | [Virtual Machine Administrator Login](#virtual-machine-administrator-login) | View Virtual Machines in the portal and login as administrator | 1c0163c0-47e6-4577-8991-ea5c82e286e4 | > | [Virtual Machine Contributor](#virtual-machine-contributor) | Create and manage virtual machines, manage disks, install and run software, reset password of the root user of the virtual machine using VM extensions, and manage local user accounts using VM extensions. This role does not grant you management access to the virtual network or storage account the virtual machines are connected to. This role does not allow you to assign roles in Azure RBAC. | 9980e02c-c2be-4d73-94e8-173b1dc7cf3c |
-> | [Virtual Machine Data Access Administrator (preview)](#virtual-machine-data-access-administrator-preview) | Add or remove virtual machine data plane role assignments. Includes an ABAC condition to constrain role assignments. | 66f75aeb-eabe-4b70-9f1e-c350c4c9ad04 |
+> | [Virtual Machine Data Access Administrator (preview)](#virtual-machine-data-access-administrator-preview) | Manage access to Virtual Machines by adding or removing role assignments for the Virtual Machine Administrator Login and Virtual Machine User Login roles. Includes an ABAC condition to constrain role assignments. | 66f75aeb-eabe-4b70-9f1e-c350c4c9ad04 |
> | [Virtual Machine User Login](#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 | > | [Windows Admin Center Administrator Login](#windows-admin-center-administrator-login) | Let's you manage the OS of your resource via Windows Admin Center as an administrator. | a6333a3e-0164-44c3-b281-7a577aff287f | > | **Networking** | | |
The following table provides a brief description of each built-in role. Click th
> | [Data Box Contributor](#data-box-contributor) | Lets you manage everything under Data Box Service except giving access to others. | add466c9-e687-43fc-8d98-dfcf8d720be5 | > | [Data Box Reader](#data-box-reader) | Lets you manage Data Box Service except creating order or editing order details and giving access to others. | 028f4ed7-e2a9-465e-a8f4-9c0ffdfdc027 | > | [Data Lake Analytics Developer](#data-lake-analytics-developer) | Lets you submit, monitor, and manage your own jobs but not create or delete Data Lake Analytics accounts. | 47b7735b-770e-4598-a7da-8b91488b4c88 |
+> | [Defender for Storage Data Scanner](#defender-for-storage-data-scanner) | Grants access to read blobs and update index tags. This role is used by the data scanner of Defender for Storage. | 1e7ca9b1-60d1-4db8-a914-f2ca1ff27c40 |
> | [Elastic SAN Owner](#elastic-san-owner) | Allows for full access to all resources under Azure Elastic SAN including changing network security policies to unblock data path access | 80dcbedb-47ef-405d-95bd-188a1b4ac406 | > | [Elastic SAN Reader](#elastic-san-reader) | Allows for control path read access to Azure Elastic SAN | af6a70f8-3c9f-4105-acf1-d719e9fca4ca | > | [Elastic SAN Volume Group Owner](#elastic-san-volume-group-owner) | Allows for full access to a volume group in Azure Elastic SAN including changing network security policies to unblock data path access | a8281131-f312-4f34-8d98-ae12be9f0d23 |
Create and manage virtual machines, manage disks, install and run software, rese
### Virtual Machine Data Access Administrator (preview)
-Add or remove virtual machine data plane role assignments. Includes an ABAC condition to constrain role assignments.
+Manage access to Virtual Machines by adding or removing role assignments for the Virtual Machine Administrator Login and Virtual Machine User Login roles. Includes an ABAC condition to constrain role assignments.
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | |
-> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | Create a role assignment at the specified scope. |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | Delete a role assignment at the specified scope. |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
> | [Microsoft.Management](resource-provider-operations.md#microsoftmanagement)/managementGroups/read | List management groups for the authenticated user. | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/publicIPAddresses/read | Gets a public ip address definition. | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
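For context, here's a minimal sketch (all identifiers are placeholders) of the kind of data plane role assignment this role is meant to manage: granting a user the Virtual Machine User Login role on a single VM.

```bash
# Sketch with placeholder values: grant a user the Virtual Machine User Login
# role scoped to one virtual machine.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Virtual Machine User Login" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
```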
Add or remove virtual machine data plane role assignments. Includes an ABAC cond
"id": "/providers/Microsoft.Authorization/roleDefinitions/66f75aeb-eabe-4b70-9f1e-c350c4c9ad04", "properties": { "roleName": "Virtual Machine Data Access Administrator (preview)",
- "description": "Add or remove virtual machine data plane role assignments. Includes an ABAC condition to constrain role assignments.",
+ "description": "Manage access to Virtual Machines by adding or removing role assignments for the Virtual Machine Administrator Login and Virtual Machine User Login roles. Includes an ABAC condition to constrain role assignments.",
"assignableScopes": [ "/" ],
Let's you manage the OS of your resource via Windows Admin Center as an administ
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/networkSecurityGroups/securityRules/write | Creates a security rule or updates an existing security rule | > | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/write | Create or update the endpoint to the target resource. | > | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/read | Get or list of endpoints to the target resource. |
-> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | |
+> | [Microsoft.HybridConnectivity](resource-provider-operations.md#microsofthybridconnectivity)/endpoints/listManagedProxyDetails/action | Get managed proxy details for the resource. |
> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/read | Get the properties of a virtual machine | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/read | Retrieves the summary of the latest patch assessment operation | > | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/virtualMachines/patchAssessmentResults/latest/softwarePatches/read | Retrieves list of patches assessed during the last patch assessment operation |
Lets you submit, monitor, and manage your own jobs but not create or delete Data
} ```
+### Defender for Storage Data Scanner
+
+Grants access to read blobs and update index tags. This role is used by the data scanner of Defender for Storage.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/read | Returns list of containers |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/blobs/read | Returns a blob or a list of blobs |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/blobs/tags/write | Returns the result of writing blob tags |
+> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/blobs/tags/read | Returns the result of reading blob tags |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Grants access to read blobs and update index tags. This role is used by the data scanner of Defender for Storage.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/1e7ca9b1-60d1-4db8-a914-f2ca1ff27c40",
+ "name": "1e7ca9b1-60d1-4db8-a914-f2ca1ff27c40",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Storage/storageAccounts/blobServices/containers/read"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
+ "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write",
+ "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/read"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Defender for Storage Data Scanner",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
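To see where this new built-in role is already assigned, one option is to list its assignments across the current subscription; the role name comes from the definition above, and the table output format is only a readability choice.

```bash
# List existing assignments of the Defender for Storage Data Scanner role,
# including assignments made at child scopes of the current subscription.
az role assignment list \
  --role "Defender for Storage Data Scanner" \
  --all \
  --output table
```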
+ ### Elastic SAN Owner Allows for full access to all resources under Azure Elastic SAN including changing network security policies to unblock data path access
Full access to Azure SignalR Service REST APIs
> | *none* | | > | **DataActions** | | > | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/auth/clientToken/action | Generate an AccessToken for client to connect to ASRS, the token will expire in 5 minutes by default |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/hub/send/action | Broadcast messages to all client connections in the hub |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/send/action | Broadcast message to group |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/read | Check group existence or user existence in group |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/write | Join / Leave group |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/send/action | Send messages directly to a client connection |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/read | Check client connection existence |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/write | Close client connection |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/send/action | Send messages to user, who may consist of multiple client connections |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/read | Check user existence |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/write | Modify a user |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/hub/* | |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/* | |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/* | |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/* | |
> | **NotDataActions** | | > | *none* | |
Full access to Azure SignalR Service REST APIs
"notActions": [], "dataActions": [ "Microsoft.SignalRService/SignalR/auth/clientToken/action",
- "Microsoft.SignalRService/SignalR/hub/send/action",
- "Microsoft.SignalRService/SignalR/group/send/action",
- "Microsoft.SignalRService/SignalR/group/read",
- "Microsoft.SignalRService/SignalR/group/write",
- "Microsoft.SignalRService/SignalR/clientConnection/send/action",
- "Microsoft.SignalRService/SignalR/clientConnection/read",
- "Microsoft.SignalRService/SignalR/clientConnection/write",
- "Microsoft.SignalRService/SignalR/user/send/action",
- "Microsoft.SignalRService/SignalR/user/read",
- "Microsoft.SignalRService/SignalR/user/write"
+ "Microsoft.SignalRService/SignalR/hub/*",
+ "Microsoft.SignalRService/SignalR/group/*",
+ "Microsoft.SignalRService/SignalR/clientConnection/*",
+ "Microsoft.SignalRService/SignalR/user/*"
], "notDataActions": [] }
Full access to Azure SignalR Service REST APIs
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/auth/accessKey/action | Generate an AccessKey for signing AccessTokens, the key will expire in 90 minutes by default |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/auth/clientToken/action | Generate an AccessToken for client to connect to ASRS, the token will expire in 5 minutes by default |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/hub/send/action | Broadcast messages to all client connections in the hub |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/send/action | Broadcast message to group |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/read | Check group existence or user existence in group |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/group/write | Join / Leave group |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/send/action | Send messages directly to a client connection |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/read | Check client connection existence |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/clientConnection/write | Close client connection |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/serverConnection/write | Start a server connection |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/send/action | Send messages to user, who may consist of multiple client connections |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/read | Check user existence |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/user/write | Modify a user |
-> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/livetrace/* | |
+> | [Microsoft.SignalRService](resource-provider-operations.md#microsoftsignalrservice)/SignalR/* | |
> | **NotDataActions** | | > | *none* | |
Full access to Azure SignalR Service REST APIs
"actions": [], "notActions": [], "dataActions": [
- "Microsoft.SignalRService/SignalR/auth/accessKey/action",
- "Microsoft.SignalRService/SignalR/auth/clientToken/action",
- "Microsoft.SignalRService/SignalR/hub/send/action",
- "Microsoft.SignalRService/SignalR/group/send/action",
- "Microsoft.SignalRService/SignalR/group/read",
- "Microsoft.SignalRService/SignalR/group/write",
- "Microsoft.SignalRService/SignalR/clientConnection/send/action",
- "Microsoft.SignalRService/SignalR/clientConnection/read",
- "Microsoft.SignalRService/SignalR/clientConnection/write",
- "Microsoft.SignalRService/SignalR/serverConnection/write",
- "Microsoft.SignalRService/SignalR/user/send/action",
- "Microsoft.SignalRService/SignalR/user/read",
- "Microsoft.SignalRService/SignalR/user/write",
- "Microsoft.SignalRService/SignalR/livetrace/*"
+ "Microsoft.SignalRService/SignalR/*"
], "notDataActions": [] }
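Because the individual data actions are now rolled up into wildcards such as `Microsoft.SignalRService/SignalR/hub/*`, a quick way to review what a given built-in role grants is to dump its definition; the role name below is a placeholder.

```bash
# Inspect the dataActions of a built-in role definition; replace the placeholder
# with the role name you want to review.
az role definition list \
  --name "<built-in role name>" \
  --query "[0].permissions[0].dataActions"
```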
Lets you perform detect, verify, identify, group, and find similar operations on
> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/detectliveness/multimodal/action | <p>Performs liveness detection on a target face in a sequence of infrared, color and/or depth images, and returns the liveness classification of the target face as either &lsquo;real face&rsquo;, &lsquo;spoof face&rsquo;, or &lsquo;uncertain&rsquo; if a classification cannot be made with the given inputs.</p> | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/detectliveness/singlemodal/action | <p>Performs liveness detection on a target face in a sequence of images of the same modality (e.g. color or infrared), and returns the liveness classification of the target face as either &lsquo;real face&rsquo;, &lsquo;spoof face&rsquo;, or &lsquo;uncertain&rsquo; if a classification cannot be made with the given inputs.</p> | > | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/detectlivenesswithverify/singlemodal/action | Detects liveness of a target face in a sequence of images of the same stream type (e.g. color) and then compares with VerifyImage to return confidence score for identity scenarios. |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/*/sessions/action | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/*/sessions/delete | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/*/sessions/read | |
+> | [Microsoft.CognitiveServices](resource-provider-operations.md#microsoftcognitiveservices)/accounts/Face/*/sessions/audit/read | |
> | **NotDataActions** | | > | *none* | |
Lets you perform detect, verify, identify, group, and find similar operations on
"Microsoft.CognitiveServices/accounts/Face/findsimilars/action", "Microsoft.CognitiveServices/accounts/Face/detectliveness/multimodal/action", "Microsoft.CognitiveServices/accounts/Face/detectliveness/singlemodal/action",
- "Microsoft.CognitiveServices/accounts/Face/detectlivenesswithverify/singlemodal/action"
+ "Microsoft.CognitiveServices/accounts/Face/detectlivenesswithverify/singlemodal/action",
+ "Microsoft.CognitiveServices/accounts/Face/*/sessions/action",
+ "Microsoft.CognitiveServices/accounts/Face/*/sessions/delete",
+ "Microsoft.CognitiveServices/accounts/Face/*/sessions/read",
+ "Microsoft.CognitiveServices/accounts/Face/*/sessions/audit/read"
], "notDataActions": [] }
Has the same access as API Management Service Workspace API Developer as well as
> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/tags/productLinks/* | | > | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/products/read | Lists a collection of products in the specified service instance. or Gets the details of the product specified by its identifier. | > | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/products/apiLinks/* | |
+> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/groups/read | Lists a collection of groups defined within a service instance. or Gets the details of the group specified by its identifier. |
> | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/groups/users/* | | > | [Microsoft.ApiManagement](resource-provider-operations.md#microsoftapimanagement)/service/read | Read metadata for an API Management Service instance | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
Has the same access as API Management Service Workspace API Developer as well as
"Microsoft.ApiManagement/service/tags/productLinks/*", "Microsoft.ApiManagement/service/products/read", "Microsoft.ApiManagement/service/products/apiLinks/*",
+ "Microsoft.ApiManagement/service/groups/read",
"Microsoft.ApiManagement/service/groups/users/*", "Microsoft.ApiManagement/service/read", "Microsoft.Authorization/*/read"
Allows send access to event grid events.
> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/domains/read | Read a domain | > | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/partnerNamespaces/read | Read a partner namespace | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/namespaces/read | Read a namespace |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Allows send access to event grid events.
"Microsoft.EventGrid/topics/read", "Microsoft.EventGrid/domains/read", "Microsoft.EventGrid/partnerNamespaces/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read"
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.EventGrid/namespaces/read"
], "notActions": [], "dataActions": [
Role allows user or principal full access to FHIR Data [Learn more](../healthcar
> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/* | | > | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/* | | > | **NotDataActions** | |
-> | *none* | |
+> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/services/fhir/resources/smart/action | Allows user to access FHIR Service according to SMART on FHIR specification. |
+> | [Microsoft.HealthcareApis](resource-provider-operations.md#microsofthealthcareapis)/workspaces/fhirservices/resources/smart/action | Allows user to access FHIR Service according to SMART on FHIR specification. |
```json {
Role allows user or principal full access to FHIR Data [Learn more](../healthcar
"Microsoft.HealthcareApis/services/fhir/resources/*", "Microsoft.HealthcareApis/workspaces/fhirservices/resources/*" ],
- "notDataActions": []
+ "notDataActions": [
+ "Microsoft.HealthcareApis/services/fhir/resources/smart/action",
+ "Microsoft.HealthcareApis/workspaces/fhirservices/resources/smart/action"
+ ]
} ], "roleName": "FHIR Data Contributor",
Read, download the reports objects and related other resource objects. [Learn mo
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | [Microsoft.AppComplianceAutomation](resource-provider-operations.md#microsoftappcomplianceautomation)/*/read | |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/read | Returns the list of storage accounts or gets the properties for the specified storage account. |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/containers/read | Returns list of containers |
-> | [Microsoft.Storage](resource-provider-operations.md#microsoftstorage)/storageAccounts/blobServices/read | Returns blob service properties or statistics |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/resources/read | Get the list of resources based upon filters. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resources/read | Gets resources of a subscription. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/resources/read | Gets the resources for the resource group. |
-> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/tags/read | Gets all the tags on a resource. |
+> | */read | Read resources of all types, except secrets. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Read, download the reports objects and related other resource objects. [Learn mo
"permissions": [ { "actions": [
- "Microsoft.AppComplianceAutomation/*/read",
- "Microsoft.Storage/storageAccounts/read",
- "Microsoft.Storage/storageAccounts/blobServices/containers/read",
- "Microsoft.Storage/storageAccounts/blobServices/read",
- "Microsoft.Resources/resources/read",
- "Microsoft.Resources/subscriptions/read",
- "Microsoft.Resources/subscriptions/resources/read",
- "Microsoft.Resources/subscriptions/resourceGroups/read",
- "Microsoft.Resources/subscriptions/resourceGroups/resources/read",
- "Microsoft.Resources/tags/read"
+ "*/read"
], "notActions": [], "dataActions": [],
Can read write or delete the attestation provider instance [Learn more](../attes
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | Microsoft.Attestation/attestationProviders/attestation/read | |
-> | Microsoft.Attestation/attestationProviders/attestation/write | |
-> | Microsoft.Attestation/attestationProviders/attestation/delete | |
+> | Microsoft.Attestation/attestationProviders/attestation/read | Gets the attestation service status. |
+> | Microsoft.Attestation/attestationProviders/attestation/write | Adds attestation service. |
+> | Microsoft.Attestation/attestationProviders/attestation/delete | Removes attestation service. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can read the attestation provider properties [Learn more](../attestation/trouble
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | Microsoft.Attestation/attestationProviders/attestation/read | |
+> | Microsoft.Attestation/attestationProviders/attestation/read | Gets the attestation service status. |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Manage access to Azure Key Vault by adding or removing role assignments for the
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | */read | Read resources of all types, except secrets. |
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | Create a role assignment at the specified scope. | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | Delete a role assignment at the specified scope. | > | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
Can read, write, delete and re-onboard Azure Connected Machines.
> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/privateLinkScopes/* | | > | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/*/read | | > | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/write | Installs or Updates an Azure Arc licenses |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/licenses/delete | Deletes an Azure Arc licenses |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/read | Reads any Azure Arc licenseProfiles |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/write | Installs or Updates an Azure Arc licenseProfiles |
+> | [Microsoft.HybridCompute](resource-provider-operations.md#microsofthybridcompute)/machines/licenseProfiles/delete | Deletes an Azure Arc licenseProfiles |
> | **NotActions** | | > | *none* | | > | **DataActions** | |
Can read, write, delete and re-onboard Azure Connected Machines.
"Microsoft.HybridCompute/machines/extensions/delete", "Microsoft.HybridCompute/privateLinkScopes/*", "Microsoft.HybridCompute/*/read",
- "Microsoft.Resources/deployments/*"
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.HybridCompute/licenses/write",
+ "Microsoft.HybridCompute/licenses/delete",
+ "Microsoft.HybridCompute/machines/licenseProfiles/read",
+ "Microsoft.HybridCompute/machines/licenseProfiles/write",
+ "Microsoft.HybridCompute/machines/licenseProfiles/delete"
], "notActions": [], "dataActions": [],
sap Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md
For more information about integration with SAP Fiori, see the following resourc
- [Introduction to the Application Gateway WAF Triage Workbook](https://techcommunity.microsoft.com/t5/azure-network-security-blog/introducing-the-application-gateway-waf-triage-workbook/ba-p/2973341). Also see the following SAP resources:-- [Azure CDN for SAPUI5 libraries](https://blogs.sap.com/2021/03/22/sap-fiori-using-azure-cdn-for-sapui5-libraries/) - [Web Application Firewall Setup for Internet facing SAP Fiori Apps](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/) ### Microsoft Entra ID (formerly Azure AD)
Protect your data, apps, and infrastructure against rapidly evolving cyber threa
Use [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) to secure your cloud infrastructure surrounding the SAP system, including automated responses.
-Complimenting that, use the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution [Microsoft Sentinel](../../sentinel/sap/sap-solution-security-content.md) to protect your SAP system from within using signals from the SAP Audit Log among others.
+Complementing that, use the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution [Microsoft Sentinel](../../sentinel/sap/sap-solution-security-content.md) to protect your SAP system and [SAP Business Technology Platform (BTP)](../../sentinel/sap/sap-btp-solution-overview.md) instance from within, using signals from the SAP Audit Log among others.
Learn more about the identity-focused integration capabilities that power the analysis in Defender and Sentinel via the [Microsoft Entra ID section](#microsoft-entra-id-formerly-azure-ad). Leverage the [immutable vault for Azure Backup](/azure/backup/backup-azure-immutable-vault-concept) to protect your SAP data from ransomware attacks.
+See Microsoft Security Copilot working with an SAP incident in action [here](https://www.youtube.com/watch?v=snV2joMnSlc&t=234s).
+
+#### Microsoft Sentinel for SAP
+
+For more information about [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) threat monitoring with Microsoft Sentinel for SAP, see the following Microsoft resources:
+
+- [Microsoft Sentinel incident response playbooks for SAP](../../sentinel/sap/sap-incident-response-playbooks.md)
+- [SAP security content reference](../../sentinel/sap/sap-solution-security-content.md)
+- [Deploy the Microsoft Sentinel solution for SAP](../../sentinel/sap/deploy-sap-security-content.md)
+- [Deploy Microsoft Sentinel Solution for SAP BTP](../../sentinel/sap/deploy-sap-btp-solution.md)
+- [Microsoft Sentinel SAP solution data reference](../../sentinel/sap/sap-solution-log-reference.md)
+- [Deploying Microsoft Sentinel SAP agent into an AKS/Kubernetes cluster](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/deploying-microsoft-sentinel-threat-monitoring-for-sap-agent/ba-p/3528040)
+
+Also see the following SAP resources:
+
+- [How to use Microsoft Sentinel's SOAR capabilities with SAP](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-blog-series/)
+- [Deploy SAP user blocking based on suspicious activity on the SAP backend](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-youre-gonna-hear-me-soar-part-1/)
+- [Automatically trigger re-activation of the SAP audit log on malicious deactivation](https://blogs.sap.com/2023/05/23/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-part-3/)
+- [Automatically remediate Sentinel SAP Collector Agent attack](https://blogs.sap.com/2023/07/06/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-part-4/)
+
+See the video below to experience the SAP security orchestration, automation, and response workflow with Sentinel in action:
+
+> [!VIDEO https://www.youtube.com/embed/b-AZnR-nQpg]
+ #### Microsoft Defender for Cloud The [Defender product family](../../defender-for-cloud/defender-for-cloud-introduction.md) consists of multiple products tailored to provide "cloud security posture management" (CSPM) and "cloud workload protection" (CWPP) for the various workload types. The excerpt below serves as an entry point to start securing your SAP system.
Also see the following SAP resources:
> [!Tip] > Microsoft Defender for Servers includes endpoint detection and response (EDR) features that are provided by Microsoft Defender for Endpoint Plan 2.
-#### Microsoft Sentinel for SAP
-
-For more information about [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) threat monitoring with Microsoft Sentinel for SAP, see the following Microsoft resources:
--- [Microsoft Sentinel incident response playbooks for SAP](../../sentinel/sap/sap-incident-response-playbooks.md)-- [SAP security content reference](../../sentinel/sap/sap-solution-security-content.md)-- [Deploy the Microsoft Sentinel solution for SAP](../../sentinel/sap/deploy-sap-security-content.md)-- [Microsoft Sentinel SAP solution data reference](../../sentinel/sap/sap-solution-log-reference.md)-- [Deploying Microsoft Sentinel SAP agent into an AKS/Kubernetes cluster](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/deploying-microsoft-sentinel-threat-monitoring-for-sap-agent/ba-p/3528040)-
-Also see the following SAP resources:
--- [How to use Microsoft Sentinel's SOAR capabilities with SAP](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-blog-series/)-- [Deploy SAP user blocking based on suspicious activity on the SAP backend](https://blogs.sap.com/2023/05/22/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-youre-gonna-hear-me-soar-part-1/)-- [Automatically trigger re-activation of the SAP audit log on malicious deactivation](https://blogs.sap.com/2023/05/23/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-part-3/)-- [Automatically remediate Sentinel SAP Collector Agent attack](https://blogs.sap.com/2023/07/06/from-zero-to-hero-security-coverage-with-microsoft-sentinel-for-your-critical-sap-security-signals-part-4/)-
-See below video to experience the SAP security orchestration, automation and response workflow with Sentinel in action:
-
-> [!VIDEO https://www.youtube.com/embed/b-AZnR-nQpg]
- #### Immutable vault for Azure Backup for SAP For more information about [immutable vault for Azure Backup](/azure/backup/backup-azure-immutable-vault-concept), see the following Azure documentation:
security Azure Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-domains.md
This page is a partial list of the Azure domains in use. Some of them are REST A
|[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines](../../virtual-machines/index.yml)|*.cloudapp.net| |[Azure Cloud Services](../../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines](../../virtual-machines/index.yml)|*.cloudapp.azure.com| |[Azure Container Registry](https://azure.microsoft.com/services/container-registry/)|*.azurecr.io|
-|Azure Container Service (ACS) (deprecated)|*.azurecontainer.io|
+|Azure Container Service (deprecated)|*.azurecontainer.io|
|[Azure Content Delivery Network (CDN)](https://azure.microsoft.com/services/cdn/)|*.vo.msecnd.net| |[Azure Cosmos DB](../../cosmos-db/index.yml)|*.cosmos.azure.com| |[Azure Cosmos DB](../../cosmos-db/index.yml)|*.documents.azure.com|
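If you're unsure which of these domains a particular resource actually uses, a simple DNS lookup against the endpoint name shows where it resolves; the resource names below are placeholders.

```bash
# Placeholder resource names: confirm the Azure domain behind a service endpoint.
nslookup myregistry.azurecr.io
nslookup myaccount.documents.azure.com
```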
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md
Use [`az servicebus namespace network-rule-set`](/cli/azure/servicebus/namespace
## Use Azure PowerShell Use the following Azure PowerShell commands to add, list, remove, update, and delete network rules for a Service Bus namespace. -- [`Add-AzServiceBusVirtualNetworkRule`](/powershell/module/az.servicebus/add-azservicebusvirtualnetworkrule) to add a virtual network rule.
+- [`Set-AzServiceBusNetworkRuleSet`](/powershell/module/az.servicebus/set-azservicebusnetworkruleset) to add a virtual network rule.
- [`New-AzServiceBusVirtualNetworkRuleConfig`](/powershell/module/az.servicebus/new-azservicebusipruleconfig) and [`Set-AzServiceBusNetworkRuleSet`](/powershell/module/az.servicebus/set-azservicebusnetworkruleset) together to add a virtual network rule. - [`Remove-AzServiceBusVirtualNetworkRule`](/powershell/module/az.servicebus/remove-azservicebusvirtualnetworkrule) to remove a virtual network rule.
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
Supported authentication and clients for App Service, Azure Functions, Container
| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Previously updated : 10/31/2023 Last updated : 12/04/2023 # Integrate Azure Cosmos DB for MongoDB with Service Connector
This page shows supported authentication methods and clients, and shows sample c
Supported authentication and clients for App Service, Azure Functions, Container Apps, and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | | | - | - | - | - |
Supported authentication and clients for App Service, Azure Functions, Container
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |-
-### [Azure Functions](#tab/azure-functions)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | - | - | - | - |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | - | - | - | - |
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-| | - | | - | - |
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Use the connection details below to connect compute services to Azure Cosmos DB.
|--|-|-| | AZURE_COSMOS_CONNECTIONSTRING | MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+#### Sample code
+ Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a connection string. [!INCLUDE [code sample for mongo](./includes/code-cosmosmongo-secret.md)]
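The environment variables above are created by the Service Connector connection itself. The following is a hedged sketch of creating such a connection from the CLI; the flag names are assumptions about the Service Connector command group, so verify them with `az webapp connection create cosmos-mongo --help`.

```bash
# Hedged sketch (placeholder names, assumed flags): connect an App Service app
# to Azure Cosmos DB for MongoDB with a connection string via Service Connector.
az webapp connection create cosmos-mongo \
  --resource-group <app-resource-group> \
  --name <app-name> \
  --target-resource-group <cosmos-resource-group> \
  --account <cosmos-account-name> \
  --database <database-name> \
  --secret \
  --client-type nodejs
```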
Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB usin
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
+#### Sample code
+ Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a system-assigned managed identity. [!INCLUDE [code sample for mongo](./includes/code-cosmosmongo-me-id.md)]
Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB usin
| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
+#### Sample code
+ Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a user-assigned managed identity. [!INCLUDE [code sample for mongo](./includes/code-cosmosmongo-me-id.md)]
Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB usin
| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<Azure-Cosmos-DB-API-for-MongoDB-account>.documents.azure.com:443/` |
+#### Sample code
+ Refer to the steps and code below to connect to Azure Cosmos DB for MongoDB using a service principal. [!INCLUDE [code sample for mongo](./includes/code-cosmosmongo-me-id.md)]
service-connector How To Integrate Cosmos Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-gremlin.md
Previously updated : 10/31/2023 Last updated : 12/04/2023 # Integrate the Azure Cosmos DB for Gremlin with Service Connector
Supported authentication and clients for App Service, Azure Functions, Container
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | PHP | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
service-connector How To Integrate Cosmos Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md
Previously updated : 10/24/2023 Last updated : 12/04/2023 # Integrate the Azure Cosmos DB for NoSQL with Service Connector
Supported authentication and clients for App Service, Azure Functions, Container
| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
## Default environment variable names or application properties and Sample code
service-connector How To Integrate Cosmos Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-table.md
Previously updated : 11/01/2023 Last updated : 12/04/2023 # Integrate the Azure Cosmos DB for Table with Service Connector
Supported authentication and clients for App Service, Azure Functions, Container
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 11/15/2023 Last updated : 12/04/2023
Rocky Linux | [See supported versions](#supported-rocky-linux-kernel-versions-fo
**Release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.56]() | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | No new 14.04 LTS kernels supported in this release. |
14.04 LTS | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 14.04 LTS kernels supported in this release. | |||
-16.04 LTS | [9.56]() | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | No new 16.04 LTS kernels supported in this release. |
16.04 LTS | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 16.04 LTS kernels supported in this release. | |||
-18.04 LTS | [9.56]() | No new 18.04 LTS kernels supported in this release. |
+18.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | No new 18.04 LTS kernels supported in this release. |
18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 18.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 5.4.0-1107-azure <br> 5.4.0-147-generic <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 4.15.0-212-generic <br> 4.15.0-1166-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure | 18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure <br> 4.15.0-1161-azure <br> 4.15.0-204-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 4.15.0-206-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic <br> 4.15.0-1162-azure | 18.04 LTS |[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic <br> 4.15.0-1157-azure <br> 5.4.0-1098-azure <br> 4.15.0-1158-azure <br> 4.15.0-1159-azure <br> 4.15.0-201-generic <br> 4.15.0-202-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic | |||
-20.04 LTS | [9.56]() | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic |
+20.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic |
20.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-1112-azure <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-79-generic <br> 5.4.0-156-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.4.0-1116-azure <br> 5.4.0-163-generic <br> 5.15.0-1043-azure <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic | 20.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 5.4.0-147-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.4.0-1107-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 20.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.4.0-1101-azure <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic | 20.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.4.0-1095-azure <br> 5.15.0-1023-azure <br> 5.4.0-1098-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic <br> 5.4.0-137-generic | |||
-22.04 LTS | [9.56]() | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic |
+22.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic |
22.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-1044-azure <br> 5.15.0-79-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic | 22.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.15.0-70-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 22.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.15.0-1003-azure <br> 5.15.0-1005-azure <br> 5.15.0-1007-azure <br> 5.15.0-1008-azure <br> 5.15.0-1010-azure <br> 5.15.0-1012-azure <br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic |
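A quick way to check a VM against the matrix above is to print its running kernel and match it to the row for the installed Mobility service version.

```bash
# Print the running kernel on the VM you plan to protect; compare it with the
# supported kernel list for your Mobility service version above.
uname -r
```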
Rocky Linux | [See supported versions](#supported-rocky-linux-kernel-versions-fo
**Release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.56]()| No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d)| No new Debian 7 kernels supported in this release. |
Debian 7 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 7 kernels supported in this release. | Debian 7 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 7 kernels supported in this release. | Debian 7 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 7 kernels supported in this release. | |||
-Debian 8 | [9.56]()| No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d)| No new Debian 8 kernels supported in this release. |
Debian 8 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 8 kernels supported in this release. | Debian 8 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 8 kernels supported in this release. | Debian 8 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 8 kernels supported in this release. | |||
-Debian 9.1 | [9.56]()| No new Debian 9.1 kernels supported in this release. |
+Debian 9.1 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d)| No new Debian 9.1 kernels supported in this release. |
Debian 9.1 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 9.1 kernels supported in this release. | |||
-Debian 10 | [9.56]()| 5.10.0-0.deb10.26-amd64 <br> 5.10.0-0.deb10.26-cloud-amd64 |
+Debian 10 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d)| 5.10.0-0.deb10.26-amd64 <br> 5.10.0-0.deb10.26-cloud-amd64 |
Debian 10 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 <br> 4.19.0-25-amd64 <br> 4.19.0-25-cloud-amd64 <br> 5.10.0-0.deb10.24-amd64 <br> 5.10.0-0.deb10.24-cloud-amd64 | Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 | Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.19.0-23-amd64 <br> 4.19.0-23-cloud-amd64 <br> 5.10.0-0.deb10.20-amd64 <br> 5.10.0-0.deb10.20-cloud-amd64 | |||
-Debian 11 | [9.56]()| 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 |
+Debian 11 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d)| 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 |
Debian 11 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-24-amd64 <br> 5.10.0-24-cloud-amd64 <br> 5.10.0-25-amd64 <br> 5.10.0-25-cloud-amd64 | Debian 11 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-22-amd64 <br> 5.10.0-22-cloud-amd64 <br> 5.10.0-23-amd64 <br> 5.10.0-23-cloud-amd64 | Debian 11 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.10.0-21-amd64 </br> 5.10.0-21-cloud-amd64 |
Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.56]() | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.152-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.152-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.136-azure:5 <br> 4.12.14-16.139-azure:5 <br> 4.12.14-16.146-azure:5 <br> 4.12.14-16.149-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.130-azure:5 <br> 4.12.14-16.133-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.124-azure:5 <br> 4.12.14-16.127-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.56]() | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.69-azure:4 <br> 5.14.21-150500.31-azure:5 <br> 5.14.21-150500.33.11-azure:5 <br> 5.14.21-150500.33.14-azure:5 <br> 5.14.21-150500.33.17-azure:5 <br> 5.14.21-150500.33.20-azure:5 <br> 5.14.21-150500.33.3-azure:5 <br> 5.14.21-150500.33.6-azure:5 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.69-azure:4 <br> 5.14.21-150500.31-azure:5 <br> 5.14.21-150500.33.11-azure:5 <br> 5.14.21-150500.33.14-azure:5 <br> 5.14.21-150500.33.17-azure:5 <br> 5.14.21-150500.33.20-azure:5 <br> 5.14.21-150500.33.3-azure:5 <br> 5.14.21-150500.33.6-azure:5 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.52-azure:4 <br> 4.12.14-16.139-azure:5 <br> 5.14.21-150400.14.55-azure:4 <br> 5.14.21-150400.14.60-azure:4 <br> 5.14.21-150400.14.63-azure:4 <br> 5.14.21-150400.14.66-azure:4 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.40-azure:4 <br> 5.14.21-150400.14.43-azure:4 <br> 5.14.21-150400.14.46-azure:4 <br> 5.14.21-150400.14.49-azure:4 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 <br> 5.14.21-150400.14.37-azure:4 |
SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.52](https://support.mi
**Release** | **Mobility service version** | **Kernel version** | | | |
-Rocky Linux | [9.56]() | Rocky Linux 8.7 <br> Rocky Linux 9.0 <br> Rocky Linux 9.1 |
+Rocky Linux | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | Rocky Linux 8.7 <br> Rocky Linux 9.0 <br> Rocky Linux 9.1 |
> [!NOTE] > To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hotfix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hotfix patch), follow the steps in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure to Azure DR scenario.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Title: Support matrix for VMware/physical disaster recovery in Azure Site Recove
description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 11/21/2023 Last updated : 12/04/2023
Rocky Linux | [See supported versions](#rocky-linux-server-supported-kernel-vers
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), [9.56]() <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
+14.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
||| 16.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic | |||
-18.04 LTS | [9.56]() <br> **Note:** Support for 9.56 is only available for Modernized experience.| No new Ubuntu 18.04 kernels supported in this release|
+18.04 LTS | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note:** Support for 9.56 is only available for Modernized experience.| No new Ubuntu 18.04 kernels supported in this release|
18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1107-azure <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 18.04 LTS|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-1161-azure <br> 4.15.0-1162-azure <br> 4.15.0-204-generic <br> 4.15.0-206-generic <br> 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1101-azure <br> 5.4.0-1103-azure <br> 5.4.0-1104-azure <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-139-generic <br> 5.4.0-144-generic <br> 5.4.0-146-generic | 18.04 LTS|[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 4.15.0-1157-azure </br> 4.15.0-1158-azure </br> 4.15.0-1159-azure </br> 4.15.0-197-generic </br> 4.15.0-200-generic </br> 4.15.0-201-generic </br> 4.15.0-202-generic <br> 5.4.0-1095-azure <br> 5.4.0-1098-azure <br> 5.4.0-1100-azure <br> 5.4.0-132-generic <br> 5.4.0-135-generic <br> 5.4.0-136-generic <br> 5.4.0-137-generic | 18.04 LTS | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.15.0-1153-azure </br> 4.15.0-194-generic </br> 4.15.0-196-generic </br>5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic| |||
-20.04 LTS |[9.56]() <br> **Note**: Support for Ubuntu 20.04 is available for Modernized experience only and not available for Classic experience yet. | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic |
+20.04 LTS |[9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note**: Support for Ubuntu 20.04 is available for Modernized experience only and not available for Classic experience yet. | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic |
20.04 LTS|[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1107-azure <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic | 20.04 LTS|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1033-azure <br> 5.15.0-1034-azure <br> 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-60-generic <br> 5.15.0-67-generic <br> 5.15.0-69-generic <br> 5.4.0-1101-azure <br> 5.4.0-1103-azure <br> 5.4.0-1104-azure <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-139-generic <br> 5.4.0-144-generic <br> 5.4.0-146-generic <br> 5.4.0-147-generic | 20.04 LTS|[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.15.0-1023-azure </br> 5.15.0-1029-azure </br> 5.15.0-1030-azure </br> 5.15.0-1031-azure </br> 5.15.0-53-generic </br> 5.15.0-56-generic </br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.4.0-1095-azure <br> 5.4.0-1098-azure <br> 5.4.0-1100-azure <br> 5.4.0-132-generic <br> 5.4.0-135-generic <br> 5.4.0-136-generic <br> 5.4.0-137-generic | 20.04 LTS|[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)|5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic | |||
-22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.56]() | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic |
+22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic |
22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic | 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1033-azure <br> 5.15.0-1034-azure <br> 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-60-generic <br> 5.15.0-67-generic <br> 5.15.0-69-generic <br> 5.15.0-70-generic| 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)|5.15.0-1003-azure </br> 5.15.0-1005-azure </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1010-azure </br> 5.15.0-1012-azure </br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic |
Rocky Linux | [See supported versions](#rocky-linux-server-supported-kernel-vers
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), 9.56 <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), 9.56 <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
+Debian 8 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
|||
-Debian 9.1 | [9.56]() <br> **Note:** Support for 9.56 is only available for Modernized experience. | No new Debian 9.1 kernels supported in this release|
+Debian 9.1 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note:** Support for 9.56 is only available for Modernized experience. | No new Debian 9.1 kernels supported in this release|
Debian 9.1 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new Debian 9.1 kernels supported in this release| Debian 9.1 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 9.1 kernels supported in this release Debian 9.1 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | No new Debian 9.1 kernels supported in this release| Debian 9.1 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 9.1 kernels supported in this release| |||
-Debian 10 | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | 5.10.0-0.deb10.26-amd64 <br> 5.10.0-0.deb10.26-cloud-amd64 |
+Debian 10 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | 5.10.0-0.deb10.26-amd64 <br> 5.10.0-0.deb10.26-cloud-amd64 |
Debian 10 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 | Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 4.19.0-23-amd64 </br> 4.19.0-23-cloud-amd64 </br> 5.10.0-0.deb10.20-amd64 </br> 5.10.0-0.deb10.20-cloud-amd64 | Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 | |||
-Debian 11 | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 |
+Debian 11 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 |
Debian 11 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-22-amd64 <br> 5.10.0-22-cloud-amd64 <br> 5.10.0-23-amd64 <br> 5.10.0-23-cloud-amd64 | Debian 11 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-21-amd64 <br> 5.10.0-21-cloud-amd64 | Debian 11 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.10.0-20-amd64 </br> 5.10.0-20-cloud-amd64 |
Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> No new SUSE 12 kernels supported in this release. |
+SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> No new SUSE 12 kernels supported in this release. |
SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.130-azure:5 <br> 4.12.14-16.133-azure:5 <br> 4.12.14-16.136-azure:5 | SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.124-azure:5 <br> 4.12.14-16.127-azure:5 | SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.115-azure:5 <br> 4.12.14-16.120-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://suppo
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4, SP5 <br> **Note:** SUSE 15 SP5 is only supported for Modernized experience. | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.152-azure:5 <br> 5.14.21-150400.14.69-azure:4 <br> 5.14.21-150500.31-azure:5 <br> 5.14.21-150500.33.11-azure:5 <br> 5.14.21-150500.33.14-azure:5 <br> 5.14.21-150500.33.17-azure:5 <br> 5.14.21-150500.33.20-azure:5 <br> 5.14.21-150500.33.3-azure:5 <br> 5.14.21-150500.33.6-azure:5 |
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4, SP5 <br> **Note:** SUSE 15 SP5 is only supported for Modernized experience. | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.152-azure:5 <br> 5.14.21-150400.14.69-azure:4 <br> 5.14.21-150500.31-azure:5 <br> 5.14.21-150500.33.11-azure:5 <br> 5.14.21-150500.33.14-azure:5 <br> 5.14.21-150500.33.17-azure:5 <br> 5.14.21-150500.33.20-azure:5 <br> 5.14.21-150500.33.3-azure:5 <br> 5.14.21-150500.33.6-azure:5 |
SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.14.49-azure:4 <br> 5.14.21-150400.14.52-azure:4 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 <br> 5.14.21-150400.14.37-azure:4 <br> 5.14.21-150400.14.43-azure:4 <br> 5.14.21-150400.14.46-azure:4 <br> 5.14.21-150400.14.40-azure:4 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.14.21-azure:4 <br> 5.14.21-150400.14.28-azure:4 <br> 5.3.18-150300.38.88-azure:3 |
SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.52](https://support.mic
**Release** | **Mobility service version** | **Kernel version** | | | |
-Rocky Linux <br> **Note**: Support for Rocky Linux is available for Modernized experience only. | [9.56]() | Rocky Linux 8.7 <br> Rocky Linux 9.0 <br> Rocky Linux 9.1 |
+Rocky Linux <br> **Note**: Support for Rocky Linux is available for Modernized experience only. | [9.56](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | Rocky Linux 8.7 <br> Rocky Linux 9.0 <br> Rocky Linux 9.1 |
## Linux file systems/guest storage
spring-apps How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier.md
The app creation steps are the same as Standard plan.
## Use Application Configuration Service for external configuration
-For externalized configuration in a distributed system, managed Spring Cloud Config Server (OSS) is available only in the Basic and Standard plans. In the Enterprise plan, Application Configuration Service for Tanzu (ACS) provides similar functions for your apps. The following table describes some differences in usage between the OSS config server and ACS.
+For externalized configuration in a distributed system, managed Spring Cloud Config Server (OSS) is available only in the Basic and Standard plans. In the Enterprise plan, Application Configuration Service for Tanzu provides similar functions for your apps. The following table describes some differences in usage between the OSS config server and Application Configuration Service.
| Component | Support plans | Enabled | Bind to app | Profile | ||-|-|-|--| | Spring Cloud Config Server | Basic/Standard | Always enabled. | Auto bound | Configured in app's source code. | | Application Configuration Service for Tanzu | Enterprise | Enable on demand. | Manual bind | Provided as `config-file-pattern` in an Azure Spring Apps deployment. |
-Unlike the client-server mode in the OSS config server, ACS manages configuration by using the Kubernetes-native `ConfigMap`, which is populated from properties defined in backend Git repositories. ACS can't get the active profile configured in the app's source code to match the right configuration, so the explicit configuration `config-file-pattern` should be specified at the Azure Spring Apps deployment level.
+Unlike the client-server mode in the OSS config server, Application Configuration Service manages configuration by using the Kubernetes-native `ConfigMap`, which is populated from properties defined in backend Git repositories. Application Configuration Service can't get the active profile configured in the app's source code to match the right configuration, so the explicit configuration `config-file-pattern` should be specified at the Azure Spring Apps deployment level.
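To make the deployment-level `config-file-pattern` concrete, here's a minimal Azure CLI sketch (run from PowerShell). It's illustrative only: the resource group, service instance, app, artifact path, and pattern names are placeholders rather than values from this article, and exact parameters can vary with the version of the `spring` CLI extension.

```powershell
# Hedged sketch with placeholder names: bind an app to Application Configuration Service,
# then point its deployment at a configuration file pattern resolved from the configured
# Git repositories (this replaces the active-profile lookup done by Spring Cloud Config Server).
az spring application-configuration-service bind `
    --resource-group my-resource-group `
    --service my-enterprise-instance `
    --app my-app

az spring app deploy `
    --resource-group my-resource-group `
    --service my-enterprise-instance `
    --name my-app `
    --artifact-path ./target/my-app.jar `
    --config-file-patterns "application/default"
```

Here the assumed pattern `application/default` follows the `{application}/{profile}` form; the actual value depends on how the Git repositories in Application Configuration Service are organized.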
## Configure Application Configuration Service for Tanzu
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Azure Files is updated regularly to offer new features and enhancements. This ar
Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this preview feature can perform share-level Snapshot management operations via REST API, PowerShell, and Azure CLI.
-This preview feature is currently available in a limited number of Azure regions. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview).
+This preview feature is now available in all Azure public cloud regions. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots-preview).
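For reference, taking one of these share-level snapshots from the Azure CLI might look like the following hedged sketch (run from PowerShell); the account and share names are placeholders, and the authentication options depend on how your environment is set up.

```powershell
# Hedged sketch with placeholder names: create a point-in-time snapshot of an NFS Azure file share.
# The command output includes the snapshot identifier (a timestamp) for the new snapshot.
az storage share snapshot `
    --account-name mystorageaccount `
    --name my-nfs-share
```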
#### Azure Files now supports all valid Unicode characters
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
description: Learn how to mount a Network File System (NFS) Azure file share on
Previously updated : 10/18/2023 Last updated : 12/04/2023
Azure Backup isn't currently supported for NFS file shares.
AzCopy isn't currently supported for NFS file shares. To copy data from an NFS Azure file share or share snapshot, use file system copy tools such as rsync or fpsync.
-### Regional availability for NFS Azure file share snapshots
+### Regional availability
+The NFS Azure file share snapshots preview is now available in all Azure public cloud regions.
### Create a snapshot
storage Storage Blobs Container Calculate Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-powershell.md
ms.devlang: powershell Previously updated : 11/21/2023 Last updated : 12/04/2023
This script calculates the size of all Azure Blob Storage containers in a storag
## Sample script
-```powershell
-# This script will show how to get the total size of the blobs in all containers in a storage account.
-# Before running this, you need to create a storage account, at least one container,
-# and upload some blobs into that container.
-# note: this retrieves all of the blobs in each container in one command.
-# Run the Connect-AzAccount cmdlet to connect to Azure.
-# Requests that are sent as part of this tool will incur transactional costs.
-#
-
-$containerstats = @()
-
-# Provide the name of your storage account and resource group
-$storage_account_name = "<name-of-your-storage-account>"
-$resource_group = "<name-of-your-resource-group>"
-
-# Get a reference to the storage account and the context.
-$storageAccount = Get-AzStorageAccount `
- -ResourceGroupName $resource_group `
- -Name $storage_account_name
-$Ctx = $storageAccount.Context
-
-$container_continuation_token = $null
-do {
- $containers = Get-AzStorageContainer -Context $Ctx -MaxCount 5000 -ContinuationToken $container_continuation_token
- $container_continuation_token = $null;
-
- if ($containers -ne $null)
- {
- $container_continuation_token = $containers[$containers.Count - 1].ContinuationToken
-
- for ([int] $c = 0; $c -lt $containers.Count; $c++)
- {
- $container = $containers[$c].Name
- Write-Verbose "Processing container : $container"
- $total_usage = 0
- $total_blob_count = 0
- $soft_delete_usage = 0
- $soft_delete_count = 0
- $version_usage = 0
- $version_count = 0
- $snapshot_count = 0
- $snapshot_usage = 0
- $blob_continuation_token = $null
-
- do {
- $blobs = Get-AzStorageBlob -Context $Ctx -IncludeDeleted -IncludeVersion -Container $container -ConcurrentTaskCount 100 -MaxCount 5000 -ContinuationToken $blob_continuation_token
- $blob_continuation_token = $null;
-
- if ($blobs -ne $null)
- {
- $blob_continuation_token = $blobs[$blobs.Count - 1].ContinuationToken
-
- for ([int] $b = 0; $b -lt $blobs.Count; $b++)
- {
- $total_blob_count++
- $total_usage += $blobs[$b].Length
-
- if ($blobs[$b].IsDeleted)
- {
- $soft_delete_count++
- $soft_delete_usage += $blobs[$b].Length
- }
-
- if ($blobs[$b].SnapshotTime -ne $null)
- {
- $snapshot_count++
- $snapshot_usage+= $blobs[$b].Length
- }
-
- if ($blobs[$b].VersionId -ne $null)
- {
- $version_count++
- $version_usage += $blobs[$b].Length
- }
- }
-
- If ($blob_continuation_token -ne $null)
- {
- Write-Verbose ("Blob listing continuation token = {0}" -f $blob_continuation_token.NextMarker)
- }
- }
- } while ($blob_continuation_token -ne $null)
-
- Write-Verbose "Calculated size of $container = $total_usage with soft_delete usage of $soft_delete_usage"
- $containerstats += [PSCustomObject] @{
- Name = $container
- TotalBlobCount = $total_blob_count
- TotalBlobUsageinGB = $total_usage/1GB
- SoftDeletedBlobCount = $soft_delete_count
- SoftDeletedBlobUsageinGB = $soft_delete_usage/1GB
- SnapshotCount = $snapshot_count
- SnapshotUsageinGB = $snapshot_usage/1GB
- VersionCount = $version_count
- VersionUsageinGB = $version_usage/1GB
- }
- }
- }
-
- If ($container_continuation_token -ne $null)
- {
- Write-Verbose ("Container listing continuation token = {0}" -f $container_continuation_token.NextMarker)
- }
-} while ($container_continuation_token -ne $null)
-
-Write-Host "Total container stats"
-$containerstats | Format-Table -AutoSize
-```
## Clean up deployment
stream-analytics Confluent Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-input.md
To upload certificates, you must have "**Key Vault Administrator**" access to y
> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job. Make sure you have Azure CLI configured and installed locally with PowerShell.
-You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
+You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
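For orientation, after you sign in with the Azure CLI (next step), granting that Key Vault access and importing a client certificate might look like the following hedged sketch; every name, scope, and file path here is a placeholder, not a value from this article.

```powershell
# Hedged sketch with placeholder values: grant Key Vault Administrator on the vault,
# then import the client certificate that the Kafka connection will authenticate with.
az role assignment create `
    --role "Key Vault Administrator" `
    --assignee "user@contoso.com" `
    --scope "/subscriptions/<subscription-id>/resourceGroups/my-rg/providers/Microsoft.KeyVault/vaults/my-kafka-vault"

az keyvault certificate import `
    --vault-name my-kafka-vault `
    --name kafka-client-cert `
    --file ./kafka-client-cert.pfx
```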
**Login to Azure CLI:** ```PowerShell
stream-analytics Confluent Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-output.md
To upload certificates, you must have "**Key Vault Administrator**" access to y
> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job. Make sure you have Azure CLI configured and installed locally with PowerShell.
-You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
+You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
**Login to Azure CLI:** ```PowerShell
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
To upload certificates, you must have "**Key Vault Administrator**" access to y
> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job. Make sure you have Azure CLI configured locally with PowerShell.
-You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
+You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
**Login to Azure CLI:** ```PowerShell
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Follow these steps to grant admin access:
> Your Azure Stream Analytics job will fail when the certificate used for authentication expires. To resolve this, you must update/replace the certificate in your key vault and restart your Azure Stream Analytics job. Make sure you have Azure CLI configured locally with PowerShell.
-You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](https://learn.microsoft.com/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
+You can visit this page to get guidance on setting up Azure CLI: [Get started with Azure CLI](/cli/azure/get-started-with-azure-cli#how-to-sign-into-the-azure-cli)
**Login to Azure CLI:** ```PowerShell
synapse-analytics Concepts Data Factory Differences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/data-integration/concepts-data-factory-differences.md
Check the following table for feature availability:
| Category | Feature | Azure Data Factory | Azure Synapse Analytics | | | - | :: | :: | | **Integration Runtime** | Support for Cross-region Integration Runtime (Data Flows) | ✓ | ✗ |
-| | Integration Runtime Sharing | ✓<br><small>*Can be shared across different data factories* | ✗ |
+| | Integration Runtime Sharing | ✓ *Can be shared across different data factories* | ✗ |
| **Pipelines Activities** | Support for Power Query Activity | ✓ | ✗ | | | Support for global parameters | ✓ | ✗ |
-| **Template Gallery and Knowledge center** | Solution Templates | ✓<br><small>*Azure Data Factory Template Gallery* | ✓<br><small>*Synapse Workspace Knowledge center* |
+| **Template Gallery and Knowledge center** | Solution Templates | ✓ *Azure Data Factory Template Gallery* | ✓ *Synapse Workspace Knowledge center* |
| **GIT Repository Integration** | GIT Integration | ✓ | ✓ |
-| **Monitoring** | Monitoring of Spark Jobs for Data Flow | ✗ | ✓<br>*Leverage the Synapse Spark pools* |
+| **Monitoring** | Monitoring of Spark Jobs for Data Flow | ✗ | ✓ *Leverage the Synapse Spark pools* |
## Next steps
synapse-analytics Overview Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/overview-map-data.md
Title: Map Data in Azure Synapse Analytics | Microsoft Docs description: Learn how to use the Map Data tool in Azure Synapse Analytics--++
synapse-analytics Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/metadata/database.md
This script enables you to create users without admin privileges who can read an
First, create a new Spark database named `mytestdb` using a Spark cluster you have already created in your workspace. You can achieve that, for example, using a Spark C# Notebook with the following .NET for Spark statement: ```csharp
-spark.Sql("CREATE DATABASE mytestlakedb")
+spark.sql("CREATE DATABASE mytestlakedb")
``` After a short delay, you can see the lake database from serverless SQL pool. For example, run the following statement from serverless SQL pool.
synapse-analytics Apache Spark 24 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-24-runtime.md
Title: Azure Synapse Runtime for Apache Spark 2.4 (unsupported) description: Versions of Spark, Scala, Python, and .NET for Apache Spark 2.4.-++ Last updated 04/18/2022 -
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.1 (EOLA) description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1.-++ Last updated 11/28/2022- # Azure Synapse Runtime for Apache Spark 3.1 (EOLA)
-Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1.
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1.
> [!IMPORTANT] > * End of life (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was announced on January 26, 2023.
abseil-cpp=20210324.0
absl-py=0.13.0
-adal=1.2.7
+Microsoft Authentication Library=1.2.7
adlfs=0.7.7
chardet=4.0.0
charls=2.2.0
-click=8.0.1
+Select=8.0.1
cloudpickle=1.6.0
httr 1.4.3
hwriter 1.3.2.1
-ids 1.0.1
+IDs 1.0.1
ini 0.3.1
zoo 1.8-10
## Migration between Apache Spark versions - support
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4 refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.3 or 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.2 description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.2.-++ Last updated 11/28/2022- - # Azure Synapse Runtime for Apache Spark 3.2 (EOLA)
synapse-analytics Apache Spark 33 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-33-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.3 description: New runtime is GA and ready for production workloads. Spark 3.3.1, Python 3.10, Delta Lake 2.2. + Last updated 11/17/2022 - - # Azure Synapse Runtime for Apache Spark 3.3 (GA)
synapse-analytics Apache Spark 34 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-34-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.4 description: New runtime is in Public Preview. Try it and use Spark 3.4.1, Python 3.10, Delta Lake 2.4. + Last updated 11/17/2023 -- # Azure Synapse Runtime for Apache Spark 3.4 (Public Preview)
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
| Python | 3.10 | | R | 4.2.2 | --
-As of now, creation of Spark 3.4 pools will be available only thru Azure Synapse Studio. In the upcoming weeks we will add the Azure Portal and ARM support.
-
+As of now, creation of Spark 3.4 pools is available only through Azure Synapse Studio. In the upcoming weeks, we'll add Azure portal and ARM support.
## Libraries+ The following sections present the libraries included in Azure Synapse Runtime for Apache Spark 3.4 (Public Preview). ### Scala and Java default libraries+ The following table lists all the default level packages for Java/Scala and their respective versions. | GroupID | ArtifactID | Version |
The following table lists all the default level packages for Java/Scala and thei
| pl.edu.icm | JLargeArrays | 1.5 | | stax | stax-api | 1.0.1 |
-### Python libraries
-The Azure Synapse Runtime for Apache Spark 3.4 is currently in Public Preview. During this phase, the Python libraries will experience significant updates. Additionally, please note that some machine learning capabilities are not yet supported, such as the PREDICT method and Synapse ML.
+### Python libraries
+
+The Azure Synapse Runtime for Apache Spark 3.4 is currently in Public Preview. During this phase, the Python libraries might receive significant updates. Additionally, note that some machine learning capabilities aren't yet supported, such as the PREDICT method and Synapse ML.
### R libraries
The following table lists all the default level packages for R and their respect
## Migration between Apache Spark versions - support
-For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.4 refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
+For guidance on migrating from older runtime versions to Azure Synapse Runtime for Apache Spark 3.4, refer to [Runtime for Apache Spark Overview](./apache-spark-version-support.md).
synapse-analytics Apache Spark Performance Hyperspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-performance-hyperspace.md
Title: Hyperspace indexes for Apache Spark
description: Performance optimization for Apache Spark using Hyperspace indexes -+ Last updated 02/10/2023- zone_pivot_groups: programming-languages-spark-all-minus-sql-r
synapse-analytics Apache Spark R Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-r-language.md
Last updated 09/27/2022
# Use R for Apache Spark with Azure Synapse Analytics (Preview)
-Azure Synapse Analytics provides built-in R support for Apache Spark. As part of this, data scientists can leverage Azure Synapse Analytics notebooks to write and run their R code. This also includes support for [SparkR](https://spark.apache.org/docs/latest/sparkr.html) and [SparklyR](https://spark.rstudio.com/), which allows users to interact with Spark using familiar Spark or R interfaces.
+Azure Synapse Analytics provides built-in R support for Apache Spark. As part of this, data scientists can use Azure Synapse Analytics notebooks to write and run their R code. This also includes support for [SparkR](https://spark.apache.org/docs/latest/sparkr.html) and [SparklyR](https://spark.rstudio.com/), which allows users to interact with Spark using familiar Spark or R interfaces.
-In this article, you will learn how to use R for Apache Spark with Azure Synapse Analytics.
+In this article, you'll learn how to use R for Apache Spark with Azure Synapse Analytics.
## R Runtime
To learn more about how to manage workspace libraries, see the following article
When doing interactive data analysis or machine learning, you might try newer packages or you might need packages that are currently unavailable on your Apache Spark pool. Instead of updating the pool configuration, users can now use session-scoped packages to add, manage, and update session dependencies. - When you install session-scoped libraries, only the current notebook has access to the specified libraries.
- - These libraries will not impact other sessions or jobs using the same Spark pool.
+ - These libraries won't impact other sessions or jobs using the same Spark pool.
- These libraries are installed on top of the base runtime and pool level libraries.
- - Notebook libraries will take the highest precedence.
- - Session-scoped R libraries do not persist across sessions. These libraries will be installed at the start of each session when the related installation commands are executed
+ - Notebook libraries take the highest precedence.
+ - Session-scoped R libraries don't persist across sessions. These libraries are installed at the start of each session when the related installation commands are executed
- Session-scoped R libraries are automatically installed across both the driver and worker nodes For example, users can install an R library from CRAN and CRAN snapshots. In the example below, *Highcharter* is a popular package for R visualizations. I can install this package on all nodes within my Apache Spark pool using the following command:
head(df)
### Create a SparkR dataframe using the Spark data source API
-SparkR supports operating on a variety of data sources through the SparkDataFrame interface. The general method for creating a DataFrame from a data source is ```read.df```. This method takes the path for the file to load and the type of data source. SparkR supports reading CSV, JSON, text, and Parquet files natively.
+SparkR supports operating on various data sources through the SparkDataFrame interface. The general method for creating a DataFrame from a data source is ```read.df```. This method takes the path for the file to load and the type of data source. SparkR supports reading CSV, JSON, text, and Parquet files natively.
```r # Read a csv from ADLSg2
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
Title: Apache Spark version support description: Supported versions of Spark, Scala, Python, .NET ---- Previously updated : 11/17/2022 -+ Last updated : 11/30/2023++++
+ - devx-track-dotnet
+ - devx-track-python
# Azure Synapse runtimes
-Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime is upgraded periodically to include new improvements, features, and patches. When you create a serverless Apache Spark pool, you have the option to select the corresponding Apache Spark version. Based on this, the pool comes pre-installed with the associated runtime components and packages. The runtimes have the following advantages:
+Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime is upgraded periodically to include new improvements, features, and patches. When you create a serverless Apache Spark pool, you have the option to select the corresponding Apache Spark version. Based on this, the pool comes preinstalled with the associated runtime components and packages.
+
+The runtimes have the following advantages:
- Faster session startup times - Tested compatibility with specific Apache Spark versions - Access to popular, compatible connectors and open-source packages
+## Supported Azure Synapse runtime releases
-## Supported Azure Synapse runtime releases
-
-> [!WARNING]
+> [!WARNING]
> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4
-> * Effective September 29, 2023, the Azure Synapse will discontinue official support for Spark 2.4 Runtimes.
+> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 runtimes.
> * Post September 29, we will not be addressing any support tickets related to Spark 2.4. There will be no release pipeline in place for bug or security fixes for Spark 2.4. Utilizing Spark 2.4 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.
-> * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 2.4, but we will not provide any official support for it.
+> * Recognizing that certain customers might need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 2.4, but we will not provide any official support for it.
+> * We strongly advise that you proactively upgrade your workloads to a more recent version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.3 (GA)](./apache-spark-33-runtime.md)). The following table lists the runtime name, Apache Spark version, and release date for supported Azure Synapse Runtime releases.
-| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date |
-|-|-||-|-|
-| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | Public Preview (GA expected in Q1 2024) |
-| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q1/Q2 2024 | Q1 2025 |
-| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life (EOL)__ | __July 29, 2022__ | __September 29, 2023__ |
+| Runtime name | Release date | Release stage | End of life announcement date | End of life effective date |
+| | | | | |
+| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | Public Preview (GA expected in Q1 2024) | - | - |
+| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q1/Q2 2024 | Q1 2025 |
+| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __End of Life Announced (EOLA)__ | July 8, 2023 | July 8, 2024 |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __End of Life Announced (EOLA)__ | January 26, 2023 | January 26, 2024 |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __End of Life (EOL)__ | __July 29, 2022__ | __September 29, 2023__ |
## Runtime release stages
For the complete runtime for Apache Spark lifecycle and support policies, refer
## Runtime patching
-Azure Synapse runtime for Apache Spark patches are rolled out monthly containing bug, feature and security fixes to the Apache Spark core engine, language environments, connectors and libraries.
--
-> [!NOTE]
-> - Maintenance updates will be automatically applied to new sessions for a given serverless Apache Spark pool.
+Patches for Azure Synapse runtimes for Apache Spark are rolled out monthly and contain bug, feature, and security fixes to the Apache Spark core engine, language environments, connectors, and libraries.
+> [!NOTE]
+> - Maintenance updates will be automatically applied to new sessions for a given serverless Apache Spark pool.
> - You should test and validate that your applications run properly when using new runtime versions.
-> [!IMPORTANT]
+> [!IMPORTANT]
> __Log4j 1.2.x security patches__
->
+>
> Open-source Log4j library version 1.2.x has several known CVEs (Common Vulnerabilities and Exposures), as described [here](https://logging.apache.org/log4j/1.2/https://docsupdatetracker.net/index.html).
->
+>
> On all Synapse Spark Pool runtimes, we have patched the Log4j 1.2.17 JARs to mitigate the following CVEs: CVE-2019-1751, CVE-2020-9488, CVE-2021-4104, CVE-2022-23302, CVE-2022-2330, CVE-2022-23307
->
+>
> The applied patch works by removing the following files which are required to invoke the vulnerabilities: > * ```org/apache/log4j/net/SocketServer.class``` > * ```org/apache/log4j/net/SMTPAppender.class```
Azure Synapse runtime for Apache Spark patches are rolled out monthly containing
> * ```org/apache/log4j/net/JMSSink.class``` > * ```org/apache/log4j/jdbc/JDBCAppender.class``` > * ```org/apache/log4j/chainsaw/*```
->
+>
> While the above classes were not used in the default Log4j configurations in Synapse, it is possible that some user application could still depend on it. If your application needs to use these classes, use Library Management to add a secure version of Log4j to the Spark Pool. __Do not use Log4j version 1.2.17__, as it would be reintroducing the vulnerabilities. The patch policy differs based on the [runtime lifecycle stage](./runtime-for-apache-spark-lifecycle-and-supportability.md):
-1. Generally Available (GA) runtime: Receive no upgrades on major versions (i.e. 3.x -> 4.x). And will upgrade a minor version (i.e. 3.x -> 3.y) as long as there are no deprecation or regression impacts.
-2. Preview runtime: No major version upgrades unless strictly necessary. Minor versions (3.x -> 3.y) will be upgraded to add latest features to a runtime.
-3. Long Term Support (LTS) runtime is patched with security fixes only.
-4. End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes are backported based on risk assessment.
+
+- Generally Available (GA) runtime: Receives no major version upgrades (that is, 3.x -> 4.x). Minor versions (that is, 3.x -> 3.y) are upgraded as long as there are no deprecation or regression impacts.
+
+- Preview runtime: No major version upgrades unless strictly necessary. Minor versions (3.x -> 3.y) are upgraded to add the latest features to a runtime.
+
+- Long Term Support (LTS) runtime is patched with security fixes only.
+
+- End of life announced (EOLA) runtime won't have bug and feature fixes. Security fixes are backported based on risk assessment.
## Migration between Apache Spark versions - support
-General Upgrade guidelines/ FAQ's:
+General upgrade guidelines and FAQs:
Question: What steps should be taken in migrating from 2.4 to 3.X? Answer: Refer to the following migration guide: https://spark.apache.org/docs/latest/sql-migration-guide.html
-Question: I get an error when I try to upgrade Spark pool runtime using PowerShell commandlet when the Spark pool has attached libraries
+Question: I got an error when I tried to upgrade the Spark pool runtime by using a PowerShell cmdlet while the pool had attached libraries
-Answer: Do not use PowerShell Commandlet if you have custom libraries attached to the Spark pool. Instead follow these steps:
+Answer: Don't use a PowerShell cmdlet if you have custom libraries installed in your Synapse workspace. Instead, follow these steps:
-* Recreate Spark Pool 3.3 from the ground up.
-* Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3
+- Recreate Spark Pool 3.3 from the ground up.
+- Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3.
synapse-analytics Apache Spark Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-sql-connector.md
Title: Azure SQL and SQL Server description: This article provides information on how to use the connector for moving data between Azure MS SQL and serverless Apache Spark pools.
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Title: Synapse runtime for Apache Spark lifecycle and supportability description: Lifecycle and support policies for Synapse runtime for Apache Spark. Last updated : 12/01/2023 (previously updated : 07/19/2022) # Synapse runtime for Apache Spark lifecycle and supportability
The Apache Spark project usually releases minor versions about __every 6 months_
The following chart captures a typical lifecycle path for a Synapse runtime for Apache Spark.
-![How to enable Intelligent Cache during new Spark pools creation](./media/runtime-for-apache-spark-lifecycle/runtime-for-apache-spark-lifecycle.png)
-| Runtime release stage | Typical Lifecycle* | Notes |
-| -- | -- | -- |
+| Runtime release stage | Typical Lifecycle* | Notes |
+| | | |
| Preview | 3 months* | Microsoft Azure Preview terms apply. See here for details: [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/?cdn=disable) |
-| Generally Available (GA) | 12 months* | Generally Available (GA) runtimes are open to all eligible customers and are ready for production use. <br/> A GA runtime may not be elected to move into an LTS stage at Microsoft discretion. |
+| Generally Available (GA) | 12 months* | Generally Available (GA) runtimes are open to all eligible customers and are ready for production use.<br />A GA runtime might not be elected to move into an LTS stage at Microsoft discretion. |
| Long Term Support (LTS) | 12 months* | Long term support (LTS) runtimes are open to all eligible customers and are ready for production use, but customers are encouraged to expedite validation and workload migration to latest GA runtimes. |
-| End of Life announced (EOLA) | 12 months* for GA or LTS runtimes.<br/>1 month* for Preview runtimes. | Prior to the end of a given runtime's lifecycle, we aim to provide 12 months' notice by publishing the End-of-Life Announcement (EOLA) date in the [Azure Synapse Runtimes page](./apache-spark-version-support.md) and 6 months' email notice to customers as an exit ramp to migrate their workloads to a GA runtime. |
+| End of Life announced (EOLA) | 12 months* for GA or LTS runtimes.<br />1 month* for Preview runtimes. | Prior to the end of a given runtime's lifecycle, we aim to provide 12 months' notice by publishing the End-of-Life Announcement (EOLA) date in the [Azure Synapse Runtimes page](./apache-spark-version-support.md) and 6 months' email notice to customers as an exit ramp to migrate their workloads to a GA runtime. |
| End of Life (EOL) | - | At this stage, the runtime is retired and no longer supported. |
-\* *Expected duration of a runtime in each stage. These timelines are provided as an example for a given runtime, and may vary depending on various factors. Lifecycle timelines are subject to change at Microsoft discretion.*
+\* *Expected duration of a runtime in each stage. These timelines are provided as an example for a given runtime, and might vary depending on various factors. Lifecycle timelines are subject to change at Microsoft discretion.*
\** *Your use of runtimes is governed by the terms applicable to your Azure subscription.*
-> [!IMPORTANT]
->
+> [!IMPORTANT]
+>
> * The above timelines are provided as examples based on current Apache Spark releases. If the Apache Spark project changes the lifecycle of a specific version affecting a Synapse runtime, changes to the stage dates are noted on the [release notes](./apache-spark-version-support.md).
-> * Both GA and LTS runtimes may be moved into EOL stage faster based on outstanding security risks and usage rates criteria at Microsoft discretion.
+> * Both GA and LTS runtimes might be moved into EOL stage faster based on outstanding security risks and usage rates criteria at Microsoft discretion.
> * Please refer to [Lifecycle FAQ - Microsoft Azure](/lifecycle/faq/azure) for information about Azure lifecycle policies. > ## Release stages and support ### Preview runtimes Azure Synapse Analytics provides previews to give you a chance to evaluate and share feedback on features before they become generally available (GA). At the end of the Preview lifecycle for the runtime, Microsoft will assess if the runtime moves into General Availability (GA) based on customer usage, security, and stability criteria.
At the end of the Preview lifecycle for the runtime, Microsoft will assess if th
If not eligible for GA stage, the Preview runtime moves into the retirement cycle. ### Generally available runtimes
-Once a runtime is Generally Available, only security fixes are backported. In addition, new components or features are introduced if they don't change underlying dependencies or component versions.
+
+Once a runtime is Generally Available, only security fixes are backported. In addition, new components or features are introduced if they don't change underlying dependencies or component versions.
At the end of the GA lifecycle for the runtime, Microsoft will assess if the runtime has an extended lifecycle (LTS) based on customer usage, security and stability criteria. If not eligible for LTS stage, the GA runtime moves into the retirement cycle. ### Long term support runtimes
-For runtimes that are covered by Long term support (LTS) customers are encouraged to expedite validation and migration of code base and workloads to the latest GA runtimes. We recommend that customers don't onboard new workloads using an LTS runtime. Security fixes and stability improvements may be backported, but no new components or features are introduced into the runtime at this stage.
+
+For runtimes that are covered by Long term support (LTS), customers are encouraged to expedite validation and migration of code bases and workloads to the latest GA runtimes. We recommend that customers don't onboard new workloads using an LTS runtime. Security fixes and stability improvements might be backported, but no new components or features are introduced into the runtime at this stage.
### End of life announcement+ Prior to the end of the runtime lifecycle at any stage, an end of life announcement (EOLA) is performed. Support SLAs are applicable for EOL announced runtimes, but all customers must migrate to a GA stage runtime no later than the EOL date. During the EOLA stage, existing Synapse Spark pools function as expected, and new pools of the same version can be created. The runtime version is listed on Azure Synapse Studio, Synapse API, or Azure portal. At the same time, we strongly recommend migrating your workloads to the latest General Availability (GA) runtimes.
-If necessary due to outstanding security issues, runtime usage, or other factors, **Microsoft may expedite moving a runtime into the final EOL stage at any time, at Microsoft's discretion.**
+If necessary due to outstanding security issues, runtime usage, or other factors, **Microsoft might expedite moving a runtime into the final EOL stage at any time, at Microsoft's discretion.**
### End of life date and retirement+ As of the applicable EOL (End-of-Life) date, runtimes are considered retired and deprecated.
-* It isn't possible to create new Spark pools using the retired version through Azure Synapse Studio, the Synapse API, or the Azure portal.
-* The retired runtime version won't be available in Azure Synapse Studio, the Synapse API, or the Azure portal.
-* Spark Pool definitions and associated metadata will remain in the Synapse workspace for a defined period after the applicable End-of-Life (EOL) date. **However, all pipelines, jobs, and notebooks will no longer be able to execute.**
+- It isn't possible to create new Spark pools using the retired version through Azure Synapse Studio, the Synapse API, or the Azure portal.
+- The retired runtime version won't be available in Azure Synapse Studio, the Synapse API, or the Azure portal.
+- Spark Pool definitions and associated metadata will remain in the Synapse workspace for a defined period after the applicable End-of-Life (EOL) date. **However, all pipelines, jobs, and notebooks will no longer be able to execute.**
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Customers may see savings estimated to up to 76% with Azure Hybrid Benefit for L
> [!TIP] > Try the **[Azure Hybrid Benefit Savings Calculator](https://azure.microsoft.com/pricing/hybrid-benefit/#calculator)** to visualize the cost saving benefits of this feature. - ## Defining Pay-as-you-go (PAYG) and Bring-your-own-subscription (BYOS) In Azure, there are two main licensing pricing options: 'pay-as-you-go' (PAYG) and 'bring-your-own-subscription' (BYOS). 'PAYG' is a pricing option where you pay for the resources you use on an hourly or monthly basis. You only pay for what you use and can scale up or down as needed. On the other hand, 'BYOS' is a licensing option where you can use your existing licenses for certain software, in this case RHEL and SLES, on Azure virtual machines. You can use your existing licenses and don't have to purchase new ones for use in Azure.
In Azure, there are two main licensing pricing options: 'pay-as-you-go' (PAYG) a
:::image type="content" source="./media/ahb-linux/azure-hybrid-benefit-compare.png" alt-text="Diagram that shows the use of Azure Hybrid Benefit to switch Linux virtual machines between pay-as-you-go and bring-your-own-subscription."::: > [!NOTE]
-> Virtual machines deployed from PAYG images or VMs converted from BYOS models incur *both* an infrastructure fee and a software fee. If you have your own license, use Azure Hybrid Benefit to convers from a PAYG to BYOS model.
+> Virtual machines deployed from PAYG images or VMs converted from BYOS models incur *both* an infrastructure fee and a software fee. If you have your own license, use Azure Hybrid Benefit to convert from a PAYG to BYOS model.
You can use Azure Hybrid Benefit to switch back to pay-as-you-go billing at any time. - ## Which Linux virtual machines qualify for Azure Hybrid Benefit? Azure dedicated host instances and SQL hybrid benefits aren't eligible for Azure Hybrid Benefit if you already use Azure Hybrid Benefit with Linux virtual machines. -- ## Enabling Azure Hybrid Benefit ### Enabling AHB on New VMs
You can use the `az vm extension` and `az vm update` commands to update new virt
``` - RHEL License Types: RHEL_BASE, RHEL_EUS, RHEL_SAPAPPS, RHEL_SAPHA, RHEL_BASESAPAPPS, RHEL_BASESAPHA - SLES License Types: SLES_STANDARD, SLES_SAP, SLES_HPC - ### Enabling AHB on Existing VM #### [Azure portal](#tab/ahbExistingPortal)
You can use the `az vm extension` and `az vm update` commands to update existing
```azurecli az vm extension ```
+ > [!NOTE]
+ > The complete `az vm extension` command depends on the particular distribution that you're using. Refer to the next section for the complete details.
1. Update with the correct license type ```azurecli
You can use the `az vm extension` and `az vm update` commands to update existing
`````` - RHEL License Types: RHEL_BASE, RHEL_EUS, RHEL_SAPAPPS, RHEL_SAPHA, RHEL_BASESAPAPPS, RHEL_BASESAPHA - SLES License Types: SLES_STANDARD, SLES_SAP, SLES_HPC - ## Check the current licensing model of an AHB enabled VM You can view the Azure Hybrid Benefit status of a virtual machine by using the Azure CLI or by using Azure Instance Metadata Service.
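For example, a minimal Azure CLI sketch (resource group and VM names are placeholders) returns the license type currently applied to a VM; the output is empty when no Azure Hybrid Benefit license type has been set:

```azurecli
# myResourceGroup and myVmName are placeholders.
# licenseType is returned only when a license type has been applied to the VM.
az vm show \
    --resource-group myResourceGroup \
    --name myVmName \
    --query licenseType \
    --output tsv
```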
From within the virtual machine itself, you can query the attested metadata in A
## PAYG to BYOS conversions
-Converting from a Pay-as-you-go to a Bring-your-own-subscription model.
-### Operating system instructions
-#### [Red Hat (RHEL)](#tab/rhelpaygreqs)
-
-Azure Hybrid Benefit for converting PAYG virtual machines to BYOS for RHEL is available to Red Hat customers who meet the following criteria:
+
+### Convert a pay-as-you-go (PAYG) image to BYOS using the Azure CLI
+If you deployed an Azure Marketplace image with the PAYG licensing model and want to convert it to BYOS, follow this process.
-- Have active or unused RHEL subscriptions that are eligible for use in Azure-- Have correctly enabled one or more of their subscriptions for use in Azure with the [Red Hat Cloud Access](https://www.redhat.com/en/technologies/cloud-computing/cloud-access) program
+#### [Red Hat (RHEL)](#tab/rhelAzcliByosConv)
-Bring your own subscription to Red Hat:
+1. Install the Azure Hybrid Benefit extension on a running virtual machine. You can use the Azure portal or use the following command via the Azure CLI:
+ ```azurecli
+ az vm extension set -n AHBForRHEL --publisher Microsoft.Azure.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup
+ ```
-1. Enable one or more of your eligible RHEL subscriptions for use in Azure using the [Red Hat Cloud Access customer interface](https://access.redhat.com/management/cloud). The Azure subscriptions that you provide during the Red Hat Cloud Access enablement process then have access to Azure Hybrid Benefit
+1. Apply the `RHEL_BYOS` license type to the machine:
-1. Apply Azure Hybrid Benefit to any RHEL pay-as-you-go virtual machines that you deploy in Azure Marketplace pay-as-you-go images. You can use the Azure portal or the Azure CLI to enable Azure Hybrid Benefit.
+ ```azurecli
+ # This will enable BYOS on a RHEL(PAYG) virtual machine using Azure Hybrid Benefit
+ az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS
+ ```
+1. Once the PAYG to BYOS conversion is complete, you must register the machine with Red Hat for system updates and usage compliance.
-1. Follow the recommended [next steps](https://access.redhat.com/articles/5419341) to configure update sources for your RHEL virtual machines and for RHEL subscription compliance guidelines.
+1. If you want to return to the PAYG model, set the license type to `NONE`; otherwise, the machine continues to be billed as BYOS.
+ ```azurecli
+ # If the image started as PAYG and was converted to BYOS, the following command will revert it back to PAYG.
+ az vm update -g myResourceGroup -n myVmName --license-type NONE
+ ```
-#### [SUSE (SLES)](#tab/slespaygreqs)
-Azure Hybrid Benefit for pay-as-you-go virtual machines for SUSE is available to customers who have:
+#### [SUSE (SLES)](#tab/slesAzcliByosConv)
-- Unused SUSE subscriptions that are eligible to use in Azure.-- One or more active SUSE subscriptions to use on-premises that should be moved to Azure.-- Purchased subscriptions that they activated in the SUSE Customer Center to use in Azure.
+1. Install the Azure Hybrid Benefit extension on a running virtual machine. You can use the Azure portal or use the following command via the Azure CLI:
+ ```azurecli
+ az vm extension set -n AHBForSLES --publisher SUSE.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup
+ ```
-> [!IMPORTANT]
-> Ensure that you select the correct subscription to use in Azure.
+1. Apply the `SLES_BYOS` license type to the virtual machine.
-To start using Azure Hybrid Benefit for SUSE:
+ ```azurecli
+ # This will enable BYOS on a SLES virtual machine
+ az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
+ ```
-1. Register the subscription that you purchased from SUSE or a SUSE distributor with the [SUSE Customer Center](https://scc.suse.com).
-2. Activate the subscription in the SUSE Customer Center.
-3. Register your virtual machines that are receiving Azure Hybrid Benefit with the SUSE Customer Center to get the updates from the SUSE Customer Center.
+1. Once the PAYG to BYOS conversion is complete, you must register the machine with SUSE yourself for software updates and usage compliance.
+1. If you want to return to the PAYG model, set the license type to `NONE`; otherwise, the machine continues to be billed as BYOS.
+ ```azurecli
+ # If the image started as PAYG and was converted to BYOS, the following command will revert it back to PAYG.
+ az vm update -g myResourceGroup -n myVmName --license-type NONE
+ ```
+
+## BYOS to PAYG conversions
+Converting to the PAYG model is supported for Azure Marketplace images labeled BYOS, and for machines imported from on-premises or from a third-party cloud provider.
--
-### Convert to BYOS using the Azure CLI
-
-#### [Red Hat (RHEL)](#tab/rhelAzcliByosConv)
-* For RHEL virtual machines, run the command with a `--license-type` parameter of `RHEL_BYOS`.
-
-```azurecli
-# This will enable BYOS on a RHEL virtual machine using Azure Hybrid Benefit
-az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS
-```
+#### [Red Hat (RHEL)](#tab/rhelazclipaygconv)
1. Install the Azure Hybrid Benefit extension on a running virtual machine. You can use the Azure portal or use the following command via the Azure CLI:- ```azurecli az vm extension set -n AHBForRHEL --publisher Microsoft.Azure.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup ```
az vm update -g myResourceGroup -n myVmName --license-type RHEL_BYOS
# This will enable Azure Hybrid Benefit to fetch software updates for RHEL BASE SAP HA repositories az vm update -g myResourceGroup -n myVmName --license-type RHEL_BASESAPHA
+ ```
+1. If you want to return to the BYOS model, set the license type to `NONE`; otherwise, the machine continues to be billed as PAYG.
+ ```azurecli
+ # If the image started as BYOS and was converted to PAYG, the following command will revert it back to BYOS.
+ az vm update -g myResourceGroup -n myVmName --license-type NONE
```
-1. Wait five minutes for the extension to read the license type value and install the repositories.
-
-1. You should now be connected to Red Hat Update Infrastructure. The relevant repositories are installed on your machine. You can validate the installation by running the following command on your virtual machine:
-
- ```bash
- sudo yum repolist
- ```
-
-1. If the extension isn't running by itself, you can try the following command on the virtual machine:
-
- ```bash
- sudo systemctl start azure-hybrid-benefit.service
- ```
-
-1. You can use the following command in your RHEL virtual machine to get the current status of the service:
-
- ```bash
- sudo ahb-service -status
- ```
-
-#### [SUSE (SLES)](#tab/slesAzcliByosConv)
-* For SLES virtual machines, run the command with a `--license-type` parameter of `SLES_BYOS`.
-
-```azurecli
-# This will enable BYOS on a SLES virtual machine
-az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
-```
+#### [SUSE (SLES)](#tab/slesazclipaygconv)
1. Install the Azure Hybrid Benefit extension on a running virtual machine. You can use the Azure portal or use the following command via the Azure CLI:- ```azurecli az vm extension set -n AHBForSLES --publisher SUSE.AzureHybridBenefit --vm-name myVMName --resource-group myResourceGroup ```
az vm update -g myResourceGroup -n myVmName --license-type SLES_BYOS
az vm update -g myResourceGroup -n myVmName --license-type SLES_HPC ```
-1. Wait five minutes for the extension to read the license type value and install the repositories.
-
-1. You should now be connected to the SUSE public cloud update infrastructure on Azure. The relevant repositories are installed on your machine. You can verify this change by running the following command to list SUSE repositories on your virtual machine:
-
- ```bash
- sudo zypper repos
+1. If you want to return to the BYOS model, set the license type to `NONE`; otherwise, the machine continues to be billed as PAYG.
+ ```azurecli
+ # If the image started as BYOS and was converted to PAYG, the following command will revert it back to BYOS.
+ az vm update -g myResourceGroup -n myVmName --license-type NONE
```--
+#### Multiple VMs
------
-## BYOS to PAYG conversions
-Converting from a Bring-your-own-subscription to a Pay-as-you-go model.
-#### [Single VM](#tab/paygclisingle)
-
-If the system was originally a PAYG image and you want to return the VM to a PAYG model, use a `--license-type` value of `None`. For example:
-
-```azurecli
-# This will enable PAYG on a virtual machine using Azure Hybrid Benefit
-az vm update -g myResourceGroup -n myVmName --license-type None
-```
-
-If you have a BYOS and want to convert the VM to PAYG, use a `--license-type` value that covers the VM needs as described further in this article. For example, for RHEL systems you can use any of the following: RHEL_BASE, RHEL_EUS, RHEL_SAPAPPS, RHEL_SAPHA, RHEL_BASEAPAPPS or RHEL_BASESAPHA.
-
-#### [Multiple VMs](#tab/paygclimultiple)
-
-To switch the licensing model on a large number of virtual machines, you can use the `--ids` parameter in the Azure CLI:
+The following command converts the virtual machines specified via the `--ids` argument to BYOS.
```azurecli # This will enable BYOS on a RHEL virtual machine. In this example, ids.txt is an
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Linux server distributions that are not endorsed by Azure do not support Azure D
| OpenLogic | CentOS 8.4 | 8_4 | OpenLogic:CentOS:8_4:latest | OS and data disk | | OpenLogic | CentOS 8.3 | 8_3 | OpenLogic:CentOS:8_3:latest | OS and data disk | | OpenLogic | CentOS 8.2 | 8_2 | OpenLogic:CentOS:8_2:latest | OS and data disk |
-| OpenLogic | CentOS 8.1 | 8_1 | OpenLogic:CentOS:8_1:latest | OS and data disk |
| OpenLogic | CentOS 7-LVM | 7-LVM | OpenLogic:CentOS-LVM:7-LVM:7.9.2021020400 | OS and data disk | | OpenLogic | CentOS 7.9 | 7_9 | OpenLogic:CentOS:7_9:latest | OS and data disk | | OpenLogic | CentOS 7.8 | 7_8 | OpenLogic:CentOS:7_8:latest | OS and data disk |
virtual-machines Monitor Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm.md
Azure Monitor alerts proactively notify you when important conditions are found
Start by enabling recommended alerts. These are a predefined set of alert rules based on host metrics for the VM. You can quickly enable and customize each of these rules with a few clicks in the Azure portal. See [Tutorial: Enable recommended alert rules for Azure virtual machine](../azure-monitor/vm/tutorial-monitor-vm-alert-recommended.md). This includes the [VM availability metric](monitor-vm-reference.md#vm-availability-metric-preview) which alerts when the VM stops running. ### Multi-resource metric alerts
-Using recommended alerts, a separate alert rule is created for each VM. You can choose to instead use a [multi-resource alert rule](../azure-monitor/alerts/alerts-types.md#monitor-multiple-resources) to use a single alert rule that applies to all VMs in a particular resource group or subscription (within the same region). See [Create availability alert rule for Azure virtual machine (preview)](../azure-monitor/vm/tutorial-monitor-vm-alert-availability.md) for a tutorial using the availability metric.
+Using recommended alerts, a separate alert rule is created for each VM. You can choose to instead use a [multi-resource alert rule](../azure-monitor/alerts/alerts-types.md#monitor-multiple-resources-with-one-alert-rule) to use a single alert rule that applies to all VMs in a particular resource group or subscription (within the same region). See [Create availability alert rule for Azure virtual machine (preview)](../azure-monitor/vm/tutorial-monitor-vm-alert-availability.md) for a tutorial using the availability metric.
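As a rough Azure CLI sketch (the resource group, region, and CPU threshold are placeholder assumptions, not values from this article), a single multi-resource rule scoped to a resource group can cover every VM in it:

```azurecli
# Creates one metric alert rule that applies to all VMs in myResourceGroup (same region).
az monitor metrics alert create \
    --name cpu-alert-all-vms \
    --resource-group myResourceGroup \
    --scopes "/subscriptions/<subscriptionID>/resourceGroups/myResourceGroup" \
    --target-resource-type Microsoft.Compute/virtualMachines \
    --target-resource-region eastus \
    --condition "avg Percentage CPU > 90" \
    --description "Fires when any VM in the resource group averages over 90% CPU"
```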
### Other alert rules For more information about the various alerts for Azure virtual machines, see the following resources:
virtual-machines Resize Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resize-vm.md
az vm resize \
**Use PowerShell to resize a VM not in an availability set.**
-This script sets the variables `$resourceGroup`, `$vm`, and `$size`. It then checks if the desired VM size is available by using `az vm list-vm-resize-options` and checking if the output contains the desired size. If the desired size isn't available, the script exits with an error message. If the desired size is available, the script deallocates the VM, resizes it, and starts it again.
+This Azure Cloud Shell PowerShell script initializes the variables `$resourceGroup`, `$vmName`, and `$size` with the resource group name, VM name, and desired VM size, respectively. It then retrieves the VM object from Azure by using the `Get-AzVM` cmdlet, sets the `VmSize` property of the VM's hardware profile to the desired size, and applies the change by using the `Update-AzVM` cmdlet.
```azurepowershell-interactive # Set variables
-$resourceGroup = "myResourceGroup"
-$vm = "myVM"
-$size = "Standard_DS3_v2"
-
-# Check if the desired VM size is available
-if ((az vm list-vm-resize-options --resource-group $resourceGroup --name $vm --query "[].name" | ConvertFrom-Json) -notcontains $size) {
- Write-Host "The desired VM size is not available."
- exit 1
-}
+$resourceGroup = 'myResourceGroup'
+$vmName = 'myVM'
+$size = 'Standard_DS3_v2'
+# Get the VM
+$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+# Change the VM size
+$vm.HardwareProfile.VmSize = $size
+# Update the VM
+Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
+```
+As an alternative to running the script in Azure Cloud Shell, you can also execute it locally on your machine. This local version of the PowerShell script includes additional steps to import the Azure module and authenticate your Azure account.
-# Deallocate the VM
-az vm deallocate --resource-group $resourceGroup --name $vm
+> [!NOTE]
+> Running the script locally might require the VM to restart for the size change to take effect.
-# Resize the VM
-az vm resize --resource-group $resourceGroup --name $vm --size $size
-# Start the VM
-az vm start --resource-group $resourceGroup --name $vm
+```powershell
+# Import the Azure module
+Import-Module Az
+# Login to your Azure account
+Connect-AzAccount
+# Set variables
+$resourceGroup = 'myResourceGroup'
+$vmName = 'myVM'
+$size = 'Standard_DS3_v2'
+# Select the subscription
+Select-AzSubscription -SubscriptionId '<subscriptionID>'
+# Get the VM
+$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name $vmName
+# Change the VM size
+$vm.HardwareProfile.VmSize = $size
+# Update the VM
+Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
``` - > [!WARNING] > Deallocating the VM also releases any dynamic IP addresses assigned to the VM. The OS and data disks are not affected. >
virtual-machines Virtual Machines Powershell Sample Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-vhd.md
Previously updated : 10/24/2023 Last updated : 12/04/2023
Don't create multiple identical managed disks from a VHD file in small amount of
## Sample script
-[!code-powershell[main](../../../powershell_scripts/virtual-machine/create-managed-disks-from-vhd-in-different-subscription/create-managed-disks-from-vhd-in-different-subscription.ps1 "Create managed disk from VHD")]
+[!code-powershell[main](../../../new_powershell_scripts/managed-disks/create-managed-disks-from-vhd-in-different-subscription.ps1 "Create managed disk from VHD")]
## Script explanation
virtual-network Deploy Container Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking.md
The Azure Virtual Network container network interface (CNI) plug-in installs in an Azure virtual machine and brings virtual network capabilities to Kubernetes Pods and Docker containers. To learn more about the plug-in, see [Enable containers to use Azure Virtual Network capabilities](container-networking-overview.md). Additionally, the plug-in can be used with the Azure Kubernetes Service (AKS) by choosing the [Advanced Networking](../aks/configure-azure-cni.md?toc=%2fazure%2fvirtual-network%2ftoc.json) option, which automatically places AKS containers in a virtual network.
-## Deploy plug-in for ACS-Engine Kubernetes cluster
+## Deploy plug-in for Azure Container Service-Engine Kubernetes cluster
-The ACS-Engine deploys a Kubernetes cluster with an Azure Resource Manager template. The cluster configuration is specified in a JSON file that is passed to the tool when generating the template. To learn more about the entire list of supported cluster settings and their descriptions, see [Microsoft Azure Container Service Engine - Cluster Definition](https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md). The plug-in is the default networking plug-in for clusters created using the ACS-Engine. The following network configuration settings are important when configuring the plug-in:
+The Azure Container Service-Engine deploys a Kubernetes cluster with an Azure Resource Manager template. The cluster configuration is specified in a JSON file that is passed to the tool when generating the template. To learn more about the entire list of supported cluster settings and their descriptions, see [Microsoft Azure Container Service Engine - Cluster Definition](https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md). The plug-in is the default networking plug-in for clusters created using the Azure Container Service-Engine. The following network configuration settings are important when configuring the plug-in:
| Setting | Description | |--| |
vpn-gateway About Zone Redundant Vnet Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md
Previously updated : 09/03/2020 Last updated : 12/04/2023 # About zone-redundant virtual network gateway in Azure availability zones
For information about gateway SKUs, see [VPN gateway SKUs](vpn-gateway-about-vpn
## <a name="pipskus"></a>Public IP SKUs
-Zone-redundant gateways and zonal gateways both rely on the Azure public IP resource *Standard* SKU. The configuration of the Azure public IP resource determines whether the gateway that you deploy is zone-redundant, or zonal. If you create a public IP resource with a *Basic* SKU, the gateway won't have any zone redundancy, and the gateway resources will be regional.
+Zone-redundant, zonal, and non-zonal gateways all rely on the configuration of the *Standard* SKU Azure public IP resource. If you create a public IP resource with a *Basic* SKU, the gateway won't have any zone redundancy, and the gateway resources are regional.
+
+For more information, see [Availability zones](../virtual-network/ip-services/public-ip-addresses.md#availability-zone).
### <a name="pipzrg"></a>Zone-redundant gateways
-When you create a public IP address using the **Standard** public IP SKU without specifying a zone, the behavior differs depending on whether the gateway is a VPN gateway, or an ExpressRoute gateway.
+When you create a public IP address using the **Standard** public IP SKU with zone-redundant option, the behavior differs depending on whether the gateway is a VPN gateway, or an ExpressRoute gateway.
-* For a VPN gateway, the two gateway instances will be deployed in any 2 out of these three zones to provide zone-redundancy.
+* For a VPN gateway, the two gateway instances are deployed in any two out of these three zones to provide zone-redundancy.
* For an ExpressRoute gateway, since there can be more than two instances, the gateway can span across all the three zones. ### <a name="pipzg"></a>Zonal gateways
-When you create a public IP address using the **Standard** public IP SKU and specify the Zone (1, 2, or 3), all the gateway instances will be deployed in the same zone.
+When you create a public IP address using the **Standard** public IP SKU and specify the Zone (1, 2, or 3), all the gateway instances are deployed in the same zone.
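As a minimal Azure CLI sketch (resource group and IP names are placeholders), the `--zone` parameter on a *Standard* SKU public IP is what distinguishes a zone-redundant deployment from a zonal one:

```azurecli
# Zone-redundant: the Standard SKU public IP spans zones 1, 2, and 3.
az network public-ip create \
    --resource-group myResourceGroup \
    --name gw-pip-zone-redundant \
    --sku Standard \
    --zone 1 2 3

# Zonal: all gateway instances are deployed in the single zone you specify (zone 1 here).
az network public-ip create \
    --resource-group myResourceGroup \
    --name gw-pip-zonal \
    --sku Standard \
    --zone 1
```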
+
+### <a name="piprg"></a>Non-zonal or regional gateways
-### <a name="piprg"></a>Regional gateways
+A non-zonal or regional gateway doesn't have zone-redundancy. These gateways are created in the following scenarios:
-When you create a public IP address using the **Basic** public IP SKU, the gateway is deployed as a regional gateway and doesn't have any zone-redundancy built into the gateway.
+* When you create a public IP address using the **Standard** public IP SKU with the "No Zone" option
+* When you create a public IP address using the **Basic** public IP SKU
## <a name="faq"></a>FAQ
From your perspective, you can deploy your gateways with zone-redundancy. This m
### Can I use the Azure portal?
-Yes, you can use the Azure portal to deploy these SKUs. However, you'll see these SKUs only in those Azure regions that have Azure availability zones.
+Yes, you can use the Azure portal to deploy these SKUs. However, you see these SKUs only in those Azure regions that have Azure availability zones.
### What regions are available for me to use these SKUs?
Migrating your existing virtual network gateways to zone-redundant or zonal gate
### Can I deploy both VPN and ExpressRoute gateways in same virtual network?
-Co-existence of both VPN and ExpressRoute gateways in the same virtual network is supported. However, you should reserve a /27 IP address range for the gateway subnet.
+Coexistence of both VPN and ExpressRoute gateways in the same virtual network is supported. However, you should reserve a /27 IP address range for the gateway subnet.
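For example (a sketch with placeholder names; the address range is illustrative), a /27 gateway subnet can be added to an existing virtual network with the Azure CLI:

```azurecli
# GatewaySubnet is the required name for the subnet that hosts the gateways.
# 10.0.255.0/27 is an illustrative range; pick an unused /27 in your virtual network.
az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name GatewaySubnet \
    --address-prefixes 10.0.255.0/27
```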
## Next steps
vpn-gateway Vpn Gateway About Skus Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-skus-legacy.md
description: How to work with the old virtual network gateway SKUs; Basic, Stand
Previously updated : 11/29/2023 Last updated : 12/04/2023 # Working with VPN Gateway legacy SKUs
You can view legacy gateway pricing in the **Virtual Network Gateways** section,
## SKU deprecation
-The Standard and High Performance SKUs will be deprecated September 30, 2025. The product team will make a migration path available for these SKUs by November 30, 2024. **At this time, there's no action that you need to take**.
+The Standard and High Performance SKUs will be deprecated September 30, 2025. You can view the announcement [here](https://go.microsoft.com/fwlink/?linkid=2255127). The product team will make a migration path available for these SKUs by November 30, 2024. **At this time, there's no action that you need to take**.
When the migration path becomes available, you can migrate your legacy SKUs to the following SKUs:
Standard and High Performance SKUs will be deprecated September 30, 2025. The pr
For more information about the new Gateway SKUs, see [Gateway SKUs](vpn-gateway-about-vpngateways.md#gwsku).
-For more information about configuration settings, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).
+For more information about configuration settings, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md).