Updates from: 10/23/2023 01:14:43
Service Microsoft Docs article Related commit history on GitHub Change details
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
NAMESPACE     NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
default       public-svc   LoadBalancer   10.0.39.110   52.156.88.187   80:32068/TCP   52s
```
-When you view the service details, the public IP address created for this service on the load balancer is shown in the *EXTERNAL-IP* column. It may take a few minutes for the IP address to change from *\<pending\>* to an actual public IP address.
+When you view the service details, the public IP address created for this service on the load balancer is shown in the *EXTERNAL-IP* column. It might take a few minutes for the IP address to change from *\<pending\>* to an actual public IP address.
For more detailed information about your service, use the following command.
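One way to view these details is with `kubectl describe` (shown here as a sketch; `public-svc` is the service name from the earlier example output, so substitute your own service name):

```azurecli-interactive
# Show endpoints, events, and load balancer details for the service.
kubectl describe service public-svc
```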
az aks create \
> [!IMPORTANT]
>
-> If you have applications on your cluster that can establish a large number of connections to small set of destinations, like many instances of a frontend application connecting to a database, you may have a scenario susceptible to encounter SNAT port exhaustion. SNAT port exhaustion happens when an application runs out of outbound ports to use to establish a connection to another application or host. If you have a scenario susceptible to encounter SNAT port exhaustion, we highly recommended you increase the allocated outbound ports and outbound frontend IPs on the load balancer.
+> If you have applications on your cluster that can establish a large number of connections to a small set of destinations, like many instances of a frontend application connecting to a database, you might have a scenario susceptible to SNAT port exhaustion. SNAT port exhaustion happens when an application runs out of outbound ports to use to establish a connection to another application or host. If your scenario is susceptible to SNAT port exhaustion, we highly recommend you increase the allocated outbound ports and outbound frontend IPs on the load balancer.
>
> For more information on SNAT, see [Use SNAT for outbound connections](../load-balancer/load-balancer-outbound-connections.md).
When calculating the number of outbound ports and IPs and setting the values, keep the following information in mind:
* The number of outbound ports per node is fixed based on the value you set.
* The value for outbound ports must be a multiple of 8.
* Adding more IPs doesn't add more ports to any node, but it provides capacity for more nodes in the cluster.
-* You must account for nodes that may be added as part of upgrades, including the count of nodes specified via [maxSurge values][maxsurge].
+* You must account for nodes that might be added as part of upgrades, including the count of nodes specified via [maxSurge values][maxsurge].
The following examples show how the values you set affect the number of outbound ports and IP addresses:
If you expect to have numerous short-lived connections and no long-lived connect
When setting *IdleTimeoutInMinutes* to a different value than the default of 30 minutes, consider how long your workloads need an outbound connection. Also consider that the default timeout value for a *Standard* SKU load balancer used outside of AKS is 4 minutes. An *IdleTimeoutInMinutes* value that more accurately reflects your specific AKS workload can help decrease SNAT exhaustion caused by tying up connections no longer being used.

> [!WARNING]
-> Altering the values for *AllocatedOutboundPorts* and *IdleTimeoutInMinutes* may significantly change the behavior of the outbound rule for your load balancer and shouldn't be done lightly. Check the [SNAT Troubleshooting section][troubleshoot-snat] and review the [Load Balancer outbound rules][azure-lb-outbound-rules-overview] and [outbound connections in Azure][azure-lb-outbound-connections] before updating these values to fully understand the impact of your changes.
+> Altering the values for *AllocatedOutboundPorts* and *IdleTimeoutInMinutes* might significantly change the behavior of the outbound rule for your load balancer and shouldn't be done lightly. Check the [SNAT Troubleshooting section][troubleshoot-snat] and review the [Load Balancer outbound rules][azure-lb-outbound-rules-overview] and [outbound connections in Azure][azure-lb-outbound-connections] before updating these values to fully understand the impact of your changes.
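If you decide to tune these settings after reviewing that guidance, a minimal sketch using `az aks update` might look like the following. The resource group, cluster name, and values are placeholders for illustration only.

```azurecli-interactive
# Illustrative values: two managed outbound IPs, 4000 outbound ports per node
# (must be a multiple of 8), and a 10-minute idle timeout.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --load-balancer-managed-outbound-ip-count 2 \
    --load-balancer-outbound-ports 4000 \
    --load-balancer-idle-timeout 10
```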
## Restrict inbound traffic to specific IP ranges
The following annotations are supported for Kubernetes services with type `LoadB
| `service.beta.kubernetes.io/azure-load-balancer-internal` | `true` or `false` | Specify whether the load balancer should be internal. If not set, it defaults to public. |
| `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` | Name of the subnet | Specify which subnet the internal load balancer should be bound to. If not set, it defaults to the subnet configured in cloud config file. |
| `service.beta.kubernetes.io/azure-dns-label-name` | Name of the DNS label on Public IPs | Specify the DNS label name for the **public** service. If it's set to an empty string, the DNS entry in the Public IP isn't used. |
-| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed using an Azure security rule that may be shared with another service. Trade specificity of rules for an increase in the number of services that can be exposed. This annotation relies on the Azure [Augmented Security Rules](../virtual-network/network-security-groups-overview.md#augmented-security-rules) feature of Network Security groups.
+| `service.beta.kubernetes.io/azure-shared-securityrule` | `true` or `false` | Specify that the service should be exposed using an Azure security rule that might be shared with another service. Trade specificity of rules for an increase in the number of services that can be exposed. This annotation relies on the Azure [Augmented Security Rules](../virtual-network/network-security-groups-overview.md#augmented-security-rules) feature of Network Security groups.
| `service.beta.kubernetes.io/azure-load-balancer-resource-group` | Name of the resource group | Specify the resource group of load balancer public IPs that aren't in the same resource group as the cluster infrastructure (node resource group). |
| `service.beta.kubernetes.io/azure-allowed-service-tags` | List of allowed service tags | Specify a list of allowed [service tags][service-tags] separated by commas. |
| `service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout` | TCP idle timeouts in minutes | Specify the time in minutes for TCP connection idle timeouts to occur on the load balancer. The default and minimum value is 4. The maximum value is 30. The value must be an integer. |
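As a quick, hypothetical illustration of applying one of these annotations to an existing service, you could use `kubectl annotate`; the service name and timeout value here are placeholders.

```azurecli-interactive
# Set the TCP idle timeout annotation on an existing LoadBalancer service.
kubectl annotate service public-svc service.beta.kubernetes.io/azure-load-balancer-tcp-idle-timeout="10"
```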
Since v1.21, two service annotations `service.beta.kubernetes.io/azure-load-bala
### Custom Load Balancer health probe for port
-Different ports in a service may require different health probe configurations. This could be because of service design (such as a single health endpoint controlling multiple ports), or Kubernetes features like the [MixedProtocolLBService](https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types).
+Different ports in a service can require different health probe configurations. This could be because of service design (such as a single health endpoint controlling multiple ports), or Kubernetes features like the [MixedProtocolLBService](https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types).
The following annotations can be used to customize probe configuration per service port.
The following annotations can be used to customize probe configuration per servi
| service.beta.kubernetes.io/port_{port}_health-probe_num-of-probe | service.beta.kubernetes.io/azure-load-balancer-health-probe-num-of-probe | Number of consecutive probe failures before the port is considered unhealthy |
| service.beta.kubernetes.io/port_{port}_health-probe_interval | service.beta.kubernetes.io/azure-load-balancer-health-probe-interval | The amount of time between probe attempts |
-For following manifest, probe rule for port httpsserver is different from the one for httpserver because annoations for port httpsserver are specified.
+For the following manifest, the probe rule for port httpsserver is different from the one for port httpserver because annotations for port httpsserver are specified.
```yaml
apiVersion: v1
To learn more about using internal load balancer for inbound traffic, see the [A
[use-multiple-node-pools]: use-multiple-node-pools.md
[troubleshoot-snat]: #troubleshooting-snat
[service-tags]: ../virtual-network/network-security-groups-overview.md#service-tags
-[maxsurge]: upgrade-cluster.md#customize-node-surge-upgrade
+[maxsurge]: ./upgrade-aks-cluster.md#customize-node-surge-upgrade
[az-lb]: ../load-balancer/load-balancer-overview.md
[alb-outbound-rules]: ../load-balancer/outbound-rules.md
aks Node Image Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-image-upgrade.md
az aks nodepool show \
[kubernetes-json-path]: https://kubernetes.io/docs/reference/kubectl/jsonpath/

<!-- LINKS - internal -->
-[upgrade-cluster]: upgrade-cluster.md
+[upgrade-cluster]: upgrade-aks-cluster.md
[github-schedule]: node-upgrade-github-actions.md
[use-multiple-node-pools]: create-node-pools.md
-[max-surge]: upgrade-cluster.md#customize-node-surge-upgrade
+[max-surge]: upgrade-aks-cluster.md#customize-node-surge-upgrade
[auto-upgrade-node-image]: auto-upgrade-node-image.md
[az-aks-nodepool-get-upgrades]: /cli/azure/aks/nodepool#az_aks_nodepool_get_upgrades
[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
To increase the node limit beyond 1000, you must have the following pre-requisit
* Clusters using Kubernetes version 1.23 or above.

> [!NOTE]
-> It may take up to a week to enable your clusters with the increased node limit.
+> It can take up to one week to enable your clusters with the increased node limit.
## Networking considerations and best practices
To increase the node limit beyond 1000, you must have the following pre-requisit
## Cluster upgrade considerations and best practices

* The hard limit of 5000 nodes per AKS cluster prevents clusters at this limit from performing upgrades. Upgrades can't proceed because there's no remaining capacity to perform rolling updates with the max surge property. If you have a cluster at this limit, we recommend scaling the cluster down below 3000 nodes before doing cluster upgrades to provide extra capacity for node churn and to minimize the control plane load.
-* AKS configures upgrades to surge with one extra node through the max surge settings by default. This default value allows AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older-versioned node. When you upgrade clusters with a large number of nodes, using the default max surge settings can cause an upgrade to take several hours to complete. The completion process can take so long because the upgrade needs to churn through a large number of nodes. You can customize the max surge settings per node pool to enable a trade-off between upgrade speed and upgrade disruption. When you increase the the max surge settings, the upgrade process completes faster, but you may experience disruptions during the upgrade process.
+* AKS configures upgrades to surge with one extra node through the max surge settings by default. This default value allows AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older-versioned node. When you upgrade clusters with a large number of nodes, using the default max surge settings can cause an upgrade to take several hours to complete. The completion process can take so long because the upgrade needs to churn through a large number of nodes. You can customize the max surge settings per node pool to enable a trade-off between upgrade speed and upgrade disruption. When you increase the max surge settings, the upgrade process completes faster, but you might experience disruptions during the upgrade process.
* We don't recommend upgrading a cluster with greater than 500 nodes with the default max surge configuration of one node. Instead, we recommend increasing the max surge settings to somewhere between 10 and 20 percent, with up to a maximum max surge of 500 nodes. Base these settings on your workload disruption tolerance. For more information, see [Customize node surge upgrade][max surge].
* For more cluster upgrade information, see [Upgrade an AKS cluster][cluster upgrades].

<!-- Links - External -->
[Managed NAT Gateway - Azure Kubernetes Service]: nat-gateway.md
[Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)]: configure-azure-cni-dynamic-ip-allocation.md
-[max surge]: upgrade-cluster.md?tabs=azure-cli#customize-node-surge-upgrade
+[max surge]: upgrade-aks-cluster.md#customize-node-surge-upgrade
[support-ticket]: https://portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22subId%22%3A+%22%22%2C%0D%0A%09%22pesId%22%3A+%225a3a423f-8667-9095-1770-0a554a934512%22%2C%0D%0A%09%22supportTopicId%22%3A+%2280ea0df7-5108-8e37-2b0e-9737517f0b96%22%2C%0D%0A%09%22contextInfo%22%3A+%22AksLabelDeprecationMarch22%22%2C%0D%0A%09%22caller%22%3A+%22Microsoft_Azure_ContainerService+%2B+AksLabelDeprecationMarch22%22%2C%0D%0A%09%22severity%22%3A+%223%22%0D%0A%7D
[standard-tier]: free-standard-pricing-tiers.md
[throttling-policies]: https://azure.microsoft.com/blog/api-management-advanced-caching-and-throttling-policies/
aks Stop Cluster Upgrade Api Breaking Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-cluster-upgrade-api-breaking-changes.md
+
+ Title: Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes (Preview)
+description: Learn how to stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes.
++ Last updated : 10/19/2023++
+# Stop Azure Kubernetes Service (AKS) cluster upgrades automatically on API breaking changes (Preview)
++
+To stay within a supported Kubernetes version, you have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and Container Storage Interface (CSI). It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
+
+AKS now automatically stops upgrade operations that consist of a minor version change if deprecated API usage is detected, and sends you an error message to alert you about the issue.
+
+## Before you begin
+
+Before you begin, make sure you meet the following prerequisites:
+
+* The upgrade operation is a Kubernetes minor version change for the cluster control plane.
+* The Kubernetes version you're upgrading to is 1.26 or later.
+* If you're using REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later.
+* If you're using the Azure CLI, you need the `aks-preview` CLI extension 0.5.154 or later. A sample install command follows this list.
+* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.
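If you need to install or update the `aks-preview` extension called out in the prerequisites, the following commands are a minimal sketch; confirm the reported version meets the minimum listed above.

```azurecli-interactive
# Install the aks-preview extension, or update it if it's already installed.
az extension add --name aks-preview
az extension update --name aks-preview

# Confirm the installed extension version.
az extension show --name aks-preview --query version
```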
+
+## Mitigate stopped upgrade operations
+
+If you meet the [prerequisites](#before-you-begin) and attempt an upgrade, you receive an error message similar to the following example:
+
+```output
+Bad Request({
+ "code": "ValidationError",
+ "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set enable-force-upgrade in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
+ "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
+})
+```
+
+You have two options to mitigate the issue. You can either [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).
+
+### Remove usage of deprecated APIs (recommended)
+
+1. In the Azure portal, navigate to your cluster's overview page, and select **Diagnose and solve problems**.
+
+2. Navigate to the **Create, Upgrade, Delete, and Scale** category, and select **Kubernetes API deprecations**.
+
+ :::image type="content" source="./media/upgrade-cluster/applens-api-detection-full-v2.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
+
+3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to know if it's a [watch][k8s-api].
+
+4. Retry your cluster upgrade.
+
+You can also check past API usage by enabling [Container Insights][container-insights] and exploring kube audit logs. Check the verb in the deprecated API usage to understand if it's a [watch][k8s-api] use case.
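As a supplementary spot check (a general Kubernetes technique rather than an AKS-specific step), you can also query the API server's deprecated API usage metric directly, assuming you have permission to read API server metrics:

```azurecli-interactive
# List deprecated API groups and versions the API server has recently served requests for.
kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis
```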
+
+### Bypass validation to ignore API changes
+
+> [!NOTE]
+> This method requires you to use the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version might not work long term. We recommend removing them as soon as possible after the upgrade completes.
+
+* Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command. Specify the `enable-force-upgrade` flag and set the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
+
+ ```azurecli-interactive
+ az aks update --name myAKSCluster --resource-group myResourceGroup --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
+ ```
+
+ > [!NOTE]
+ > `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
+
+## Next steps
+
+This article showed you how to stop AKS cluster upgrades automatically on API breaking changes. To learn more about more upgrade options for AKS clusters, see [Upgrade options for Azure Kubernetes Service (AKS) clusters](./upgrade-cluster.md).
+
+<!-- LINKS - external -->
+[k8s-api]: https://kubernetes.io/docs/reference/using-api/api-concepts/
+
+<!-- LINKS - internal -->
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
# Tutorial: Upgrade Kubernetes in Azure Kubernetes Service (AKS)
-As part of the application and cluster lifecycle, you may want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster using the Azure CLI, Azure PowerShell, or the Azure portal.
+As part of the application and cluster lifecycle, you might want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster using the Azure CLI, Azure PowerShell, or the Azure portal.
In this tutorial, part seven of seven, you learn how to:
If no upgrades are available, create a new cluster with a supported version of K
AKS nodes are carefully cordoned and drained to minimize any potential disruptions to running applications. During this process, AKS performs the following steps:
-* Adds a new buffer node (or as many nodes as configured in [max surge](./upgrade-cluster.md#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+* Adds a new buffer node (or as many nodes as configured in [max surge](./upgrade-aks-cluster.md#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
* [Cordons and drains][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it [cordons and drains][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
* When the old node is fully drained, it's reimaged to receive the new version and becomes the buffer node for the following node to be upgraded.
* This process repeats until all nodes in the cluster have been upgraded.
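While the upgrade runs, you can watch nodes move through this process with a general `kubectl` check (no AKS-specific parameters assumed):

```azurecli-interactive
# The VERSION column shows which nodes already run the new Kubernetes version;
# cordoned nodes appear with a SchedulingDisabled status.
kubectl get nodes -o wide
```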
It takes a few minutes to upgrade the cluster, depending on how many nodes you h
## View the upgrade events

> [!NOTE]
-> When you upgrade your cluster, the following Kubernetes events may occur on the nodes:
+> When you upgrade your cluster, the following Kubernetes events might occur on the nodes:
>
> * **Surge**: Create a surge node.
> * **Drain**: Evict pods from the node. Each pod has a *five minute timeout* to complete the eviction.
Confirm the upgrade was successful using the following steps:
## Delete the cluster
-As this tutorial is the last part of the series, you may want to delete your AKS cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster.
+As this tutorial is the last part of the series, you might want to delete your AKS cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster.
### [Azure CLI](#tab/azure-cli)
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
Last updated 03/01/2023
# Update or rotate the credentials for an Azure Kubernetes Service (AKS) cluster
-AKS clusters created with a service principal have a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You may also want to update, or rotate, the credentials as part of a defined security policy. AKS clusters [integrated with Microsoft Entra ID][aad-integration] as an authentication provider have two more identities: the Microsoft Entra Server App and the Microsoft Entra Client App. This article details how to update the service principal and Microsoft Entra credentials for an AKS cluster.
+AKS clusters created with a service principal have a one-year expiration time. As you near the expiration date, you can reset the credentials to extend the service principal for an additional period of time. You might also want to update, or rotate, the credentials as part of a defined security policy. AKS clusters [integrated with Microsoft Entra ID][aad-integration] as an authentication provider have two more identities: the Microsoft Entra Server App and the Microsoft Entra Client App. This article details how to update the service principal and Microsoft Entra credentials for an AKS cluster.
> [!NOTE] > Alternatively, you can use a managed identity for permissions instead of a service principal. Managed identities don't require updates or rotations. For more information, see [Use managed identities](use-managed-identity.md).
When you want to update the credentials for an AKS cluster, you can choose to ei
* Create a new service principal and update the cluster to use these new credentials.

> [!WARNING]
-> If you choose to create a *new* service principal, wait around 30 minutes for the service principal permission to propagate across all regions. Updating a large AKS cluster to use these credentials may take a long time to complete.
+> If you choose to create a *new* service principal, wait around 30 minutes for the service principal permission to propagate across all regions. Updating a large AKS cluster to use these credentials can take a long time to complete.
### Check the expiration date of your service principal
Next, you [update AKS cluster with the new service principal credential][update-
## Update AKS cluster with service principal credentials

> [!IMPORTANT]
->For large clusters, updating your AKS cluster with a new service principal may take a long time to complete. Consider reviewing and customizing the [node surge upgrade settings][node-surge-upgrade] to minimize disruption during the update. For small and midsize clusters, it takes a several minutes for the new credentials to update in the cluster.
+> For large clusters, updating your AKS cluster with a new service principal can take a long time to complete. Consider reviewing and customizing the [node surge upgrade settings][node-surge-upgrade] to minimize disruption during the update. For small and midsize clusters, it takes several minutes for the new credentials to update in the cluster.
Update the AKS cluster with your new or existing credentials by running the [`az aks update-credentials`][az-aks-update-credentials] command.
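A minimal sketch of that command follows; `SP_ID` and `SP_SECRET` are placeholders for the service principal application ID and client secret from the earlier steps.

```azurecli-interactive
# Update the cluster to use the new or reset service principal credentials.
az aks update-credentials \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --reset-service-principal \
    --service-principal "$SP_ID" \
    --client-secret "$SP_SECRET"
```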
In this article, you learned how to update or rotate service principal and Micro
[az-ad-app-credential-list]: /cli/azure/ad/app/credential#az_ad_app_credential_list
[az-ad-app-credential-reset]: /cli/azure/ad/app/credential#az_ad_app_credential_reset
[node-image-upgrade]: ./node-image-upgrade.md
-[node-surge-upgrade]: upgrade-cluster.md#customize-node-surge-upgrade
+[node-surge-upgrade]: upgrade-aks-cluster.md#customize-node-surge-upgrade
[update-cluster-service-principal-credentials]: #update-aks-cluster-with-service-principal-credentials
[reset-existing-service-principal-credentials]: #reset-the-existing-service-principal-credentials
aks Upgrade Aks Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-aks-cluster.md
+
+ Title: Upgrade an Azure Kubernetes Service (AKS) cluster
+description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.
++ Last updated : 10/19/2023++
+# Upgrade an Azure Kubernetes Service (AKS) cluster
+
+Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases and upgrades to get the latest features. This article shows you how to check for and apply upgrades to your AKS cluster.
+
+## Kubernetes version upgrades
+
+When you upgrade a supported AKS cluster, you can't skip Kubernetes minor versions. You must perform all upgrades sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed. *1.14.x* -> *1.16.x* isn't allowed. You can only skip multiple versions when upgrading from an *unsupported version* back to a *supported version*. For example, you can perform an upgrade from an unsupported *1.10.x* to a supported *1.12.x* if available.
+
+When you perform an upgrade from an *unsupported version* that skips two or more minor versions, the upgrade has no guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, we recommend you recreate your cluster instead.
+
+## Before you begin
+
+* If you're using the Azure CLI, this article requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* If you're using Azure PowerShell, this article requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see the [Azure resource provider operations][azure-rp-operations].
+
+> [!WARNING]
+> An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade might fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md).
+
+## Check for available AKS cluster upgrades
+
+> [!NOTE]
+> To stay up to date with AKS fixes, releases, and updates, see the [AKS release tracker][release-tracker].
+
+### [Azure CLI](#tab/azure-cli)
+
+* Check which Kubernetes releases are available for your cluster using the [`az aks get-upgrades`][az-aks-get-upgrades] command.
+
+ ```azurecli-interactive
+ az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
+ ```
+
+ The following example output shows the current version as *1.26.6* and lists the available versions under `upgrades`:
+
+ ```output
+ {
+ "agentPoolProfiles": null,
+ "controlPlaneProfile": {
+ "kubernetesVersion": "1.26.6",
+ ...
+ "upgrades": [
+ {
+ "isPreview": null,
+ "kubernetesVersion": "1.27.1"
+ },
+ {
+ "isPreview": null,
+ "kubernetesVersion": "1.27.3"
+ }
+ ]
+ },
+ ...
+ }
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+* Check which Kubernetes releases are available for your cluster and the region in which it resides using the [`Get-AzAksVersion`][get-azaksversion] cmdlet.
+
+ ```azurepowershell-interactive
+ Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion
+ ```
+
+ The following example output shows the available versions under `OrchestratorVersion`:
+
+ ```output
+ Default IsPreview OrchestratorType OrchestratorVersion
+    -------   ---------   ----------------   -------------------
+ Kubernetes 1.27.1
+ Kubernetes 1.27.3
+ ```
+
+### [Azure portal](#tab/azure-portal)
+
+Check which Kubernetes releases are available for your cluster using the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**.
+5. In **Kubernetes version**, select the version to check for available upgrades.
+
+The Azure portal highlights all the deprecated APIs between your current version and new available versions you intend to migrate to. For more information, see [the Kubernetes API Removal and Deprecation process][k8s-deprecation].
++++
+## Troubleshoot AKS cluster upgrade error messages
+
+### [Azure CLI](#tab/azure-cli)
+
+The following example output means the `appservice-kube` extension isn't compatible with your Azure CLI version (a minimum of version 2.34.1 is required):
+
+```output
+The 'appservice-kube' extension is not compatible with this version of the CLI.
+You have CLI core version 2.0.81 and this extension requires a min of 2.34.1.
+Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
+```
+
+If you receive this output, you need to update your Azure CLI version. The `az upgrade` command was added in version 2.11.0 and doesn't work with versions prior to 2.11.0. You can update older versions by reinstalling Azure CLI as described in [Install the Azure CLI](/cli/azure/install-azure-cli). If your Azure CLI version is 2.11.0 or later, run `az upgrade` to upgrade Azure CLI to the latest version.
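The update path described above boils down to two commands, assuming Azure CLI 2.11.0 or later is already installed:

```azurecli-interactive
# Check the currently installed Azure CLI version, then upgrade to the latest release.
az version
az upgrade
```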
+
+If your Azure CLI is updated and you receive the following example output, it means that no upgrades are available:
+
+```output
+ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
+```
+
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
+
+### [Azure portal](#tab/azure-portal)
+
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
+++
+## Upgrade an AKS cluster
+
+During the cluster upgrade process, AKS performs the following operations:
+
+* Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+* [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it [cordons and drains][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
+* When the old node is fully drained, it's reimaged to receive the new version and becomes the buffer node for the following node to be upgraded.
+* This process repeats until all nodes in the cluster have been upgraded.
+* At the end of the process, the last buffer node is deleted, maintaining the existing agent node count and zone balance.
++
+### [Azure CLI](#tab/azure-cli)
+
+1. Upgrade your cluster using the [`az aks upgrade`][az-aks-upgrade] command.
+
+ ```azurecli-interactive
+ az aks upgrade \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --kubernetes-version <KUBERNETES_VERSION>
+ ```
+
+2. Confirm the upgrade was successful using the [`az aks show`][az-aks-show] command.
+
+ ```azurecli-interactive
+ az aks show --resource-group myResourceGroup --name myAKSCluster --output table
+ ```
+
+ The following example output shows that the cluster now runs *1.27.3*:
+
+ ```output
+ Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
+    ------------  ----------  ---------------  -------------------  -------------------  ----------------------------------------------
+ myAKSCluster eastus myResourceGroup 1.27.3 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Upgrade your cluster using the [`Set-AzAksCluster`][set-azakscluster] command.
+
+ ```azurepowershell-interactive
+ Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
+ ```
+
+2. Confirm the upgrade was successful using the [`Get-AzAksCluster`][get-azakscluster] command.
+
+ ```azurepowershell-interactive
+ Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Format-Table -Property Name, Location, KubernetesVersion, ProvisioningState, Fqdn
+ ```
+
+ The following example output shows that the cluster now runs *1.27.3*:
+
+ ```output
+ Name Location KubernetesVersion ProvisioningState Fqdn
+    ------------  --------  -----------------  -----------------  ----------------------------------------------
+ myAKSCluster eastus 1.27.3 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+ ```
+
+### [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**.
+5. In **Kubernetes version**, select your desired version and then select **Save**.
+6. Navigate to your AKS cluster **Overview** page, and select the **Kubernetes version** to confirm the upgrade was successful.
+++
+### Set auto-upgrade channel
+
+You can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
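As a sketch, enabling a channel on an existing cluster uses `az aks update`. The channel name shown here (`patch`) is illustrative; see the linked article for the full list of channels.

```azurecli-interactive
# Automatically apply the latest supported patch release for the current minor version.
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel patch
```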
+
+### Customize node surge upgrade
+
+> [!IMPORTANT]
+>
+> * Node surges require subscription quota for the requested max surge count for each upgrade operation. For example, a cluster that has five node pools, each with a count of four nodes, has a total of 20 nodes. If each node pool has a max surge value of 50%, additional compute and IP quota of 10 nodes (2 nodes * 5 pools) is required to complete the upgrade.
+>
+> * The max surge setting on a node pool is persistent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You can change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%.
+>
+> * If you're using Azure CNI, validate there are available IPs in the subnet to [satisfy IP requirements of Azure CNI](configure-azure-cni.md).
+
+AKS configures upgrades to surge with one extra node by default. A default value of *one* for the max surge settings enables AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. You can customize the max surge value per node pool. When you increase the max surge value, the upgrade process completes faster, but you might experience disruptions during the upgrade process.
+
+For example, a max surge value of *100%* provides the fastest possible upgrade process, but also causes all nodes in the node pool to be drained simultaneously. You might want to use a higher value such as this for testing environments. For production node pools, we recommend a `max_surge` setting of *33%*.
+
+AKS accepts both integer values and a percentage value for max surge. An integer such as *5* indicates five extra nodes to surge. A value of *50%* indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of *1%* and a maximum of *100%*. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value. During an upgrade, the max surge value can be a minimum of *1* and a maximum value equal to the number of nodes in your node pool. You can set larger values, but you can't set the maximum number of nodes used for max surge higher than the number of nodes in the pool at the time of upgrade.
+
+#### Set max surge value
+
+* Set max surge values for new or existing node pools using the [`az aks nodepool add`][az-aks-nodepool-add] or [`az aks nodepool update`][az-aks-nodepool-update] command.
+
+ ```azurecli-interactive
+ # Set max surge for a new node pool
+ az aks nodepool add -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 33%
+
+ # Update max surge for an existing node pool
+ az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 5
+ ```
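    To confirm the value took effect, you can read it back from the node pool's upgrade settings (a sketch using the same placeholder names as above).

    ```azurecli-interactive
    # Show the max surge value currently configured on the node pool.
    az aks nodepool show -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --query upgradeSettings.maxSurge
    ```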
+
+## View upgrade events
+
+* View upgrade events using the `kubectl get events` command.
+
+ ```azurecli-interactive
+ kubectl get events
+ ```
+
+ The following example output shows some of the above events listed during an upgrade:
+
+ ```output
+ ...
+ default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
+ ...
+ default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
+ ...
+ ```
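    If the default event listing is noisy during a long upgrade, sorting by timestamp (a general `kubectl` option, not an AKS-specific one) can make the surge and drain sequence easier to follow.

    ```azurecli-interactive
    # Show events ordered by when they last occurred.
    kubectl get events --sort-by='.lastTimestamp'
    ```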
+
+## Next steps
+
+To learn how to configure automatic upgrades, see [Configure automatic upgrades for an AKS cluster][configure-automatic-aks-upgrades].
+
+<!-- LINKS - internal -->
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
+[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
+[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
+[aks-auto-upgrade]: auto-upgrade-cluster.md
+[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
+[azure-rp-operations]: ../role-based-access-control/built-in-roles.md#containers
+[get-azaksversion]: /powershell/module/az.aks/get-azaksversion
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update
+[configure-automatic-aks-upgrades]: ./upgrade-cluster.md#configure-automatic-upgrades
+[release-tracker]: release-tracker.md
+
+<!-- LINKS - external -->
+[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster
-description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.
+ Title: Upgrade options for Azure Kubernetes Service (AKS) clusters
+description: Learn the different ways to upgrade an Azure Kubernetes Service (AKS) cluster.
Previously updated : 10/16/2023 Last updated : 10/19/2023
-# Upgrade an Azure Kubernetes Service (AKS) cluster
+# Upgrade options for Azure Kubernetes Service (AKS) clusters
-Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to check for, configure, and apply upgrades to your AKS cluster.
+This article shares different upgrade options for AKS clusters. To perform a basic Kubernetes version upgrade, see [Upgrade an AKS cluster](./upgrade-aks-cluster.md).
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without performing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
-> [!NOTE]
-> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+## Perform manual upgrades
-## Kubernetes version upgrades
+You can perform manual upgrades to control when your cluster upgrades to a new Kubernetes version. Manual upgrades are useful when you want to test a new Kubernetes version before upgrading your production cluster. You can also use manual upgrades to upgrade your cluster to a specific Kubernetes version that isn't the latest available version.
-When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. You must perform all upgrades sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* isn't allowed.
+To perform manual upgrades, see the following articles:
-Skipping multiple versions can only be done when upgrading from an *unsupported version* back to a *supported version*. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an *unsupported version* that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, we recommend you recreate your cluster.
+* [Upgrade an AKS cluster](./upgrade-aks-cluster.md)
+* [Upgrade the node image](./node-image-upgrade.md)
+* [Customize node surge upgrade](./upgrade-aks-cluster.md#customize-node-surge-upgrade)
+* [Process node OS updates](./node-updates-kured.md)
-> [!NOTE]
-> Any upgrade operation, whether performed manually or automatically, upgrades the node image version if not already using the latest version. The latest version is contingent on a full AKS release and can be determined by visiting the [AKS release tracker][release-tracker].
+## Configure automatic upgrades
-> [!IMPORTANT]
-> An upgrade operation might fail if you made customizations to AKS agent nodes. For more information see our [Support policy][support-policy-user-customizations-agent-nodes].
+You can configure automatic upgrades so your cluster automatically upgrades to the latest available Kubernetes version. Automatic upgrades are useful when you want to ensure your cluster always runs a current, supported Kubernetes version without manual intervention.
-## Before you begin
+To configure automatic upgrades, see the following articles:
-* If you use the Azure CLI, you need Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* If you use Azure PowerShell, you need Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more information, see [Create custom roles][azure-rbac-provider-operations].
-
-> [!WARNING]
-> An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md).
-
-## Check for available AKS cluster upgrades
-
-### [Azure CLI](#tab/azure-cli)
-
-Check which Kubernetes releases are available for your cluster using the [`az aks get-upgrades`][az-aks-get-upgrades] command.
-
-```azurecli-interactive
-az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
-```
-
-The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
-
-```output
-Name ResourceGroup MasterVersion Upgrades
-- --
-default myResourceGroup 1.18.10 1.19.1, 1.19.3
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Check which Kubernetes releases are available for your cluster using [`Get-AzAksUpgradeProfile`][get-azaksupgradeprofile] command.
-
-```azurepowershell-interactive
-Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
-Select-Object -Property Name, ControlPlaneProfileKubernetesVersion -ExpandProperty ControlPlaneProfileUpgrade |
-Format-Table -Property *
-```
-
-The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
-
-```output
-Name ControlPlaneProfileKubernetesVersion IsPreview KubernetesVersion
-- --
-default 1.18.10 1.19.1
-default 1.18.10 1.19.3
-```
-
-### [Azure portal](#tab/azure-portal)
-
-Check which Kubernetes releases are available for your cluster using the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to your AKS cluster.
-3. Under **Settings**, select **Cluster configuration**.
-4. In **Kubernetes version**, select **Upgrade version**.
-5. In **Kubernetes version**, select the version to check for available upgrades.
-
-The Azure portal highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see [the Kubernetes API Removal and Deprecation process][k8s-deprecation].
----
-### Troubleshoot AKS cluster upgrade error messages
-
-### [Azure CLI](#tab/azure-cli)
-
-The following example output means the `appservice-kube` extension isn't compatible with your Azure CLI version (a minimum of version 2.34.1 is required):
-
-```output
-The 'appservice-kube' extension is not compatible with this version of the CLI.
-You have CLI core version 2.0.81 and this extension requires a min of 2.34.1.
-Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
-```
-
-If you receive this output, you need to update your Azure CLI version. The `az upgrade` command was added in version 2.11.0 and doesn't work with versions prior to 2.11.0. You can update older versions by reinstalling Azure CLI as described in [Install the Azure CLI](/cli/azure/install-azure-cli). If your Azure CLI version is 2.11.0 or later, you receive a message to run `az upgrade` to upgrade Azure CLI to the latest version.
-
-If your Azure CLI is updated and you receive the following example output, it means that no upgrades are available:
-
-```output
-ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
-```
-
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
-
-### [Azure portal](#tab/azure-portal)
-
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
---
-## Upgrade an AKS cluster
-
-During the cluster upgrade process, AKS performs the following operations:
-
-* Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
-* [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it [cordons and drains][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
-* When the old node is fully drained, it's reimaged to receive the new version and becomes the buffer node for the following node to be upgraded.
-* This process repeats until all nodes in the cluster have been upgraded.
-* At the end of the process, the last buffer node is deleted, maintaining the existing agent node count and zone balance.
--
-> [!IMPORTANT]
-> Ensure that any `PodDisruptionBudgets` (PDBs) allow for at least *one* pod replica to be moved at a time otherwise the drain/evict operation will fail.
-> If the drain operation fails, the upgrade operation will fail by design to ensure that the applications are not disrupted. Please correct what caused the operation to stop (incorrect PDBs, lack of quota, and so on) and re-try the operation.
-
-### [Azure CLI](#tab/azure-cli)
-
-1. Upgrade your cluster using the [`az aks upgrade`][az-aks-upgrade] command.
-
- ```azurecli-interactive
- az aks upgrade \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --kubernetes-version KUBERNETES_VERSION
- ```
-
-2. Confirm the upgrade was successful using the [`az aks show`][az-aks-show] command.
-
- ```azurecli-interactive
- az aks show --resource-group myResourceGroup --name myAKSCluster --output table
- ```
-
- The following example output shows that the cluster now runs *1.19.1*:
-
- ```output
- Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
- - - - -
- myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
- ```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-1. Upgrade your cluster using the [`Set-AzAksCluster`][set-azakscluster] command.
-
- ```azurepowershell-interactive
- Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
- ```
-
-2. Confirm the upgrade was successful using the [`Get-AzAksCluster`][get-azakscluster] command.
-
- ```azurepowershell-interactive
- Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
- Format-Table -Property Name, Location, KubernetesVersion, ProvisioningState, Fqdn
- ```
-
- The following example output shows that the cluster now runs *1.19.1*:
-
- ```output
- Name Location KubernetesVersion ProvisioningState Fqdn
- - -- -- -- -
- myAKSCluster eastus 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
- ```
-
-### [Azure portal](#tab/azure-portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to your AKS cluster.
-3. Under **Settings**, select **Cluster configuration**.
-4. In **Kubernetes version**, select **Upgrade version**.
-5. In **Kubernetes version**, select your desired version and then select **Save**.
-6. Navigate to your AKS cluster **Overview** page, and select the **Kubernetes version** to confirm the upgrade was successful.
-
-The Azure portal highlights all the deprecated APIs between your current and newer version, and available versions you intend to migrate to. For more information, see [the Kubernetes API removal and deprecation process][k8s-deprecation].
----
-## View the upgrade events
-
-When you upgrade your cluster, the following Kubernetes events may occur on each node:
-
-* **Surge**: Creates a surge node.
-* **Drain**: Evicts pods from the node. Each pod has a 30-second timeout to complete the eviction.
-* **Update**: Update of a node succeeds or fails.
-* **Delete**: Deletes a surge node.
-
-Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
-
-```azurecli-interactive
-kubectl get events
-```
-
-The following example output shows some of the above events listed during an upgrade.
-
-```output
-...
-default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
-...
-default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
-...
-```
-
-## Stop cluster upgrades automatically on API breaking changes
-
-To stay within a supported Kubernetes version, you usually have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
-
-AKS automatically stops upgrade operations consisting of a minor version change if deprecated APIs are detected. This feature alerts you with an error message if it detects usage of APIs that are deprecated in the targeted version.
-
-All of the following criteria must be met in order for the stop to occur:
-
-* The upgrade operation is a Kubernetes minor version change for the cluster control plane.
-* The Kubernetes version you're upgrading to is 1.26 or later
-* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.
-* Even API usage that is actually watching for deprecated resources is covered here. Look at the [Verb][k8s-api] for the distinction.
-
-### Mitigating stopped upgrade operations
-
-If you attempt an upgrade and all of the previous criteria are met, you receive an error message similar to the following example error message:
-
-```output
-Bad Request({
- "code": "ValidationError",
- "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set enable-force-upgrade in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
- "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
-})
-```
-
-After receiving the error message, you have two options to mitigate the issue. You can either [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).
-
-### Remove usage of deprecated APIs (recommended)
-
-1. In the Azure portal, navigate to your cluster's overview page, and select **Diagnose and solve problems**.
-
-2. Navigate to the **Create, Upgrade, Delete and Scale** category, and select **Kubernetes API deprecations**.
-
- :::image type="content" source="./media/upgrade-cluster/applens-api-detection-full-v2.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
-
-3. Wait 12 hours from the time the last deprecated API usage was seen. Check the verb in the deprecated API usage to determine whether it's a [watch][k8s-api].
-
-4. Retry your cluster upgrade.
-
-You can also check past API usage by enabling [Container Insights][container-insights] and exploring the kube audit logs. Check the verb in the deprecated API usage to understand whether it's a [watch][k8s-api] use case.
-
-### Bypass validation to ignore API changes
-
-> [!NOTE]
-> This method requires Azure CLI version 2.53 or later. It isn't recommended, because deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing their usage as soon as possible after the upgrade completes.
-
-Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command, specifying `enable-force-upgrade`, and setting the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, it defaults the window to three days from the current time. The date and time you specify must be in the future.
-
-```azurecli-interactive
-az aks update --name myAKSCluster --resource-group myResourceGroup --enable-force-upgrade --upgrade-override-until 2023-10-01T13:00:00Z
-```
-
-> [!NOTE]
-> `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
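After setting the override, you can confirm it on the cluster resource. This is a sketch that assumes the override surfaces under `upgradeSettings.overrideSettings`, the property path referenced in the error message shown earlier:

```azurecli-interactive
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query upgradeSettings.overrideSettings
```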
-
-## Customize node surge upgrade
-
-> [!IMPORTANT]
->
-> Node surges require subscription quota for the requested max surge count for each upgrade operation. For example, a cluster that has five node pools, each with a count of four nodes, has a total of 20 nodes. If each node pool has a max surge value of 50%, additional compute and IP quota of 10 nodes (2 nodes * 5 pools) is required to complete the upgrade.
->
-> The max surge setting on a node pool is persistent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You may change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%.
->
-> If you're using Azure CNI, validate there are available IPs in the subnet to [satisfy IP requirements of Azure CNI](configure-azure-cni.md).
-
-By default, AKS configures upgrades to surge with one extra node. A default value of one for the max surge settings enables AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value can be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. When you increase the max surge value, the upgrade process completes faster. If you set a large value for max surge, you might experience disruptions during the upgrade process.
-
-For example, a max surge value of *100%* provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You might want to use a higher value such as this for testing environments. For production node pools, we recommend a `max_surge` setting of *33%*.
-
-AKS accepts both integer values and a percentage value for max surge. An integer such as *5* indicates five extra nodes to surge. A value of *50%* indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of *1%* and a maximum of *100%*. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value.
-
-During an upgrade, the max surge value can be a minimum of *1* and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge isn't higher than the number of nodes in the pool at the time of upgrade.
-
-### Set max surge values
-
-Set max surge values for new or existing node pools using the following commands:
-
-```azurecli-interactive
-# Set max surge for a new node pool
-az aks nodepool add -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 33%
-
-# Update max surge for an existing node pool
-az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 5
-```
-
-## Set auto-upgrade channel
-
-You can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
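As a quick illustration, the channel can be set with the `--auto-upgrade-channel` parameter on `az aks update`. This is a sketch using the cluster and resource group names from the earlier examples:

```azurecli-interactive
az aks update \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --auto-upgrade-channel stable
```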
+* [Automatically upgrade an AKS cluster](./auto-upgrade-cluster.md)
+* [Use Planned Maintenance to schedule and control upgrades for your AKS cluster](./planned-maintenance.md)
+* [Stop AKS cluster upgrades automatically on API breaking changes (Preview)](./stop-cluster-upgrade-api-breaking-changes.md)
+* [Automatically upgrade AKS cluster node operating system images](./auto-upgrade-node-image.md)
+* [Apply security updates to AKS nodes automatically using GitHub Actions](./node-upgrade-github-actions.md)
## Special considerations for node pools that span multiple availability zones AKS uses best-effort zone balancing in node groups. During an upgrade surge, the zones for the surge nodes in Virtual Machine Scale Sets are unknown ahead of time, which can temporarily cause an unbalanced zone configuration during an upgrade. However, AKS deletes surge nodes once the upgrade completes and preserves the original zone balance. If you want to keep your zones balanced during upgrades, you can increase the surge to a multiple of *three nodes*, and Virtual Machine Scale Sets balances your nodes across availability zones with best-effort zone balancing.
-If you have PVCs backed by Azure LRS Disks, they'll be bound to a particular zone. They may fail to recover immediately if the surge node doesn't match the zone of the PVC. This could cause downtime on your application when the upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application to allow Kubernetes to respect your availability requirements during the drain operation.
+Persistent volume claims (PVCs) backed by Azure locally redundant storage (LRS) Disks are bound to a particular zone and might fail to recover immediately if the surge node doesn't match the zone of the PVC. If the zones don't match, it can cause downtime on your application when the upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application to allow Kubernetes to respect your availability requirements during the drain operation.
## Optimize upgrades to improve performance and minimize disruptions
-The combination of [Planned Maintenance Window][planned-maintenance], [Max Surge](#customize-node-surge-upgrade), and [Pod Disruption Budget][pdb-spec] can significantly increase the likelihood of node upgrades completing successfully by the end of the maintenance window while also minimizing disruptions.
+The combination of [Planned Maintenance Window][planned-maintenance], [Max Surge](./upgrade-aks-cluster.md#customize-node-surge-upgrade), and [Pod Disruption Budget][pdb-spec] can significantly increase the likelihood of node upgrades completing successfully by the end of the maintenance window while also minimizing disruptions.
-* [Planned Maintenance Window][planned-maintenance] enables service teams to schedule auto-upgrade during a pre-defined window, typically a low-traffic period, to minimize workload impact. A window duration of at least 4 hours is recommended.
-* Max Surge on the node pool allows requesting additional quota during the upgrade process and limits the number of nodes selected for upgrade simultaneously. A higher max surge results in a faster upgrade process. However, setting it at 100% is not recommended as it would upgrade all nodes simultaneously, potentially causing disruptions to running applications. A max surge quota of 33% for production node pools is recommended.
-* [Pod Disruption Budget][pdb-spec] is set for service applications and limits the number of pods that can be down during voluntary disruptions, such as AKS-controlled node upgrades. It can be configured as `minAvailable` replicas, indicating the minimum number of application pods that need to be active, or `maxUnavailable` replicas, indicating the maximum number of application pods that can be terminated, ensuring high availability for the application. Refer to the guidance provided for configuring [Pod Disruption Budgets (PDBs)][pdb-concepts]. PDB values should be validated to determine the settings that work best for your specific service.
+* [Planned Maintenance Window][planned-maintenance] enables service teams to schedule auto-upgrade during a pre-defined window, typically a low-traffic period, to minimize workload impact. We recommend a window duration of at least *four hours*.
+* [Max Surge](./upgrade-aks-cluster.md#customize-node-surge-upgrade) on the node pool allows requesting extra quota during the upgrade process and limits the number of nodes selected for upgrade simultaneously. A higher max surge results in a faster upgrade process. We don't recommend setting it at 100%, as it upgrades all nodes simultaneously, which can cause disruptions to running applications. We recommend a max surge quota of *33%* for production node pools.
+* [Pod Disruption Budget][pdb-spec] is set for service applications and limits the number of pods that can be down during voluntary disruptions, such as AKS-controlled node upgrades. It can be configured as `minAvailable` replicas, indicating the minimum number of application pods that need to be active, or `maxUnavailable` replicas, indicating the maximum number of application pods that can be terminated, ensuring high availability for the application. Refer to the guidance provided for configuring [Pod Disruption Budgets (PDBs)][pdb-concepts]. PDB values should be validated to determine the settings that work best for your specific service. A minimal example of creating a PDB is shown after this list.
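As a minimal sketch of the last point, the following command creates a PDB for a hypothetical application labeled `app=myapp`, requiring at least two replicas to stay available during a drain:

```azurecli-interactive
kubectl create poddisruptionbudget myapp-pdb \
    --selector=app=myapp \
    --min-available=2
```

Validate the value against your replica count so that node drains can still make progress during the upgrade.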
## Next steps
-This article showed you how to upgrade an existing AKS cluster. To learn more about deploying and managing AKS clusters, see the following tutorials:
+This article listed different upgrade options for AKS clusters. To learn more about deploying and managing AKS clusters, see the following tutorial:
> [!div class="nextstepaction"] > [AKS tutorials][aks-tutorial-prepare-app] <!-- LINKS - external -->
-[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
[pdb-spec]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ [pdb-concepts]:https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets <!-- LINKS - internal --> [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
-[azure-rbac-provider-operations]: manage-azure-rbac.md#create-custom-roles-definitions
-[azure-cli-install]: /cli/azure/install-azure-cli
-[azure-powershell-install]: /powershell/azure/install-az-ps
-[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
-[get-azaksupgradeprofile]: /powershell/module/az.aks/get-azaksupgradeprofile
-[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
-[az-aks-update]: /cli/azure/aks#az_aks_update
-[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
[nodepool-upgrade]: manage-node-pools.md#upgrade-a-single-node-pool [planned-maintenance]: planned-maintenance.md
-[aks-auto-upgrade]: auto-upgrade-cluster.md
-[release-tracker]: release-tracker.md
[specific-nodepool]: node-image-upgrade.md#upgrade-a-specific-node-pool
-[k8s-deprecation]: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#:~:text=A%20deprecated%20API%20is%20one%20that%20has%20been,point%20you%20must%20migrate%20to%20using%20the%20replacement
-[k8s-api]: https://kubernetes.io/docs/reference/using-api/api-concepts/
-[container-insights]:/azure/azure-monitor/containers/container-insights-log-query#resource-logs
-[support-policy-user-customizations-agent-nodes]: support-policies.md#user-customization-of-agent-nodes
-[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
aks Use Group Managed Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-group-managed-service-accounts.md
Last updated 08/30/2023
## Prerequisites
-* Kubernetes 1.19 or greater. To check your version, see [Check for available upgrades](./upgrade-cluster.md#check-for-available-aks-cluster-upgrades). To upgrade your version, see [Upgrade AKS cluster](./upgrade-cluster.md#upgrade-an-aks-cluster).
+* Kubernetes 1.19 or greater. To check your version, see [Check for available upgrades](./upgrade-aks-cluster.md#check-for-available-aks-cluster-upgrades). To upgrade your version, see [Upgrade AKS cluster](./upgrade-aks-cluster.md).
* Azure CLI version 2.35.0 or greater. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). * [Managed identities][aks-managed-id] enabled on your AKS cluster. * Permissions to create or update an Azure Key Vault.
api-management Stv1 Platform Retirement August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/stv1-platform-retirement-august-2024.md
Title: Azure API Management - stv1 platform retirement (August 2024) | Microsoft Docs
-description: Azure API Management is retiring the stv1 compute platform effective 31 August 2024. If your API Management instance is hosted on the stv1 platform, you must migrate to the stv2 platform.
+description: Azure API Management will retire the stv1 compute platform effective 31 August 2024. Instances hosted on the stv1 platform must be migrated to the stv2 platform.
documentationcenter: '' Previously updated : 01/10/2023 Last updated : 10/19/2023
The following table summarizes the compute platforms currently used for instance
| Version | Description | Architecture | Tiers | | -| -| -- | - |
-| `stv2` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium<sup>1</sup> |
+| `stv2`, `stv2.1` | Single-tenant v2 | Azure-allocated compute infrastructure that supports availability zones, private endpoints | Developer, Basic, Standard, Premium |
| `stv1` | Single-tenant v1 | Azure-allocated compute infrastructure | Developer, Basic, Standard, Premium | | `mtv1` | Multi-tenant v1 | Shared infrastructure that supports native autoscaling and scaling down to zero in times of no traffic | Consumption |
-To take advantage of upcoming features, we're recommending that customers migrate their Azure API Management instances from the `stv1` compute platform to the `stv2` compute platform. The `stv2` compute platform comes with additional features and improvements such as support for Azure Private Link and other networking features.
+For continued support and to take advantage of upcoming features, customers must migrate their Azure API Management instances from the `stv1` compute platform to the `stv2` compute platform. The `stv2` compute platform comes with additional features and improvements such as support for Azure Private Link and other networking features.
New instances created in service tiers other than the Consumption tier are mostly hosted on the `stv2` platform already. Existing instances on the `stv1` compute platform will continue to work normally until the retirement date, but those instances won't receive the latest features available to the `stv2` platform. Support for `stv1` instances will be retired by 31 August 2024. ## Is my service affected by this?
-If the value of the `platformVersion` property of your service is `stv1`, it is hosted on the `stv1` platform. See [How do I know which platform hosts my API Management instance?](../compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance)
+If the value of the `platformVersion` property of your service is `stv1`, it's hosted on the `stv1` platform. See [How do I know which platform hosts my API Management instance?](../compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance)
## What is the deadline for the change? Support for API Management instances hosted on the `stv1` platform will be retired by 31 August 2024.
-After 31 August 2024, any instance hosted on the `stv1` platform won't be supported, and could experience system outages.
+> [!WARNING]
+> * After 31 August 2024, any instance hosted on the `stv1` platform will be shut down, and the instance won't respond to API requests.
+> * Data from a shut-down instance will be backed up by Azure. The owner may trigger restoration of the instance on the `stv2` platform, but the instance will remain shut down until then.
+ ## What do I need to do? **Migrate all your existing instances hosted on the `stv1` compute platform to the `stv2` compute platform by 31 August 2024.**
-If you have existing instances hosted on the `stv1` platform, you can follow our [migration guide](../migrate-stv1-to-stv2.md) which provides all the details to ensure a successful migration.
-
-## Help and support
-
-If you have questions, get answers from community experts in [Microsoft Q&A](https://aka.ms/apim/retirement/stv1). If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+If you have existing instances hosted on the `stv1` platform, follow our [migration guide](../migrate-stv1-to-stv2.md) to ensure a successful migration.
-1. For **Summary**, type a description of your issue, for example, "stv1 retirement".
-1. Under **Issue type**, select **Technical**.
-1. Under **Subscription**, select your subscription.
-1. Under **Service**, select **My services**, then select **API Management Service**.
-1. Under **Resource**, select the Azure resource that you're creating a support request for.
-1. For **Problem type**, select **Administration and Management**.
-1. For **Problem subtype**, select **Upgrade, Scale or SKU Changes**.
-## Next steps
+## Related content
See all [upcoming breaking changes and feature retirements](overview.md).
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
Title: Migrate Azure API Management instance to stv2 platform | Microsoft Docs
-description: Follow these steps to migrate your Azure API Management instance from the stv1 compute platform to the stv2 compute platform. Migration steps depend on whether the instance is deployed (injected) in a VNet.
+description: Migrate your Azure API Management instance from the stv1 compute platform to the stv2 platform. Migration steps depend on whether the instance is injected in a VNet.
Previously updated : 07/31/2023 Last updated : 10/18/2023
You can migrate an API Management instance hosted on the `stv1` compute platform to the `stv2` platform. This article provides migration steps for two scenarios, depending on whether or not your API Management instance is currently deployed (injected) in an [external](api-management-using-with-vnet.md) or [internal](api-management-using-with-internal-vnet.md) VNet.
-* **Non-VNet-injected API Management instance** - Use the [Migrate to stv2](/rest/api/apimanagement/current-ga/api-management-service/migratetostv2) REST API
+* [**Scenario 1: Non-VNet-injected API Management instance**](#scenario-1-migrate-api-management-instance-not-injected-in-a-vnet) - Migrate your instance using the portal or the [Migrate to stv2](/rest/api/apimanagement/current-ga/api-management-service/migratetostv2) REST API.
-* **VNet-injected API Management instance** - Manually update the VNet configuration settings
+* [**Scenario 2: VNet-injected API Management instance**](#scenario-2-migrate-a-network-injected-api-management-instance) - Migrate your instance by manually updating the VNet configuration settings
For more information about the `stv1` and `stv2` platforms and the benefits of using the `stv2` platform, see [Compute platform for API Management](compute-infrastructure.md). > [!IMPORTANT]
-> * Migration is a long-running operation. Your instance will experience downtime during the last 10-15 minutes of migration. Plan your migration accordingly.
-> * The VIP address(es) of your API Management will change if you're using scenario 2 mentioned below (service injected in a VNet). For scenario 1 (not injected in a VNet), the VIP will temporarily change during migration for up to 15 minutes, but the original VIP of the service will be restored at the end of the migration operation.
-> * Migration to `stv2` is not reversible.
+> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure continued support and operation of your API Management instance, you must migrate any instance hosted on the `stv1` platform to `stv2` before that date.
-> [!IMPORTANT]
-> Support for API Management instances hosted on the `stv1` platform will be [retired by 31 August 2024](breaking-changes/stv1-platform-retirement-august-2024.md). To ensure proper operation of your API Management instance, you should migrate any instance hosted on the `stv1` platform to `stv2` before that date.
+> [!CAUTION]
+> * Migrating your API Management instance to new infrastructure is a long-running operation. Depending on your service configuration, you may have temporary downtime during migration, and you may need to update your network dependencies after migration to reach your API Management instance. Plan your migration accordingly.
+> * Migration to `stv2` is not reversible.
[!INCLUDE [api-management-availability-premium-dev-standard-basic](../../includes/api-management-availability-premium-dev-standard-basic.md)]
For more information about the `stv1` and `stv2` platforms and the benefits of u
## Scenario 1: Migrate API Management instance, not injected in a VNet
-For an API Management instance that's not deployed in a VNet, invoke the Migrate to `stv2` REST API. For example, run the following Azure CLI commands, setting variables where indicated with the name of your API Management instance and the name of the resource group in which it was created.
+For an API Management instance that's not deployed in a VNet, migrate your instance using the **Platform migration** blade in the portal, or invoke the Migrate to `stv2` REST API.
+
+You can choose whether the virtual IP address of API Management will change, or whether the original VIP address is preserved.
+
+* **New virtual IP address (recommended)** - If you choose this mode, API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+
+* **Preserve IP address** - If you preserve the VIP address, API requests will be unresponsive for approximately 15 minutes while the IP address is migrated to the new infrastructure. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 45 minutes. No further configuration is required after migration.
+
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+1. In the left menu, under **Settings**, select **Platform migration**.
+1. On the **Platform migration** page, select one of the two migration options:
+
+ * **New virtual IP address (recommended)**. The VIP address of your API Management instance will change automatically. Your service will have no downtime, but after migration you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
+
+ * **Preserve IP address** - The VIP address of your API Management instance won't change. Your instance will have downtime for up to 15 minutes.
+
+ :::image type="content" source="media/migrate-stv1-to-stv2/platform-migration-portal.png" alt-text="Screenshot of API Management platform migration in the portal.":::
+
+1. Review guidance for the migration process, and prepare your environment.
+
+1. After you've completed preparation steps, select **I have read and understand the impact of the migration process.** Select **Migrate**.
+
+#### [Azure CLI](#tab/cli)
+
+Run the following Azure CLI commands, setting variables where indicated with the name of your API Management instance and the name of the resource group in which it was created.
> [!NOTE] > The Migrate to `stv2` REST API is available starting in API Management REST API version `2022-04-01-preview`.
RG_NAME={name of your resource group}
# Get resource ID of API Management instance APIM_RESOURCE_ID=$(az apim show --name $APIM_NAME --resource-group $RG_NAME --query id --output tsv)
-# Call REST API to migrate to stv2
-az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2022-08-01"
+# Call REST API to migrate to stv2 and change VIP address
+az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "NewIp"}'
+
+# Alternate call to migrate to stv2 and preserve VIP address
+# az rest --method post --uri "$APIM_RESOURCE_ID/migrateToStv2?api-version=2023-03-01-preview" --body '{"mode": "PreserveIp"}'
``` ++ ## Scenario 2: Migrate a network-injected API Management instance
-Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration to use new network settings (see the following section). After that update completes, as an optional step, you may migrate back to the original VNet and subnet you used.
+Trigger migration of a network-injected API Management instance to the `stv2` platform by updating the existing network configuration to use new network settings (see the following section). After that update completes, as an optional step, you can migrate back to the original VNet and subnet you used.
You can also migrate to the `stv2` platform by enabling [zone redundancy](../reliability/migrate-api-mgt.md).
+> [!IMPORTANT]
+> The VIP address of your API Management instance will change. However, API requests remain responsive during migration. Infrastructure configuration (such as custom domains, locations, and CA certificates) will be locked for 30 minutes. After migration, you'll need to update any network dependencies including DNS, firewall rules, and VNets to use the new VIP address.
++ ### Update VNet configuration Update the configuration of the VNet in each location (region) where the API Management instance is deployed.
The virtual network configuration is updated, and the instance is migrated to th
### (Optional) Migrate back to original VNet and subnet
-You may optionally migrate back to the original VNet and subnet you used in each region before migration to the `stv2` platform. To do so, update the VNet configuration again, this time specifying the original VNet and subnet. As in the preceding migration, expect a long-running operation, and expect the VIP address to change.
+You can optionally migrate back to the original VNet and subnet you used in each region before migration to the `stv2` platform. To do so, update the VNet configuration again, this time specifying the original VNet and subnet. As in the preceding migration, expect a long-running operation, and expect the VIP address to change.
#### Prerequisites
You may optionally migrate back to the original VNet and subnet you used in each
To verify that the migration was successful, check the [platform version](compute-infrastructure.md#how-do-i-know-which-platform-hosts-my-api-management-instance) of your API Management instance. After successful migration, the value is `stv2`.
-## Next steps
+
+## Related content
* Learn about [stv1 platform retirement](breaking-changes/stv1-platform-retirement-august-2024.md).
+* Learn about [IP addresses of API Management](api-management-howto-ip-addresses.md)
* For instances deployed in a VNet, see the [Virtual network configuration reference](virtual-network-reference.md).+
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Title: Upgrade Arc resource bridge (preview) description: Learn how to upgrade Arc resource bridge (preview) using either cloud-managed upgrade or manual upgrade. Previously updated : 10/02/2023 Last updated : 10/20/2023 # Upgrade Arc resource bridge (preview)
-This article describes how Arc resource bridge (preview) is upgraded and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. For more information, refer to the [Private Cloud Providers](#private-cloud-providers) section.
+This article describes how Arc resource bridge (preview) is upgraded, and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. For more information, see the [Private cloud providers](#private-cloud-providers) section.
## Prerequisites
The upgrade process deploys a new resource bridge using the reserved appliance V
Deploying a new resource bridge consists of downloading the appliance image (~3.5 GB) from the cloud, using the image to deploy a new appliance VM, verifying the new resource bridge is running, connecting it to Azure, deleting the old appliance VM, and reserving the old IP to be used for a future upgrade.
-Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime may happen during the handoff between the old Arc resource bridge to the new Arc resource bridge. Additional downtime may occur if prerequisites are not met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's network connectivity.
+Overall, the upgrade generally takes at least 30 minutes, depending on network speeds. A short intermittent downtime might happen during the handoff between the old Arc resource bridge to the new Arc resource bridge. Additional downtime can occur if prerequisites aren't met, or if a change in the network (DNS, firewall, proxy, etc.) impacts the Arc resource bridge's network connectivity.
There are two ways to upgrade Arc resource bridge: cloud-managed upgrades managed by Microsoft, or manual upgrades where Azure CLI commands are performed by an admin. ## Cloud-managed upgrade
-Arc resource bridge is a Microsoft-managed product. Microsoft manages upgrades of Arc resource bridge through cloud-managed upgrade. Cloud-managed upgrade allows Microsoft to ensure that the resource bridge remains on a supported version.
+Arc resource bridge is a Microsoft-managed product. Microsoft manages upgrades of Arc resource bridge through cloud-managed upgrade. Cloud-managed upgrade allows Microsoft to ensure that the resource bridge remains on a supported version.
> [!IMPORTANT]
-> Currently, your appliance version must be on 1.0.15 and you must request access in order to use cloud-managed upgrade. To do so, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Technical** for **Issue type** and **Azure Arc Resource Bridge** for **Service type**. In the **Summary** field, enter *Requesting access to cloud-managed upgrade*, and select **Resource Bridge Agent issue** for **Problem type**. Complete the rest of the support request and then select **Create**. We'll review your account and contact you to confirm your access to cloud-managed upgrade.
+> Currently, in order to use cloud-managed upgrade, your appliance version must be on version 1.0.15 and you must request access. To do so, [open a support request](/azure/azure-portal/supportability/how-to-create-azure-support-request). Select **Technical** for **Issue type** and **Azure Arc Resource Bridge** for **Service type**. In the **Summary** field, enter *Requesting access to cloud-managed upgrade*, and select **Resource Bridge Agent issue** for **Problem type**. Complete the rest of the support request and then select **Create**. We'll review your account and contact you to confirm your access to cloud-managed upgrade.
-Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status may switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`.
+Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status might switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`.
-To check the status of a cloud-managed upgrade, check the Azure resource in ARM or run the following Azure CLI command from the management machine:
+To check the status of a cloud-managed upgrade, check the Azure resource in ARM, or run the following Azure CLI command from the management machine:
```azurecli az arcappliance show --resource-group [REQUIRED] --name [REQUIRED]
az arcappliance show --resource-group [REQUIRED] --name [REQUIRED]
## Manual upgrade
-Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and appliance configuration files stored locally. Manual upgrade generally takes between 30-90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach the minimum n-3 supported version. You can check your appliance version by checking the Azure resource of your Arc resource bridge.
+Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and appliance configuration files stored locally.
+
+Manual upgrade generally takes between 30-90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach a [supported version](#supported-versions). You can check your appliance version by checking the Azure resource of your Arc resource bridge.
To manually upgrade your Arc resource bridge, make sure you have installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the management machine:
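A sketch of that step, assuming the `arcappliance` extension is already installed (use `az extension add --name arcappliance` if it isn't):

```azurecli
# Update the arcappliance CLI extension to the latest version
az extension update --name arcappliance
```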
az arcappliance upgrade <private cloud> --config-file <file path to ARBname-appl
For example, to upgrade a resource bridge on VMware: `az arcappliance upgrade vmware --config-file c:\contosoARB01-appliance.yaml`
-For example, to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrade hci --config-file c:\contosoARB01-appliance.yaml`
+Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrade hci --config-file c:\contosoARB01-appliance.yaml`
## Private cloud providers Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-For Arc-enabled VMware, manual upgrade is available and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware announces General Availability, appliances on 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded.
+For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded.
-[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
+[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
-For Arc-enabled SCVMM, the upgrade feature isn't currently available yet. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps.  This deploys a new resource bridge and reconnect pre-existing Azure resources.
+For Arc-enabled System Center Virtual Machine Manager (SCVMM) (preview), the upgrade feature isn't currently available. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
## Version releases
-The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, refer to the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, see the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
## Supported versions
Generally, the latest released version and the previous three versions (n-3) of
- n-2 version: 1.0.8 - n-3 version: 1.0.7
-There may be instances where supported versions are not sequential. For example, version 1.0.11 is released and later found to contain a bug. A hot fix is released in version 1.0.12 and version 1.0.11 is removed. In this scenario, n-3 supported versions become 1.0.12, 1.0.10, 1.0.9, 1.0.8.
+There might be instances where supported versions are not sequential. For example, version 1.0.11 is released and later found to contain a bug. A hot fix is released in version 1.0.12 and version 1.0.11 is removed. In this scenario, n-3 supported versions become 1.0.12, 1.0.10, 1.0.9, 1.0.8.
-Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays may occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month, although it's possible that delays could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
-If a resource bridge is not upgraded to one of the supported versions (n-3), then it will fall outside the support window and be unsupported. If this happens, it may not always be possible to upgrade an unsupported resource bridge to a newer version, as component services used by Arc resource bridge may no longer be compatible. In addition, the unsupported resource bridge may not be able to provide reliable monitoring and health metrics.
+If a resource bridge isn't upgraded to one of the supported versions (n-3), then it will fall outside the support window and be unsupported. If this happens, it might not always be possible to upgrade an unsupported resource bridge to a newer version, as component services used by Arc resource bridge could no longer be compatible. In addition, the unsupported resource bridge might not be able to provide reliable monitoring and health metrics.
-If an Arc resource bridge is unable to be upgraded to a supported version, you must delete it and deploy a new resource bridge. Depending on which private cloud product you're using, there may be other steps required to reconnect the resource bridge to existing resources. For details, check the partner product's Arc resource bridge recovery documentation.
+If an Arc resource bridge is unable to be upgraded to a supported version, you must delete it and deploy a new resource bridge. Depending on which private cloud product you're using, there might be other steps required to reconnect the resource bridge to existing resources. For details, check the partner product's Arc resource bridge recovery documentation.
## Notification and upgrade availability
-If your Arc resource bridge is at n-3 versions, then you may receive an email notification letting you know that your resource bridge may soon be out of support once the next version is released. If you receive this notification, upgrade the resource bridge as soon as possible to allow debug time for any issues with manual upgrade, or submit a support ticket if cloud-managed upgrade was unable to upgrade your resource bridge.
+If your Arc resource bridge is at version n-3, you might receive an email notification letting you know that your resource bridge will be out of support once the next version is released. If you receive this notification, upgrade the resource bridge as soon as possible to allow debug time for any issues with manual upgrade, or submit a support ticket if cloud-managed upgrade was unable to upgrade your resource bridge.
To check if your Arc resource bridge has an upgrade available, run the command:
To check if your Arc resource bridge has an upgrade available, run the command:
az arcappliance get-upgrades --resource-group [REQUIRED] --name [REQUIRED] ```
-To see the current version of an Arc resource bridge appliance, run `az arcappliance show` or check the Azure resource of your Arc resource bridge.
-
-##
+To see the current version of an Arc resource bridge appliance, run `az arcappliance show` or check the Azure resource of your Arc resource bridge.
## Next steps - Learn about [Arc resource bridge maintenance operations](maintenance.md). - Learn about [troubleshooting Arc resource bridge](troubleshoot-resource-bridge.md).--
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager (preview) description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager (preview). Previously updated : 07/24/2023 Last updated : 10/18/2023 ms. --+++ keywords: "VMM, Arc, Azure" # Overview of Arc-enabled System Center Virtual Machine Manager (preview)
-Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal. With Azure Arc-enabled SCVMM, you get a consistent management experience across Azure.
+Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) empowers System Center customers to connect their VMM environment to Azure and perform VM self-service operations from Azure portal. Azure Arc-enabled SCVMM extends the Azure control plane to SCVMM managed infrastructure, enabling the use of Azure security, governance, and management capabilities consistently across System Center managed estate and Azure.
-Azure Arc-enabled System Center Virtual Machine Manager allows you to manage your Hybrid environment and perform self-service VM operations through Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations.
+Azure Arc-enabled System Center Virtual Machine Manager also allows you to manage your hybrid environment consistently and perform self-service VM operations through Azure portal. For Microsoft Azure Pack customers, this solution is intended as an alternative to perform VM self-service operations.
Arc-enabled System Center VMM allows you to: -- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on VMM managed VMs directly from Azure.-- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md).-- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments.-- Discover and onboard existing SCVMM managed VMs to Azure.
+- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure.
+- Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](https://learn.microsoft.com/azure/role-based-access-control/overview).
+- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments.
+- Discover and onboard existing SCVMM managed VMs to Azure.
+- Install the Arc-connected machine agents at scale on SCVMM VMs to [govern, protect, configure, and monitor them](https://learn.microsoft.com/azure/azure-arc/servers/overview#supported-cloud-operations).
+
+## Onboard resources to Azure management at scale
+
+Azure services such as Microsoft Defender for Cloud, Azure Monitor, Azure Update Manager, and Azure Policy provide a rich set of capabilities to secure, monitor, patch, and govern off-Azure resources via Arc.
+
+By using Arc-enabled SCVMM's capabilities to discover your SCVMM managed estate and install the Arc agent at scale, you can simplify onboarding your entire System Center estate to these services.
## How does it work?
To Arc-enable a System Center VMM management server, deploy [Azure Arc resource
The following image shows the architecture for the Arc-enabled SCVMM:
-### Supported VMM versions
+## How is Arc-enabled SCVMM different from Arc-enabled Servers
-Azure Arc-enabled SCVMM works with VMM 2016, 2019 and 2022 versions and supports SCVMM management servers with a maximum of 3500 VMS.
+- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there might, in fact, not even be a host hypervisor in some cases.
+- Azure Arc-enabled SCVMM is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on an SCVMM VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like they do for a regular Azure VM. Azure Arc-enabled SCVMM also provides guest operating system management; in fact, it uses the same components as Azure Arc-enabled servers.
+
+You have the flexibility to start with either option, or incorporate the other one later without any disruption. With both options, you will enjoy the same consistent experience.
### Supported scenarios
The following scenarios are supported in Azure Arc-enabled SCVMM (preview):
- Administrators can use the Azure portal to browse SCVMM inventory and register SCVMM cloud, virtual machines, VM networks, and VM templates into Azure. - Administrators can provide app teams/developers fine-grained permissions on those SCVMM resources through Azure RBAC. - App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).
+- Administrators can install Arc agents on SCVMM VMs at scale and install corresponding extensions to use Azure management services such as Microsoft Defender for Cloud, Azure Update Manager, and Azure Monitor.
+
+### Supported VMM versions
+
+Azure Arc-enabled SCVMM works with VMM 2016, 2019 and 2022 versions and supports SCVMM management servers with a maximum of 3500 VMs.
### Supported regions
Azure Arc-enabled SCVMM doesn't store/process customer data outside the region t
## Next steps
-[See how to create a Azure Arc VM](create-virtual-machine.md)
+[Create an Azure Arc VM](create-virtual-machine.md)
azure-arc Set Up And Manage Self Service Access Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/set-up-and-manage-self-service-access-scvmm.md
+
+ Title: Set up and manage self-service access to SCVMM resources
+description: Learn how to set up and manage self-service access to SCVMM resources through Azure role-based access control.
++++++ Last updated : 10/18/2023
+keywords: "VMM, Arc, Azure"
++
+# Set up and manage self-service access to SCVMM resources
+
+Once your SCVMM resources are enabled in Azure, as a final step, provide your teams with the required access for a self-service experience. This article describes how to use built-in roles to manage granular access to SCVMM resources through Azure Role-based Access Control (RBAC) and allow your teams to deploy and manage VMs.
+
+## Prerequisites
+
+- Your SCVMM instance must be connected to Azure Arc.
+- Your SCVMM resources such as virtual machines, clouds, VM networks and VM templates must be Azure enabled.
+- You must have **User Access Administrator** or **Owner** role at the scope (resource group/subscription) to assign roles to other users.
+
+## Provide access to use Arc-enabled SCVMM resources
+
+To provision SCVMM VMs and change their size, add disks, change network interfaces, or delete them, your users need permission on the compute, network, storage, and VM template resources that they'll use. These permissions are provided by the built-in Azure Arc SCVMM Private Cloud User role.
+
+You must assign this role to an individual cloud, VM network, and VM template that a user or a group needs to access.
+
+1. Go to the [SCVMM management servers (preview)](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/scVmmManagementServer) list in Arc center.
+2. Search and select your SCVMM management server.
+3. Navigate to the **Clouds** in **SCVMM inventory** section in the table of contents.
+4. Find and select the cloud for which you want to assign permissions.
+ This will take you to the Arc resource representing the SCVMM Cloud.
+1. Select **Access control (IAM)** in the table of contents.
+1. Under **Grant access to this resource**, select **Add role assignments**.
+1. Select **Azure Arc ScVmm Private Cloud User** role and select **Next**.
+1. Select **Select members** and search for the Microsoft Entra user or group that you want to provide access to.
+1. Select the Microsoft Entra user or group name. Repeat this for each user or group to which you want to grant this permission.
+1. Select **Review + assign** to complete the role assignment.
+1. Repeat steps 3-9 for each VM network and VM template that you want to provide access to.
+
+If you have organized your SCVMM resources into a resource group, you can provide the same role at the resource group scope.
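If you prefer the command line, the same role assignment can be made with the Azure CLI. This is a sketch, not part of the original article; the scope shown is a hypothetical resource ID for the Arc resource that represents the SCVMM cloud, and a resource group or subscription ID works the same way:

```azurecli
# Assign the built-in role on the cloud resource (or on a resource group/subscription scope)
az role assignment create \
    --assignee "user@contoso.com" \
    --role "Azure Arc ScVmm Private Cloud User" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ScVmm/clouds/<cloud-name>"
```

The same pattern applies for the **Azure Arc ScVmm VM Contributor** role at the subscription or resource group scope described in the next section.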
+
+Your users now have access to SCVMM cloud resources. However, your users will also need to have permission on the subscription/resource group where they would like to deploy and manage VMs.
+
+## Provide access to subscription or resource group where VMs will be deployed
+
+In addition to having access to SCVMM resources through the **Azure Arc ScVmm Private Cloud User** role, your users must have permissions on the subscription and resource group where they deploy and manage VMs.
+
+The **Azure Arc ScVmm VM Contributor** role is a built-in role that provides permissions to conduct all SCVMM virtual machine operations.
+
+1. Go to the [Azure portal](https://ms.portal.azure.com/#home).
+2. Search and navigate to the subscription or resource group to which you want to provide access.
+3. Select **Access control (IAM)** from the table of contents on the left.
+4. Under **Grant access to this resource**, select **Add role assignments**.
+5. Select **Azure Arc ScVmm VM Contributor** role and select **Next**.
+6. Select the option **Select members**, and search for the Microsoft Entra user or group that you want to provide access to.
+7. Select the Microsoft Entra user or group name. Repeat this for each user or group to which you want to grant this permission.
+8. Select **Review + assign** to complete the role assignment.
+
+## Next steps
+
+[Create an Azure Arc VM](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/create-virtual-machine).
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md
If desired, you can skip to [Run the function locally](#run-the-function-locally
#### Function.java

*Function.java* contains a `run` method that receives request data in the `request` variable, which is an [HttpRequestMessage](/java/api/com.microsoft.azure.functions.httprequestmessage) that's decorated with the [HttpTrigger](/java/api/com.microsoft.azure.functions.annotation.httptrigger) annotation, which defines the trigger behavior. The response message is generated by the [HttpResponseMessage.Builder](/java/api/com.microsoft.azure.functions.httpresponsemessage.builder) API.

#### pom.xml

Settings for the Azure resources created to host your app are defined in the **configuration** element of the plugin with a **groupId** of `com.microsoft.azure` in the generated pom.xml file. For example, the configuration element below instructs a Maven-based deployment to create a function app in the `java-functions-group` resource group in the `westus` region. The function app itself runs on Windows hosted in the `java-functions-app-service-plan` plan, which by default is a serverless Consumption plan. You can change these settings to control how resources are created in Azure, such as by changing `runtime.os` from `windows` to `linux` before initial deployment. For a complete list of settings supported by the Maven plug-in, see the [configuration details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details).

#### FunctionTest.java
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
var host = new HostBuilder()
.Build(); ```
-As part of configuring your app in `Program.cs`, you can also define the behavior for how errors are surfaced to your logs. By default, exceptions thrown by your code may end up wrapped in an `RpcException`. To remove this extra layer, set the `EnableUserCodeExceptions` property to "true" as part of configuring the builder:
+As part of configuring your app in `Program.cs`, you can also define the behavior for how errors are surfaced to your logs. By default, exceptions thrown by your code may end up wrapped in an `RpcException`. To remove this extra layer, set the `EnableUserCodeException` property to "true" as part of configuring the builder:
```csharp var host = new HostBuilder() .ConfigureFunctionsWorkerDefaults(builder => {}, options => {
- options.EnableUserCodeExceptions = true;
+ options.EnableUserCodeException = true;
}) .Build(); ```
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Check out more code samples:
> [Code sample page] [Azure Maps Samples]:https://samples.azuremaps.com
-[Code sample page]: https://aka.ms/AzureMapsSamples
+[Code sample page]: https://samples.azuremaps.com/
[Create a measuring tool sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Create%20a%20measuring%20tool/Create%20a%20measuring%20tool.html [Create a measuring tool]: https://samples.azuremaps.com/drawing-tools-module/create-a-measuring-tool [Draw and search polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Draw%20and%20search%20polygon%20area/Draw%20and%20search%20polygon%20area.html
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
map.events.add('mouseleave', symbolLayer, function (){
## Reusing a popup with multiple points
-There are cases in which the best approach is to create one popup and reuse it. For example, you may have a large number of points and want to show only one popup at a time. By reusing the popup, the number of DOM elements created by the application is greatly reduced, which can provide better performance. The following sample creates 3-point features. If you select on any of them, a popup is displayed with the content for that point feature.
+There are cases in which the best approach is to create one popup and reuse it. For example, you might have a large number of points and want to show only one popup at a time. By reusing the popup, the number of DOM elements created by the application is greatly reduced, which can provide better performance. The following sample creates three point features. If you select any of them, a popup is displayed with the content for that point feature.
For a fully functional sample that shows how to create one popup and reuse it rather than creating a popup for each point feature, see [Reusing Popup with Multiple Pins] in the [Azure Maps Samples]. For the source code for this sample, see [Reusing Popup with Multiple Pins source code].
var feature = new atlas.data.Feature(new atlas.data.Point([0, 0]), {
Title: 'Template 2 - PropertyInfo', createDate: new Date(), dateNumber: 1569880860542,
- url: 'https://aka.ms/AzureMapsSamples',
+ url: 'https://samples.azuremaps.com/',
email: 'info@microsoft.com' }),
var popup = new atlas.Popup({
### Multiple content templates
-A feature may also display content using a combination of the String template and the PropertyInfo template. In this case, the String template renders placeholders values on a white background. And, the PropertyInfo template renders a full width image inside a table. The properties in this sample are similar to the properties we explained in the previous samples.
+A feature might also display content using a combination of the String template and the PropertyInfo template. In this case, the String template renders placeholder values on a white background, and the PropertyInfo template renders a full-width image inside a table. The properties in this sample are similar to the properties explained in the previous samples.
```javascript var templateOptions = {
function InitMap()
Title: 'No template - property table', message: 'This point doesn\'t have a template defined, fallback to title and table of properties.', randomValue: 10,
- url: 'https://aka.ms/AzureMapsSamples',
+ url: 'https://samples.azuremaps.com/',
imageLink: 'https://azuremapscodesamples.azurewebsites.net/common/images/Pike_Market.jpg', email: 'info@microsoft.com' }),
function InitMap()
Title: 'No template - hyperlink detection disabled', message: 'This point doesn\'t have a template defined, fallback to title and table of properties.', randomValue: 10,
- url: 'https://aka.ms/AzureMapsSamples',
+ url: 'https://samples.azuremaps.com/',
email: 'info@microsoft.com', popupTemplate: { detectHyperlinks: false
function InitMap()
Title: 'Template 2 - PropertyInfo', createDate: new Date(), dateNumber: 1569880860542,
- url: 'https://aka.ms/AzureMapsSamples',
+ url: 'https://samples.azuremaps.com/',
email: 'info@microsoft.com', popupTemplate: { content: [{
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
Enhance your user experiences:
> [Code sample page] [Building an accessible map]: map-accessibility.md
-[Code sample page]: https://aka.ms/AzureMapsSamples
+[Code sample page]: https://samples.azuremaps.com/
[Map interaction with mouse events]: map-events.md [Map]: /javascript/api/azure-maps-control/atlas.map [Traffic controls source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Traffic/Traffic%20controls/Traffic%20controls.html
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
The following list contains common Bing Maps terms and their corresponding Azure
| Bing Maps Term | Azure Maps Term | |--|-| | Aerial | Satellite or Aerial |
-| Directions | May also be referred to as Routing |
+| Directions | Might also be referred to as Routing |
| Entities | Geometries or Features | | `EntityCollection` | Data source or Layer | | `Geopoint` | Position |
Learn the details of how to migrate your Bing Maps application with these articl
[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Blog]: https://aka.ms/AzureMapsTechBlog
-[Azure Maps code samples]: https://aka.ms/AzureMapsSamples
+[Azure Maps code samples]: https://samples.azuremaps.com/
[Azure Maps developer forums]: https://aka.ms/AzureMapsForums [Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback [Azure Maps is also available in Power BI]: power-bi-visual-get-started.md
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Learn the details of how to migrate your Google Maps application with these arti
[Azure Maps product page]: https://azure.com/maps [Azure Maps Q&A]: https://aka.ms/AzureMapsFeedback [Azure Maps term of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=46
-[Azure Maps Web SDK code samples]: https://aka.ms/AzureMapsSamples
+[Azure Maps Web SDK code samples]: https://samples.azuremaps.com/
[Azure portal]: https://portal.azure.com/ [Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps [Azure subscription]: https://azure.com
azure-monitor Data Sources Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md
Performance counters in Windows and Linux provide insight into the performance o
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-![Screenshot that shows performance counters.](media/data-sources-performance-counters/overview.png)
## Configure performance counters Configure performance counters from the [Legacy agents management menu](../agents/agent-data-sources.md#configure-data-sources) for the Log Analytics workspace.
For Windows performance counters, you can choose a specific instance for each pe
### Windows performance counters
-[![Screenshot that shows configuring Windows performance counters.](media/data-sources-performance-counters/configure-windows.png)](media/data-sources-performance-counters/configure-windows.png#lightbox)
Follow this procedure to add a new Windows performance counter to collect. V2 Windows performance counters aren't supported.
Follow this procedure to add a new Windows performance counter to collect. V2 Wi
### Linux performance counters
-[![Screenshot that shows configuring Linux performance counters.](media/data-sources-performance-counters/configure-linux.png)](media/data-sources-performance-counters/configure-linux.png#lightbox)
Follow this procedure to add a new Linux performance counter to collect.
azure-monitor Diagnostics Extension Windows Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-windows-install.md
You can install and configure the diagnostics extension on an individual virtual
1. Select **Diagnostic settings** in the **Monitoring** section of the VM menu. 1. Select **Enable guest-level monitoring** if the diagnostics extension hasn't already been enabled.-
- ![Screenshot that shows enabling monitoring.](media/diagnostics-extension-windows-install/enable-monitoring.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/enable-monitoring.png" lightbox="media/diagnostics-extension-windows-install/enable-monitoring.png" alt-text="Screenshot that shows enabling monitoring." border="false":::
1. A new Azure Storage account will be created for the VM. The name will be based on the name of the resource group for the VM. A default set of guest performance counters and logs will be selected.-
- ![Screenshot that shows Diagnostic settings.](media/diagnostics-extension-windows-install/diagnostic-settings.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/diagnostic-settings.png" lightbox="media/diagnostics-extension-windows-install/diagnostic-settings.png" alt-text="Screenshot that shows Diagnostic settings." border="false":::
1. On the **Performance counters** tab, select the guest metrics you want to collect from this virtual machine. Use the **Custom** setting for more advanced selection.-
- ![Screenshot that shows Performance counters.](media/diagnostics-extension-windows-install/performance-counters.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/performance-counters.png" lightbox="media/diagnostics-extension-windows-install/performance-counters.png" alt-text="Screenshot that shows Performance counters." border="false":::
1. On the **Logs** tab, select the logs to collect from the virtual machine. Logs can be sent to storage or event hubs, but not to Azure Monitor. Use the [Log Analytics agent](../agents/log-analytics-agent.md) to collect guest logs to Azure Monitor.-
- ![Screenshot that shows the Logs tab with different logs selected for a virtual machine.](media/diagnostics-extension-windows-install/logs.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/logs.png" lightbox="media/diagnostics-extension-windows-install/logs.png" alt-text="Screenshot that shows the Logs tab with different logs selected for a virtual machine." border="false":::
1. On the **Crash dumps** tab, specify any processes to collect memory dumps after a crash. The data will be written to the storage account for the diagnostic setting. You can optionally specify a blob container.-
- ![Screenshot that shows the Crash dumps tab.](media/diagnostics-extension-windows-install/crash-dumps.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/crash-dumps.png" lightbox="media/diagnostics-extension-windows-install/crash-dumps.png" alt-text="Screenshot that shows the Crash dumps tab." border="false":::
1. On the **Sinks** tab, specify whether to send the data to locations other than Azure storage. If you select **Azure Monitor**, guest performance data will be sent to Azure Monitor Metrics. You can't configure the event hubs sink by using the Azure portal.-
- ![Screenshot that shows the Sinks tab with the Send diagnostic data to Azure Monitor option enabled.](media/diagnostics-extension-windows-install/sinks.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/sinks.png" lightbox="media/diagnostics-extension-windows-install/sinks.png" alt-text="Screenshot that shows the Sinks tab with the Send diagnostic data to Azure Monitor option enabled." border="false":::
If you haven't enabled a system-assigned identity configured for your virtual machine, you might see the following warning when you save a configuration with the Azure Monitor sink. Select the banner to enable the system-assigned identity.
-
- ![Screenshot that shows the managed identity warning.](media/diagnostics-extension-windows-install/managed-entity.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/managed-entity.png" lightbox="media/diagnostics-extension-windows-install/managed-entity.png" alt-text="Screenshot that shows the managed identity warning." border="false":::
1. On the **Agent** tab, you can change the storage account, set the disk quota, and specify whether to collect diagnostic infrastructure logs. -
- ![Screenshot that shows the Agent tab with the option to set the storage account.](media/diagnostics-extension-windows-install/agent.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/diagnostics-extension-windows-install/agent.png" lightbox="media/diagnostics-extension-windows-install/agent.png" alt-text="Screenshot that shows the Agent tab with the option to set the storage account." border="false":::
1. Select **Save** to save the configuration.
azure-monitor Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/gateway.md
Each agent must have network connectivity to the gateway so that agents can auto
The following diagram shows data flowing from direct agents, through the gateway, to Azure Automation and Log Analytics. The agent proxy configuration must match the port that the Log Analytics gateway is configured with.
-![Diagram of direct agent communication with services](./media/gateway/oms-omsgateway-agentdirectconnect.png)
The following diagram shows data flow from an Operations Manager management group to Log Analytics.
-![Diagram of Operations Manager communication with Log Analytics](./media/gateway/log-analytics-agent-opsmgrconnect.png)
## Set up your system
To install a gateway using the setup wizard, follow these steps.
1. From the destination folder, double-click **Log Analytics gateway.msi**. 1. On the **Welcome** page, select **Next**.-
- ![Screenshot of Welcome page in the Gateway Setup wizard](./media/gateway/gateway-wizard01.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/gateway/gateway-wizard01.png" lightbox="./media/gateway/gateway-wizard01.png" alt-text="Screenshot of Welcome page in the Gateway Setup wizard" border="false":::
1. On the **License Agreement** page, select **I accept the terms in the License Agreement** to agree to the Microsoft Software License Terms, and then select **Next**. 1. On the **Port and proxy address** page:
To install a gateway using the setup wizard, follow these steps.
b. If the server where the gateway is installed needs to communicate through a proxy, enter the proxy address where the gateway needs to connect. For example, enter `http://myorgname.corp.contoso.com:80`. If you leave this field blank, the gateway will try to connect to the internet directly. If your proxy server requires authentication, enter a username and password. c. Select **Next**.-
- ![Screenshot of configuration for the gateway proxy](./media/gateway/gateway-wizard02.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/gateway/gateway-wizard02.png" lightbox="./media/gateway/gateway-wizard02.png" alt-text="Screenshot of configuration for the gateway proxy" border="false":::
1. If you do not have Microsoft Update enabled, the Microsoft Update page appears, and you can choose to enable it. Make a selection and then select **Next**. Otherwise, continue to the next step. 1. On the **Destination Folder** page, either leave the default folder C:\Program Files\OMS Gateway or enter the location where you want to install the gateway. Then select **Next**. 1. On the **Ready to install** page, select **Install**. If User Account Control requests permission to install, select **Yes**. 1. After Setup finishes, select **Finish**. To verify that the service is running, open the services.msc snap-in and verify that **OMS Gateway** appears in the list of services and that its status is **Running**.-
- ![Screenshot of local services, showing that OMS Gateway is running](./media/gateway/gateway-service.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/gateway/gateway-service.png" lightbox="./media/gateway/gateway-service.png" alt-text="Screenshot of local services, showing that OMS Gateway is running" border="false":::
## Install the Log Analytics gateway using the command line
To learn how to design and deploy a Windows Server 2016 network load balancing c
2. Open Network Load Balancing Manager in Server Manager, click **Tools**, and then click **Network Load Balancing Manager**. 3. To connect a Log Analytics gateway server with the Microsoft Monitoring Agent installed, right-click the cluster's IP address, and then click **Add Host to Cluster**.
- ![Network Load Balancing Manager - Add Host To Cluster](./media/gateway/nlb02.png)
+ :::image type="content" source="./media/gateway/nlb02.png" lightbox="./media/gateway/nlb02.png" alt-text="Network Load Balancing Manager - Add Host To Cluster":::
4. Enter the IP address of the gateway server that you want to connect.
- ![Network Load Balancing Manager - Add Host To Cluster: Connect](./media/gateway/nlb03.png)
+ :::image type="content" source="./media/gateway/nlb03.png" lightbox="./media/gateway/nlb03.png" alt-text="Network Load Balancing Manager - Add Host To Cluster: Connect":::
### Azure Load Balancer
To configure integration, update the system proxy configuration by using Netsh o
After completing the integration with Log Analytics, remove the change by running `netsh winhttp reset proxy`. Then, in the Operations console, use the **Configure proxy server** option to specify the Log Analytics gateway server. 1. On the Operations Manager console, under **Operations Management Suite**, select **Connection**, and then select **Configure Proxy Server**.-
- ![Screenshot of Operations Manager, showing the selection Configure Proxy Server](./media/gateway/scom01.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/gateway/scom01.png" lightbox="./media/gateway/scom01.png" alt-text="Screenshot of Operations Manager, showing the selection Configure Proxy Server" border="false":::
1. Select **Use a proxy server to access the Operations Management Suite** and then enter the IP address of the Log Analytics gateway server or virtual IP address of the load balancer. Be careful to start with the prefix `http://`.-
- ![Screenshot of Operations Manager, showing the proxy server address](./media/gateway/scom02.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/gateway/scom02.png" lightbox="./media/gateway/scom02.png" alt-text="Screenshot of Operations Manager, showing the proxy server address" border="false":::
1. Select **Finish**. Your Operations Manager management group is now configured to communicate through the gateway server to the Log Analytics service.
An error in step 3 means that the module wasn't imported. The error might occur
## Troubleshooting To collect events logged by the gateway, you should have the Log Analytics agent installed.-
-![Screenshot of the Event Viewer list in the Log Analytics gateway log](./media/gateway/event-viewer.png)
+<!-- convertborder later -->
### Log Analytics gateway event IDs and descriptions
The following table shows the performance counters available for the Log Analyti
| Log Analytics Gateway/Error Count |Number of errors | | Log Analytics Gateway/Connected Client |Number of connected clients | | Log Analytics Gateway/Rejection Count |Number of rejections due to any TLS validation error |-
-![Screenshot of Log Analytics gateway interface, showing performance counters](./media/gateway/counters.png)
+<!-- convertborder later -->
## Assistance When you're signed in to the Azure portal, you can get help with the Log Analytics gateway or any other Azure service or feature. To get help, select the question mark icon in the upper-right corner of the portal and select **New support request**. Then complete the new support request form.
-![Screenshot of a new support request](./media/gateway/support.png)
## Next steps
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
To ensure the security of data in transit to Azure Monitor logs, we strongly enc
The agent for Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. If the machine connects through a firewall or proxy server to communicate over the internet, review the following requirements to understand the network configuration required. If your IT security policies do not allow computers on the network to connect to the internet, set up a [Log Analytics gateway](gateway.md) and configure the agent to connect through the gateway to Azure Monitor. The agent can then receive configuration information and send data collected.
-![Diagram that shows Log Analytics agent communication.](./media/log-analytics-agent/log-analytics-agent-01.png)
The following table lists the proxy and firewall configuration information required for the Linux and Windows agents to communicate with Azure Monitor logs.
azure-monitor Om Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/om-agents.md
The management server forwards the data directly to the service. It's never writ
The following diagram shows the connection between the management servers and agents in a System Center Operations Manager management group and Azure Monitor, including the direction and ports.
-![Diagram that shows System Center Operations Manager and Azure Monitor integration. ](./media/om-agents/oms-operations-manager-connection.png)
If your IT security policies don't allow computers on your network to connect to the internet, management servers can be configured to connect to the Log Analytics gateway to receive configuration information and send collected data depending on the solutions enabled. For more information and steps on how to configure your Operations Manager management group to communicate through a Log Analytics gateway to Azure Monitor, see [Connect computers to Azure Monitor by using the Log Analytics gateway](./gateway.md).
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/vmext-troubleshoot.md
To verify the status of the extension:
1. In your list of virtual machines, find and select it. 1. On the virtual machine, select **Extensions**. 1. From the list, check to see if the Log Analytics extension is enabled or not. For Linux, the agent is listed as **OMSAgentforLinux**. For Windows, the agent is listed as **MicrosoftMonitoringAgent**.-
- ![Screenshot that shows the VM Extensions view.](./media/vmext-troubleshoot/log-analytics-vmview-extensions.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/vmext-troubleshoot/log-analytics-vmview-extensions.png" lightbox="./media/vmext-troubleshoot/log-analytics-vmview-extensions.png" alt-text="Screenshot that shows the VM Extensions view." border="false":::
1. Select the extension to view details.-
- ![Screenshot that shows the VM extension details.](./media/vmext-troubleshoot/log-analytics-vmview-extensiondetails.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/vmext-troubleshoot/log-analytics-vmview-extensiondetails.png" lightbox="./media/vmext-troubleshoot/log-analytics-vmview-extensiondetails.png" alt-text="Screenshot that shows the VM extension details." border="false":::
## Troubleshoot the Azure Windows VM extension
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Save the access token from the response for use in the following HTTP requests.
- **accessToken**: The authorization token acquired from the previous step. ```Shell
- curl -X POST 'https://<location>/.monitoring.azure.com<resourceId>/metrics' \
+ curl -X POST 'https://<location>.monitoring.azure.com<resourceId>/metrics' \
-H 'Content-Type: application/json' \ -H 'Authorization: Bearer <accessToken>' \ -d @custommetric.json
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
|:|:| | Active Directory | [AADDomainServicesDNSAuditsGeneral](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsGeneral)<br> [AADDomainServicesDNSAuditsDynamicUpdates](/azure/azure-monitor/reference/tables/AADDomainServicesDNSAuditsDynamicUpdates) | | API Management | [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/ApiManagementGatewayLogs)<br>[ApiManagementWebSocketConnectionLogs](/azure/azure-monitor/reference/tables/ApiManagementWebSocketConnectionLogs) |
+| Application Gateways | [AGWAccessLogs](/azure/azure-monitor/reference/tables/AGWAccessLogs)<br>[AGWPerformanceLogs](/azure/azure-monitor/reference/tables/AGWPerformanceLogs)<br>[AGWFirewallLogs](/azure/azure-monitor/reference/tables/AGWFirewallLogs) |
| Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | | Bare Metal Machines | [NCBMSystemLogs](/azure/azure-monitor/reference/tables/NCBMSystemLogs)<br>[NCBMSecurityLogs](/azure/azure-monitor/reference/tables/NCBMSecurityLogs) | | Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) |
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) | | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSCallSummary](/azure/azure-monitor/reference/tables/ACSCallSummary)<br>[ACSJobRouterIncomingOperations](/azure/azure-monitor/reference/tables/ACSJobRouterIncomingOperations)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+| Cosmos DB for MongoDB (vCore) | [VCoreMongoRequests](/azure/azure-monitor/reference/tables/VCoreMongoRequests) |
| Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) | | Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) |
-| Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs)<br>[DevCenterResourceOperationLogs](/azure/azure-monitor/reference/tables/DevCenterResourceOperationLogs) |
+| Dev Centers | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs)<br>[DevCenterResourceOperationLogs](/azure/azure-monitor/reference/tables/DevCenterResourceOperationLogs)<br>[DevCenterBillingEventLogs](/azure/azure-monitor/reference/tables/DevCenterBillingEventLogs) |
| Data Transfer | [DataTransferOperations](/azure/azure-monitor/reference/tables/DataTransferOperations) | | Event Hubs | [AZMSArchiveLogs](/azure/azure-monitor/reference/tables/AZMSArchiveLogs)<br>[AZMSAutoscaleLogs](/azure/azure-monitor/reference/tables/AZMSAutoscaleLogs)<br>[AZMSCustomerManagedKeyUserLogs](/azure/azure-monitor/reference/tables/AZMSCustomerManagedKeyUserLogs)<br>[AZMSKafkaCoordinatorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaCoordinatorLogs)<br>[AZMSKafkaUserErrorLogs](/azure/azure-monitor/reference/tables/AZMSKafkaUserErrorLogs) | | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) |
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Kubernetes services | [AKSAudit](/azure/azure-monitor/reference/tables/AKSAudit)<br>[AKSAuditAdmin](/azure/azure-monitor/reference/tables/AKSAuditAdmin)<br>[AKSControlPlane](/azure/azure-monitor/reference/tables/AKSControlPlane) | | Managed Lustre | [AFSAuditLogs](/azure/azure-monitor/reference/tables/AFSAuditLogs) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |
+| Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) |
| Nexus Clusters | [NCCKubernetesLogs](/azure/azure-monitor/reference/tables/NCCKubernetesLogs)<br>[NCCVMOrchestrationLogs](/azure/azure-monitor/reference/tables/NCCVMOrchestrationLogs) | | Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts) |
+| Redis cache | [ACRConnectedClientList](/azure/azure-monitor/reference/tables/ACRConnectedClientList) |
| Redis Cache Enterprise | [REDConnectionEvents](/azure/azure-monitor/reference/tables/REDConnectionEvents) | | Relays | [AZMSHybridConnectionsEvents](/azure/azure-monitor/reference/tables/AZMSHybridConnectionsEvents) |
+| Security | [SecurityAttackPathData](/azure/azure-monitor/reference/tables/SecurityAttackPathData) |
| Service Bus | [AZMSApplicationMetricLogs](/azure/azure-monitor/reference/tables/AZMSApplicationMetricLogs)<br>[AZMSOperationalLogs](/azure/azure-monitor/reference/tables/AZMSOperationalLogs)<br>[AZMSRunTimeAuditLogs](/azure/azure-monitor/reference/tables/AZMSRunTimeAuditLogs)<br>[AZMSVNetConnectionEvents](/azure/azure-monitor/reference/tables/AZMSVNetConnectionEvents) | | Sphere | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs)<br>[ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) | | Storage | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs)<br>[StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs)<br>[StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs)<br>[StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) |
-| Synapse | [SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/SynapseSqlPoolExecRequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/SynapseSqlPoolRequestSteps)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/SynapseSqlPoolDmsWorkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/SynapseSqlPoolWaits) |
+| Synapse Analytics | [SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/SynapseSqlPoolExecRequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/SynapseSqlPoolRequestSteps)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/SynapseSqlPoolDmsWorkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/SynapseSqlPoolWaits) |
| Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs)<br>[StorageMoverCopyLogsFailed](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsFailed)<br>[StorageMoverCopyLogsTransferred](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsTransferred)<br> | | Virtual Network Manager | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange)<br>[AVNMRuleCollectionChange](/azure/azure-monitor/reference/tables/AVNMRuleCollectionChange) |
azure-resource-manager Error Policy Requestdisallowedbypolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-policy-requestdisallowedbypolicy.md
Title: Request disallowed by policy error
description: Describes the error for request disallowed by policy when deploying resources with an Azure Resource Manager template (ARM template) or Bicep file. Previously updated : 04/05/2023 Last updated : 10/20/2023 # Resolve errors for request disallowed by policy
-This article describes the cause of the `RequestDisallowedByPolicy` error and provides a solution for the error. The request disallowed by policy error can occur when you deploy resources with an Azure Resource Manager template (ARM template) or Bicep file.
+When you deploy an Azure Resource Manager template (ARM template) or Bicep file, you get the `RequestDisallowedByPolicy` error when one of the resources you're deploying doesn't comply with an existing [Azure Policy](../../governance/policy/overview.md) assignment.
## Symptom
In the `id` string, the `{guid}` placeholder represents an Azure subscription ID
## Cause
-In this example, the error occurred when an administrator attempted to create a network interface with a public IP address. A policy assignment enables enforcement of a built-in policy definition that prevents public IPs on network interfaces.
+Your organization assigns policies to enforce organizational standards and to assess compliance at scale. If you try to deploy a resource that violates a policy, the deployment is blocked.
-You can use the name of a policy assignment or policy definition to get more details about a policy that caused the error. The example commands use placeholders for input. For example, replace `<policy definition name>` including the angle brackets, with the definition name from your error message.
+For example, your subscription can have a policy that prevents public IPs on network interfaces. If you attempt to create a network interface with a public IP address, the policy blocks you from creating the network interface.
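If you want to see which policies are assigned at the scope you're deploying to before you troubleshoot further, you can list the assignments. This is a minimal sketch using the Azure CLI; `{resource-group-name}` is a placeholder, following the style of the commands later in this article.

```azurecli
# List policy assignments at the default (subscription) scope.
az policy assignment list --output table

# Narrow the list to a specific resource group.
az policy assignment list --resource-group {resource-group-name} --output table
```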
+
+## Solution
+
+To resolve the `RequestDisallowedByPolicy` error when deploying an ARM template or Bicep file, find the policy that's blocking the deployment, review its rules, and update your deployment to comply with those rules.
+
+The error message includes the names of the policy definition and policy assignment that caused the error. You need these names to get more information about the policy.
# [Azure CLI](#tab/azure-cli) To get more information about a policy definition, use [az policy definition show](/cli/azure/policy/definition#az-policy-definition-show). ```azurecli
-defname=<policy definition name>
-az policy definition show --name $defname
+az policy definition show --name {policy-name}
``` To get more information about a policy assignment, use [az policy assignment show](/cli/azure/policy/assignment#az-policy-assignment-show). ```azurecli
-rg=<resource group name>
-assignmentname=<policy assignment name>
-az policy assignment show --name $assignmentname --resource-group $rg
+az policy assignment show --name {assignment-name} --resource-group {resource-group-name}
``` # [PowerShell](#tab/azure-powershell)
Get-AzPolicyAssignment -Name $assignmentname -Scope $rg.ResourceId | ConvertTo-J
-## Solution
-
-For security or compliance, your subscription administrators might assign policies that limit how resources are deployed. For example, policies that prevent creating public IP addresses, network security groups, user-defined routes, or route tables.
-
-To resolve `RequestDisallowedByPolicy` errors, review the resource policies and determine how to deploy resources that comply with those policies. The error message displays the names of the policy definition and policy assignment.
+Within the policy definition, you see a description of the policy and the rules that it applies. Review the rules and update your ARM template or Bicep file to comply with them. For example, if a rule requires that public network access is disabled, update the corresponding resource properties.
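If you only want to inspect the rule itself, you can narrow the Azure CLI output with the global `--query` argument. This is a sketch that reuses the `{policy-name}` placeholder from the earlier commands.

```azurecli
# Show only the rule and its effect from the policy definition.
az policy definition show --name {policy-name} --query "{rule:policyRule, effect:policyRule.then.effect}"
```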
For more information, see the following articles:
communication-services Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/whatsapp/pricing.md
+
+ Title: Advanced Messaging pricing in Azure Communication Services
+
+description: Learn about Communication Service WhatsApp pricing concepts.
++++ Last updated : 06/26/2023++++
+# Advanced Messaging pricing in Azure Communication Services
++
+Prices for Advanced Messaging in Azure Communication Services consist of two components: the usage fee and the channel fee.
+
+## Advanced Messaging usage fee
+
+The Azure Communication Services Advanced Messaging usage fee is based on the number of messages exchanged between the platform and an end user.
+
+| **Message Type** | **Message Fee** |
+||--|
+| Inbound Message | \$0.005/message |
+| Outbound Message | \$0.005/message |
+
+## Advanced Messaging channel price
+
+**WhatsApp**
+
+When you connect your WhatsApp Business account to Azure, Azure Communication Services becomes the billing entity for your WhatsApp usage. WhatsApp sets these rates, and they're included in your Azure bill. The following information summarizes the key aspects of WhatsApp pricing. WhatsApp describes its pricing in detail in [Conversation-Based Pricing](https://developers.facebook.com/docs/whatsapp/pricing).
+
+WhatsApp charges per conversation, not per individual message. Conversations are message threads between a business and its customers that last 24 or 72 hours, based on the conversation category. Each conversation falls into one of the following categories:
+
+- **Marketing**: Marketing conversations include promotions or offers, informational updates, or invitations for customers to respond or take action.
+- **Utility**: Utility conversations facilitate a specific, agreed-upon request or transaction, or update a customer about an ongoing transaction. These conversations might include transaction confirmations, transaction updates, or post-purchase notifications.
+- **Authentication**: Authentication conversations enable you to authenticate users with one-time passcodes, potentially at multiple steps in the login process (for example, account verification, account recovery, integrity challenges).
+- **Service**: Service conversations help you resolve customer inquiries.
+
+For service conversations, WhatsApp provides 1,000 free conversations each month across all business phone numbers. Marketing, utility and authentication conversations aren't part of the free tier.
+
+WhatsApp rates vary based on conversation category and country/region rate. Rates vary between \$0.003 and \$0.1597 depending on the category and country/region. WhatsApp provides a detailed explanation of their pricing, including the current rate card here: [Conversation-Based Pricing](https://developers.facebook.com/docs/whatsapp/pricing).
+
+## Pricing example: Contoso sends appointment reminders to their WhatsApp customers
+
+Contoso provides a virtual visit solution for its patients. Contoso schedules the visits and sends WhatsApp invites to all patients to remind them about their upcoming visit. WhatsApp classifies appointment reminders as **Utility Conversations**. In this case, each WhatsApp conversation is a single message.
+
+Contoso sends appointment reminders to 2,000 patients in North America each month and the pricing would be:
+
+**Advanced Messaging usage for messages:**
+
+2,000 WhatsApp Conversations = 2,000 messages x \$0.005/message = \$10 USD
+
+**WhatsApp Fees (rates subject to change):**
+
+2,000 WhatsApp Conversations \* \$0.015/utility conversation = \$30 USD
+
+To get the latest WhatsApp rates, refer to WhatsApp's pricing documentation: [Conversation-Based Pricing](https://developers.facebook.com/docs/whatsapp/pricing).
+
+## Pricing example: A WhatsApp user reaches out to a business for support
+
+Contoso is a business that provides a contact center for customers to seek product information and support. All these cases are closed within 24 hours and have an average of 20 messages each. Each case equals one WhatsApp Conversation. WhatsApp classifies contact center conversations as "Service Conversations."
+
+Contoso manages 2,000 cases in North America each month and the pricing would be:
+
+**Advanced Messaging usage for conversation:**
+
+2,000 WhatsApp Conversations \* 20 messages/conversation x \$0.005/message = \$200 USD
+
+**WhatsApp Fees (rates subject to change):**
+
+1,000 WhatsApp free conversations/month + 1,000 WhatsApp conversations \* \$0.0088/service conversation = \$8.80 USD
+
+To get the latest WhatsApp rates, refer to WhatsApp's pricing documentation: [Conversation-Based Pricing](https://developers.facebook.com/docs/whatsapp/pricing).
+
+## Next steps
+
+- [Register WhatsApp Business Account](../../../quickstarts/advanced-messaging/whatsapp/connect-whatsapp-business-account.md)
+- [Advanced Messaging for WhatsApp Terms of Services](./whatsapp-terms-of-service.md)
+- [Trying WhatsApp Sandbox](../../../quickstarts//advanced-messaging/whatsapp/whatsapp-sandbox-quickstart.md)
+- [Get Started With Advanced Communication Messages SDK](../../../quickstarts//advanced-messaging/whatsapp/get-started.md)
+- [Messaging Policy](../../../concepts/sms/messaging-policy.md)
communication-services Whatsapp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/advanced-messaging/whatsapp/whatsapp-overview.md
The following documents help you get started with Advanced Messaging for WhatsAp
- [Get Started With Advanced Communication Messages SDK](../../../quickstarts//advanced-messaging/whatsapp/get-started.md) - [Handle Advanced Messaging Events](../../../quickstarts/advanced-messaging/whatsapp/handle-advanced-messaging-events.md) - [Messaging Policy](../../../concepts/sms/messaging-policy.md)
+- [Pricing for Advanced Messaging for WhatsApp](./pricing.md)
communication-services Phone Number Management For Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md
Use the below tables to find all the relevant information on number availability
| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | | :- | :- | :- | :- | : | | Toll-Free |- | - | - | Public Preview\* |
-| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+| Alphanumeric Sender ID\** | General Availability | - | - | - |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
More details on eligible subscription types are as follows:
| :- | | Australia |
+## Azure subscription billing locations where Australia alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions [!INCLUDE [Country Dropdown](../../includes/country-dropdown.md)]
communication-services Phone Number Management For Austria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-austria.md
Use the below tables to find all the relevant information on number availability
| :- | :- | :- | :- | : | | Toll-Free | - | - | General Availability | General Availability\* | | Local | - | - | General Availability | General Availability\* |
-| Alphanumeric Sender ID\** | Public Preview | - | - | - |
+| Alphanumeric Sender ID\** | General Availability | - | - | - |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
More details on eligible subscription types are as follows:
|Canada| |United Kingdom|
+## Azure subscription billing locations where Austria alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions [!INCLUDE [Country Dropdown](../../includes/country-dropdown.md)]
communication-services Phone Number Management For Canada https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-canada.md
Use the below tables to find all the relevant information on number availability
| :- | :- | :- | :- | : | | Toll-Free |General Availability | General Availability | General Availability | General Availability\* | | Local | - | - | General Availability | General Availability\* |
-| Alphanumeric Sender ID\** | General Availability | - | - | - |
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
communication-services Phone Number Management For Denmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-denmark.md
More details on eligible subscription types are as follows:
|Canada| |United Kingdom|
+## Azure subscription billing locations where Denmark alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions [!INCLUDE [Country Dropdown](../../includes/country-dropdown.md)]
communication-services Phone Number Management For Estonia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-estonia.md
More details on eligible subscription types are as follows:
\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
-## Azure subscription billing locations where Alphanumeric Sender ID is available
+## Azure subscription billing locations where Estonia Alphanumeric Sender ID is available
| Country/Region | | :- | |Australia|
communication-services Phone Number Management For France https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-france.md
More details on eligible subscription types are as follows:
|Italy| |United States|
+## Azure subscription billing locations where France alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions
communication-services Phone Number Management For Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-germany.md
More details on eligible subscription types are as follows:
|Canada| |United Kingdom|
+## Azure subscription billing locations where Germany alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions [!INCLUDE [Country Dropdown](../../includes/country-dropdown.md)]
communication-services Phone Number Management For Ireland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-ireland.md
Use the below tables to find all the relevant information on number availability
| :- | :- | :- | :- | : | | Toll-Free |- | - | General Availability | General Availability\* | | Local | - | - | General Availability | General Availability\* |
-|Alphanumeric Sender ID\**|Public Preview|-|-|-|
+|Alphanumeric Sender ID\**|General Availability|-|-|-|
\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details.
More details on eligible subscription types are as follows:
|Canada| |United Kingdom|
+## Azure subscription billing locations where Ireland alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions
communication-services Phone Number Management For Italy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-italy.md
More details on eligible subscription types are as follows:
|United Kingdom| |United States|
+## Azure subscription billing locations where Italy alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Phone Number Management For Latvia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-latvia.md
More details on eligible subscription types are as follows:
\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application.
-## Azure subscription billing locations where Alphanumeric Sender ID is available
+## Azure subscription billing locations where Latvia Alphanumeric Sender ID is available
| Country/Region | | :- | |Australia|
communication-services Phone Number Management For Netherlands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-netherlands.md
More details on eligible subscription types are as follows:
|Canada| |United Kingdom|
+## Azure subscription billing locations where Netherlands alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Phone Number Management For Poland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-poland.md
More details on eligible subscription types are as follows:
|United Kingdom| |United States|
+## Azure subscription billing locations where Poland alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
+ ## Find information about other countries/regions
communication-services Phone Number Management For Portugal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-portugal.md
More details on eligible subscription types are as follows:
|Portugal| |United States*|
-\*Alphanumeric Sender ID only
-
+## Azure subscription billing locations where Portugal alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Phone Number Management For Spain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-spain.md
More details on eligible subscription types are as follows:
|Spain| |United States*|
-\* Alphanumeric Sender ID only
+## Azure subscription billing locations where Spain alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Phone Number Management For Sweden https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-sweden.md
More details on eligible subscription types are as follows:
|United Kingdom| |United States| --
+## Azure subscription billing locations where Sweden alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Phone Number Management For Switzerland https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-switzerland.md
More details on eligible subscription types are as follows:
|Canada| |United Kingdom| --
+## Azure subscription billing locations where Switzerland alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Phone Number Management For United Kingdom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-kingdom.md
More details on eligible subscription types are as follows:
|United Kingdom| |United States|
+## Azure subscription billing locations where United Kingdom alphanumeric sender IDs are available
+| Country/Region |
+| :- |
+| Australia |
+| Austria |
+| Denmark |
+| France |
+| Germany |
+| India |
+| Ireland |
+| Italy |
+| Netherlands |
+| Poland |
+| Portugal |
+| Puerto Rico |
+| Spain |
+| Sweden |
+| Switzerland |
+| United Kingdom |
+| United States |
## Find information about other countries/regions
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Key features of Azure Communication Services SMS SDKs include:
Sending SMS to any recipient requires getting a phone number. Choosing the right number type is critical to the success of your messaging campaign. A few of the factors to consider when choosing a number type include the destination(s) of the message, the throughput requirement of your messaging campaign, and the timeline when you want to start sending messages. Azure Communication Services enables you to send SMS using a variety of sender types - toll-free numbers (1-8XX), short codes (12345), and alphanumeric sender IDs (CONTOSO). The following table walks you through the features of each number type:
-|Factors | Toll-Free| Short Code | Alphanumeric Sender ID|
-||-||--|
-|**Description**|Toll free numbers are telephone numbers with distinct three-digit codes that can be used for business to consumer communication without any charge to the consumer| Short codes are 5-6 digit numbers used for business to consumer messaging such as alerts, notifications, and marketing | Alphanumeric Sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for a variety of use cases like one-time passcodes, marketing alerts, and flight status notifications. There are two types of alphanumeric sender IDs: **Dynamic alphanumeric sender ID:** Supported in countries that do not require registration for use. Dynamic alphanumeric sender IDs can be instantly provisioned. **Pre-registered alphanumeric sender ID:** Supported in countries that require registration for use. Pre-registered alphanumeric sender IDs are typically provisioned in 4-5 weeks. |
-|**Format**|+1 (8XX) XYZ PQRS| 12345 | CONTOSO* |
-|**SMS support**|Two-way SMS| Two-way SMS | One-way outbound SMS |
-|**Calling support**|Yes| No | No |
-|**Provisioning time**| 5-6 weeks| 6-8 weeks | Instant |
-|**Throughput** | 200 messages/min (can be increased upon request)| 6000 messages/ min (can be increased upon request) | 600 messages/ min (can be increased upon request)|
-|**Supported Destinations**| United States, Canada, Puerto Rico| United States | Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia, Norway, Finland, Slovakia, Slovenia, Czech Republic|
-|**Get started**|[Get a toll-free number](../../quickstarts/telephony/get-phone-number.md)|[Get a short code](../../quickstarts/sms/apply-for-short-code.md) | [Enable alphanumeric sender ID](../../quickstarts/sms/enable-alphanumeric-sender-id.md) |
+|Factors | Toll-Free| Short Code | Dynamic Alphanumeric Sender ID| Pre-registered Alphanumeric Sender ID|
+||-||--|--|
+|**Description**|Toll free numbers are telephone numbers with distinct three-digit codes that can be used for business to consumer communication without any charge to the consumer| Short codes are 5-6 digit numbers used for business to consumer messaging such as alerts, notifications, and marketing | Alphanumeric Sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for a variety of use cases like one-time passcodes, marketing alerts, and flight status notifications. Dynamic alphanumeric sender ID is supported in countries that do not require registration for use.| Alphanumeric Sender IDs are displayed as a custom alphanumeric phrase like the company's name (CONTOSO, MyCompany) on the recipient handset. Alphanumeric sender IDs can be used for a variety of use cases like one-time passcodes, marketing alerts, and flight status notifications. Pre-registered alphanumeric sender ID is supported in countries that require registration for use. |
+|**Format**|+1 (8XX) XYZ PQRS| 12345 | CONTOSO* |CONTOSO* |
+|**SMS support**|Two-way SMS| Two-way SMS | One-way outbound SMS |One-way outbound SMS |
+|**Calling support**|Yes| No | No |No |
+|**Provisioning time**| 5-6 weeks| 6-8 weeks | Instant | 4-5 weeks|
+|**Throughput** | 200 messages/min (can be increased upon request)| 6000 messages/ min (can be increased upon request) | 600 messages/ min (can be increased upon request)|600 messages/ min (can be increased upon request)|
+|**Supported Destinations**| United States, Canada, Puerto Rico| United States | Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia| Norway, Finland, Slovakia, Slovenia, Czech Republic|
+|**Get started**|[Get a toll-free number](../../quickstarts/telephony/get-phone-number.md)|[Get a short code](../../quickstarts/sms/apply-for-short-code.md) | [Enable alphanumeric sender ID](../../quickstarts/sms/enable-alphanumeric-sender-id.md) |[Enable alphanumeric sender ID](../../quickstarts/sms/enable-alphanumeric-sender-id.md) |
\* See [Alphanumeric sender ID FAQ](./sms-faq.md#alphanumeric-sender-id) for detailed formatting requirements.
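Once an alphanumeric sender ID is enabled on your Communication Services resource, sending a message looks roughly like the following sketch. It assumes the `azure-communication-sms` Python SDK, a connection string exposed through an environment variable, and a recipient in a supported destination country; adjust the sender ID and phone number for your own resource.

```python
# A minimal sketch: sending an SMS with a dynamic alphanumeric sender ID.
# Assumes the azure-communication-sms package and a Communication Services
# connection string in the ACS_CONNECTION_STRING environment variable.
import os

from azure.communication.sms import SmsClient

sms_client = SmsClient.from_connection_string(os.environ["ACS_CONNECTION_STRING"])

results = sms_client.send(
    from_="CONTOSO",                 # alphanumeric sender ID (one-way outbound only)
    to=["+4512345678"],              # recipient in a supported destination country
    message="Your one-time passcode is 123456.",
    enable_delivery_report=True,
    tag="otp-campaign",              # optional custom tag included in delivery reports
)

for result in results:
    print(result.to, result.successful, result.message_id)
```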
Sending SMS to any recipient requires getting a phone number. Choosing the right
> [!div class="nextstepaction"] > [Get started with sending sms](../../quickstarts/sms/send.md)
-The following documents may be interesting to you:
+The following documents might be interesting to you:
- Check SMS FAQ for questions regarding [SMS](../sms/sms-faq.md) - Familiarize yourself with the [SMS SDK](../sms/sdk-features.md)
cost-management-billing Create Free Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/create-free-services.md
Previously updated : 10/02/2022 Last updated : 10/20/2022
Your Azure free account includes a *specified quantity* of free services for 12
We recommend that you use the [Free service page](https://go.microsoft.com/fwlink/?linkid=859151) in the Azure portal to create free services. Or you can sign in to the [Azure portal](https://portal.azure.com), and search for **free services**. If you create resources outside of the Free services pages, free tiers or free resource configuration options aren't always selected by default. To avoid charges, ensure that you create resources from the Free services page. And then when you create a resource, be sure to select the tier that's free.
-![Screenshot that shows free services page](./media/create-free-services/billing-freeservices-grid.png)
## Services can be created in any region
data-factory Airflow Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-pricing.md
Previously updated : 01/24/2023 Last updated : 10/20/2023
data-factory Author Management Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-management-hub.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Management hub in Azure Data Factory
data-factory Author Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-visually.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Visual authoring in Azure Data Factory
data-factory Azure Ssis Integration Runtime Express Virtual Network Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-express-virtual-network-injection.md
description: Learn how to configure a virtual network for express injection of A
Previously updated : 12/16/2022 Last updated : 10/20/2023
data-factory Azure Ssis Integration Runtime Package Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-package-store.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Manage packages with Azure-SSIS Integration Runtime package store
Once your Azure-SSIS IR is provisioned, you can connect to it to browse its pack
:::image type="content" source="media/azure-ssis-integration-runtime-package-store/ssms-package-store-connect.png" alt-text="Connect to Azure-SSIS IR":::
-On the **Object Explorer** window of SSMS, select **Azure-SSIS Integration Runtime** in the **Connect** drop-down menu. Next, sign in to Azure and select the relevant subscription, ADF, and Azure-SSIS IR that you've provisioned with package stores. Your Azure-SSIS IR will appear with **Running Packages** and **Stored Packages** nodes underneath. Expand the **Stored Packages** node to see your package stores underneath. Expand your package stores to see folders and packages underneath. You may be asked to enter the access credentials for your package stores, if SSMS fails to connect to them automatically. For example, if you expand a package store on top of MSDB, you may be asked to connect to your Azure SQL Managed Instance first.
+On the **Object Explorer** window of SSMS, select **Azure-SSIS Integration Runtime** in the **Connect** drop-down menu. Next, sign in to Azure and select the relevant subscription, ADF, and Azure-SSIS IR that you've provisioned with package stores. Your Azure-SSIS IR will appear with **Running Packages** and **Stored Packages** nodes underneath. Expand the **Stored Packages** node to see your package stores underneath. Expand your package stores to see folders and packages underneath. You might be asked to enter the access credentials for your package stores, if SSMS fails to connect to them automatically. For example, if you expand a package store on top of MSDB, you might be asked to connect to your Azure SQL Managed Instance first.
:::image type="content" source="media/azure-ssis-integration-runtime-package-store/ssms-package-store-connect2.png" alt-text="Connect to Azure SQL Managed Instance":::
data-factory Choose The Right Integration Runtime Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/choose-the-right-integration-runtime-configuration.md
Previously updated : 01/12/2023 Last updated : 10/20/2023 # Choose the right integration runtime configuration for your scenario
-The integration runtime is a very important part of the infrastructure for the data integration solution provided by Azure Data Factory. This requires you to fully consider how to adapt to the existing network structure and data source at the beginning of designing the solution, as well as consider performance, security and cost.
+The integration runtime is an important part of the infrastructure for the data integration solution provided by Azure Data Factory. This requires you to fully consider how to adapt to the existing network structure and data source at the beginning of designing the solution, as well as consider performance, security, and cost.
## Comparison of different types of integration runtimes
-In Azure Data Factory, we have three kinds of integration runtimes: the Azure integration runtime, the self-hosted integration runtime and the Azure-SSIS integration runtime. For the Azure integration runtime, you can also enable a managed virtual network which makes its architecture different than the global Azure integration runtime.
+In Azure Data Factory, we have three kinds of integration runtimes: the Azure integration runtime, the self-hosted integration runtime and the Azure-SSIS integration runtime. For the Azure integration runtime, you can also enable a managed virtual network, which makes its architecture different than the global Azure integration runtime.
-This table lists the differences in some aspects of all integration runtimes, you can choose the appropriate one according to your actual needs. For the Azure-SSIS integration runtime, you can learn more in the article [Create an Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md).
+This table lists the differences in some aspects of all integration runtimes. You can choose the appropriate one according to your actual needs. For the Azure-SSIS integration runtime, you can learn more in the article [Create an Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md).
| Feature | Azure integration runtime | Azure integration runtime with managed virtual network | Self-hosted integration runtime | | - | - | | - | | Managed compute | Y | Y | N |
-| Auto-scale | Y | Y* | N |
+| Autoscale | Y | Y* | N |
| Dataflow | Y | Y | N | | On-premises data access | N | Y** | Y | | Private Link/Private Endpoint | N | Y*** | Y | | Custom component/driver | N | N | Y |
- \* When time-to-live (TTL) is enabled, the compute size of integration runtime is reserved according to the configuration and can't be auto-scaled.
+ \* When time-to-live (TTL) is enabled, the compute size of integration runtime is reserved according to the configuration and can't be autoscaled.
- ** On-premises environments must be connected to Azure via Express Route or VPN. Custom components and drivers are not supported.
+ ** On-premises environments must be connected to Azure via Express Route or VPN. Custom components and drivers aren't supported.
*** The private endpoints are managed by the Azure Data Factory service.
-It is very important to choose an appropriate type of integration runtime. Not only must it be suitable for your existing architecture and requirements for data integration, but you also need to consider how to further meet growing business needs and any future increase in workload. But there is no one-size-fits-all approach. The following consideration can help you navigate the decision:
+It's important to choose an appropriate type of integration runtime. Not only must it be suitable for your existing architecture and requirements for data integration, but you also need to consider how to further meet growing business needs and any future increase in workload. But there's no one-size-fits-all approach. The following considerations can help you navigate the decision:
1. What are the integration runtime and data store locations?<br> The integration runtime location defines the location of its back-end compute, and where the data movement, activity dispatching and data transformation are performed. To obtain better performance and transmission efficiency, the integration runtime should be closer to the data source or sink.
- - The Azure integration runtime automatically detects the most suitable location based on some rules (also known as auto-resolve). See details here: [Azure IR location](concepts-integration-runtime.md#azure-ir-location).
+ - The Azure integration runtime automatically detects the most suitable location based on some rules (also known as autoresolve). See details here: [Azure IR location](concepts-integration-runtime.md#azure-ir-location).
- The Azure integration runtime with a managed virtual network has the same region as your data factory. It can't be auto resolved like the Azure integration runtime. - The self-hosted integration runtime is located in the region of your local machines or Azure virtual machines. 2. Is the data store publicly accessible?<br>
- If the data store is publicly accessible, the difference between the different types of integration runtimes is not very large. If the store is behind a firewall or in a private network such as an on-premises or virtual network, the better choices are the Azure integration runtime with a managed virtual network or the self-hosted integration runtime.
+ If the data store is publicly accessible, the difference between the different types of integration runtimes isn't large. If the store is behind a firewall or in a private network such as an on-premises or virtual network, the better choices are the Azure integration runtime with a managed virtual network or the self-hosted integration runtime.
- - There is some additional setup needed such as Private Link Service and Load Balancer when using the Azure integration runtime with a managed virtual network to access a data store behind a firewall or in a private network. You can refer to this tutorial [Access on-premises SQL Server from Data Factory Managed VNet using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md) as an example. If the data store is in an on-premises environment, then the on-premises must be connected to Azure via Express Route or an S2S VPN.
- - The self-hosted integration runtime is more flexible and does not require additional settings, Express Route, or VPN. But you need to provide and maintain the machine by yourself.
+ - There's some extra setup needed such as Private Link Service and Load Balancer when using the Azure integration runtime with a managed virtual network to access a data store behind a firewall or in a private network. You can refer to this tutorial [Access on-premises SQL Server from Data Factory Managed VNet using Private Endpoint](tutorial-managed-virtual-network-on-premise-sql-server.md) as an example. If the data store is in an on-premises environment, then the on-premises must be connected to Azure via Express Route or an S2S VPN.
+ - The self-hosted integration runtime is more flexible and doesn't require extra settings, Express Route, or VPN. But you need to provide and maintain the machine by yourself.
- You can also add the public IP addresses of the Azure integration runtime to the allowlist of your firewall and allow it to access the data store, but it's not a desirable solution in highly secure production environments. 3. What level of security do you require during data transmission?<br> If you need to process highly confidential data, you want to defend against, for example, man-in-the-middle attacks during data transmission. Then you can choose to use a Private Endpoint and Private Link to ensure data security. - You can create managed private endpoints to your data stores when using the Azure integration runtime with a managed virtual network. The private endpoints are maintained by the Azure Data Factory service within the managed virtual network.
- - You can also create private endpoints in your virtual network and the self-hosted integration runtime can leverage them to access data stores.
+ - You can also create private endpoints in your virtual network and the self-hosted integration runtime can use them to access data stores.
- The Azure integration runtime doesn't support Private Endpoint and Private Link. 4. What level of maintenance are you able to provide?<br> Maintaining infrastructure, servers, and equipment is one of the important tasks of the IT department of an enterprise. It usually takes a lot of time and effort.
- - You don't need to worry about the maintenance such as update, patch and version of the Azure integration runtime and the Azure integration runtime with a managed virtual network. The Azure Data Factory service will take care of all the maintenance efforts.
- - Because the self-hosted integration runtime is installed on customer machines, the maintenance must be taken care of by end users. You can, however, enable auto-update to automatically get the latest version of the self-hosted integration runtime whenever there is an update. To learn about how to enable auto-update and manage version control of the self-hosted integration runtime, you can refer to the article [Self-hosted integration runtime auto-update and expire notification](self-hosted-integration-runtime-auto-update.md). We also provide a diagnostic tool for the self-hosted integration runtime to health-check some common issues. To learn more about the diagnostic tool, refer to the article [Self-hosted integration runtime diagnostic tool](self-hosted-integration-runtime-diagnostic-tool.md). In addition, we recommend using Azure Monitor and Azure Log Analytics specifically to collect that data and enable a single pane of glass monitoring for your self-hosted integration runtimes. Learn more about configuring this in the article [Configure the self-hosted integration runtime for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md) for instructions.
+ - You don't need to worry about the maintenance such as update, patch, and version of the Azure integration runtime and the Azure integration runtime with a managed virtual network. The Azure Data Factory service takes care of all the maintenance efforts.
+ - Because the self-hosted integration runtime is installed on customer machines, the maintenance must be taken care of by end users. You can, however, enable autoupdate to automatically get the latest version of the self-hosted integration runtime whenever there's an update. To learn about how to enable autoupdate and manage version control of the self-hosted integration runtime, you can refer to the article [Self-hosted integration runtime autoupdate and expire notification](self-hosted-integration-runtime-auto-update.md). We also provide a diagnostic tool for the self-hosted integration runtime to health-check some common issues. To learn more about the diagnostic tool, refer to the article [Self-hosted integration runtime diagnostic tool](self-hosted-integration-runtime-diagnostic-tool.md). In addition, we recommend using Azure Monitor and Azure Log Analytics specifically to collect that data and enable a single pane of glass monitoring for your self-hosted integration runtimes. Learn more about configuring this in the article [Configure the self-hosted integration runtime for log analytics collection](how-to-configure-shir-for-log-analytics-collection.md) for instructions.
5. What concurrency requirements do you have?<br> When processing large-scale data, such as large-scale data migration, we hope to improve the efficiency and speed of processing as much as possible. Concurrency is often a major requirement for data integration.
- - The Azure integration runtime has the highest concurrency support among all integration runtime types. Data integration unit (DIU) is the unit of capability to run on Azure Data Factory. You can select the desired number of DIU for e.g. Copy activity. Within the scope of DIU, you can run multiple activities at the same time. For different region groups, we will have different upper limitations. Learn about the details of these limits in the article [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).
+ - The Azure integration runtime has the highest concurrency support among all integration runtime types. The data integration unit (DIU) is the unit of capability to run on Azure Data Factory. You can select the desired number of DIUs for an activity, for example, the Copy activity. Within the scope of DIU, you can run multiple activities at the same time. Different region groups have different upper limits. Learn about the details of these limits in the article [Data Factory limits](../azure-resource-manager/management/azure-subscription-service-limits.md#data-factory-limits).
- The Azure integration runtime with a managed virtual network has a similar mechanism to the Azure integration runtime, but due to some architectural constraints, the concurrency it can support is less than Azure integration runtime. - The concurrent activities that the self-hosted integration runtime can run depend on the machine size and cluster size. You can choose a larger machine or use more self-hosted integration nodes in the cluster if you need greater concurrency.
It is very important to choose an appropriate type of integration runtime. Not o
There are some functional differences between the types of integration runtimes. - Dataflow is supported by the Azure integration runtime and the Azure integration runtime with a managed virtual network. However, you can't run Dataflow using self-hosted integration runtime.
- - If you need to install custom components, such as ODBC drivers, a JVM, or a SQL Server certificate, the self-hosted integration runtime is your only option. Custom components are not supported by the Azure integration runtime or the Azure integration runtime with a managed virtual network.
+ - If you need to install custom components, such as ODBC drivers, a JVM, or a SQL Server certificate, the self-hosted integration runtime is your only option. Custom components aren't supported by the Azure integration runtime or the Azure integration runtime with a managed virtual network.
## Architecture for integration runtime
-Based on the characteristics of each integration runtime, different architectures are generally required to meet the business needs of data integration. The following are some typical architectures that can be used as a reference.
+Based on the characteristics of each integration runtime, different architectures are required to meet the business needs of data integration. The following are some typical architectures that can be used as a reference.
### Azure integration runtime
-The Azure integration runtime is a fully managed, auto-scaled compute that you can use to move data from Azure or non-Azure data sources.
+The Azure integration runtime is a fully managed, autoscaled compute that you can use to move data from Azure or non-Azure data sources.
:::image type="content" source="media/choosing-the-right-ir-configuration/integration-runtime-with-fully-managed.png" alt-text="Screenshot of integration runtime is a fully managed."::: 1. The traffic from the Azure integration runtime to data stores is through public network. 1. We provide a range of static public IP addresses for the Azure integration runtime and these IP addresses can be added to the allowlist of the target data store firewall. To learn more about how to get public IP addresses of the Azure Integration runtime, refer to the article [Azure Integration Runtime IP addresses](azure-integration-runtime-ip-addresses.md).
-1. The Azure integration runtime can be auto-resolved according to the region of the data source and data sink. Or you can choose a specific region. We recommend you choose the region closest to your data source or sink, which can provide better execution performance. Learn more about performance considerations in the article [Troubleshoot copy activity on Azure IR](copy-activity-performance-troubleshooting.md#troubleshoot-copy-activity-on-azure-ir).
+1. The Azure integration runtime can be autoresolved according to the region of the data source and data sink. Or you can choose a specific region. We recommend you choose the region closest to your data source or sink, which can provide better execution performance. Learn more about performance considerations in the article [Troubleshoot copy activity on Azure IR](copy-activity-performance-troubleshooting.md#troubleshoot-copy-activity-on-azure-ir).
### Azure integration runtime with managed virtual network
-When using the Azure integration runtime with a managed virtual network, you should use managed private endpoints to connect your data sources to ensure data security during transmission. With some additional settings such as Private Link Service and Load Balancer, managed private endpoints can also be used to access on-premises data sources.
+When using the Azure integration runtime with a managed virtual network, you should use managed private endpoints to connect your data sources to ensure data security during transmission. With some extra settings such as Private Link Service and Load Balancer, managed private endpoints can also be used to access on-premises data sources.
:::image type="content" source="media/choosing-the-right-ir-configuration/integration-runtime-with-managed-virtual-network.png" alt-text="Screenshot of integration runtime with a managed virtual network."::: 1. A managed private endpoint can't be reused across different environments. You need to create a set of managed private endpoints for each environment. For all data sources supported by managed private endpoints, refer to the article [Supported data sources and services](managed-virtual-network-private-endpoint.md#supported-data-sources-and-services). 1. You can also use managed private endpoints for connections to external compute resources that you want to orchestrate such as Azure Databricks and Azure Functions. To see the full list of supported external compute resources, refer to the article [Supported data sources and services](managed-virtual-network-private-endpoint.md#supported-data-sources-and-services).
-1. Managed virtual network is managed by the Azure Data Factory service. VNET peering is not supported between a managed virtual network and a customer virtual network.
+1. Managed virtual network is managed by the Azure Data Factory service. VNET peering isn't supported between a managed virtual network and a customer virtual network.
1. Customers can't directly change configurations such as the NSG rule on a managed virtual network. 1. If any property of a managed private endpoint is different between environments, you can override it by parameterizing that property and providing the respective value during deployment. See details in the article [Best practices for CI/CD](continuous-integration-delivery.md#best-practices-for-cicd).
Since the self-hosted integration runtime runs on a customer managed machine, in
:::image type="content" source="media/choosing-the-right-ir-configuration/self-hosted-integration-runtime-sharing.png" alt-text="Screenshot of using the shared functions of the self-hosted integration runtime for different projects in the same environment.":::
-1. Express Route is not mandatory. Without Express Route, the data will not reach the sink through private networks such as a virtual network or a private link, but through the public network.
+1. Express Route isn't mandatory. Without Express Route, the data won't reach the sink through private networks such as a virtual network or a private link, but through the public network.
1. If the on-premises network is connected to the Azure virtual network via Express Route or VPN, then the self-hosted integration runtime can be installed on virtual machines in a Hub VNET.
-1. The hub-spoke virtual network architecture can be used not only for different projects but also for different environments (Prod, QA and Dev).
+1. The hub-spoke virtual network architecture can be used not only for different projects but also for different environments (Prod, QA, and Dev).
1. The self-hosted integration runtime can be shared with multiple data factories. The primary data factory references it as a shared self-hosted integration runtime and others refer to it as a linked self-hosted integration runtime. A physical self-hosted integration runtime can have multiple nodes in a cluster. Communication only happens between the primary self-hosted integration runtime and primary node, with work being distributed to secondary nodes from the primary node. 1. Credentials of on-premises data stores can be stored either in the local machine or an Azure Key Vault. Azure Key Vault is highly recommended.
-1. Communication between the self-hosted integration runtime and data factory can go through a private link. But currently, interactive authoring via Azure Relay and automatically updating to the latest version from the download center don't support private link. The traffic goes through the firewall of on-premises environment. For more details, refer to the article [Azure Private Link for Azure Data Factory](data-factory-private-link.md).
+1. Communication between the self-hosted integration runtime and data factory can go through a private link. But currently, interactive authoring via Azure Relay and automatically updating to the latest version from the download center don't support private link. The traffic goes through the firewall of on-premises environment. For more information, see the article [Azure Private Link for Azure Data Factory](data-factory-private-link.md).
1. The private link is only required for the primary data factory. All traffic goes through primary data factory, then to other data factories.
-1. The same name of the self-hosted integration runtime across all stages of CI/CD is expected. You can consider using a ternary factory just to contain the shared self-hosted integration runtimes and use linked self-hosted integration runtime in the various production stages. For more details, refer to the article [Continuous integration and delivery](continuous-integration-delivery.md).
+1. The same name of the self-hosted integration runtime across all stages of CI/CD is expected. You can consider using a ternary factory just to contain the shared self-hosted integration runtimes and use linked self-hosted integration runtime in the various production stages. For more information, see the article [Continuous integration and delivery](continuous-integration-delivery.md).
1. You can control how the traffic goes to the download center and Azure Relay using configurations of your on-premises network and Express Route, either through an on-premises proxy or hub virtual network. Make sure the traffic is allowed by proxy or NSG rules.
-1. If you want to secure communication between self-hosted integration runtime nodes, you can enable remote access from the intranet with a TLS/SSL certificate. For more details, refer to the article [Enable remote access from intranet with TLS/SSL certificate (Advanced)](tutorial-enable-remote-access-intranet-tls-ssl-certificate.md).
+1. If you want to secure communication between self-hosted integration runtime nodes, you can enable remote access from the intranet with a TLS/SSL certificate. For more information, see the article [Enable remote access from intranet with TLS/SSL certificate (Advanced)](tutorial-enable-remote-access-intranet-tls-ssl-certificate.md).
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
The following JSON defines a Linux-based on-demand HDInsight linked service. The
> [!IMPORTANT] > The HDInsight cluster creates a **default container** in the blob storage you specified in the JSON (**linkedServiceName**). HDInsight does not delete this container when the cluster is deleted. This behavior is by design. With on-demand HDInsight linked service, a HDInsight cluster is created every time a slice needs to be processed unless there is an existing live cluster (**timeToLive**) and is deleted when the processing is done. >
-> As more activity runs, you see many containers in your Azure blob storage. If you do not need them for troubleshooting of the jobs, you may want to delete them to reduce the storage cost. The names of these containers follow a pattern: `adf**yourfactoryorworkspacename**-**linkedservicename**-datetimestamp`. Use tools such as [Microsoft Azure Storage Explorer](https://storageexplorer.com/) to delete containers in your Azure blob storage.
+> As more activity runs, you see many containers in your Azure blob storage. If you do not need them for troubleshooting of the jobs, you might want to delete them to reduce the storage cost. The names of these containers follow a pattern: `adf**yourfactoryorworkspacename**-**linkedservicename**-datetimestamp`. Use tools such as [Microsoft Azure Storage Explorer](https://storageexplorer.com/) to delete containers in your Azure blob storage.
#### Properties
The following JSON defines a Linux-based on-demand HDInsight linked service. The
| clusterSize | Number of worker/data nodes in the cluster. The HDInsight cluster is created with 2 head nodes along with the number of worker nodes you specify for this property. The nodes are of size Standard_D3 that has 4 cores, so a 4 worker node cluster takes 24 cores (4\*4 = 16 cores for worker nodes, plus 2\*4 = 8 cores for head nodes). See [Set up clusters in HDInsight with Hadoop, Spark, Kafka, and more](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md) for details. | Yes | | linkedServiceName | Azure Storage linked service to be used by the on-demand cluster for storing and processing data. The HDInsight cluster is created in the same region as this Azure Storage account. Azure HDInsight has limitation on the total number of cores you can use in each Azure region it supports. Make sure you have enough core quotas in that Azure region to meet the required clusterSize. For details, refer to [Set up clusters in HDInsight with Hadoop, Spark, Kafka, and more](../hdinsight/hdinsight-hadoop-provision-linux-clusters.md)<p>Currently, you cannot create an on-demand HDInsight cluster that uses an Azure Data Lake Storage (Gen 2) as the storage. If you want to store the result data from HDInsight processing in an Azure Data Lake Storage (Gen 2), use a Copy Activity to copy the data from the Azure Blob Storage to the Azure Data Lake Storage (Gen 2). </p> | Yes | | clusterResourceGroup | The HDInsight cluster is created in this resource group. | Yes |
-| timetolive | The allowed idle time for the on-demand HDInsight cluster. Specifies how long the on-demand HDInsight cluster stays alive after completion of an activity run if there are no other active jobs in the cluster. The minimal allowed value is 5 minutes (00:05:00).<br/><br/>For example, if an activity run takes 6 minutes and timetolive is set to 5 minutes, the cluster stays alive for 5 minutes after the 6 minutes of processing the activity run. If another activity run is executed with the 6-minutes window, it is processed by the same cluster.<br/><br/>Creating an on-demand HDInsight cluster is an expensive operation (could take a while), so use this setting as needed to improve performance of the service by reusing an on-demand HDInsight cluster.<br/><br/>If you set timetolive value to 0, the cluster is deleted as soon as the activity run completes. Whereas, if you set a high value, the cluster may stay idle for you to log on for some troubleshooting purpose but it could result in high costs. Therefore, it is important that you set the appropriate value based on your needs.<br/><br/>If the timetolive property value is appropriately set, multiple pipelines can share the instance of the on-demand HDInsight cluster. | Yes |
+| timetolive | The allowed idle time for the on-demand HDInsight cluster. Specifies how long the on-demand HDInsight cluster stays alive after completion of an activity run if there are no other active jobs in the cluster. The minimal allowed value is 5 minutes (00:05:00).<br/><br/>For example, if an activity run takes 6 minutes and timetolive is set to 5 minutes, the cluster stays alive for 5 minutes after the 6 minutes of processing the activity run. If another activity run is executed with the 6-minutes window, it is processed by the same cluster.<br/><br/>Creating an on-demand HDInsight cluster is an expensive operation (could take a while), so use this setting as needed to improve performance of the service by reusing an on-demand HDInsight cluster.<br/><br/>If you set timetolive value to 0, the cluster is deleted as soon as the activity run completes. Whereas, if you set a high value, the cluster can stay idle for you to log on for some troubleshooting purpose but it could result in high costs. Therefore, it is important that you set the appropriate value based on your needs.<br/><br/>If the timetolive property value is appropriately set, multiple pipelines can share the instance of the on-demand HDInsight cluster. | Yes |
| clusterType | The type of the HDInsight cluster to be created. Allowed values are "hadoop" and "spark". If not specified, default value is hadoop. Enterprise Security Package enabled cluster cannot be created on-demand, instead use an [existing cluster/ bring your own compute](#azure-hdinsight-linked-service). | No | | version | Version of the HDInsight cluster. If not specified, it's using the current HDInsight defined default version. | No | | hostSubscriptionId | The Azure subscription ID used to create HDInsight cluster. If not specified, it uses the Subscription ID of your Azure login context. | No |
You can specify the sizes of head, data, and zookeeper nodes using the following
| zookeeperNodeSize | Specifies the size of the Zoo Keeper node. The default value is: Standard_D3. | No | * Specifying node sizes
-See the [Sizes of Virtual Machines](../virtual-machines/sizes.md) article for string values you need to specify for the properties mentioned in the previous section. The values need to conform to the **CMDLETs & APIS** referenced in the article. As you can see in the article, the data node of Large (default) size has 7-GB memory, which may not be good enough for your scenario.
+See the [Sizes of Virtual Machines](../virtual-machines/sizes.md) article for string values you need to specify for the properties mentioned in the previous section. The values need to conform to the **CMDLETs & APIS** referenced in the article. As you can see in the article, the data node of Large (default) size has 7-GB memory, which might not be good enough for your scenario.
If you want to create D4 sized head nodes and worker nodes, specify **Standard_D4** as the value for headNodeSize and dataNodeSize properties.
If you want to create D4 sized head nodes and worker nodes, specify **Standard_D
"dataNodeSize": "Standard_D4", ```
-If you specify a wrong value for these properties, you may receive the following **error:** Failed to create cluster. Exception: Unable to complete the cluster create operation. Operation failed with code '400'. Cluster left behind state: 'Error'. Message: 'PreClusterCreationValidationFailure'. When you receive this error, ensure that you are using the **CMDLET & APIS** name from the table in the [Sizes of Virtual Machines](../virtual-machines/sizes.md) article.
+If you specify a wrong value for these properties, you might receive the following **error:** Failed to create cluster. Exception: Unable to complete the cluster create operation. Operation failed with code '400'. Cluster left behind state: 'Error'. Message: 'PreClusterCreationValidationFailure'. When you receive this error, ensure that you are using the **CMDLET & APIS** name from the table in the [Sizes of Virtual Machines](../virtual-machines/sizes.md) article.
### Bring your own compute environment In this type of configuration, users can register an already existing computing environment as a linked service. The computing environment is managed by the user and the service uses it to execute the activities.
You can create an Azure HDInsight linked service to register your own HDInsight
| clusterUri | The URI of the HDInsight cluster. | Yes | | username | Specify the name of the user to be used to connect to an existing HDInsight cluster. | Yes | | password | Specify password for the user account. | Yes |
-| linkedServiceName | Name of the Azure Storage linked service that refers to the Azure blob storage used by the HDInsight cluster. <p>Currently, you cannot specify an Azure Data Lake Storage (Gen 2) linked service for this property. If the HDInsight cluster has access to the Data Lake Store, you may access data in the Azure Data Lake Storage (Gen 2) from Hive/Pig scripts. </p> | Yes |
+| linkedServiceName | Name of the Azure Storage linked service that refers to the Azure blob storage used by the HDInsight cluster. <p>Currently, you cannot specify an Azure Data Lake Storage (Gen 2) linked service for this property. If the HDInsight cluster has access to the Data Lake Store, you can access data in the Azure Data Lake Storage (Gen 2) from Hive/Pig scripts. </p> | Yes |
| isEspEnabled | Specify '*true*' if the HDInsight cluster is [Enterprise Security Package](../hdinsight/domain-joined/apache-domain-joined-architecture.md) enabled. Default is '*false*'. | No | | connectVia | The Integration Runtime to be used to dispatch the activities to this linked service. You can use Azure Integration Runtime or Self-hosted Integration Runtime. If not specified, it uses the default Azure Integration Runtime. <br />For Enterprise Security Package (ESP) enabled HDInsight cluster use a self-hosted integration runtime, which has a line of sight to the cluster or it should be deployed inside the same Virtual Network as the ESP HDInsight cluster. | No |
You can create **Azure Databricks linked service** to register Databricks worksp
| domain | Specify the Azure Region accordingly based on the region of the Databricks workspace. Example: https://eastus.azuredatabricks.net | Yes | | accessToken | Access token is required for the service to authenticate to Azure Databricks. Access token needs to be generated from the databricks workspace. More detailed steps to find the access token can be found [here](/azure/databricks/dev-tools/api/latest/authentication#generate-token) | No | | MSI | Use the service's managed identity (system-assigned) to authenticate to Azure Databricks. You do not need Access Token when using 'MSI' authentication. More details about Managed Identity authentication can be found [here](https://techcommunity.microsoft.com/t5/azure-data-factory/azure-databricks-activities-now-support-managed-identity/ba-p/1922818) | No |
-| existingClusterId | Cluster ID of an existing cluster to run all jobs on this. This should be an already created Interactive Cluster. You may need to manually restart the cluster if it stops responding. Databricks suggest running jobs on new clusters for greater reliability. You can find the Cluster ID of an Interactive Cluster on Databricks workspace -> Clusters -> Interactive Cluster Name -> Configuration -> Tags. [More details](https://docs.databricks.com/user-guide/clusters/tags.html) | No
+| existingClusterId | Cluster ID of an existing cluster to run all jobs on this. This should be an already created Interactive Cluster. You might need to manually restart the cluster if it stops responding. Databricks suggest running jobs on new clusters for greater reliability. You can find the Cluster ID of an Interactive Cluster on Databricks workspace -> Clusters -> Interactive Cluster Name -> Configuration -> Tags. [More details](https://docs.databricks.com/user-guide/clusters/tags.html) | No
| instancePoolId | Instance Pool ID of an existing pool in databricks workspace. | No | | newClusterVersion | The Spark version of the cluster. It creates a job cluster in databricks. | No | | newClusterNumOfWorker| Number of worker nodes that this cluster should have. A cluster has one Spark Driver and num_workers Executors for a total of num_workers + 1 Spark nodes. A string formatted Int32, like "1" means numOfWorker is 1 or "1:10" means autoscale from 1 as min and 10 as max. | No |
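As a rough illustration of these properties in code (not taken from the article), the following sketch registers an Azure Databricks linked service with the `azure-mgmt-datafactory` Python SDK. The model and parameter names are assumptions based on that SDK; the resource group, factory name, workspace URL, access token, and cluster ID are placeholders.

```python
# A rough sketch (see the lead-in for assumptions): registering an Azure
# Databricks linked service with the azure-mgmt-datafactory SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureDatabricksLinkedService,
    LinkedServiceResource,
    SecureString,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

databricks_ls = AzureDatabricksLinkedService(
    domain="https://eastus.azuredatabricks.net",       # region of the Databricks workspace
    access_token=SecureString(value="<databricks-access-token>"),
    existing_cluster_id="<interactive-cluster-id>",     # or supply new_cluster_* properties instead
)

adf_client.linked_services.create_or_update(
    "<resource-group>",
    "<data-factory-name>",
    "AzureDatabricksLinkedService",
    LinkedServiceResource(properties=databricks_ls),
)
```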
data-factory Compute Optimized Data Flow Retire https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-optimized-data-flow-retire.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Retirement of data flow compute optimized option
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
Previously updated : 01/20/2023 Last updated : 10/20/2023
Azure Data Factory's Managed Airflow service is a simple and efficient way to cr
## When to use Managed Airflow?
-Azure Data Factory offers [Pipelines](concepts-pipelines-activities.md) to visually orchestrate data processes (UI-based authoring). While Managed Airflow, offers Airflow based python DAGs (python code-centric authoring) for defining the data orchestration process. If you have the Airflow background, or are currently using Apache Airflow, you may prefer to use the Managed Airflow instead of the pipelines. On the contrary, if you wouldn't like to write/ manage python-based DAGs for data process orchestration, you may prefer to use pipelines.
+Azure Data Factory offers [Pipelines](concepts-pipelines-activities.md) to visually orchestrate data processes (UI-based authoring), while Managed Airflow offers Airflow-based Python DAGs (Python code-centric authoring) for defining the data orchestration process. If you have an Airflow background or are currently using Apache Airflow, you might prefer to use Managed Airflow instead of pipelines. Conversely, if you don't want to write and manage Python-based DAGs for data process orchestration, you might prefer to use pipelines.
With Managed Airflow, Azure Data Factory now offers multi-orchestration capabilities spanning visual, code-centric, and OSS orchestration requirements.
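For orientation, a Managed Airflow DAG is just standard Apache Airflow Python code. The following minimal sketch shows the kind of file you would import into a Managed Airflow environment; the DAG ID, schedule, and task are illustrative placeholders.

```python
# A minimal Airflow 2.x DAG sketch: one daily task that echoes a message.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hello_managed_airflow",
    start_date=datetime(2023, 10, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    say_hello = BashOperator(
        task_id="say_hello",
        bash_command="echo 'hello from Managed Airflow'",
    )
```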
Managed Airflow in Azure Data Factory offers a range of powerful features, inclu
- **Fast and simple deployment** - You can quickly and easily set up Apache Airflow by selecting an [Apache Airflow version](concept-managed-airflow.md#supported-apache-airflow-versions) when you create a Managed Airflow. - **Cloud scale** - Managed Airflow automatically scales Apache Airflow nodes when required based on range specification (min, max). -- **Microsoft Entra integration** - You can enable [Microsoft Entra RBAC](concepts-roles-permissions.md) against your Airflow environment for a single sign on experience that is secured by Microsoft Entra ID.
+- **Microsoft Entra integration** - You can enable [Microsoft Entra RBAC](concepts-roles-permissions.md) against your Airflow environment for a single sign-on experience that is secured by Microsoft Entra ID.
- **Managed Virtual Network integration** (coming soon) - You can access your data source via private endpoints or on-premises using ADF Managed Virtual Network that provides extra network isolation. - **Metadata encryption** - Managed Airflow automatically encrypts metadata using Azure-managed keys to ensure your environment is secure by default. It also supports double encryption with a [Customer-Managed Key (CMK)](enable-customer-managed-key.md). - **Azure Monitoring and alerting** - All the logs generated by Managed Airflow are exported to Azure Monitor. It also provides metrics to track critical conditions and can notify you if needed.
data-factory Concepts Annotations User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-annotations-user-properties.md
Previously updated : 11/01/2022 Last updated : 10/20/2023 # Monitor Azure Data Factory and Azure Synapse Analytics pipelines with annotations and user properties [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-When monitoring your data pipelines, you may want to be able to filter and monitor a certain group of activities, such as those of a project or specific department's pipelines. You may also need to further monitor activities based on dynamic properties. You can achieve these things by leveraging annotations and user properties.
+When monitoring your data pipelines, you might want to be able to filter and monitor a certain group of activities, such as those of a project or specific department's pipelines. You might also need to further monitor activities based on dynamic properties. You can achieve these things by using annotations and user properties.
## Annotations
-Azure Data Factory annotations are tags that you can add to your Azure Data Factory or Azure Synapse Analytics entities to easily identify them. An annotation allows you to classify or group different entities in order to easily monitor or filter them after an execution. Annotations only allow you to define static values and can be added to pipelines, datasets, linked services and triggers.
+Azure Data Factory annotations are tags that you can add to your Azure Data Factory or Azure Synapse Analytics entities to easily identify them. An annotation allows you to classify or group different entities in order to easily monitor or filter them after an execution. Annotations only allow you to define static values and can be added to pipelines, datasets, linked services, and triggers.
## User properties
-User properties are key-value pairs defined at the activity level. By adding user properties, you can view additional information about activities under activity runs window that may help you to monitor your activity executions.
+User properties are key-value pairs defined at the activity level. By adding user properties, you can view additional information about activities in the activity runs window that might help you monitor your activity executions.
User properties allow you to define dynamic values and can be added to any activity, up to 5 per activity, under User Properties tab. ## Create and use annotations and user properties
-As we discussed, annotations are static values that you can assign to pipelines, datasets, linked services, and triggers. Let's assume you want to filter for pipelines that belong to the same business unit or project name. We will first create the annotation. Click on the Properties icon, + New button and name your annotation appropriately. We advise being consistent with your naming.
+As we discussed, annotations are static values that you can assign to pipelines, datasets, linked services, and triggers. Let's assume you want to filter for pipelines that belong to the same business unit or project name. We first create the annotation. Select the Properties icon, then the + New button, and name your annotation appropriately. We advise being consistent with your naming.
![Screenshot showing how to create an annotation.](./media/concepts-annotations-user-properties/create-annotations.png "Create Annotation")
When you go to the Monitor tab, you can filter under Pipeline runs for this Anno
![Screenshot showing how to monitor an annotations.](./media/concepts-annotations-user-properties/monitor-annotations.png "Monitor Annotations")
-If you want to monitor for dynamic values at the activity level, you can do so by leveraging the User properties. You can add these under any activity by clicking on the Activity box, User properties tab and the + New button:
+If you want to monitor for dynamic values at the activity level, you can do so by using user properties. You can add these under any activity by selecting the Activity box, the User properties tab, and then the + New button:
![Screenshot showing how to create user properties.](./media/concepts-annotations-user-properties/create-user-properties.png "Create User Properties")
-For Copy Activity specifically, you can auto-generate these:
+For Copy Activity specifically, you can autogenerate these:
![Screenshot showing User Properties under Copy activity.](./media/concepts-annotations-user-properties/copy-activity-user-properties.png "Copy Activity User Properties")
-To monitor User properties, go to the Activity runs monitoring view. Here you will see all the properties you added.
+To monitor User properties, go to the Activity runs monitoring view. Here you see all the properties you added.
![Screenshot showing how to use User Properties in the Monitor tab.](./media/concepts-annotations-user-properties/monitor-user-properties.png "Monitor User Properties")
-You can remove some from the view if you click on the Bookmark sign:
+You can remove properties from the view by selecting the Bookmark sign:
![Screenshot showing how to remove User Properties.](./media/concepts-annotations-user-properties/remove-user-properties.png "Remove User Properties")
data-factory Concepts Data Flow Column Pattern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-column-pattern.md
Title: Column patterns in mapping data flow
+ Title: Column patterns in mapping data flows
description: Create generalized data transformation patterns using column patterns in mapping data flows with Azure Data Factory or Synapse Analytics.
Previously updated : 01/11/2023 Last updated : 10/20/2023
-# Using column patterns in mapping data flow
+# Using column patterns in mapping data flows
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-Several mapping data flow transformations allow you to reference template columns based on patterns instead of hard-coded column names. This matching is known as *column patterns*. You can define patterns to match columns based on name, data type, stream, origin, or position instead of requiring exact field names. There are two scenarios where column patterns are useful:
+Several mapping data flow transformations allow you to reference template columns based on patterns instead of hard-coded column names. This matching is known as *column patterns*. You can define patterns to match columns based on name, data type, stream, origin, or position instead of requiring exact field names. There are two scenarios where column patterns are useful:
* If incoming source fields change often, such as in the case of changing columns in text files or NoSQL databases. This scenario is known as [schema drift](concepts-data-flow-schema-drift.md). * If you wish to do a common operation on a large group of columns. For example, casting every column that has 'total' in its column name into a double.
The above example matches on all subcolumns of complex column `a`. `a` contains
* `$$` translates to the name or value of each match at run time. Think of `$$` as equivalent to `this` * `$0` translates to the current column name match at run time for scalar types. For hierarchical types, `$0` represents the current matched column hierarchy path. * `name` represents the name of each incoming column
-* `type` represents the data type of each incoming column. The list of data types in the data flow type system can be found [here.](concepts-data-flow-overview.md#data-flow-data-types)
+* `type` represents the data type of each incoming column. The list of data types in the data flow type system can be found [here](concepts-data-flow-overview.md#data-flow-data-types).
* `stream` represents the name associated with each stream, or transformation in your flow * `position` is the ordinal position of columns in your data flow * `origin` is the transformation where a column originated or was last updated ## Next steps
-* Learn more about the mapping data flow [expression language](data-transformation-functions.md) for data transformations
+* Learn more about the mapping data flow [expression language](data-transformation-functions.md) for data transformations
* Use column patterns in the [sink transformation](data-flow-sink.md) and [select transformation](data-flow-select.md) with rule-based mapping
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-debug-mode.md
Previously updated : 11/21/2022 Last updated : 10/20/2023 # Mapping data flow Debug Mode
Last updated 11/21/2022
## Overview
-Azure Data Factory and Synapse Analytics mapping data flow's debug mode allows you to interactively watch the data shape transform while you build and debug your data flows. The debug session can be used both in Data Flow design sessions as well as during pipeline debug execution of data flows. To turn on debug mode, use the **Data Flow Debug** button in the top bar of data flow canvas or pipeline canvas when you have data flow activities.
+Azure Data Factory and Synapse Analytics mapping data flow's debug mode allows you to interactively watch the data shape transform while you build and debug your data flows. The debug session can be used both in Data Flow design sessions and during pipeline debug execution of data flows. To turn on debug mode, use the **Data Flow Debug** button in the top bar of the data flow canvas or pipeline canvas when you have data flow activities.
:::image type="content" source="media/data-flow/debug-button.png" alt-text="Screenshot that shows where is the Debug slider 1"::: :::image type="content" source="media/data-flow/debug-button-4.png" alt-text="Screenshot that shows where is the Debug slider 2":::
-Once you turn on the slider, you will be prompted to select which integration runtime configuration you wish to use. If AutoResolveIntegrationRuntime is chosen, a cluster with eight cores of general compute with a default 60-minute time to live will be spun up. If you'd like to allow for more idle team before your session times out, you can choose a higher TTL setting. For more information on data flow integration runtimes, see [Integration Runtime performance](concepts-integration-runtime-performance.md).
+Once you turn on the slider, you'll be prompted to select which integration runtime configuration you wish to use. If AutoResolveIntegrationRuntime is chosen, a cluster with eight cores of general compute with a default 60-minute time to live is spun up. If you'd like to allow for more idle time before your session times out, you can choose a higher TTL setting. For more information on data flow integration runtimes, see [Integration Runtime performance](concepts-integration-runtime-performance.md).
:::image type="content" source="media/data-flow/debug-new-1.png" alt-text="Debug IR selection":::
-When Debug mode is on, you'll interactively build your data flow with an active Spark cluster. The session will close once you turn debug off. You should be aware of the hourly charges incurred by Data Factory during the time that you have the debug session turned on.
+When Debug mode is on, you'll interactively build your data flow with an active Spark cluster. The session closes once you turn debug off. You should be aware of the hourly charges incurred by Data Factory during the time that you have the debug session turned on.
In most cases, it's a good practice to build your Data Flows in debug mode so that you can validate your business logic and view your data transformations before publishing your work. Use the "Debug" button on the pipeline panel to test your data flow in a pipeline.
In most cases, it's a good practice to build your Data Flows in debug mode so th
> [!NOTE]
-> Every debug session that a user starts from their browser UI is a new session with its own Spark cluster. You can use the monitoring view for debug sessions above to view and manage debug sessions. You are charged for every hour that each debug session is executing including the TTL time.
+> Every debug session that a user starts from their browser UI is a new session with its own Spark cluster. You can use the monitoring view for debug sessions shown in the previous images to view and manage debug sessions. You are charged for every hour that each debug session is executing, including the TTL time.
-This video clip talks about tips, tricks, and good practices for data flow debug mode
+This video clip talks about tips, tricks, and good practices for data flow debug mode.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5c8Jx] ## Cluster status
-The cluster status indicator at the top of the design surface turns green when the cluster is ready for debug. If your cluster is already warm, then the green indicator will appear almost instantly. If your cluster wasn't already running when you entered debug mode, then the Spark cluster will perform a cold boot. The indicator will spin until the environment is ready for interactive debugging.
+The cluster status indicator at the top of the design surface turns green when the cluster is ready for debug. If your cluster is already warm, then the green indicator appears almost instantly. If your cluster wasn't already running when you entered debug mode, then the Spark cluster performs a cold boot. The indicator spins until the environment is ready for interactive debugging.
-When you are finished with your debugging, turn the Debug switch off so that your Spark cluster can terminate and you'll no longer be billed for debug activity.
+When you're finished with your debugging, turn the Debug switch off so that your Spark cluster can terminate and you'll no longer be billed for debug activity.
## Debug settings
Once you turn on debug mode, you can edit how a data flow previews data. Debug s
If you have parameters in your Data Flow or any of its referenced datasets, you can specify what values to use during debugging by selecting the **Parameters** tab.
-Use the sampling settings here to point to sample files or sample tables of data so that you do not have to change your source datasets. By using a sample file or table here, you can maintain the same logic and property settings in your data flow while testing against a subset of data.
+Use the sampling settings here to point to sample files or sample tables of data so that you don't have to change your source datasets. By using a sample file or table here, you can maintain the same logic and property settings in your data flow while testing against a subset of data.
:::image type="content" source="media/data-flow/debug-settings2.png" alt-text="Debug settings parameters":::
-The default IR used for debug mode in data flows is a small 4-core single worker node with a 4-core single driver node. This works fine with smaller samples of data when testing your data flow logic. If you expand the row limits in your debug settings during data preview or set a higher number of sampled rows in your source during pipeline debug, then you may wish to consider setting a larger compute environment in a new Azure Integration Runtime. Then you can restart your debug session using the larger compute environment.
+The default IR used for debug mode in data flows is a small 4-core single worker node with a 4-core single driver node. This works fine with smaller samples of data when testing your data flow logic. If you expand the row limits in your debug settings during data preview or set a higher number of sampled rows in your source during pipeline debug, then you might wish to consider setting a larger compute environment in a new Azure Integration Runtime. Then you can restart your debug session using the larger compute environment.
## Data preview
-With debug on, the Data Preview tab will light-up on the bottom panel. Without debug mode on, Data Flow will show you only the current metadata in and out of each of your transformations in the Inspect tab. The data preview will only query the number of rows that you have set as your limit in your debug settings. Click **Refresh** to update the data preview based on your current transformations. If your source data has changed, then click the Refresh > Refetch from source.
+With debug on, the Data Preview tab lights up on the bottom panel. Without debug mode on, Data Flow shows you only the current metadata in and out of each of your transformations in the Inspect tab. The data preview only queries the number of rows that you set as your limit in your debug settings. Select **Refresh** to update the data preview based on your current transformations. If your source data has changed, then select Refresh > Refetch from source.
:::image type="content" source="media/data-flow/datapreview.png" alt-text="Data preview":::
-You can sort columns in data preview and rearrange columns using drag and drop. Additionally, there is an export button on the top of the data preview panel that you can use to export the preview data to a CSV file for offline data exploration. You can use this feature to export up to 1,000 rows of preview data.
+You can sort columns in data preview and rearrange columns using drag and drop. Additionally, there's an export button on the top of the data preview panel that you can use to export the preview data to a CSV file for offline data exploration. You can use this feature to export up to 1,000 rows of preview data.
> [!NOTE] > File sources only limit the rows that you see, not the rows being read. For very large datasets, it is recommended that you take a small portion of that file and use it for your testing. You can select a temporary file in Debug Settings for each source that is a file dataset type.
-When running in Debug Mode in Data Flow, your data will not be written to the Sink transform. A Debug session is intended to serve as a test harness for your transformations. Sinks are not required during debug and are ignored in your data flow. If you wish to test writing the data in your Sink, execute the Data Flow from a pipeline and use the Debug execution from a pipeline.
+When running in Debug Mode in Data Flow, your data won't be written to the Sink transform. A Debug session is intended to serve as a test harness for your transformations. Sinks aren't required during debug and are ignored in your data flow. If you wish to test writing the data in your Sink, execute the Data Flow from a pipeline and use the Debug execution from a pipeline.
-Data Preview is a snapshot of your transformed data using row limits and data sampling from data frames in Spark memory. Therefore, the sink drivers are not utilized or tested in this scenario.
+Data Preview is a snapshot of your transformed data using row limits and data sampling from data frames in Spark memory. Therefore, the sink drivers aren't utilized or tested in this scenario.
### Testing join conditions
-When unit testing Joins, Exists, or Lookup transformations, make sure that you use a small set of known data for your test. You can use the Debug Settings option above to set a temporary file to use for your testing. This is needed because when limiting or sampling rows from a large dataset, you cannot predict which rows and which keys will be read into the flow for testing. The result is non-deterministic, meaning that your join conditions may fail.
+When unit testing Joins, Exists, or Lookup transformations, make sure that you use a small set of known data for your test. You can use the Debug Settings option described previously to set a temporary file to use for your testing. This is needed because when limiting or sampling rows from a large dataset, you can't predict which rows and which keys are read into the flow for testing. The result is nondeterministic, meaning that your join conditions might fail.
### Quick actions
-Once you see the data preview, you can generate a quick transformation to typecast, remove, or do a modification on a column. Click on the column header and then select one of the options from the data preview toolbar.
+Once you see the data preview, you can generate a quick transformation to typecast, remove, or do a modification on a column. Select the column header and then select one of the options from the data preview toolbar.
:::image type="content" source="media/data-flow/quick-actions1.png" alt-text="Screenshot shows the data preview toolbar with options: Typecast, Modify, Statistics, and Remove.":::
-Once you select a modification, the data preview will immediately refresh. Click **Confirm** in the top-right corner to generate a new transformation.
+Once you select a modification, the data preview will immediately refresh. Select **Confirm** in the top-right corner to generate a new transformation.
:::image type="content" source="media/data-flow/quick-actions2.png" alt-text="Screenshot shows the Confirm button.":::
-**Typecast** and **Modify** will generate a Derived Column transformation and **Remove** will generate a Select transformation.
+**Typecast** and **Modify** generate a Derived Column transformation, and **Remove** generates a Select transformation.
:::image type="content" source="media/data-flow/quick-actions3.png" alt-text="Screenshot shows Derived ColumnΓÇÖs Settings.":::
Once you select a modification, the data preview will immediately refresh. Click
### Data profiling
-Selecting a column in your data preview tab and clicking **Statistics** in the data preview toolbar will pop up a chart on the far-right of your data grid with detailed statistics about each field. The service will make a determination based upon the data sampling of which type of chart to display. High-cardinality fields will default to NULL/NOT NULL charts while categorical and numeric data that has low cardinality will display bar charts showing data value frequency. You'll also see max/len length of string fields, min/max values in numeric fields, standard dev, percentiles, counts, and average.
+Selecting a column in your data preview tab and selecting **Statistics** in the data preview toolbar pops up a chart on the far right of your data grid with detailed statistics about each field. The service determines which type of chart to display based upon the data sampling. High-cardinality fields default to NULL/NOT NULL charts, while categorical and numeric data that has low cardinality displays bar charts showing data value frequency. You also see the maximum length of string fields, min/max values in numeric fields, standard deviation, percentiles, counts, and average.
:::image type="content" source="media/data-flow/stats.png" alt-text="Column statistics":::
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-expression-builder.md
Title: Expression builder in mapping data flow
+ Title: Expression builder in mapping data flows
description: Build expressions by using Expression Builder in mapping data flows in Azure Data Factory and Azure Synapse Analytics
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Build expressions in mapping data flow
In mapping data flow, many transformation properties are entered as expressions.
## Open Expression Builder
-There are multiple entry points to opening the expression builder. These are all dependent on the specific context of the data flow transformation. The most common use case is in transformations like [derived column](data-flow-derived-column.md) and [aggregate](data-flow-aggregate.md) where users create or update columns using the data flow expression language. The expression builder can be opened by selecting **Open expression builder** above the list of columns. You can also click on a column context and open the expression builder directly to that expression.
+There are multiple entry points to opening the expression builder. These all depend on the specific context of the data flow transformation. The most common use case is in transformations like [derived column](data-flow-derived-column.md) and [aggregate](data-flow-aggregate.md) where users create or update columns using the data flow expression language. The expression builder can be opened by selecting **Open expression builder** above the list of columns. You can also select a column context and open the expression builder directly to that expression.
:::image type="content" source="media/data-flow/open-expression-builder-derive.png" alt-text="Open Expression Builder derive":::
-In some transformations like [filter](data-flow-filter.md), clicking on a blue expression text box will open the expression builder.
+In some transformations like [filter](data-flow-filter.md), selecting a blue expression text box opens the expression builder.
:::image type="content" source="media/data-flow/expressionbox.png" alt-text="Blue expression box":::
Mapping data flows supports the creation and use of user defined functions. To s
#### Address array indexes
-When dealing with columns or functions that return array types, use brackets ([]) to access a specific element. If the index doesn't exist, the expression evaluates into NULL.
+When you're dealing with columns or functions that return array types, use brackets ([]) to access a specific element. If the index doesn't exist, the expression evaluates to NULL.
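For example, the following sketch (the `fullName` column name is hypothetical) uses `split()`, which returns an array, to address individual elements. Array indexes in the data flow expression language are 1-based, so the first expression returns the first token, while the second references an index that doesn't exist and therefore evaluates to NULL:

```
split(fullName, ' ')[1]
split(fullName, ' ')[100]
```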
:::image type="content" source="media/data-flow/expression-array.png" alt-text="Expression Builder array":::
When dealing with columns or functions that return array types, use brackets ([]
### Input schema
-If your data flow uses a defined schema in any of its sources, you can reference a column by name in many expressions. If you are utilizing schema drift, you can reference columns explicitly using the `byName()` or `byNames()` functions or match using column patterns.
+If your data flow uses a defined schema in any of its sources, you can reference a column by name in many expressions. If you're utilizing schema drift, you can reference columns explicitly using the `byName()` or `byNames()` functions or match using column patterns.
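For example, a sketch like the following references drifted columns explicitly (the column names are hypothetical). Because `byName()` returns a column of unknown type, wrap it in a conversion function such as `toString()` or `toInteger()` before using it:

```
toString(byName('movieTitle'))
toInteger(byName('releaseYear'))
```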
#### Column names with special characters
When you have column names that include special characters or spaces, surround t
### Parameters
-Parameters are values that are passed into a data flow at run time from a pipeline. To reference a parameter, either click on the parameter from the **Expression elements** view or reference it with a dollar sign in front of its name. For example, a parameter called parameter1 would be referenced by `$parameter1`. To learn more, see [parameterizing mapping data flows](parameters-data-flow.md).
+Parameters are values that are passed into a data flow at run time from a pipeline. To reference a parameter, either select the parameter from the **Expression elements** view or reference it with a dollar sign in front of its name. For example, a parameter called parameter1 is referenced by `$parameter1`. To learn more, see [parameterizing mapping data flows](parameters-data-flow.md).
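As a sketch, a filter expression that keeps only the rows matching two hypothetical pipeline-supplied parameters might look like this:

```
country == $countryFilter && year >= $minYear
```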
### Cached lookup
A cached lookup allows you to do an inline lookup of the output of a cached sink
`lookup()` takes in the matching columns in the current transformation as parameters and returns a complex column equal to the row matching the key columns in the cache sink. The complex column returned contains a subcolumn for each column mapped in the cache sink. For example, suppose you have an error code cache sink `errorCodeCache` that has a key column matching on the code and a column called `Message`. Calling `errorCodeCache#lookup(errorCode).Message` returns the message corresponding to the code passed in.
-`outputs()` takes no parameters and returns the entire cache sink as an array of complex columns. This can't be called if key columns are specified in the sink and should only be used if there is a small number of rows in the cache sink. A common use case is appending the max value of an incrementing key. If a cached single aggregated row `CacheMaxKey` contains a column `MaxKey`, you can reference the first value by calling `CacheMaxKey#outputs()[1].MaxKey`.
+`outputs()` takes no parameters and returns the entire cache sink as an array of complex columns. This can't be called if key columns are specified in the sink and should only be used if there are only a few rows in the cache sink. A common use case is appending the max value of an incrementing key. If a cached single aggregated row `CacheMaxKey` contains a column `MaxKey`, you can reference the first value by calling `CacheMaxKey#outputs()[1].MaxKey`.
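Putting those two patterns together, a derived column might use expressions like the following sketch, based on the cache sinks described in this section. The first returns the `Message` for the current row's `errorCode`; the second derives a new key from the cached maximum (adding 1 is only an illustration):

```
errorCodeCache#lookup(errorCode).Message
CacheMaxKey#outputs()[1].MaxKey + 1
```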
:::image type="content" source="media/data-flow/cached-lookup-example.png" alt-text="Cached lookup"::: ### Locals
-If you are sharing logic across multiple columns or want to compartmentalize your logic, you can create a local variable. A local is a set of logic that doesn't get propagated downstream to the following transformation. Locals can be created within the expression builder by going to **Expression elements** and selecting **Locals**. Create a new one by selecting **Create new**.
+If you're sharing logic across multiple columns or want to compartmentalize your logic, you can create a local variable. A local is a set of logic that doesn't get propagated downstream to the following transformation. Locals can be created within the expression builder by going to **Expression elements** and selecting **Locals**. Create a new one by selecting **Create new**.
:::image type="content" source="media/data-flow/create-local.png" alt-text="Create local":::
Locals can reference any expression element including functions, input schema, p
:::image type="content" source="media/data-flow/create-local-2.png" alt-text="Create local 2":::
-To reference a local in a transformation, either click on the local from the **Expression elements** view or reference it with a colon in front of its name. For example, a local called local1 would be referenced by `:local1`. To edit a local definition, hover over it in the expression elements view and click on the pencil icon.
+To reference a local in a transformation, either select the local from the **Expression elements** view or reference it with a colon in front of its name. For example, a local called local1 is referenced by `:local1`. To edit a local definition, hover over it in the expression elements view and select the pencil icon.
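As a sketch (the `title` column is hypothetical), you might define a local named `cleanTitle` with the expression `trim(upper(title))` and then reference it from several derived columns:

```
:cleanTitle
length(:cleanTitle)
```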
:::image type="content" source="media/data-flow/using-locals.png" alt-text="Using locals":::
toLong(
### Data flow time evaluation
-Dataflow processes till milliseconds. For *2018-07-31T20:00:00.2170000*, you will see *2018-07-31T20:00:00.217* in output.
-In the portal for the service, timestamp is being shown in the **current browser setting**, which can eliminate 217, but when you will run the data flow end to end, 217 (milliseconds part will be processed as well). You can use toString(myDateTimeColumn) as expression and see full precision data in preview. Process datetime as datetime rather than string for all practical purposes.
+Data flows process timestamps down to the millisecond. For *2018-07-31T20:00:00.2170000*, you'll see *2018-07-31T20:00:00.217* in the output.
+In the portal for the service, the timestamp is shown in the **current browser setting**, which can drop the milliseconds, but when you run the data flow end to end, the milliseconds part is processed as well. You can use toString(myDateTimeColumn) as an expression to see the full-precision data in preview. For all practical purposes, process datetime values as datetime rather than as string.
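For example, a sketch for previewing the full precision of a hypothetical `myDateTimeColumn`; the second expression passes an explicit Java-style date format string, which is an assumption for illustration:

```
toString(myDateTimeColumn)
toString(myDateTimeColumn, 'yyyy-MM-dd HH:mm:ss.SSS')
```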
## Next steps
-[Begin building data transformation expressions](data-transformation-functions.md)
+[Begin building data transformation expressions](data-transformation-functions.md).
data-factory Concepts Data Flow Flowlet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-flowlet.md
Title: Flowlets in mapping data flows
-description: Learn the concepts of Flowlets in mapping data flow
+description: Learn the concepts of Flowlets in mapping data flow.
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Flowlets in mapping data flow
Last updated 01/11/2023
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWQK3m] ## Getting started
-To create a flowlet, click the new flowlet action from the mapping data flow menu options.
+To create a flowlet, select the new flowlet action from the mapping data flow menu options.
![Screenshot showing how to create a flowlet](./media/data-flow-flowlet/flowlet-new-menu.png)
-This will create a new flowlet where you can add in your inputs, outputs, and transformation activities
+This creates a new flowlet where you can add in your inputs, outputs, and transformation activities.
## Flowlet design surface The flowlet design surface is similar to the mapping data flow design surface. The primary differences are the input, output, and debugging experiences that are described below.
The flowlet design surface is similar to the mapping data flow design surface. T
### Flowlet input
-The input of a flowlet defines the input columns expected from a calling mapping data flow. That calling mapping data flow will map columns from a stream into the columns you have defined from the input. This allows your flowlet to perform reusable logic on columns while giving flexibility on the calling mapping data flow for which columns the flowlet applies to.
+The input of a flowlet defines the input columns expected from a calling mapping data flow. That calling mapping data flow maps columns from a stream into the columns you've defined in the input. This allows your flowlet to perform reusable logic on columns while giving the calling mapping data flow flexibility over which columns the flowlet applies to.
![Screenshot showing flowlet input configuration properties panel.](./media/data-flow-flowlet/flowlet-input.png)
The output of a flowlet defines the output columns that can be expected to emit
### Debugging a flowlet Debugging a flowlet has a couple of differences from the mapping data flow debug experience.
-First, the preview data is only available at the output of the flowlet. To preview data, make sure to select the flowout output and then the Preview Data tab.
+First, the preview data is only available at the output of the flowlet. To preview data, make sure to select the flowlet output and then the Preview Data tab.
![Screenshot showing Preview Data on the output in the flowlet.](./media/data-flow-flowlet/flowlet-debug.png)
-Second, because flowlets are dynamically mapped to inputs, in order to debug them flowlets allow users to enter test data to send through the flowlet. Under the debug settings, you should see a grid to fill out with test data that will match the input columns. Note for inputs with a large number of columns you may need to select on the full screen icon.
+Second, because flowlets are dynamically mapped to inputs, debugging them allows users to enter test data to send through the flowlet. Under the debug settings, you should see a grid to fill out with test data that matches the input columns. Note that for inputs with a large number of columns, you might need to select the full screen icon.
![Screenshot showing Debug Settings and how to enter test data for debugging.](./media/data-flow-flowlet/flowlet-debug-settings.png) ## Other methods for creating a flowlet Flowlets can also be created from existing mapping data flows. This allows users to quickly reuse logic already created.
-For a single transformation activity, you can right-click the mapping data flow activity and select Create a new flowlet. This will create a flowlet with that activity and in input to match the activity's inputs.
+For a single transformation activity, you can right-click the mapping data flow activity and select Create a new flowlet. This creates a flowlet with that activity and an input to match the activity's inputs.
![Screenshot showing creating a flowlet from an existing activity using the right-click menu option.](./media/data-flow-flowlet/flowlet-context-create.png)
-If you have mulit-select turned on, you can also select multiple mapping data flow activities. This can be done by either lassoing multiple activities by drawing a rectangle to select them or using shift+select to select multiple activities. Then you will right-click and select Create a new flowlet.
+If you have multi-select turned on, you can also select multiple mapping data flow activities. This can be done by either lassoing multiple activities by drawing a rectangle to select them or using shift+select to select multiple activities. Then you'll right-click and select Create a new flowlet.
![Screenshot showing multiple selection from existing activities.](./media/data-flow-flowlet/flowlet-context-multi.png)
If you have mulit-select turned on, you can also select multiple mapping data fl
## Running a flowlet inside of a mapping data flow Once the flowlet is created, you can run the flowlet from your mapping data flow activity with the flowlet transformation.
-For more information, see [Flowlet transformation in mapping data flow | Microsoft Docs](data-flow-flowlet.md)
+For more information, see [Flowlet transformation in mapping data flow | Microsoft Docs](data-flow-flowlet.md).
data-factory Concepts Data Flow Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-manage-graph.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Managing the mapping data flow graph
data-factory Concepts Data Flow Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-monitoring.md
Title: Monitoring mapping data flows
-description: How to visually monitor mapping data flows in Azure Data Factory and Synapse Analytics
+description: How to visually monitor mapping data flows in Azure Data Factory and Synapse Analytics.
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Monitor Data Flows [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-After you have completed building and debugging your data flow, you want to schedule your data flow to execute on a schedule within the context of a pipeline. You can schedule the pipeline using Triggers. For testing and debugging you data flow from a pipeline, you can use the Debug button on the toolbar ribbon or Trigger Now option from the Pipeline Builder to execute a single-run execution to test your data flow within the pipeline context.
+After you have completed building and debugging your data flow, you'll want to schedule your data flow to execute within the context of a pipeline. You can schedule the pipeline using Triggers. To test and debug your data flow from a pipeline, you can use the Debug button on the toolbar ribbon or the Trigger Now option from the Pipeline Builder to run a single execution to test your data flow within the pipeline context.
> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4P5pV]
-When you execute your pipeline, you can monitor the pipeline and all of the activities contained in the pipeline including the Data Flow activity. Click on the monitor icon in the left-hand UI panel. You can see a screen similar to the one below. The highlighted icons allow you to drill into the activities in the pipeline, including the Data Flow activity.
+When you execute your pipeline, you can monitor the pipeline and all of the activities contained in the pipeline including the Data Flow activity. Select the monitor icon in the left-hand UI panel. You can see a screen similar to the one that follows. The highlighted icons allow you to drill into the activities in the pipeline, including the Data Flow activity.
:::image type="content" source="media/data-flow/monitor-new-001.png" alt-text="Screenshot shows icons to select for pipelines for more information.":::
When you're in the graphical node monitoring view, you can see a simplified view
## View Data Flow Execution Plans
-When your Data Flow is executed in Spark, the service determines optimal code paths based on the entirety of your data flow. Additionally, the execution paths may occur on different scale-out nodes and data partitions. Therefore, the monitoring graph represents the design of your flow, taking into account the execution path of your transformations. When you select individual nodes, you can see "stages" that represent code that was executed together on the cluster. The timings and counts that you see represent those groups or stages as opposed to the individual steps in your design.
+When your Data Flow is executed in Spark, the service determines optimal code paths based on the entirety of your data flow. Additionally, the execution paths might occur on different scale-out nodes and data partitions. Therefore, the monitoring graph represents the design of your flow, taking into account the execution path of your transformations. When you select individual nodes, you can see "stages" that represent code that was executed together on the cluster. The timings and counts that you see represent those groups or stages as opposed to the individual steps in your design.
:::image type="content" source="media/data-flow/monitor-new-005.png" alt-text="Screenshot shows the page for a data flow."::: * When you select the open space in the monitoring window, the stats in the bottom pane display timing and row counts for each Sink and the transformations that led to the sink data for transformation lineage.
-* When you select individual transformations, you receive additional feedback on the right-hand panel that shows partition stats, column counts, skewness (how evenly is the data distributed across partitions), and kurtosis (how spiky is the data).
+* When you select individual transformations, you receive extra feedback on the right-hand panel that shows partition stats, column counts, skewness (how evenly the data is distributed across partitions), and kurtosis (how spiky the data is).
-* Sorting by *processing time* will help you to identify which stages in your data flow took the most time.
+* Sorting by *processing time* helps you to identify which stages in your data flow took the most time.
* To find which transformations inside each stage took the most time, sort on *highest processing time*.
-* The *rows written* is also sortable as a way to identify which streams inside your data flow are writing the most data.
+* The *rows written* value is also sortable as a way to identify which streams inside your data flow are writing the most data.
* When you select the Sink in the node view, you can see column lineage. There are three different ways that columns are accumulated throughout your data flow to land in the Sink. They are: * Computed: You use the column for conditional processing or within an expression in your data flow, but don't land it in the Sink
- * Derived: The column is a new column that you generated in your flow, that is, it was not present in the Source
- * Mapped: The column originated from the source and your are mapping it to a sink field
+ * Derived: The column is a new column that you generated in your flow, that is, it wasn't present in the Source
+ * Mapped: The column originated from the source and you're mapping it to a sink field
* Data flow status: The current status of your execution * Cluster startup time: Amount of time to acquire the JIT Spark compute environment for your data flow execution * Number of transforms: How many transformation steps are being executed in your flow
When your Data Flow is executed in Spark, the service determines optimal code pa
## Total Sink Processing Time vs. Transformation Processing Time
-Each transformation stage includes a total time for that stage to complete with each partition execution time totaled together. When you click on the Sink you will see "Sink Processing Time". This time includes the total of the transformation time *plus* the I/O time it took to write your data to your destination store. The difference between the Sink Processing Time and the total of the transformation is the I/O time to write the data.
+Each transformation stage includes a total time for that stage to complete with each partition execution time totaled together. When you select the Sink, you see "Sink Processing Time". This time includes the total of the transformation time *plus* the I/O time it took to write your data to your destination store. The difference between the Sink Processing Time and the total of the transformation is the I/O time to write the data.
You can also see detailed timing for each partition transformation step if you open the JSON output from your data flow activity in the pipeline monitoring view. The JSON contains millisecond timing for each partition, whereas the UX monitoring view is an aggregate timing of partitions added together:
You can also see detailed timing for each partition transformation step if you o
### Sink processing time
-When you select a sink transformation icon in your map, the slide-in panel on the right will show an additional data point called "post processing time" at the bottom. This is the amount time spent executing your job on the Spark cluster *after* your data has been loaded, transformed, and written. This time can include closing connection pools, driver shutdown, deleting files, coalescing files, etc. When you perform actions in your flow like "move files" and "output to single file", you will likely see an increase in the post processing time value.
+When you select a sink transformation icon in your map, the slide-in panel on the right shows an extra data point called "post processing time" at the bottom. This is the amount of time spent executing your job on the Spark cluster *after* your data has been loaded, transformed, and written. This time can include closing connection pools, driver shutdown, deleting files, coalescing files, etc. When you perform actions in your flow like "move files" and "output to single file", you'll likely see an increase in the post processing time value.
* Write stage duration: The time to write the data to a staging location for Synapse SQL * Table operation SQL duration: The time spent moving data from temp tables to target table * Pre SQL duration & Post SQL duration: The time spent running pre/post SQL commands
-* Pre commands duration & post commands duration: The time spent running any pre/post operations for file based source/sinks. For example move or delete files after processing.
+* Pre commands duration & post commands duration: The time spent running any pre/post operations for file-based sources/sinks. For example, move or delete files after processing.
* Merge duration: The time spent merging the file. Merge files are used for file-based sinks when writing to a single file or when "File name as column data" is used. If significant time is spent in this metric, you should avoid using these options. * Stage time: Total amount of time spent inside of Spark to complete the operation as a stage. * Temporary staging table: Name of the temporary table used by data flows to stage data in the database. ## Error rows
-Enabling error row handling in your data flow sink will be reflected in the monitoring output. When you set the sink to "report success on error", the monitoring output will show the number of success and failed rows when you click on the sink monitoring node.
+When you enable error row handling in your data flow sink, it's reflected in the monitoring output. When you set the sink to "report success on error", the monitoring output shows the number of successful and failed rows when you select the sink monitoring node.
:::image type="content" source="media/data-flow/error-row-2.png" alt-text="Screenshot shows error rows.":::
-When you select "report failure on error", the same output will be shown only in the activity monitoring output text. This is because the data flow activity will return failure for execution and the detailed monitoring view will be unavailable.
+When you select "report failure on error", the same output is shown only in the activity monitoring output text. This is because the data flow activity returns failure for execution and the detailed monitoring view is unavailable.
:::image type="content" source="media/data-flow/error-rows-4.png" alt-text="Screenshot shows error rows in activity.":::
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Mapping data flows in Azure Data Factory
data-factory Concepts Data Flow Performance Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-pipelines.md
Previously updated : 10/26/2022 Last updated : 10/20/2023 # Using data flows in pipelines
When building complex pipelines with multiple data flows, your logical flow can
If you execute multiple data flows in parallel, the service spins up separate Spark clusters for each activity. This allows for each job to be isolated and run in parallel, but will lead to multiple clusters running at the same time.
-If your data flows execute in parallel, we recommend that you don't enable the Azure IR time to the live property because it will lead to multiple unused warm pools.
+If your data flows execute in parallel, we recommend that you don't enable the Azure IR time to live property because it leads to multiple unused warm pools.
> [!TIP] > Instead of running the same data flow multiple times in a for each activity, stage your data in a data lake and use wildcard paths to process the data in a single data flow. ## Execute data flows sequentially
-If you execute your data flow activities in sequence, it is recommended that you set a TTL in the Azure IR configuration. The service will reuse the compute resources, resulting in a faster cluster start-up time. Each activity will still be isolated and receive a new Spark context for each execution.
+If you execute your data flow activities in sequence, it's recommended that you set a TTL in the Azure IR configuration. The service reuses the compute resources, resulting in a faster cluster start-up time. Each activity is still isolated and receives a new Spark context for each execution.
## Overloading a single data flow
-If you put all of your logic inside of a single data flow, the service will execute the entire job on a single Spark instance. While this may seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job will fail as well. Organizing data flows by independent flows of business logic is recommended. If your data flow becomes too large, splitting it into separate components will make monitoring and debugging easier. While there is no hard limit on the number of transformations in a data flow, having too many will make the job complex.
+If you put all of your logic inside of a single data flow, the service executes the entire job on a single Spark instance. While this might seem like a way to reduce costs, it mixes together different logical flows and can be difficult to monitor and debug. If one component fails, all other parts of the job fail as well. Organizing data flows by independent flows of business logic is recommended. If your data flow becomes too large, splitting it into separate components makes monitoring and debugging easier. While there's no hard limit on the number of transformations in a data flow, having too many makes the job complex.
## Execute sinks in parallel The default behavior of data flow sinks is to execute each sink sequentially, in a serial manner, and to fail the data flow when an error is encountered in the sink. Additionally, all sinks are defaulted to the same group unless you go into the data flow properties and set different priorities for the sinks.
-Data flows allow you to group sinks together into groups from the data flow properties tab in the UI designer. You can both set the order of execution of your sinks as well as to group sinks together using the same group number. To help manage groups, you can ask the service to run sinks in the same group, to run in parallel.
+Data flows allow you to group sinks together into groups from the data flow properties tab in the UI designer. You can both set the order of execution of your sinks and group sinks together using the same group number. To help manage groups, you can ask the service to run sinks in the same group in parallel.
-On the pipeline execute data flow activity under the "Sink Properties" section is an option to turn on parallel sink loading. When you enable "run in parallel", you are instructing data flows write to connected sinks at the same time rather than in a sequential manner. In order to utilize the parallel option, the sinks must be group together and connected to the same stream via a New Branch or Conditional Split.
+On the pipeline's execute data flow activity, under the "Sink Properties" section, there's an option to turn on parallel sink loading. When you enable "run in parallel", you're instructing data flows to write to connected sinks at the same time rather than in a sequential manner. In order to utilize the parallel option, the sinks must be grouped together and connected to the same stream via a New Branch or Conditional Split.
## Access Azure Synapse database templates in pipelines
-You can use an [Azure Synapse database template](../synapse-analytics/database-designer/overview-database-templates.md) when crating a pipeline. When creating a new dataflow, in the source or sink settings, select **Workspace DB**. The database dropdown will list the databases created through the database template. The Workspace DB option is only available for new data flows, it's not available when you use an existing pipeline from the Synapse studio gallery.
+You can use an [Azure Synapse database template](../synapse-analytics/database-designer/overview-database-templates.md) when creating a pipeline. When creating a new dataflow, in the source or sink settings, select **Workspace DB**. The database dropdown lists the databases created through the database template. The Workspace DB option is only available for new data flows; it's not available when you use an existing pipeline from the Synapse studio gallery.
## Next steps
data-factory Concepts Data Flow Performance Sinks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sinks.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Optimizing sinks
-When data flows write to sinks, any custom partitioning will happen immediately before the write. Like the source, in most cases it is recommended that you keep **Use current partitioning** as the selected partition option. Partitioned data will write significantly quicker than unpartitioned data, even your destination is not partitioned. Below are the individual considerations for various sink types.
+When data flows write to sinks, any custom partitioning happens immediately before the write. As with the source, in most cases it's recommended that you keep **Use current partitioning** as the selected partition option. Partitioned data writes much faster than unpartitioned data, even if your destination isn't partitioned. Following are the individual considerations for various sink types.
## Azure SQL Database sinks
-With Azure SQL Database, the default partitioning should work in most cases. There is a chance that your sink may have too many partitions for your SQL database to handle. If you are running into this, reduce the number of partitions outputted by your SQL Database sink.
+With Azure SQL Database, the default partitioning should work in most cases. There's a chance that your sink might have too many partitions for your SQL database to handle. If you run into this, reduce the number of partitions output by your SQL Database sink.
### Best practice for deleting rows in sink based on missing rows in source
-Here is a video walk through of how to use data flows with exists, alter row, and sink transformations to achieve this common pattern:
+Here's a video walk-through of how to use data flows with exists, alter row, and sink transformations to achieve this common pattern:
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWMLr5] ### Impact of error row handling on performance
-When you enable error row handling ("continue on error") in the sink transformation, the service will take an additional step before writing the compatible rows to your destination table. This additional step will have a small performance penalty that can be in the range of 5% added for this step with an additional small performance hit also added if you set the option to also write the incompatible rows to a log file.
+When you enable error row handling ("continue on error") in the sink transformation, the service takes an extra step before writing the compatible rows to your destination table. This extra step has a small performance penalty, in the range of 5%, with an additional small performance hit if you also set the option to write the incompatible rows to a log file.
### Disabling indexes using a SQL Script
After the write has completed, rebuild the indexes using the following command:
`ALTER INDEX ALL ON dbo.[Table Name] REBUILD`
-These can both be done natively using Pre and Post-SQL scripts within an Azure SQL DB or Synapse sink in mapping data flows.
+These can both be done natively using Pre and Post-SQL scripts within an Azure SQL Database or Synapse sink in mapping data flows.
:::image type="content" source="media/data-flow/disable-indexes-sql.png" alt-text="Disable indexes":::
Schedule a resizing of your source and sink Azure SQL DB and DW before your pipe
## Azure Synapse Analytics sinks
-When writing to Azure Synapse Analytics, make sure that **Enable staging** is set to true. This enables the service to write using the [SQL COPY Command](/sql/t-sql/statements/copy-into-transact-sql) which effectively loads the data in bulk. You will need to reference an Azure Data Lake Storage gen2 or Azure Blob Storage account for staging of the data when using Staging.
+When writing to Azure Synapse Analytics, make sure that **Enable staging** is set to true. This enables the service to write using the [SQL COPY Command](/sql/t-sql/statements/copy-into-transact-sql), which effectively loads the data in bulk. You'll need to reference an Azure Data Lake Storage Gen2 or Azure Blob Storage account for staging the data when you use staging.
Other than Staging, the same best practices apply to Azure Synapse Analytics as Azure SQL Database. ## File-based sinks
-While data flows support a variety of file types, the Spark-native Parquet format is recommended for optimal read and write times.
+While data flows support various file types, the Spark-native Parquet format is recommended for optimal read and write times.
-If the data is evenly distributed, **Use current partitioning** will be the fastest partitioning option for writing files.
+If the data is evenly distributed, **Use current partitioning** is the fastest partitioning option for writing files.
### File name options
-When writing files, you have a choice of naming options that each have a performance impact.
+When writing files, you have a choice of naming options that each have an effect on performance.
:::image type="content" source="media/data-flow/file-sink-settings.png" alt-text="Sink options":::
-Selecting the **Default** option will write the fastest. Each partition will equate to a file with the Spark default name. This is useful if you are just reading from the folder of data.
+Selecting the **Default** option writes the fastest. Each partition equates to a file with the Spark default name. This is useful if you're just reading from the folder of data.
-Setting a naming **Pattern** will rename each partition file to a more user-friendly name. This operation happens after write and is slightly slower than choosing the default.
+Setting a naming **Pattern** renames each partition file to a more user-friendly name. This operation happens after write and is slightly slower than choosing the default.
**Per partition** allows you to name each individual partition manually.
-If a column corresponds to how you wish to output the data, you can select **Name file as column data**. This reshuffles the data and can impact performance if the columns are not evenly distributed.
+If a column corresponds to how you wish to output the data, you can select **Name file as column data**. This reshuffles the data and can affect performance if the columns aren't evenly distributed.
If a column corresponds to how you wish to generate folder names, select **Name folder as column data**.
-**Output to single file** combines all the data into a single partition. This leads to long write times, especially for large datasets. This option is strongly discouraged unless there is an explicit business reason to use it.
+**Output to single file** combines all the data into a single partition. This leads to long write times, especially for large datasets. This option is discouraged unless there's an explicit business reason to use it.
## Azure Cosmos DB sinks
-When writing to Azure Cosmos DB, altering throughput and batch size during data flow execution can improve performance. These changes only take effect during the data flow activity run and will return to the original collection settings after conclusion.
+When you're writing to Azure Cosmos DB, altering throughput and batch size during data flow execution can improve performance. These changes only take effect during the data flow activity run and revert to the original collection settings after the run concludes.
**Batch size:** Usually, starting with the default batch size is sufficient. To further tune this value, calculate the rough object size of your data, and make sure that object size * batch size is less than 2 MB. If it is, you can increase the batch size to get better throughput. For example, if your documents average about 4 KB, a batch size of up to roughly 500 keeps the total under 2 MB.
**Throughput:** Set a higher throughput setting here to allow documents to write faster to Azure Cosmos DB. Keep in mind the higher RU costs based upon a high throughput setting.
-**Write throughput budget:** Use a value which is smaller than total RUs per minute. If you have a data flow with a high number of Spark partitions, setting a budget throughput will allow more balance across those partitions.
+**Write throughput budget:** Use a value that is smaller than the total RUs per minute. If you have a data flow with a high number of Spark partitions, setting a budget throughput allows more balance across those partitions.
## Next steps
data-factory Concepts Data Flow Performance Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-sources.md
Previously updated : 11/27/2022 Last updated : 10/20/2023 # Optimizing sources
-For every source except Azure SQL Database, it is recommended that you keep **Use current partitioning** as the selected value. When reading from all other source systems, data flows automatically partitions data evenly based upon the size of the data. A new partition is created for about every 128 MB of data. As your data size increases, the number of partitions increase.
+For every source except Azure SQL Database, it's recommended that you keep **Use current partitioning** as the selected value. When you're reading from all other source systems, data flows automatically partition data evenly based upon the size of the data. A new partition is created for about every 128 MB of data. As your data size increases, the number of partitions increases.
-Any custom partitioning happens *after* Spark reads in the data and will negatively impact your data flow performance. As the data is evenly partitioned on read, is not recommended unless you understand the shape and cardinality of your data first.
+Any custom partitioning happens *after* Spark reads in the data and negatively affects your data flow performance. Because the data is evenly partitioned on read, custom partitioning isn't recommended unless you understand the shape and cardinality of your data first.
> [!NOTE]
> Read speeds can be limited by the throughput of your source system.
## Azure SQL Database sources
-Azure SQL Database has a unique partitioning option called 'Source' partitioning. Enabling source partitioning can improve your read times from Azure SQL DB by enabling parallel connections on the source system. Specify the number of partitions and how to partition your data. Use a partition column with high cardinality. You can also enter a query that matches the partitioning scheme of your source table.
+Azure SQL Database has a unique partitioning option called 'Source' partitioning. Enabling source partitioning can improve your read times from Azure SQL Database by enabling parallel connections on the source system. Specify the number of partitions and how to partition your data. Use a partition column with high cardinality. You can also enter a query that matches the partitioning scheme of your source table.
> [!TIP]
> For source partitioning, the I/O of the SQL Server is the bottleneck. Adding too many partitions might saturate your source database. Generally, four or five partitions are ideal when using this option.
Azure SQL Database has a unique partitioning option called 'Source' partitioning
### Isolation level
-The isolation level of the read on an Azure SQL source system has an impact on performance. Choosing 'Read uncommitted' will provide the fastest performance and prevent any database locks. To learn more about SQL Isolation levels, see [Understanding isolation levels](/sql/connect/jdbc/understanding-isolation-levels).
+The isolation level of the read on an Azure SQL source system affects performance. Choosing 'Read uncommitted' provides the fastest performance and prevents any database locks. To learn more about SQL isolation levels, see [Understanding isolation levels](/sql/connect/jdbc/understanding-isolation-levels).
### Read using query
-You can read from Azure SQL Database using a table or a SQL query. If you are executing a SQL query, the query must complete before transformation can start. SQL Queries can be useful to push down operations that may execute faster and reduce the amount of data read from a SQL Server such as SELECT, WHERE, and JOIN statements. When pushing down operations, you lose the ability to track lineage and performance of the transformations before the data comes into the data flow.
+You can read from Azure SQL Database using a table or a SQL query. If you're executing a SQL query, the query must complete before transformation can start. SQL queries can be useful for pushing down operations such as SELECT, WHERE, and JOIN statements that might execute faster on the server and reduce the amount of data read from SQL Server. When pushing down operations, you lose the ability to track lineage and performance of the transformations before the data comes into the data flow.
## Azure Synapse Analytics sources
-When using Azure Synapse Analytics, a setting called **Enable staging** exists in the source options. This allows the service to read from Synapse using ```Staging``` which greatly improves read performance by using the most performant bulk loading capability such as CETAS and COPY command. Enabling ```Staging``` requires you to specify an Azure Blob Storage or Azure Data Lake Storage gen2 staging location in the data flow activity settings.
+When using Azure Synapse Analytics, a setting called **Enable staging** exists in the source options. This allows the service to read from Synapse using ```Staging```, which greatly improves read performance by using the most performant bulk loading capabilities, such as CETAS and the COPY command. Enabling ```Staging``` requires you to specify an Azure Blob Storage or Azure Data Lake Storage Gen2 staging location in the data flow activity settings.
:::image type="content" source="media/data-flow/enable-staging.png" alt-text="Enable staging":::
When using Azure Synapse Analytics, a setting called **Enable staging** exists i
### Parquet vs. delimited text
-While data flows support a variety of file types, the Spark-native Parquet format is recommended for optimal read and write times.
+While data flows support various file types, the Spark-native Parquet format is recommended for optimal read and write times.
If you're running the same data flow on a set of files, we recommend reading from a folder, using wildcard paths or reading from a list of files. A single data flow activity run can process all of your files in batch. More information on how to configure these settings can be found in the **Source transformation** section of the [Azure Blob Storage connector](connector-azure-blob-storage.md#source-transformation) documentation.
-If possible, avoid using the For-Each activity to run data flows over a set of files. This will cause each iteration of the for-each to spin up its own Spark cluster, which is often not necessary and can be expensive.
+If possible, avoid using the For-Each activity to run data flows over a set of files. This causes each iteration of the for-each to spin up its own Spark cluster, which is often not necessary and can be expensive.
### Inline datasets vs. shared datasets
-ADF and Synapse datasets are shared resources in your factories and workspaces. However, when you are reading large numbers of source folders and files with delimited text and JSON sources, you can improve the performance of data flow file discovery by setting the option "User projected schema" inside the Projection | Schema options dialog. This option turns off ADF's default schema auto-discovery and will greatly improve the performance of file discovery. Before setting this option, make sure to import the projection so that ADF has an existing schema for projection. This option does not work with schema drift.
+ADF and Synapse datasets are shared resources in your factories and workspaces. However, when you're reading large numbers of source folders and files with delimited text and JSON sources, you can improve the performance of data flow file discovery by setting the option "User projected schema" inside the Projection | Schema options dialog. This option turns off ADF's default schema autodiscovery and greatly improves the performance of file discovery. Before setting this option, make sure to import the projection so that ADF has an existing schema for projection. This option doesn't work with schema drift.
## Next steps
data-factory Concepts Data Flow Performance Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance-transformations.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Optimizing transformations
Use the following strategies to optimize performance of transformations in mappi
### Broadcasting
-In joins, lookups, and exists transformations, if one or both data streams are small enough to fit into worker node memory, you can optimize performance by enabling **Broadcasting**. Broadcasting is when you send small data frames to all nodes in the cluster. This allows for the Spark engine to perform a join without reshuffling the data in the large stream. By default, the Spark engine will automatically decide whether or not to broadcast one side of a join. If you are familiar with your incoming data and know that one stream will be significantly smaller than the other, you can select **Fixed** broadcasting. Fixed broadcasting forces Spark to broadcast the selected stream.
+In joins, lookups, and exists transformations, if one or both data streams are small enough to fit into worker node memory, you can optimize performance by enabling **Broadcasting**. Broadcasting is when you send small data frames to all nodes in the cluster. This allows the Spark engine to perform a join without reshuffling the data in the large stream. By default, the Spark engine automatically decides whether or not to broadcast one side of a join. If you're familiar with your incoming data and know that one stream is smaller than the other, you can select **Fixed** broadcasting. Fixed broadcasting forces Spark to broadcast the selected stream.
-If the size of the broadcasted data is too large for the Spark node, you may get an out of memory error. To avoid out of memory errors, use **memory optimized** clusters. If you experience broadcast timeouts during data flow executions, you can switch off the broadcast optimization. However, this will result in slower performing data flows.
+If the size of the broadcasted data is too large for the Spark node, you might get an out-of-memory error. To avoid out-of-memory errors, use **memory optimized** clusters. If you experience broadcast timeouts during data flow executions, you can switch off the broadcast optimization. However, this results in slower-performing data flows.
-When working with data sources that can take longer to query, like large database queries, it is recommended to turn broadcast off for joins. Source with long query times can cause Spark timeouts when the cluster attempts to broadcast to compute nodes. Another good choice for turning off broadcast is when you have a stream in your data flow that is aggregating values for use in a lookup transformation later. This pattern can confuse the Spark optimizer and cause timeouts.
+When working with data sources that can take longer to query, like large database queries, it's recommended to turn broadcast off for joins. Sources with long query times can cause Spark timeouts when the cluster attempts to broadcast to compute nodes. Another good choice for turning off broadcast is when you have a stream in your data flow that is aggregating values for use in a lookup transformation later. This pattern can confuse the Spark optimizer and cause timeouts.
:::image type="content" source="media/data-flow/joinoptimize.png" alt-text="Join Transformation optimize":::
### Cross joins
-If you use literal values in your join conditions or have multiple matches on both sides of a join, Spark will run the join as a cross join. A cross join is a full cartesian product that then filters out the joined values. This is significantly slower than other join types. Ensure that you have column references on both sides of your join conditions to avoid the performance impact.
+If you use literal values in your join conditions or have multiple matches on both sides of a join, Spark runs the join as a cross join. A cross join is a full Cartesian product that then filters out the joined values. This is slower than other join types. Ensure that you have column references on both sides of your join conditions to avoid the performance impact.
### Sorting before joins
-Unlike merge join in tools like SSIS, the join transformation isn't a mandatory merge join operation. The join keys don't require sorting prior to the transformation. Using Sort transformations in mapping data flows is not recommended.
+Unlike merge join in tools like SSIS, the join transformation isn't a mandatory merge join operation. The join keys don't require sorting prior to the transformation. Using Sort transformations in mapping data flows isn't recommended.
## Window transformation performance
-The [Window transformation in mapping data flow](data-flow-window.md) partitions your data by value in columns that you select as part of the ```over()``` clause in the transformation settings. There are a number of very popular aggregate and analytical functions that are exposed in the Windows transformation. However, if your use case is to generate a window over your entire dataset for the purpose of ranking ```rank()``` or row number ```rowNumber()```, it is recommended that you instead use the [Rank transformation](data-flow-rank.md) and the [Surrogate Key transformation](data-flow-surrogate-key.md). Those transformations will perform better again full dataset operations using those functions.
+The [Window transformation in mapping data flow](data-flow-window.md) partitions your data by value in columns that you select as part of the ```over()``` clause in the transformation settings. There are many popular aggregate and analytical functions exposed in the Window transformation. However, if your use case is to generate a window over your entire dataset for ranking ```rank()``` or row number ```rowNumber()```, it's recommended that you instead use the [Rank transformation](data-flow-rank.md) and the [Surrogate Key transformation](data-flow-surrogate-key.md). Those transformations perform better for full dataset operations that use those functions.
## Repartitioning skewed data
-Certain transformations such as joins and aggregates reshuffle your data partitions and can occasionally lead to skewed data. Skewed data means that data is not evenly distributed across the partitions. Heavily skewed data can lead to slower downstream transformations and sink writes. You can check the skewness of your data at any point in a data flow run by clicking on the transformation in the monitoring display.
+Certain transformations such as joins and aggregates reshuffle your data partitions and can occasionally lead to skewed data. Skewed data means that data isn't evenly distributed across the partitions. Heavily skewed data can lead to slower downstream transformations and sink writes. You can check the skewness of your data at any point in a data flow run by clicking on the transformation in the monitoring display.
:::image type="content" source="media/data-flow/skewness-kurtosis.png" alt-text="Skewness and kurtosis":::
-The monitoring display will show how the data is distributed across each partition along with two metrics, skewness and kurtosis. **Skewness** is a measure of how asymmetrical the data is and can have a positive, zero, negative, or undefined value. Negative skew means the left tail is longer than the right. **Kurtosis** is the measure of whether the data is heavy-tailed or light-tailed. High kurtosis values are not desirable. Ideal ranges of skewness lie between -3 and 3 and ranges of kurtosis are less than 10. An easy way to interpret these numbers is looking at the partition chart and seeing if 1 bar is significantly larger than the rest.
+The monitoring display shows how the data is distributed across each partition along with two metrics, skewness and kurtosis. **Skewness** is a measure of how asymmetrical the data is and can have a positive, zero, negative, or undefined value. Negative skew means the left tail is longer than the right. **Kurtosis** is the measure of whether the data is heavy-tailed or light-tailed. High kurtosis values aren't desirable. Ideal values of skewness lie between -3 and 3, and ideal values of kurtosis are less than 10. An easy way to interpret these numbers is to look at the partition chart and see whether one bar is larger than the rest.
-If your data is not evenly partitioned after a transformation, you can use the [optimize tab](concepts-data-flow-performance.md#optimize-tab) to repartition. Reshuffling data takes time and may not improve your data flow performance.
+If your data isn't evenly partitioned after a transformation, you can use the [optimize tab](concepts-data-flow-performance.md#optimize-tab) to repartition. Reshuffling data takes time and might not improve your data flow performance.
> [!TIP] > If you repartition your data, but have downstream transformations that reshuffle your data, use hash partitioning on a column used as a join key.
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Mapping data flows performance and tuning guide
Last updated 10/25/2022
Mapping data flows in Azure Data Factory and Synapse pipelines provide a code-free interface to design and run data transformations at scale. If you're not familiar with mapping data flows, see the [Mapping Data Flow Overview](concepts-data-flow-overview.md). This article highlights various ways to tune and optimize your data flows so that they meet your performance benchmarks.
-Watch the below video to see shows some sample timings transforming data with data flows.
+Watch the following video to see some sample timings for transforming data with data flows.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4rNxM]
## Monitoring data flow performance
-Once you verify your transformation logic using debug mode, run your data flow end-to-end as an activity in a pipeline. Data flows are operationalized in a pipeline using the [execute data flow activity](control-flow-execute-data-flow-activity.md). The data flow activity has a unique monitoring experience compared to other activities that displays a detailed execution plan and performance profile of the transformation logic. To view detailed monitoring information of a data flow, click on the eyeglasses icon in the activity run output of a pipeline. For more information, see [Monitoring mapping data flows](concepts-data-flow-monitoring.md).
+Once you verify your transformation logic using debug mode, run your data flow end-to-end as an activity in a pipeline. Data flows are operationalized in a pipeline using the [execute data flow activity](control-flow-execute-data-flow-activity.md). Compared to other activities, the data flow activity has a unique monitoring experience that displays a detailed execution plan and performance profile of the transformation logic. To view detailed monitoring information of a data flow, select the eyeglasses icon in the activity run output of a pipeline. For more information, see [Monitoring mapping data flows](concepts-data-flow-monitoring.md).
:::image type="content" source="media/data-flow/monitoring-details.png" alt-text="Data Flow Monitor":::
-When monitoring data flow performance, there are four possible bottlenecks to look out for:
+When you're monitoring data flow performance, there are four possible bottlenecks to look out for:
* Cluster start-up time
* Reading from a source
When monitoring data flow performance, there are four possible bottlenecks to lo
:::image type="content" source="media/data-flow/monitoring-performance.png" alt-text="Data Flow Monitoring":::
-Cluster start-up time is the time it takes to spin up an Apache Spark cluster. This value is located in the top-right corner of the monitoring screen. Data flows run on a just-in-time model where each job uses an isolated cluster. This start-up time generally takes 3-5 minutes. For sequential jobs, this can be reduced by enabling a time to live value. For more information, refer to the **Time to live** section in [Integration Runtime performance](concepts-integration-runtime-performance.md#time-to-live).
+Cluster start-up time is the time it takes to spin up an Apache Spark cluster. This value is located in the top-right corner of the monitoring screen. Data flows run on a just-in-time model where each job uses an isolated cluster. This startup time generally takes 3-5 minutes. For sequential jobs, startup time can be reduced by enabling a time to live value. For more information, see the **Time to live** section in [Integration Runtime performance](concepts-integration-runtime-performance.md#time-to-live).
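Time to live is a property of the Azure integration runtime that the data flow activity runs on. A minimal sketch of such an integration runtime definition, with illustrative compute and TTL values, might look like the following.

```json
{
    "name": "DataFlowAzureIR",
    "properties": {
        "type": "Managed",
        "typeProperties": {
            "computeProperties": {
                "location": "AutoResolve",
                "dataFlowProperties": {
                    "computeType": "General",
                    "coreCount": 8,
                    "timeToLive": 10
                }
            }
        }
    }
}
```

With a nonzero `timeToLive`, sequential data flow activities that target this integration runtime can reuse the warm cluster instead of paying the start-up cost each time.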
-Data flows utilize a Spark optimizer that reorders and runs your business logic in 'stages' to perform as quickly as possible. For each sink that your data flow writes to, the monitoring output lists the duration of each transformation stage, along with the time it takes to write data into the sink. The time that is the largest is likely the bottleneck of your data flow. If the transformation stage that takes the largest contains a source, then you may want to look at further optimizing your read time. If a transformation is taking a long time, then you may need to repartition or increase the size of your integration runtime. If the sink processing time is large, you may need to scale up your database or verify you are not outputting to a single file.
+Data flows utilize a Spark optimizer that reorders and runs your business logic in 'stages' to perform as quickly as possible. For each sink that your data flow writes to, the monitoring output lists the duration of each transformation stage, along with the time it takes to write data into the sink. The largest of these times is likely the bottleneck of your data flow. If the transformation stage that takes the longest contains a source, then you might want to look at further optimizing your read time. If a transformation is taking a long time, then you might need to repartition or increase the size of your integration runtime. If the sink processing time is large, you might need to scale up your database or verify you aren't outputting to a single file.
Once you have identified the bottleneck of your data flow, use the following optimization strategies to improve performance.
## Testing data flow logic
-When designing and testing data flows from UI, debug mode allows you to interactively test against a live Spark cluster. This allows you to preview data and execute your data flows without waiting for a cluster to warm up. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
+When you're designing and testing data flows from the UI, debug mode allows you to interactively test against a live Spark cluster, which lets you preview data and execute your data flows without waiting for a cluster to warm up. For more information, see [Debug Mode](concepts-data-flow-debug-mode.md).
## Optimize tab
The **Optimize** tab contains settings to configure the partitioning scheme of t
:::image type="content" source="media/data-flow/optimize.png" alt-text="Screenshot shows the Optimize tab, which includes Partition option, Partition type, and Number of partitions.":::
-By default, *Use current partitioning* is selected which instructs the service keep the current output partitioning of the transformation. As repartitioning data takes time, *Use current partitioning* is recommended in most scenarios. Scenarios where you may want to repartition your data include after aggregates and joins that significantly skew your data or when using Source partitioning on a SQL DB.
+By default, *Use current partitioning* is selected, which instructs the service to keep the current output partitioning of the transformation. As repartitioning data takes time, *Use current partitioning* is recommended in most scenarios. Scenarios where you might want to repartition your data include after aggregates and joins that significantly skew your data, or when using Source partitioning on a SQL database.
-To change the partitioning on any transformation, select the **Optimize** tab and select the **Set Partitioning** radio button. You are presented with a series of options for partitioning. The best method of partitioning differs based on your data volumes, candidate keys, null values, and cardinality.
+To change the partitioning on any transformation, select the **Optimize** tab and select the **Set Partitioning** radio button. You're presented with a series of options for partitioning. The best method of partitioning differs based on your data volumes, candidate keys, null values, and cardinality.
> [!IMPORTANT]
> Single partition combines all the distributed data into a single partition. This is a very slow operation that also significantly affects all downstream transformations and writes. This option is strongly discouraged unless there is an explicit business reason to use it.
If you have a good understanding of the cardinality of your data, key partitioni
## Logging level
-If you do not require every pipeline execution of your data flow activities to fully log all verbose telemetry logs, you can optionally set your logging level to "Basic" or "None". When executing your data flows in "Verbose" mode (default), you are requesting the service to fully log activity at each individual partition level during your data transformation. This can be an expensive operation, so only enabling verbose when troubleshooting can improve your overall data flow and pipeline performance. "Basic" mode will only log transformation durations while "None" will only provide a summary of durations.
+If you don't require every pipeline execution of your data flow activities to fully log all verbose telemetry logs, you can optionally set your logging level to "Basic" or "None". When executing your data flows in "Verbose" mode (default), you're requesting the service to fully log activity at each individual partition level during your data transformation. This can be an expensive operation, so enabling verbose logging only when troubleshooting can improve your overall data flow and pipeline performance. "Basic" mode only logs transformation durations, while "None" only provides a summary of durations.
:::image type="content" source="media/data-flow/logging.png" alt-text="Logging level":::
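In pipeline JSON, the logging level surfaces as a property on the data flow activity. The sketch below assumes the property is named `traceLevel`, where a reduced value such as `Coarse` corresponds to the less verbose setting shown in the UI; check the execute data flow activity reference for the exact values your version accepts. The activity and data flow names are placeholders.

```json
{
    "name": "TransformSalesData",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataFlow": {
            "referenceName": "<your data flow>",
            "type": "DataFlowReference"
        },
        "traceLevel": "Coarse"
    }
}
```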
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-linked-services.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Linked services in Azure Data Factory and Azure Synapse Analytics
Azure Data Factory and Azure Synapse Analytics can have one or more pipelines. A
Now, a **dataset** is a named view of data that simply points to or references the data you want to use in your **activities** as inputs and outputs.
-Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse Workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way; the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
+Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse Workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way: the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
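To make the distinction concrete, here's a minimal sketch of an Azure Blob Storage linked service and a delimited-text dataset that points at a folder through it; the names, connection string, container, and paths are placeholders.

```json
{
    "name": "MyBlobStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        }
    }
}
```

```json
{
    "name": "InputBlobDataset",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "MyBlobStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "<container>",
                "folderPath": "<input folder>"
            }
        }
    }
}
```

The dataset describes the shape and location of the data, while the linked service holds only the connection information, so many datasets can share one linked service.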
Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked
data-factory Concepts Nested Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-nested-activities.md
Previously updated : 10/24/2022 Last updated : 10/20/2023 # Nested activities in Azure Data Factory and Azure Synapse Analytics
Your pipeline canvas will then switch to the context of the inner activity conta
There are constraints on nesting another nested activity inside the activities that support nesting (ForEach, Until, Switch, and If Condition). Specifically:
- If and Switch can be used inside ForEach or Until activities.
-- If and Switch can not used inside If and Switch activities.
+- If and Switch can't be used inside If and Switch activities.
- ForEach or Until support only a single level of nesting. See the best practices section below on how to use other pipeline activities to enable this scenario. In addition, the
See the best practices section below on how to use other pipeline activities to
If and Switch can be used inside ForEach or Until activities. ForEach or Until supports only single level nesting
-If and Switch can not used inside If and Switch activities.
+If and Switch can't be used inside If and Switch activities.
## Best practices for multiple levels of nested activities
To have logic that supports nesting more than one level deep, you can use the [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md) inside your nested activity to call another pipeline that can then have another level of nested activities. A common use case for this pattern is with the ForEach loop, where you need to additionally loop based on logic in the inner activities.
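A minimal sketch of this pattern, assuming placeholder pipeline and parameter names: the ForEach in the outer pipeline invokes a child pipeline through Execute Pipeline, and the child pipeline can then contain its own nested activities.

```json
{
    "name": "ForEachFolder",
    "type": "ForEach",
    "typeProperties": {
        "items": {
            "value": "@pipeline().parameters.folderList",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "RunInnerLogic",
                "type": "ExecutePipeline",
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "ProcessSingleFolder",
                        "type": "PipelineReference"
                    },
                    "parameters": {
                        "folderName": "@item()"
                    },
                    "waitOnCompletion": true
                }
            }
        ]
    }
}
```

Setting `waitOnCompletion` to true keeps the outer loop from racing ahead of the child pipeline runs.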
data-factory Concepts Parameters Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-parameters-variables.md
Previously updated : 12/08/2022 Last updated : 10/20/2023 # Pipeline parameters and variables in Azure Data Factory and Azure Synapse Analytics
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipelines-activities.md
Previously updated : 10/24/2022 Last updated : 10/20/2023 # Pipelines and activities in Azure Data Factory and Azure Synapse Analytics
This article helps you understand pipelines and activities in Azure Data Factory
## Overview
A Data Factory or Synapse Workspace can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. For example, a pipeline could contain a set of activities that ingest and clean log data, and then kick off a mapping data flow to analyze the log data. The pipeline allows you to manage the activities as a set instead of each one individually. You deploy and schedule the pipeline instead of the activities independently.
-The activities in a pipeline define actions to perform on your data. For example, you may use a copy activity to copy data from SQL Server to an Azure Blob Storage. Then, use a data flow activity or a Databricks Notebook activity to process and transform data from the blob storage to an Azure Synapse Analytics pool on top of which business intelligence reporting solutions are built.
+The activities in a pipeline define actions to perform on your data. For example, you can use a copy activity to copy data from SQL Server to Azure Blob Storage. Then, use a data flow activity or a Databricks Notebook activity to process and transform data from the blob storage to an Azure Synapse Analytics pool on top of which business intelligence reporting solutions are built.
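For example, the first step in that scenario might be expressed roughly like the following copy activity sketch; the dataset names are placeholders, and the exact source and sink types depend on the connectors and dataset formats you use.

```json
{
    "name": "CopyFromSqlServerToBlob",
    "type": "Copy",
    "inputs": [
        { "referenceName": "SqlServerSourceTable", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "BlobStagingFiles", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": { "type": "SqlServerSource" },
        "sink": { "type": "DelimitedTextSink" }
    }
}
```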
Azure Data Factory and Azure Synapse Analytics have three groupings of activities: [data movement activities](copy-activity-overview.md), [data transformation activities](transform-data.md), and [control activities](#control-flow-activities). An activity can take zero or more input [datasets](concepts-datasets-linked-services.md) and produce one or more output [datasets](concepts-datasets-linked-services.md). The following diagram shows the relationship between pipeline, activity, and dataset:
Tag | Description | Required
name | Name of the activity. Specify a name that represents the action that the activity performs. <br/><ul><li>Maximum number of characters: 55</li><li>Must start with a letter, a number, or an underscore (\_)</li><li>Following characters are not allowed: ".", "+", "?", "/", "<", ">", "*", "%", "&", ":", "\"</li></ul> | Yes
description | Text describing what the activity is used for | Yes
type | Type of the activity. See the [Data Movement Activities](#data-movement-activities), [Data Transformation Activities](#data-transformation-activities), and [Control Activities](#control-flow-activities) sections for different types of activities. | Yes
-linkedServiceName | Name of the linked service used by the activity.<br/><br/>An activity may require that you specify the linked service that links to the required compute environment. | Yes for HDInsight Activity, ML Studio (classic) Batch Scoring Activity, Stored Procedure Activity. <br/><br/>No for all others
+linkedServiceName | Name of the linked service used by the activity.<br/><br/>An activity might require that you specify the linked service that links to the required compute environment. | Yes for HDInsight Activity, ML Studio (classic) Batch Scoring Activity, Stored Procedure Activity. <br/><br/>No for all others
typeProperties | Properties in the typeProperties section depend on each type of activity. To see type properties for an activity, click links to the activity in the previous section. | No
policy | Policies that affect the run-time behavior of the activity. This property includes a timeout and retry behavior. If it isn't specified, default values are used. For more information, see [Activity policy](#activity-policy) section. | No
dependsOn | This property is used to define activity dependencies, and how subsequent activities depend on previous activities. For more information, see [Activity dependency](#activity-dependency) | No
The **typeProperties** section is different for each transformation activity. To
For a complete walkthrough of creating this pipeline, see [Tutorial: transform data using Spark](tutorial-transform-data-spark-powershell.md).
## Multiple activities in a pipeline
-The previous two sample pipelines have only one activity in them. You can have more than one activity in a pipeline. If you have multiple activities in a pipeline and subsequent activities are not dependent on previous activities, the activities may run in parallel.
+The previous two sample pipelines have only one activity in them. You can have more than one activity in a pipeline. If you have multiple activities in a pipeline and subsequent activities are not dependent on previous activities, the activities might run in parallel.
You can chain two activities by using [activity dependency](#activity-dependency), which defines how subsequent activities depend on previous activities, determining the condition whether to continue executing the next task. An activity can depend on one or more previous activities with different dependency conditions.
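As a small sketch of how such a dependency appears in pipeline JSON (activity and data flow names are placeholders), the second activity here runs only when the first one succeeds:

```json
{
    "name": "TransformStagedData",
    "type": "ExecuteDataFlow",
    "dependsOn": [
        {
            "activity": "CopyFromSqlServerToBlob",
            "dependencyConditions": [ "Succeeded" ]
        }
    ],
    "typeProperties": {
        "dataFlow": {
            "referenceName": "<your data flow>",
            "type": "DataFlowReference"
        }
    }
}
```

Other dependency conditions, such as Failed, Skipped, or Completed, follow the same shape, and activities with no dependency path between them are free to run in parallel.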
data-factory Connect Data Factory To Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connect-data-factory-to-azure-purview.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Connect Data Factory to Microsoft Purview
Once you connect the data factory to a Microsoft Purview account, you see the fo
:::image type="content" source="./media/data-factory-purview/monitor-purview-connection-status.png" alt-text="Screenshot for monitoring the integration status between Azure Data Factory and Microsoft Purview.":::
-For **Data Lineage - Pipeline**, you may see one of below status:
+For **Data Lineage - Pipeline**, you might see one of the following statuses:
- **Connected**: The data factory is successfully connected to the Microsoft Purview account. This status indicates that the data factory is associated with a Microsoft Purview account and has permission to push lineage to it. If your Microsoft Purview account is protected by a firewall, you also need to make sure the integration runtime used to execute the activities and conduct lineage push can reach the Microsoft Purview account. Learn more from [Access a secured Microsoft Purview account from Azure Data Factory](how-to-access-secured-purview-account.md).
- **Disconnected**: The data factory can't push lineage to Microsoft Purview because the Microsoft Purview Data Curator role isn't granted to the data factory's managed identity. To fix this issue, go to your Microsoft Purview account to check the role assignments, and manually grant the role as needed. Learn more in the [Set up authentication](#set-up-authentication) section.
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Amazon Marketplace Web Service using Azure Data Factory or Synapse Analytics
The following properties are supported for Amazon Marketplace Web Service linked
| Property | Description | Required |
|: |: |: |
| type | The type property must be set to: **AmazonMWS** | Yes |
-| endpoint | The endpoint of the Amazon MWS server, (that is, mws.amazonservices.com) | Yes |
+| endpoint | The endpoint of the Amazon MWS Server (that is, mws.amazonservices.com) | Yes |
| marketplaceID | The Amazon Marketplace ID you want to retrieve data from. To retrieve data from multiple Marketplace IDs, separate them with a comma (`,`). (that is, A2EUQ1WTGCTBG2) | Yes |
| sellerID | The Amazon seller ID. | Yes |
| mwsAuthToken | The Amazon MWS authentication token. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
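Putting the properties in that table together, an Amazon Marketplace Web Service linked service definition might look like the following sketch; every value is a placeholder, and only the properties listed above are included.

```json
{
    "name": "AmazonMWSLinkedService",
    "properties": {
        "type": "AmazonMWS",
        "typeProperties": {
            "endpoint": "mws.amazonservices.com",
            "marketplaceID": "<marketplace ID>",
            "sellerID": "<seller ID>",
            "mwsAuthToken": {
                "type": "SecureString",
                "value": "<MWS authentication token>"
            }
        }
    }
}
```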
data-factory Connector Amazon S3 Compatible Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-s3-compatible-storage.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Copy data from Amazon S3 Compatible Storage by using Azure Data Factory or Synapse Analytics
data-factory Connector Azure Database For Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-database-for-mysql.md
Previously updated : 12/15/2022 Last updated : 10/20/2023 # Copy and transform data in Azure Database for MySQL using Azure Data Factory or Synapse Analytics
data-factory Connector Azure Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-sql-managed-instance.md
Previously updated : 12/15/2022 Last updated : 10/20/2023 # Copy and transform data in Azure SQL Managed Instance using Azure Data Factory or Synapse Analytics
data-factory Connector Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-cassandra.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from Cassandra using Azure Data Factory or Synapse Analytics
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Concur using Azure Data Factory or Synapse Analytics(Preview)
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Couchbase using Azure Data Factory (Preview)
data-factory Connector Dynamics Ax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-ax.md
Previously updated : 11/30/2022 Last updated : 10/20/2023 # Copy data from Dynamics AX using Azure Data Factory or Synapse Analytics
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 11/30/2022 Last updated : 10/20/2023 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
data-factory Connector File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-file-system.md
Previously updated : 11/10/2022 Last updated : 10/20/2023
data-factory Connector Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-ftp.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-github.md
Previously updated : 10/24/2022 Last updated : 10/20/2023
data-factory Connector Google Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-google-cloud-storage.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Connector Greenplum https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-greenplum.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from Greenplum using Azure Data Factory or Synapse Analytics
data-factory Connector Hbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hbase.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from HBase using Azure Data Factory or Synapse Analytics
data-factory Connector Hdfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hdfs.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Connector Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-http.md
Previously updated : 10/26/2022 Last updated : 10/20/2023 # Copy data from an HTTP endpoint by using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hubspot.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from HubSpot using Azure Data Factory or Synapse Analytics
data-factory Connector Impala https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-impala.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from Impala using Azure Data Factory or Synapse Analytics
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-informix.md
Previously updated : 01/18/2023 Last updated : 10/20/2023
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-jira.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from Jira using Azure Data Factory or Synapse Analytics
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-magento.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Magento using Azure Data Factory or Synapse Analytics(Preview)
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-marketo.md
Previously updated : 01/20/2023 Last updated : 10/20/2023
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-access.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from and to Microsoft Access using Azure Data Factory or Synapse Analytics
data-factory Connector Mongodb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-legacy.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from MongoDB using Azure Data Factory or Synapse Analytics (legacy)
data-factory Connector Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mysql.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-netezza.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Netezza by using Azure Data Factory or Synapse Analytics
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy data from and to ODBC data stores using Azure Data Factory or Synapse Analytics
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
Previously updated : 11/28/2022 Last updated : 10/20/2023 # Copy and transform data from Microsoft 365 (Office 365) into Azure using Azure Data Factory or Synapse Analytics
data-factory Connector Oracle Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md
Previously updated : 01/18/2023 Last updated : 10/20/2023
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-eloqua.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Oracle Eloqua using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Oracle Responsys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-responsys.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from Oracle Responsys using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-service-cloud.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from Oracle Service Cloud using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Paypal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-paypal.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from PayPal using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Phoenix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-phoenix.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from Phoenix using Azure Data Factory or Synapse Analytics
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy data from PostgreSQL using Azure Data Factory or Synapse Analytics
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-presto.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Presto using Azure Data Factory or Synapse Analytics
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from QuickBooks Online using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Salesforce Marketing Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-salesforce-marketing-cloud.md
Previously updated : 11/01/2022 Last updated : 10/20/2023 # Copy data from Salesforce Marketing Cloud using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy data from SAP Business Warehouse via Open Hub using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy data from SAP Business Warehouse using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy data from SAP ECC using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-hana.md
Previously updated : 10/20/2022 Last updated : 10/20/2023 # Copy data from SAP HANA using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-table.md
Previously updated : 10/20/2022 Last updated : 10/20/2023 # Copy data from an SAP table using Azure Data Factory or Azure Synapse Analytics
data-factory Connector Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-servicenow.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Copy data from ServiceNow using Azure Data Factory or Synapse Analytics
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from Shopify using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-spark.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from Spark using Azure Data Factory or Synapse Analytics
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sybase.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Sybase using Azure Data Factory or Synapse Analytics
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teradata.md
Previously updated : 01/18/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Azure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-data-lake.md
Previously updated : 11/08/2022 Last updated : 10/20/2023
data-factory Connector Troubleshoot Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-files.md
Previously updated : 10/23/2022 Last updated : 10/20/2023
data-factory Connector Troubleshoot Azure Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-table-storage.md
Previously updated : 01/25/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-db2.md
Previously updated : 01/20/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-delimited-text.md
Previously updated : 01/18/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-file-system.md
Previously updated : 12/01/2022 Last updated : 10/20/2023
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md
Previously updated : 01/05/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-hive.md
Previously updated : 01/28/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-oracle.md
Previously updated : 01/18/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Orc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-orc.md
Previously updated : 01/25/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-parquet.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-postgresql.md
Previously updated : 01/20/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-rest.md
Previously updated : 01/18/2023 Last updated : 10/20/2023
data-factory Connector Troubleshoot Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-snowflake.md
Previously updated : 11/14/2022 Last updated : 10/20/2023
data-factory Connector Troubleshoot Xml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-xml.md
Previously updated : 01/25/2023 Last updated : 10/20/2023
data-factory Connector Vertica https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-vertica.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # Copy data from Vertica using Azure Data Factory or Synapse Analytics
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-web-table.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from Web table by using Azure Data Factory or Synapse Analytics
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Copy data from Xero using Azure Data Factory or Synapse Analytics
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zoho.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Copy data from Zoho using Azure Data Factory or Synapse Analytics (Preview)
data-factory Continuous Integration Delivery Automate Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
data-factory Continuous Integration Delivery Hotfix Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-hotfix-environment.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Continuous Integration Delivery Linked Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-linked-templates.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
data-factory Control Flow Append Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-append-variable-activity.md
Previously updated : 10/23/2022 Last updated : 10/20/2023 # Append Variable activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Azure Function Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-azure-function-activity.md
Previously updated : 11/23/2022 Last updated : 10/20/2023
data-factory Control Flow Execute Data Flow Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-data-flow-activity.md
Previously updated : 10/27/2022 Last updated : 10/20/2023 # Data Flow activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-pipeline-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Execute Pipeline activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Expressions and functions in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Fail Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-fail-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Execute a Fail activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-filter-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Filter activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Control Flow For Each Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-for-each-activity.md
Previously updated : 10/26/2022 Last updated : 10/20/2023 # ForEach activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow If Condition Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-if-condition-activity.md
Previously updated : 10/26/2022 Last updated : 10/20/2023
data-factory Control Flow Power Query Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-power-query-activity.md
Previously updated : 10/27/2022 Last updated : 10/20/2023 # Power Query activity in Azure Data Factory
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-switch-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-system-variables.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # System variables supported by Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-validation-activity.md
Previously updated : 10/24/2022 Last updated : 10/20/2023 # Validation activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-wait-activity.md
Previously updated : 10/24/2022 Last updated : 10/20/2023 # Execute Wait activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Web activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-webhook-activity.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Webhook activity in Azure Data Factory
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-fault-tolerance.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Fault tolerance of copy activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-monitoring.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Monitor copy activity
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy activity performance and scalability guide
data-factory Copy Activity Preserve Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-preserve-metadata.md
Previously updated : 01/11/2023 Last updated : 10/20/2023
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Schema and data type mapping in copy activity
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-integration-runtime.md
description: Learn how to create Azure integration runtime in Azure Data Factory
Previously updated : 10/24/2022 Last updated : 10/20/2023
data-factory Create Azure Ssis Integration Runtime Deploy Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-deploy-packages.md
description: Learn how to deploy and run SSIS packages in Azure Data Factory wit
Previously updated : 01/20/2023 Last updated : 10/20/2023
data-factory Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/credentials.md
Previously updated : 10/25/2022 Last updated : 10/20/2023
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-alter-row.md
Previously updated : 11/01/2022 Last updated : 10/20/2023 # Alter row transformation in mapping data flow
data-factory Data Flow Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-conversion-functions.md
Previously updated : 10/19/2022 Last updated : 10/20/2023 # Conversion functions in mapping data flow
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-expressions-usage.md
Previously updated : 10/19/2022 Last updated : 10/20/2023 # Data transformation expression usage in mapping data flow
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 11/01/2022 Last updated : 10/20/2023 # Sink transformation in mapping data flow
data-factory Data Flow Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-source.md
Previously updated : 10/26/2022 Last updated : 10/20/2023 # Source transformation in mapping data flow
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-customer-managed-key.md
Previously updated : 10/14/2022 Last updated : 10/20/2023
data-factory Format Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delimited-text.md
Previously updated : 09/05/2022 Last updated : 10/20/2023
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # How does Azure Data Factory Managed Airflow work?
data-factory Industry Sap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/industry-sap-overview.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # SAP knowledge center overview
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory
data-factory Memory Optimized Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/memory-optimized-compute.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Memory optimized compute type for Data Flows in Azure Data Factory and Azure Synapse
data-factory Monitor Logs Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-logs-rest.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Set up diagnostic logs via the Azure Monitor REST API
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Data Factory metrics and alerts
data-factory Monitor Schema Logs Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-schema-logs-events.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Schema of logs and events
data-factory Monitor Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-ssis.md
Previously updated : 01/18/2023 Last updated : 10/20/2023 # Monitor SSIS operations with Azure Monitor
data-factory Monitor Using Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-using-azure-monitor.md
Previously updated : 10/22/2022 Last updated : 10/20/2023 # Monitor and Alert Data Factory by using Azure Monitor
data-factory Password Change Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/password-change-airflow.md
Previously updated : 01/24/2023 Last updated : 10/20/2023
data-factory Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/plan-manage-costs.md
Previously updated : 10/21/2022 Last updated : 10/20/2023 # Plan to manage costs for Azure Data Factory
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Quickstart: Create an Azure Data Factory using ARM template
data-factory Quickstart Create Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory.md
Previously updated : 10/24/2022 Last updated : 10/20/2023
data-factory Quickstart Hello World Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-hello-world-copy-data-tool.md
Previously updated : 10/24/2022 Last updated : 10/20/2023
data-factory Deploy Azure Ssis Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/scripts/deploy-azure-ssis-integration-runtime-powershell.md
Previously updated : 01/25/2023 Last updated : 10/20/2023 # PowerShell script - deploy Azure-SSIS integration runtime
data-factory Self Hosted Integration Runtime Auto Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-auto-update.md
Previously updated : 10/23/2022 Last updated : 10/20/2023 # Self-hosted integration runtime auto-update and expire notification
data-factory Solution Template Replicate Multiple Objects Sap Cdc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-replicate-multiple-objects-sap-cdc.md
Previously updated : 11/28/2022 Last updated : 10/20/2023 # Replicate multiple objects from SAP via SAP CDC
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-templates-introduction.md
- Previously updated : 10/18/2022 Last updated : 10/20/2023 # Templates
data-factory Transform Data Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-script.md
Previously updated : 10/19/2022 Last updated : 10/20/2023 # Transform data by using the Script activity in Azure Data Factory or Synapse Analytics
data-factory Tumbling Window Trigger Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tumbling-window-trigger-dependency.md
Previously updated : 09/27/2022 Last updated : 10/20/2023 # Create a tumbling window trigger dependency
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow-portal.md
Previously updated : 10/25/2022 Last updated : 10/20/2023 # Branching and chaining activities in an Azure Data Factory pipeline using the Azure portal
data-factory Tutorial Incremental Copy Change Data Capture Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-data-capture-feature-portal.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Incrementally load data from Azure SQL Managed Instance to Azure Storage using change data capture (CDC)
data-factory Tutorial Incremental Copy Change Tracking Feature Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-powershell.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Incrementally load data from Azure SQL Database to Azure Blob Storage using change tracking information using PowerShell
data-factory Tutorial Pipeline Failure Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-failure-error-handling.md
Previously updated : 01/09/2023 Last updated : 10/20/2023 # Errors and Conditional execution
data-factory Tutorial Run Existing Pipeline With Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-run-existing-pipeline-with-airflow.md
Previously updated : 01/24/2023 Last updated : 10/20/2023
data-factory Tutorial Transform Data Hive Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network-portal.md
Previously updated : 01/20/2023 Last updated : 10/20/2023 # Transform data in Azure Virtual Network using Hive activity in Azure Data Factory using the Azure portal
data-factory Tutorial Transform Data Spark Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-spark-portal.md
Previously updated : 01/11/2023 Last updated : 10/20/2023 # Transform data in the cloud by using a Spark activity in Azure Data Factory
defender-for-cloud Defender For Apis Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-introduction.md
Defender for APIs currently provides security for APIs published in Azure API Ma
- **Inventory**: In a single dashboard, get an aggregated view of all managed APIs. - **Security findings**: Analyze API security findings, including information about external, unused, or unauthenticated APIs. - **Security posture**: Review and implement security recommendations to improve API security posture, and harden at-risk surfaces.-- **API data classification**: Classify APIs that receive or respond with sensitive data, to support risk prioritization.
+- **API sensitive data classification**: Classify APIs that receive or respond with sensitive data, to support risk prioritization. Defender for APIs integrates with MIP Purview, enabling custom data classification, support for sensitivity labels, and hydration of this data into Cloud Security Explorer for end-to-end data security.
- **Threat detection**: Ingest API traffic and monitor it with runtime anomaly detection, using machine-learning and rule-based analytics, to detect API security threats, including the [OWASP API Top 10](https://owasp.org/www-project-api-security/) critical threats. -- **Defender CSPM integration**: Integrate with Cloud Security Graph in [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) for API visibility and risk assessment across your organization.
+- **Defender CSPM integration**: Integrate with Cloud Security Graph and Attack Paths in [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) for API visibility and risk assessment across your organization.
- **Azure API Management integration**: With the Defender for APIs plan enabled, you can receive API security recommendations and alerts in the Azure API Management portal. - **SIEM integration**: Integrate with security information and event management (SIEM) systems, making it easier for security teams to investigate with existing threat response workflows. [Learn more](tutorial-security-incident.md).
Last called data (UTC): The date when API traffic was last observed going to/fro
- **30 days unused**: Shows whether API endpoints have received any API call traffic in the last 30 days. APIs that haven't received any traffic in the last 30 days are marked as *Inactive*. - **Authentication**: Shows when a monitored API endpoint has no authentication. Defender for APIs assesses the authentication state using the subscription keys, JSON web token (JWT), and client certificate configured in Azure API Management. If none of these authentication mechanisms are present or executed, the API is marked as *unauthenticated*. - **External traffic observed date**: The date when external API traffic was observed going to/from the API endpoint. -- **Data classification**: Classifies API request and response bodies based on supported data types.
+- **Data classification**: Classifies API request and response bodies based on data types defined in MIP Purview or from a Microsoft supported set.
> [!NOTE] > API endpoints that haven't received any traffic since onboarding to Defender for APIs display the status *Awaiting data* in the API dashboard.
Act on alerts to mitigate threats and risk. Defender for Cloud alerts and recomm
## Next steps
-[Review support and prerequisites](defender-for-apis-prepare.md) for Defender for APIs deployment.
+[Review support and prerequisites](defender-for-apis-prepare.md) for Defender for APIs deployment.
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
Some common use-cases and scenarios for malware scanning in Defender for Storage
- **Compliance requirements:** resources that adhere to compliance standards like [NIST](defender-for-cloud-glossary.md#nist), SWIFT, GDPR, and others require robust security practices, which include malware scanning. It's critical for organizations operating in regulated industries or regions. -- **Third-party integration:** third-party data can come from a wide variety of sources, and not all of them may have robust security practices, such as business partners, developers, and contractors. Scanning for malware helps to ensure that this data doesn't introduce security risks to your system.
+- **Third-party integration:** third-party data can come from a wide variety of sources, and not all of them might have robust security practices, such as business partners, developers, and contractors. Scanning for malware helps to ensure that this data doesn't introduce security risks to your system.
- **Collaborative platforms:** similar to file sharing, teams use cloud storage for continuously sharing content and collaborating across teams and organizations. Scanning for malware ensures safe collaboration. -- **Data pipelines:** data moving through ETL (Extract, Transfer, Load) processes can come from multiple sources and may include malware. Scanning for malware can help to ensure the integrity of these pipelines.
+- **Data pipelines:** data moving through ETL (Extract, Transfer, Load) processes can come from multiple sources and might include malware. Scanning for malware can help to ensure the integrity of these pipelines.
- **Machine learning training data:** the quality and security of the training data are critical for effective machine learning models. It's important to ensure these data sets are clean and safe, especially if they include user-generated content or data from external sources. :::image type="content" source="media/defender-for-storage-malware-scan/malware-scan-tax-app-demo.gif" alt-text="animated GIF showing user-generated-content and data from external sources." lightbox="media/defender-for-storage-malware-scan/malware-scan-tax-app-demo.gif":::
+> [!NOTE]
+> Malware scanning is a **near** real time service. Scan times can vary depending on the scanned file size or file type, as well as on the load on the service or on the storage account. Microsoft is constantly working on reducing the overall scan time; however, you should take this variability in scan times into consideration when designing a user experience based on the service.
+ ## Prerequisites To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md).
When a blob is uploaded to a protected storage account - a malware scan is trigg
#### Scan regions and data retention
-The malware scanning service that uses Microsoft Defender Antivirus technologies reads the blob. Malware Scanning scans the content "in-memory" and deletes scanned files immediately after scanning. The content isn't retained. The scanning occurs within the same region of the storage account. In some cases, when a file is suspicious, and more data is required, Malware Scanning may share file metadata outside the scanning region, including metadata classified as customer data (for example, SHA-256 hash), with Microsoft Defender for Endpoint.
+The malware scanning service that uses Microsoft Defender Antivirus technologies reads the blob. Malware Scanning scans the content "in-memory" and deletes scanned files immediately after scanning. The content isn't retained. The scanning occurs within the same region of the storage account. In some cases, when a file is suspicious, and more data is required, Malware Scanning might share file metadata outside the scanning region, including metadata classified as customer data (for example, SHA-256 hash), with Microsoft Defender for Endpoint.
#### Access customer data
If you're enabling malware scanning on the subscription level, a new Security Op
Malware scanning scan results are available through four methods. After setup, you'll see scan results as **blob index tags** for every uploaded and scanned file in the storage account, and as **Microsoft Defender for Cloud security alerts** when a file is identified as malicious.
-You may choose to configure extra scan result methods, such as **Event Grid** and **Log Analytics**; these methods require extra configuration. In the next section, you'll learn about the different scan result methods.
+You might choose to configure extra scan result methods, such as **Event Grid** and **Log Analytics**; these methods require extra configuration. In the next section, you'll learn about the different scan result methods.
:::image type="content" source="media/defender-for-storage-malware-scan/view-and-consume-malware-scan-results.png" alt-text="Diagram showing flow of viewing and consuming malware scanning results." lightbox="media/defender-for-storage-malware-scan/view-and-consume-malware-scan-results.png":::
Learn how to configure Malware Scanning so that [every scan result is sent autom
### Logs analytics
-You may want to log your scan results for compliance evidence or investigating scan results. By setting up a Log Analytics Workspace destination, you can store every scan result in a centralized log repository that is easy to query. You can view the results by navigating to the Log Analytics destination workspace and looking for the `StorageMalwareScanningResults` table.
+You might want to log your scan results for compliance evidence or investigating scan results. By setting up a Log Analytics Workspace destination, you can store every scan result in a centralized log repository that is easy to query. You can view the results by navigating to the Log Analytics destination workspace and looking for the `StorageMalwareScanningResults` table.
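For example, once results are flowing to the workspace, a quick check of recent rows might look like the following sketch, assuming the `az monitor log-analytics query` command (Log Analytics CLI extension) is available; only the built-in `TimeGenerated` column is used because the table's other column names aren't shown here.

```azurecli
# Sketch: list recent malware scanning results from the destination workspace.
# Replace <workspace-customer-id> with the workspace ID (GUID) of your Log Analytics workspace.
az monitor log-analytics query \
  --workspace "<workspace-customer-id>" \
  --analytics-query "StorageMalwareScanningResults | where TimeGenerated > ago(7d) | take 100"
```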
Learn more about [setting up logging for malware scanning](advanced-configurations-for-malware-scanning.md#setting-up-logging-for-malware-scanning).
defender-for-cloud Defender For Storage Rest Api Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-rest-api-enablement.md
And add the following request body:
"onUpload": { "isEnabled": true, "capGBPerMonth": 5000
- }
+ },
"scanResultsEventGridTopicResourceId": "/subscriptions/<Subscription>/resourceGroups/<resourceGroup>/providers/Microsoft.EventGrid/topics/<topicName>" }, "sensitiveDataDiscovery": {
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
You can also enable the Defender for Endpoint unified solution at scale through
Here's an example request body for the PUT request to enable the Defender for Endpoint unified solution:
-URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings/WDATP_UNIFIED_SOLUTION?api-version=2022-05-01`
+URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Microsoft.Security/settings/WDATP?api-version=2022-05-01`
```json {
- "name": "WDATP_UNIFIED_SOLUTION",
+ "name": "WDATP",
"type": "Microsoft.Security/settings", "kind": "DataExportSettings", "properties": {
defender-for-cloud Plan Defender For Servers Select Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md
Last updated 11/06/2022 + # Select a Defender for Servers plan This article helps you select the Microsoft Defender for Servers plan that's right for your organization.
You can choose from two Defender for Servers paid plans:
| **[Qualys vulnerability assessment](deploy-vulnerability-assessment-vm.md)** | As an alternative to Defender Vulnerability Management, Defender for Cloud can deploy a Qualys scanner and display the findings. You don't need a Qualys license or account. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png":::| |**[Adaptive application controls](adaptive-application-controls.md)** | Adaptive application controls define allowlists of known safe applications for machines. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 |:::image type="icon" source="./media/icons/yes-icon.png"::: | | **Free data ingestion (500 MB) to Log Analytics workspaces** | Free data ingestion is available for [specific data types](faq-defender-for-servers.yml#what-data-types-are-included-in-the-daily-allowance-) to Log Analytics workspaces. Data ingestion is calculated per node, per reported workspace, and per day. It's available for every workspace that has a *Security* or *AntiMalware* solution installed. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| **Free Azure Update Manager Remediation for Arc machines** | [Azure Update Manager remediation of unhealthy resources and recommendations](../update-center/update-manager-faq.md#im-a-defender-for-server-customer-and-use-update-recommendations-powered-by-azure-update-manager-namely-periodic-assessment-should-be-enabled-on-your-machines-and-system-updates-should-be-installed-on-your-machines-would-i-be-charged-for-azure-update-manager) is available at no additional cost for Arc enabled machines. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png"::: |
| **[Just-in-time virtual machine access](just-in-time-access-overview.md)** | Just-in-time virtual machine access locks down machine ports to reduce the attack surface. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **[Adaptive network hardening](adaptive-network-hardening.md)** | Network hardening filters traffic to and from resources by using network security groups (NSGs) to improve your network security posture. Further improve security by hardening the NSG rules based on actual traffic patterns. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png"::: | | **[File integrity monitoring](file-integrity-monitoring-overview.md)** | File integrity monitoring examines files and registries for changes that might indicate an attack. A comparison method is used to determine whether suspicious modifications have been made to files. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png"::: |
A couple of vulnerability assessment options are available in Defender for Serve
## Next steps After you work through these planning steps, [review Azure Arc and agent and extension requirements](plan-defender-for-servers-agents.md).+
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account
description: Defend your AWS resources by using Microsoft Defender for Cloud. Previously updated : 09/05/2023 Last updated : 10/22/2023 # Connect your AWS account to Microsoft Defender for Cloud
Deploy the CloudFormation template by using Stack (or StackSet if you have a man
} ```
+ > [!NOTE]
+ > When running the CloudFormation StackSets while onboarding an AWS management account, you might encounter the following error message:
+ > `You must enable organizations access to operate a service managed stack set`
+ >
+ > This error indicates that you haven't enabled [trusted access for AWS Organizations](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/stacksets-orgs-activate-trusted-access.html).
+ >
+ > To remediate this error message, your CloudFormation StackSets page has a prompt with a button that you can select to enable trusted access. After trusted access is enabled, the CloudFormation Stack must be run again.
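As a hedged alternative to the console button, trusted access can also be enabled from the AWS CLI. The service principal name below is an assumption based on the AWS Organizations integration list, so verify it against the AWS documentation linked above:

```bash
# Assumed service principal for CloudFormation StackSets; confirm in the linked AWS docs.
aws organizations enable-aws-service-access \
  --service-principal member.org.stacksets.cloudformation.amazonaws.com
```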
+ ## Monitor your AWS resources The security recommendations page in Defender for Cloud displays your AWS resources. You can use the environments filter to enjoy multicloud capabilities in Defender for Cloud.
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
In this example:
### Which recommendations are included in the secure score calculations?
-Only built-in recommendations that are part of the default initiative, Azure Security Benchmark, have an impact on the secure score.
+Only built-in recommendations that are part of the default initiative, Microsoft Cloud Security Benchmark, have an impact on the secure score.
Recommendations flagged as **Preview** aren't included in the calculations of your secure score. They should still be remediated wherever possible, so that when the preview period ends they'll contribute towards your score. Preview recommendations are marked with: :::image type="icon" source="media/secure-score-security-controls/preview-icon.png" border="false":::
We recommend every organization carefully reviews their assigned Azure Policy in
> [!TIP] > For details about reviewing and editing your initiatives, see [manage security policies](tutorial-security-policy.md).
-Even though Defender for Cloud's default security initiative, the Azure Security Benchmark, is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br>
+Even though Defender for Cloud's default security initiative, the Microsoft Cloud Security Benchmark, is based on industry best practices and standards, there are scenarios in which the built-in recommendations listed below might not completely fit your organization. It's sometimes necessary to adjust the default initiative - without compromising security - to ensure it's aligned with your organization's own policies, industry standards, regulatory standards, and benchmarks.<br><br>
[!INCLUDE [security-center-controls-and-recommendations](../../includes/asc/security-control-recommendations.md)]
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
description: In this quickstart, you create and test a private DNS resolver in A
Previously updated : 06/21/2023 Last updated : 10/20/2023
Next, add a virtual network to the resource group that you created, and configur
3. Select the **Inbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myinboundendpoint). 4. Next to **Subnet**, select the inbound endpoint subnet you created (ex: snet-inbound, 10.0.0.0/28) and then select **Save**.+
+> [!NOTE]
+> You can choose a static or dynamic IP address for the inbound endpoint. A dynamic IP address is used by default. Typically the first available [non-reserved](../virtual-network/virtual-networks-faq.md#are-there-any-restrictions-on-using-ip-addresses-within-these-subnets) IP address is assigned (example: 10.0.0.4). This dynamic IP address does not change unless the endpoint is deleted and reprovisioned. To specify a static address, select **Static** and enter a non-reserved IP address in the subnet.
+ 5. Select the **Outbound Endpoints** tab, select **Add an endpoint**, and then enter a name next to **Endpoint name** (ex: myoutboundendpoint). 6. Next to **Subnet**, select the outbound endpoint subnet you created (ex: snet-outbound, 10.1.1.0/28) and then select **Save**. 7. Select the **Ruleset** tab, select **Add a ruleset**, and enter the following:
healthcare-apis Get Access Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-access-token.md
To get an access token using Azure CLI:
1. Use the access token in your requests to the DICOM service by adding it as a header with the name `Authorization` and the value `Bearer <access token>`.
-### Store a token in a variable
+#### Store a token in a variable
The DICOM service uses a `resource` or `Audience` with uniform resource identifier (URI) equal to the URI of the DICOM server `https://dicom.healthcareapis.azure.com`. You can obtain a token and store it in a variable (named `$token`) with the following command:
-```Azure CLICopy
-Try It
+```azurecli
$token=$(az account get-access-token --resource=https://dicom.healthcareapis.azure.com --query accessToken --output tsv) ```
-### Tips for using a local installation of Azure CLI
+#### Tips for using a local installation of Azure CLI
* If you're using a local installation, sign in to the Azure CLI with the [az login](/cli/azure/reference-index#az-login) command. To finish authentication, follow the on-screen steps. For more information, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
$token=$(az account get-access-token --resource=https://dicom.healthcareapis.azu
You can use a token with the DICOM service [using cURL](dicomweb-standard-apis-curl.md). Here's an example:
-```Azure CLICopy
-Try It
-curl -X GET --header "Authorization: Bearer $token" https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v<version of REST API>/changefeed
+```cURL
+curl -X GET --header "Authorization: Bearer $token" https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com/v<version of REST API>/changefeed
``` [!INCLUDE [DICOM trademark statement](../includes/healthcare-apis-dicom-trademark.md)]
operator-nexus Concepts Nexus Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-kubernetes-cluster.md
remain isolated to specific racks.
## Next steps * [Guide to deploy Nexus kubernetes cluster](./quickstarts-kubernetes-cluster-deployment-bicep.md)
+* [Supported Kubernetes versions](./reference-nexus-kubernetes-cluster-supported-versions.md)
operator-nexus Howto Kubernetes Cluster Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-upgrade.md
+
+ Title: Upgrade an Azure Operator Nexus Kubernetes cluster
+description: Learn how to upgrade an Azure Operator Nexus Kubernetes cluster to get the latest features and security updates.
++++ Last updated : 10/05/2023 +++
+# Upgrade an Azure Operator Nexus Kubernetes cluster
+
+This article provides instructions on how to upgrade an Operator Nexus Kubernetes cluster to get the latest features and security updates. Part of the Kubernetes cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. This article shows you how to check for, configure, and apply upgrades to your Kubernetes cluster.
+
+## Limitations
+
+* The cluster upgrade process is a scale-out approach, meaning that at least one extra node is added (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)). If there isn't sufficient capacity available, the upgrade fails.
+* When new Kubernetes versions become available, tenant clusters won't undergo automatic upgrades. Users should initiate the upgrade when all network functions in the cluster are ready to support the new Kubernetes version. For more information, see [Upgrade the cluster](#upgrade-the-cluster).
+* Operator Nexus offers cluster-wide upgrades, ensuring consistency across all node pools. Upgrading a single node pool isn't supported. Also, the node image is upgraded as part of the cluster upgrade when a new version is available.
+* Customizations made to agent nodes will be lost during cluster upgrades. It's recommended to place these customizations in `DaemonSet` rather than making manual changes to node configuration in order to preserve them after the upgrade.
+* Modifications made to core addon configurations are restored to the default addon configuration as part of the cluster upgrade process. Avoid customizing addon configuration (for example, Calico, etc.) to prevent potential upgrade failures. If the addon configuration restoration encounters issues, it might lead to upgrade failures.
+* When you upgrade the Operator Nexus Kubernetes cluster, Kubernetes minor versions can't be skipped. You must perform all upgrades sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* isn't allowed. If your version is behind by more than one minor version, you should perform multiple sequential upgrades.
+* The max surge values must be set during the cluster creation. You can't change the max surge values after the cluster is created. For more information, see `upgradeSettings` in [Create an Azure Operator Nexus Kubernetes cluster](./quickstarts-kubernetes-cluster-deployment-bicep.md).
+
+## Prerequisites
+
+* An Azure Operator Nexus Kubernetes cluster deployed in a resource group in your Azure subscription.
+* If you're using Azure CLI, this article requires that you're running the latest Azure CLI version. If you need to install or upgrade, see [Install Azure CLI](./howto-install-cli-extensions.md)
+* Understand the version bundles concept. For more information, see [Nexus Kubernetes version bundles](./reference-nexus-kubernetes-cluster-supported-versions.md#version-bundles).
+
+## Check for available upgrades
+
+Check which Kubernetes releases are available for your cluster using the following steps:
+
+### Use Azure CLI
+
+The following Azure CLI command returns the available upgrades for your cluster:
+
+```azurecli
+az networkcloud kubernetescluster show --name <NexusK8sClusterName> --resource-group <ResourceGroup> --output json --query availableUpgrades
+```
+
+Sample output:
+
+```json
+[
+ {
+ "availabilityLifecycle": "GenerallyAvailable",
+ "version": "v1.25.4-4"
+ },
+ {
+ "availabilityLifecycle": "GenerallyAvailable",
+ "version": "v1.25.6-1"
+ },
+ {
+ "availabilityLifecycle": "GenerallyAvailable",
+ "version": "v1.26.3-1"
+ }
+]
+```
+
+### Use the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your Operator Nexus Kubernetes cluster.
+3. Under **Overview**, select **Available upgrades** tab.
++
+### Choose a version to upgrade to
+
+The available upgrade output indicates that there are multiple versions to choose from for upgrading. In this specific scenario, the current cluster is operating on version `v1.25.4-3`. As a result, the available upgrade options include `v1.25.4-4` and the latest patch release `v1.25.6-1`. Furthermore, a new minor version is also available.
+
+You have the flexibility to upgrade to any of the available versions. However, the recommended course of action is to perform the upgrade to the most recent available `major-minor-patch-versionbundle` version.
+
+> [!NOTE]
+> The input format for the version is `major.minor.patch` or `major.minor.patch-versionbundle`. The version input must be one of the available upgrade versions. For example, if the current version of the cluster is `1.1.1-1`, valid version inputs are `1.1.1-2` or `1.1.1-x`. While `1.1.1` is a valid format, it won't trigger any update because the current version is already `1.1.1`. To initiate an update, you can specify the complete version with the version bundle, such as `1.1.1-2`. However, `1.1.2` and `1.2.x` are valid inputs and will use the latest version bundle available for `1.1.2` or `1.2.x`.
+
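For example, reusing the cluster and resource group names from the commands later in this article and a version from the sample output above, you could pin the upgrade to a specific bundle by passing the full `major.minor.patch-versionbundle` string:

```azurecli
az networkcloud kubernetescluster update \
  --name myNexusK8sCluster \
  --resource-group myResourceGroup \
  --kubernetes-version v1.25.6-1
```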
+## Upgrade the cluster
+
+During the cluster upgrade process, Operator Nexus performs the following operations:
+
+* Add a new control plane node with the specified Kubernetes version to the cluster.
+* After the new node has been added, cordon and drain one of the old control plane nodes, ensuring that the workloads running on it are gracefully moved to other healthy control plane nodes.
+* After the old control plane node has been drained, it's removed, and a new control plane node is added to the cluster.
+* This process repeats until all control plane nodes in the cluster have been upgraded.
+* For each agent pool in the cluster, add a new worker node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) with the specified Kubernetes version. Multiple Agent pools are upgraded simultaneously.
+* [Cordon and drain][kubernetes-drain] one of the old worker nodes to minimize disruption to running applications. If you're using max surge, it [cordons and drains][kubernetes-drain] as many worker nodes at the same time as the number of buffer nodes specified.
+* After the old worker node has been drained, it's removed, and a new worker node with the new Kubernetes version is added to the cluster (or as many nodes as configured in [max surge](#customize-node-surge-upgrade))
+* This process repeats until all worker nodes in the cluster have been upgraded.
+
+> [!IMPORTANT]
+> Ensure that any `PodDisruptionBudgets` ([PDB](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets)) allow for at least *one* pod replica to be moved at a time; otherwise the drain/evict operation will fail.
+> If the drain operation fails, the upgrade operation fails as well, to ensure that the applications are not disrupted. Correct what caused the operation to stop (for example, incorrect PDBs or lack of quota) and retry the operation.
+
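As a minimal sketch (the PDB name and the `app=my-app` label are hypothetical), a PodDisruptionBudget that still lets the drain evict one replica at a time could be created like this:

```bash
# Hypothetical example: allow at most one replica to be unavailable at a time,
# so node drains during the upgrade can evict pods without violating the PDB.
kubectl create poddisruptionbudget my-app-pdb \
  --selector=app=my-app \
  --max-unavailable=1
```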
+1. Upgrade your cluster using the `networkcloud kubernetescluster update` command.
+
+```azurecli
+az networkcloud kubernetescluster update --name myNexusK8sCluster --resource-group myResourceGroup --kubernetes-version v1.26.3
+```
+
+2. Confirm the upgrade was successful using the `show` command.
+
+```azurecli
+az networkcloud kubernetescluster show --name myNexusK8sCluster --resource-group myResourceGroup --output json --query kubernetesVersion
+```
+
+The following example output shows that the cluster now runs *v1.26.3*:
+
+```output
+"v1.26.3"
+```
+
+3. Ensure that the cluster is healthy.
+
+```azurecli
+az networkcloud kubernetescluster show --name myNexusK8sCluster --resource-group myResourceGroup --output table
+```
+
+The following example output shows that the cluster is healthy:
+
+```output
+Name               ResourceGroup    ProvisioningState    DetailedStatus    DetailedStatusMessage             Location
+-----------------  ---------------  -------------------  ----------------  --------------------------------  --------------
+myNexusK8sCluster  myResourceGroup  Succeeded            Available         Cluster is operational and ready  southcentralus
+```
+
+## Customize node surge upgrade
+
+By default, Operator Nexus configures upgrades to surge with one extra worker node. A default value of one for the max surge settings enables Operator Nexus to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value can be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. When you increase the max surge value, the upgrade process completes faster. If you set a large value for max surge, you might experience disruptions during the upgrade process.
+
+For example, a max surge value of 100% provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You might want to use a higher value such as this for testing environments. For production node pools, we recommend a max_surge setting of 33%.
+
+The API accepts both integer values and a percentage value for max surge. An integer such as 5 indicates five extra nodes to surge. A value of 50% indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value.
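As a small illustrative sketch of that round-up rule (not a command from this article), the effective surge node count for a percentage value can be computed like this:

```bash
# Sketch: ceil(node_count * max_surge_percent / 100) using integer arithmetic.
node_count=9
max_surge_percent=33
surge_nodes=$(( (node_count * max_surge_percent + 99) / 100 ))
echo "$surge_nodes"   # prints 3: a 9-node pool at 33% surges 3 extra nodes
```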
+
+During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge isn't higher than the number of nodes in the pool at the time of upgrade.
+
+> [!IMPORTANT]
+> Standard Kubernetes workloads natively cycle to the new nodes when they're drained from the nodes being torn down. Keep in mind that the Operator Nexus Kubernetes service can't make workload promises for nonstandard Kubernetes behaviors.
+
+## Next steps
+
+* Learn more about [Nexus Kubernetes version bundles](./reference-nexus-kubernetes-cluster-supported-versions.md#version-bundles).
+
+<!-- LINKS - external -->
+[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
operator-nexus Reference Nexus Kubernetes Cluster Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-kubernetes-cluster-supported-versions.md
+
+ Title: Supported Kubernetes versions in Azure Operator Nexus Kubernetes service
+description: Learn the Kubernetes version support policy and lifecycle of clusters in Azure Operator Nexus Kubernetes service
+ Last updated : 10/04/2023+++++
+# Supported Kubernetes versions in Azure Operator Nexus Kubernetes service
+
+This document provides an overview of the versioning schema used for the Operator Nexus Kubernetes service, including the supported Kubernetes versions. It explains the differences between major, minor, and patch versions, and provides guidance on upgrading Kubernetes versions, and what the upgrade experience is like. The document also covers the version support lifecycle and end of life (EOL) for each minor version of Kubernetes.
+
+The Kubernetes community releases minor versions roughly every three months. Starting with version 1.19, the Kubernetes community has [increased the support window for each version from nine months to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/).
+
+Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
+
+## Kubernetes versions
+
+Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme for each version:
+
+```bash
+[major].[minor].[patch]
+
+Examples:
+ 1.24.7
+ 1.25.4
+```
+
+Each number in the version indicates general compatibility with the previous version:
+
+* **Major version numbers** change when breaking changes to the API might be introduced
+* **Minor version numbers** change when functionality updates are made that are backwards compatible to the other minor releases.
+* **Patch version numbers** change when backwards-compatible bug fixes are made.
+
+We strongly recommend staying up to date with the latest available patches. For example, if your production cluster is on **`1.25.4`** and **`1.25.6`** is the latest available patch version for the *1.25* series, you should upgrade to **`1.25.6`** as soon as possible to ensure your cluster is fully patched and supported. Further details on upgrading your cluster can be found in the [Upgrading Kubernetes versions](./howto-kubernetes-cluster-upgrade.md) documentation.
+
+## Nexus Kubernetes release calendar
+
+View the upcoming version releases on the Nexus Kubernetes release calendar.
+
+> [!NOTE]
+> Read more about [our support policy for Kubernetes versioning](#kubernetes-version-support-policy).
+
+For the past release history, see [Kubernetes history](https://github.com/kubernetes/kubernetes/releases).
+
+| K8s version | Nexus GA | End of life | Extended Availability |
+|--|--|--|--|
+| 1.25 | Jun 2023 | Dec 2023 | Until 1.31 GA |
+| 1.26 | Sep 2023 | Mar 2024 | Until 1.32 GA |
+| 1.27* | Sep 2023 | Jul 2024, LTS until Jul 2025 | Until 1.33 GA |
+| 1.28 | Nov 2023 | Oct 2024 | Until 1.34 GA |
++
+*\* Indicates the version is designated for Long Term Support*
+
+## Nexus Kubernetes service version components
+
+An Operator Nexus Kubernetes service version is made of two discrete components that are combined into a single representation:
+
+* The Kubernetes version. For example, 1.25.4, is the version of Kubernetes that you deploy in Operator Nexus. These packages are supplied by Azure AKS, including all patch versions that Operator Nexus supports. For more information on Azure AKS versions, see [AKS Supported Kubernetes Versions](../aks/supported-kubernetes-versions.md)
+* The [Version Bundle](#version-bundles), which encapsulates the features (add-ons) and the operating system image used by nodes in the Operator Nexus Kubernetes cluster, as a single number. For example, 2.
+The combination of these values is represented in the API as the single kubernetesVersion. For example, `1.25.4-2` or the alternatively supported "v" notation: `v1.25.4-2`.
+
+### Version bundles
+
+By extending the version of Kubernetes to include a secondary value for the patch version, the version bundle, Operator Nexus Kubernetes service can account for cases where the deployment is modified to include extra Operating System related updates. Such updates might include but aren't limited to: updated operating system images, patch releases for features (add-ons) and so on. Version bundles are always backward compatible with prior version bundles within the same patch version, for example, 1.25.4-2 is backwards compatible with 1.25.4-1.
+
+Changes to the configuration of a deployed Operator Nexus Kubernetes cluster should only be applied within a Kubernetes minor version upgrade, not during a patch version upgrade. Examples of configuration changes that could be applied during the minor version upgrade include:
+
+* Changing the configuration of the kube-proxy from using the iptables to ipvs
+* Changing the CNI from one product to another
+
+When we follow these principles, it becomes easier to predict and manage the process of moving between different versions of Kubernetes clusters offered by the Operator Nexus Kubernetes service.
+
+You can upgrade from any patch release in one minor version to any patch release in the next minor version, giving you flexibility. For example, an upgrade from 1.24.1-x to 1.25.4-x would be allowed, regardless of the presence of an intermediate 1.24.2-x version.
+
+### Components version and breaking changes
+
+Note the following important changes to make before you upgrade to any of the available minor versions:
+
+| Kubernetes Version | Version Bundle | Components | OS components | Breaking Changes | Notes |
+|--|--|--|--|--|--|
+| 1.25.4 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.25.4 | 2 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.25.4 | 3 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.25.4 | 4 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.4<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.25.6 | 1 | Calico v3.24.0<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.26.3 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.8.6<br>etcd v3.5.6-5 | Mariner 2.0 (2023-06-18) | No breaking changes | |
+| 1.27.1 | 1 | Calico v3.26.1<br>metrics-server v0.6.3<br>Multus v3.8.0<br>CoreDNS v1.9.3<br>etcd v3.5.6-5 | Mariner 2.0 (2023-09-21) | Cgroupv2 | Steps to disable cgroupv2 can be found [here](./howto-disable-cgroupsv2.md) |
+
+## Upgrading Kubernetes versions
+
+For more information on upgrading your cluster, see [Upgrade an Azure Operator Nexus Kubernetes Service cluster](./howto-kubernetes-cluster-upgrade.md).
+
+## Kubernetes version support policy
+
+Operator Nexus supports three minor versions of Kubernetes:
+
+* The latest GA minor version released in Operator Nexus (which we refer to as *N*).
+* Two previous minor versions.
+ * Each supported minor version also supports a maximum of the two latest stable patch versions; earlier patch versions fall under the [extended availability policy](#extended-availability-policy) for the lifetime of the minor version.
+
+Operator Nexus Kubernetes service provides a standardized duration of support for each minor version of Kubernetes that is released. Versions adhere to two different timelines, reflecting:
+
+* Duration of support: how long a version is actively maintained. At the end of the supported period, the version is "End of life."
+* Extended availability: how long a version can be selected for deployment after "End of life."
+
+The supported window of Kubernetes versions on Operator Nexus is known as "N-2": N (the latest release) minus 2 (minor versions). The ".letter" suffix in the following examples represents patch versions.
+
+For example, if Operator Nexus introduces *1.17.a* today, support is provided for the following versions:
+
+New minor version | Supported Version List
+-- | -
+1.17.a | 1.17.a, 1.17.b, 1.16.c, 1.16.d, 1.15.e, 1.15.f
+
+When a new minor version is introduced, the oldest supported minor version and its patch releases go out of support. For example, suppose the current supported version list is:
+
+```
+1.17.a
+1.17.b
+1.16.c
+1.16.d
+1.15.e
+1.15.f
+```
+
+When Operator Nexus releases 1.18.\*, all the 1.15.\* versions go out of support.
+
+### Support timeline
+
+Operator Nexus Kubernetes service typically provides support for 12 months from the initial AKS GA release of a minor version. This timeline follows the timing of Azure AKS, which includes a declared Long-Term Support version, 1.27.
+
+Supported versions:
+
+* Can be deployed as new Operator Nexus Kubernetes clusters.
+* Can be the target of upgrades from prior versions, subject to normal upgrade paths.
+* Might have extra patches or Version Bundles within the minor version.
+
+> [!NOTE]
+> In exceptional circumstances, Nexus Kubernetes service support might be terminated early or immediately if a vulnerability or security concern is identified. Microsoft proactively notifies customers if this occurs and works to mitigate any potential issues.
+
+### End of life (EOL)
+
+End of life (EOL) means that no more patches or version bundles are produced for the version. The cluster you've set up might not be upgradable anymore because the latest supported versions are no longer available. In this event, the only way to upgrade is to completely recreate the Nexus Kubernetes cluster using a newer version that is supported. Unsupported upgrades available through `Extended availability` can be used to return to a supported version.
+
+## Extended availability policy
+
+During the extended availability period for unsupported Kubernetes versions (that is, EOL Kubernetes versions), users don't receive security patches or bug fixes. For detailed information on support categories, see the following table.
+
+| Support category | N-2 to N | Extended availability |
+|--|--|--|
+| Upgrades from N-3 to a supported version | Supported | Supported |
+| Node pool scaling | Supported | Supported |
+| Cluster or node pool creation | Supported | Supported |
+| Kubernetes components (including Add-ons)| Supported | Not supported |
+| Component updates | Supported | Not supported |
+| Component hotfixes | Supported | Not supported |
+| Applying Kubernetes bug fixes | Supported | Not supported |
+| Applying Kubernetes security patches | Supported | Not supported |
+| Node image security patches | Supported | Not supported |
+
+> [!NOTE]
+> Operator Nexus relies on the releases and patches from [Kubernetes](https://kubernetes.io/releases/), which is an open source project that only supports a sliding window of three minor versions. Operator Nexus can only guarantee [full support](#kubernetes-version-support-policy) while those versions are serviced upstream. Because no more patches are produced upstream, Operator Nexus must either leave those versions unpatched or fork them. Due to this limitation, extended availability doesn't include anything that relies on upstream Kubernetes support.
+
+## Supported `kubectl` versions
+
+You can use one minor version older or newer of `kubectl` relative to your *kube-apiserver* version, consistent with the [Kubernetes support policy for kubectl](https://kubernetes.io/docs/setup/release/version-skew-policy/#kubectl).
+
+For example, if your *kube-apiserver* is at *1.17*, then you can use versions *1.16* to *1.18* of `kubectl` with that *kube-apiserver*.
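+
+To check the skew in your own environment, you can compare the client and server versions that `kubectl` reports. A minimal sketch:
+
+```bash
+# Print both the kubectl client version and the kube-apiserver (server) version
+kubectl version --output=yaml
+```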
+
+To install or update `kubectl` to the latest version, run:
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az aks install-cli
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```powershell
+Install-AzAksKubectl -Version latest
+```
+++
+## Long Term Support (LTS)
+
+Azure Kubernetes Service (AKS) provides a Long Term Support (LTS) version of Kubernetes for a two-year period. There's only a single minor version of Kubernetes deemed LTS at any one time.
+
+| | Community Support | Long Term Support |
+|--|--|--|
+| **When to use** | When you can keep up with upstream Kubernetes releases | Scenarios where your applications aren't compatible with the changes introduced in newer Kubernetes versions, and you can't transition to a continuous release cycle due to technical constraints or other factors |
+| **Support versions** | Three GA minor versions | One Kubernetes version (currently *1.27*) for two years |
+
+The upstream community maintains a minor release of Kubernetes for one year from release. After this period, Microsoft creates and applies security updates to the LTS version of Kubernetes to provide a total of two years of support on AKS.
+
+> [!IMPORTANT]
+> Kubernetes version 1.27 is the first supported LTS version of Kubernetes on Operator Nexus Kubernetes service.
+
+## FAQ
+
+### How does Microsoft notify me of new Kubernetes versions?
+
+This document is updated periodically with planned dates of the new Kubernetes versions.
+
+### How often should I expect to upgrade Kubernetes versions to stay in support?
+
+Starting with Kubernetes 1.19, the [open source community has expanded support to one year](https://kubernetes.io/blog/2020/08/31/kubernetes-1-19-feature-one-year-support/). Operator Nexus commits to enabling patches and support matching the upstream commitments. For Operator Nexus clusters on 1.19 and greater, upgrading at least once a year keeps you on a supported version.
+
+### What happens when you upgrade a Kubernetes cluster with a minor version that isn't supported?
+
+If you're on the *N-3* version or older, you're outside of the support window. When you upgrade from version N-3 to N-2, you're back within our support window. For example:
+
+* If the oldest supported AKS version is *1.25.x* and you're on *1.24.x* or older, you're outside of support.
+* Successfully upgrading from *1.24.x* to *1.25.x* or higher brings you back within our support window.
+* "Skip-level upgrades" aren't supported. In order to upgrade from *1.23.x* to *1.25.x*, you must upgrade first to *1.24.x* and then to *1.25.x*.
+
+Downgrades aren't supported.
+
+### What happens if I don't upgrade my cluster?
+
+If you don't upgrade your cluster, you continue to receive support for the Kubernetes version you're running until the end of the support period. After that, you'll no longer receive support for your cluster. You need to upgrade your cluster to a supported version to continue receiving support.
+
+### What happens if I don't upgrade my cluster before the end of the Extended availability period?
+
+If you don't upgrade your cluster before the end of the Extended availability period, you'll no longer be able to upgrade your cluster to a supported version or scale out agent pools. You need to recreate your cluster using a supported version to continue receiving support.
+
+### What does 'Outside of Support' mean?
+
+'Outside of Support' means that:
+
+* The version you're running is outside of the supported versions list.
+* You're asked to upgrade the cluster to a supported version when requesting support.
+
+Additionally, Operator Nexus doesn't make any runtime or other guarantees for clusters outside of the supported versions list.
+
+### What happens when a user scales a Kubernetes cluster with a minor version that isn't supported?
+
+For minor versions not supported by Operator Nexus, scaling in or out should continue to work. Since there are no guarantees with quality of service, we recommend upgrading to bring your cluster back into support.
+
+### Can I skip multiple Kubernetes versions during cluster upgrade?
+
+When you upgrade a supported Operator Nexus Kubernetes cluster, Kubernetes minor versions can't be skipped. The Kubernetes control plane [version skew policy](https://kubernetes.io/releases/version-skew-policy/) doesn't support minor version skipping. For example, upgrades between:
+
+* *1.12.x* -> *1.13.x*: allowed.
+* *1.13.x* -> *1.14.x*: allowed.
+* *1.12.x* -> *1.14.x*: not allowed.
+
+To upgrade from *1.12.x* -> *1.14.x* (see the sketch after these steps):
+
+1. Upgrade from *1.12.x* -> *1.13.x*.
+2. Upgrade from *1.13.x* -> *1.14.x*.
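+
+Each of these steps is a separate cluster update. The following is a sketch only; it assumes the `az networkcloud kubernetescluster update` command with its `--kubernetes-version` parameter, and the cluster name, resource group, and version values are placeholders. For the exact procedure, see [Upgrade an Azure Operator Nexus Kubernetes Service cluster](./howto-kubernetes-cluster-upgrade.md).
+
+```azurecli
+# First hop: upgrade to the intermediate minor version
+az networkcloud kubernetescluster update \
+  --name "<cluster-name>" \
+  --resource-group "<resource-group-name>" \
+  --kubernetes-version "v1.13.x"
+
+# Second hop: upgrade to the target minor version
+az networkcloud kubernetescluster update \
+  --name "<cluster-name>" \
+  --resource-group "<resource-group-name>" \
+  --kubernetes-version "v1.14.x"
+```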
+
+### Can I create a new cluster during its extended availability window?
+
+Yes, you can create a new 1.xx.x cluster during its extended availability window. However, we recommend that you create a new cluster with the latest supported version.
+
+### Can I upgrade a cluster to a newer version during its extended availability window?
+
+Yes, you can upgrade an N-3 cluster to N-2 during its extended availability window. If your cluster is currently on N-4, you can make use of the extended availability to first upgrade from N-4 to N-3, and then proceed with the upgrade to a supported version (N-2).
+
+### I'm in an extended availability window. Can I still add new node pools, or do I have to upgrade?
+
+Yes, you're allowed to add node pools to the cluster.
operator-nexus Troubleshoot Kubernetes Cluster Dual Stack Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-kubernetes-cluster-dual-stack-configuration.md
+
+ Title: Troubleshooting dual-stack Nexus Kubernetes Cluster configuration issues
+description: Troubleshooting the configuration of a dual-stack IP.
+++ Last updated : 10/19/2023+++
+# Troubleshooting dual-stack Nexus Kubernetes Cluster configuration issues
+
+This guide provides detailed steps for troubleshooting issues related to setting up a dual-stack Nexus Kubernetes cluster. If you've created a dual-stack cluster but are experiencing issues, this guide helps you identify and resolve potential configuration problems.
+
+## Prerequisites
+
+* Install the latest version of the
+ [az CLI extensions](./howto-install-cli-extensions.md)
+* Tenant ID
+* Necessary permissions to make changes to the cluster configuration.
+
+## Dual-stack configuration
+
+Dual-stack configuration involves running both IPv4 and IPv6 protocols on your CNI network. This allows Kubernetes services that support both protocols to communicate over either IPv4 or IPv6.
+
+## Common issues
+
+ - A dual-stack Nexus Kubernetes cluster has been established, yet you're having trouble observing the dual-stack address on the CNI network. Additionally, the Kubernetes services are not receiving dual-stack addresses.
+
+## Configuration steps
+
+ - **Step 1: Verifying dual-stack L3 network**
+
+ Make sure that your Layer 3 (L3) network, which serves as the Container Network Interface (CNI), is properly set up to manage both IPv4 and IPv6 traffic. Use the `az networkcloud l3network show` command for validation.
+ - Example:
+
+ ```json
+ "ipAllocationType": "DualStack",
+ "ipv4ConnectedPrefix": "166.XXX.XXX.X/24",
+ "ipv6ConnectedPrefix": "fda0:XXXX:XXXX:XXX::/64",
+ ```
+
+> [!NOTE]
+> If the output only contains an IPv4 address, please consult the [prerequisites for deploying tenant workloads](./quickstarts-tenant-workload-prerequisites.md) to establish a dual-stack network.
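+
+The `show` command itself might look like the following sketch. The network and resource group names are placeholders, and the `--query` filter is optional; it only narrows the output to the fields shown in the preceding example.
+
+```azurecli
+# Display the IP allocation type and connected prefixes of the L3 network used as the CNI network
+az networkcloud l3network show \
+  --name "<l3-network-name>" \
+  --resource-group "<resource-group-name>" \
+  --query "{ipAllocationType:ipAllocationType, ipv4ConnectedPrefix:ipv4ConnectedPrefix, ipv6ConnectedPrefix:ipv6ConnectedPrefix}"
+```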
+
+ - **Step 2: Validating Nexus Kubernetes Cluster configuration:**
+
+ To ensure proper configuration for dual-stack networking in your Nexus Kubernetes cluster, follow these steps:
+
+ 1. Execute the command `az networkcloud kubernetescluster show` to retrieve information about your cluster.
+ 2. Examine the `networkConfiguration` section in the `az networkcloud kubernetescluster show` output.
+ 3. Confirm that `podCidrs` and `serviceCidrs` are set as arrays, each containing one IPv4 prefix and one IPv6 prefix.
+ 4. To enable the Kubernetes service to have a dual-stack address, make sure that the IP pool configuration includes both IPv4 and IPv6 addresses. For more information, see [IP address pool configuration](howto-kubernetes-service-load-balancer.md#bicep-template-parameters-for-ip-address-pool-configuration).
+
+ By following these steps, you can guarantee the correct setup of dual-stack networking in your Nexus Kubernetes cluster.
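+
+ As a quick check, you can filter the `show` output from step 1 to just these fields. This is a sketch; the cluster and resource group names are placeholders.
+
+ ```azurecli
+ # Show only the pod and service CIDR configuration of the cluster
+ az networkcloud kubernetescluster show \
+   --name "<cluster-name>" \
+   --resource-group "<resource-group-name>" \
+   --query "networkConfiguration.{podCidrs:podCidrs, serviceCidrs:serviceCidrs}"
+ ```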
+
+ - Example:
+
+ ```json
+ "podCidrs": [
+ "10.XXX.X.X/16",
+ "fdbe:8fbe:17b7:0::/64"
+ ],
+ "serviceCidrs": [
+ "10.XXX.X.X/16",
+ "fda0:XXXX:XXXX:ffff::/108"
+ ]
+ ```
+
+> [!NOTE]
+> The prefix length for IPv6 `serviceCidrs` must be >= 108 (for example, /64 won't work).
+
+ - **Step 3: Ensuring proper peering configuration:**
+
+If the configurations in steps 1 and 2 are correct but traffic issues persist, ensure that any peering connections or routes between your cluster and external networks are properly established for both IPv4 and IPv6 traffic. When the Nexus Kubernetes cluster isn't configured with IPv6 in `podCidrs` and `serviceCidrs`, IPv4 peering occurs on the CE but not IPv6.
+
+ Action: Review and update peering configurations as necessary to accommodate dual-stack traffic.
+
+## Sample output
+
+ - Output without IPv6 configuration:
+
+ ```plaintext
+ BGP summary information
+ Router identifier 10.X.XXX.XX, local AS number 65501
+ Neighbor Status Codes: m - Under maintenance
+ Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
+ 107.XXX.XX.X 4 64906 222452 239726 0 0 7d02h Estab(NotNegotiated)
+ ...
+ ```
+
+ - Output with IPv6 configuration:
+
+ ```plaintext
+ BGP summary information
+ Router identifier 10.X.XXX.XX, local AS number 65501
+ Neighbor Status Codes: m - Under maintenance
+ Neighbor V AS MsgRcvd MsgSent InQ OutQ Up/Down State PfxRcd PfxAcc
+ 107.XXX.XX.X 4 64906 246524 265580 0 0 7d20h Estab(NotNegotiated)
+ ...
+ ```
+
+## Additional recommendations
+
+Examine logs and error messages for indicators of configuration issues.
+
+## Conclusion
+
+Setting up a dual-stack configuration involves enabling both IPv4 and IPv6 on your network and ensuring services can communicate over both. By following the steps outlined in this guide, you should be able to identify and resolve common configuration issues related to setting up a dual-stack cluster. If you continue to experience difficulties, consider seeking further assistance from your network administrator or consulting platform-specific support resources.
orbital Concepts Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact.md
Title: Ground station contact - Azure Orbital
-description: Learn more about the contact resource and how to schedule a contact.
+ Title: Contact resource - Azure Orbital Ground Station
+description: Learn more about a contact resource and how to schedule a contact.
#Customer intent: As a satellite operator or user, I want to understand what the contact resource is so I can manage my mission operations.
-# Ground station contact
+# Ground station contact resource
A contact occurs when the spacecraft passes over a specified ground station. You can find available passes and schedule contacts for your spacecraft through the Azure Orbital Ground Station platform. A contact and ground station pass mean the same thing. When you schedule a contact for a spacecraft, a contact resource is created under your spacecraft resource in your resource group. The contact is only associated with that particular spacecraft and can't be transferred to another spacecraft, resource group, or region.
-## Contact resource
+## Contact parameters
The contact resource contains the start time and end time of the pass and other parameters related to pass operations. The full list is below.
The RX and TX start/end times might differ depending on the individual station m
## Create a contact In order to create a contact, you must have the following prerequisites:
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An [authorized](register-spacecraft.md) spacecraft resource.
+- A [contact profile](contact-profile.md) with links in accordance with the spacecraft resource above.
-* An [authorized](register-spacecraft.md) spacecraft resource.
-* A [contact profile](contact-profile.md) with links in accordance with the spacecraft resource above.
+Contacts are created on a per-pass and per-site basis. If you already know the pass timings for your spacecraft and desired ground station, you can directly proceed to schedule the pass with these times. The service will succeed in creating the contact resource if the window is available and fail if the window is unavailable.
-Contacts are created on a per-pass and per-site basis. If you already know the pass timings for your spacecraft and selected ground station, you can directly proceed to schedule the pass with these times. The service will succeed in creating the contact resource if the window is available and fail if the window is unavailable.
-
-If you don't know the pass timings, or which sites are available, then you can use the Orbital portal or API to determine those details. Query the available passes and use the results to schedule your passes accordingly.
+If you don't know your spacecraft's pass timings or which ground station sites are available, you can use the [Azure portal](https://aka.ms/orbital/portal) or [Azure Orbital Ground Station API](/rest/api/orbital/) to determine those details. Query the available passes and use the results to schedule your passes accordingly.
| Method | List available contacts | Schedule contacts | Notes | |-|-|-|-|
-|Portal| Yes | Yes | Custom pass timings aren't possible. You must use the results from the query. |
-|API | Yes | Yes| Custom pass timings are possible. |
+|Portal| Yes | Yes | Custom pass timings aren't supported. You must use the results from the query. |
+|API | Yes | Yes | Custom pass timings are supported. |
+
+See [how-to schedule a contact](schedule-contact.md) for instructions to use the Azure portal. See [API documentation](/rest/api/orbital/) for instructions to use the Azure Orbital Ground Station API.
+
+## Cancel a scheduled contact
+
+In order to cancel a scheduled contact, you must delete the contact resource. You must have the following prerequisites:
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An [authorized](register-spacecraft.md) spacecraft resource.
+- A [contact profile](contact-profile.md) with links in accordance with the spacecraft resource above.
+- A [scheduled contact](schedule-contact.md).
+
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select the name of the spacecraft for the scheduled contact.
+3. Select **Contacts** from the left menu bar in the spacecraft's overview page.
+
+ :::image type="content" source="media/orbital-eos-delete-contact.png" alt-text="Select a scheduled contact" lightbox="media/orbital-eos-delete-contact.png":::
+
+4. Select the name of the contact to be deleted.
+5. Select **Delete** from the top bar of the contact's configuration view.
+
+ :::image type="content" source="media/orbital-eos-contact-config-view.png" alt-text="Delete a scheduled contact" lightbox="media/orbital-eos-contact-config-view.png":::
-See [how-to schedule a contact](schedule-contact.md) for the Portal method. The API can also be used to create a contact. See the [API docs](/rest/api/orbital/) for this method.
+6. The scheduled contact will be canceled once the contact entry is deleted.
## Next steps
search Search Add Autocomplete Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-add-autocomplete-suggestions.md
-+ Last updated 10/03/2023-+
-# How to add autocomplete and search suggestions in client apps
+# Add autocomplete and search suggestions in client apps
Search-as-you-type is a common technique for improving query productivity. In Azure Cognitive Search, this experience is supported through *autocomplete*, which finishes a term or phrase based on partial input (completing "micro" with "microsoft"). A second user experience is *suggestions*, or a short list of matching documents (returning book titles with an ID so that you can link to a detail page about that book). Both autocomplete and suggestions are predicated on a match in the index. The service won't offer queries that return zero results.
search Search Get Started Vector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md
Because this is a hybrid query, results are RRF-ranked. RRF evaluates search sco
} ```
-Because RRF merges results, it helps to review the inputs. The following results are from just the full text query. Top two results are Sublime Cliff Hotel and History Lion Resort, with Sublime Cliff Hotel having a much stronger relevance score.
+Because RRF merges results, it helps to review the inputs. The following results are from just the full text query. Top two results are Sublime Cliff Hotel and History Lion Resort, with Sublime Cliff Hotel having a much stronger BM25 relevance score.
```http {
Because RRF merges results, it helps to review the inputs. The following results
}, ```
-In the vector-only query, Sublime Cliff Hotel drops to position four. But Historic Lion, which was second in full text search and third in vector search, doesn't experience the same range of fluctuation and thus appears as a top match in a homogenized result set.
+In the vector-only query using HNSW for finding matches, Sublime Cliff Hotel drops to position four. But Historic Lion, which was second in full text search and third in vector search, doesn't experience the same range of fluctuation and thus appears as a top match in a homogenized result set.
```http "value": [
spring-apps How To Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-custom-domain.md
az keyvault set-policy \
:::image type="content" source="./media/how-to-custom-domain/select-certificate-from-key-vault.png" alt-text="Screenshot of the Azure portal showing the Select certificate from Azure page." lightbox="./media/how-to-custom-domain/select-certificate-from-key-vault.png":::
-1. On the opened **Set certificate name** page, enter your certificate name, and then select **Apply**.
+1. On the opened **Set certificate name** page, enter your certificate name, select **Enable auto sync** if needed, and then select **Apply**. For more information, see the [Auto sync certificate](#auto-sync-certificate) section.
+
+ :::image type="content" source="./media/how-to-custom-domain/set-certificate-name.png" alt-text="Screenshot of the Set certificate name dialog box.":::
1. When you have successfully imported your certificate, it displays in the list of **Private Key Certificates**.
az spring certificate add \
--service <Azure-Spring-Apps-instance-name> \ --name <cert-name> \ --vault-uri <key-vault-uri> \
- --vault-certificate-name <key-vault-cert-name>
+ --vault-certificate-name <key-vault-cert-name> \
+ --enable-auto-sync false
```
-Use the following command show a list of imported certificates:
+To enable certificate auto sync, include the `--enable-auto-sync true` setting when you add the certificate, as shown in the following example. For more information, see the [Auto sync certificate](#auto-sync-certificate) section.
+
+```azurecli
+az spring certificate add \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <cert-name> \
+ --vault-uri <key-vault-uri> \
+ --vault-certificate-name <key-vault-cert-name> \
+ --enable-auto-sync true
+```
+
+Use the following command to show a list of imported certificates:
```azurecli az spring certificate list \
az spring certificate list \
> [!IMPORTANT]
-> To secure a custom domain with this certificate, you still need to bind the certificate to a specific domain. Follow the steps in the [Add SSL Binding](#add-ssl-binding) section.
+> To secure a custom domain with this certificate, be sure to bind the certificate to the specific domain. For more information, see the [Add SSL binding](#add-ssl-binding) section.
+
+### Auto sync certificate
+
+A certificate stored in Azure Key Vault sometimes gets renewed before it expires. Similarly, your organization's security policies for certificate management might require your DevOps team to replace certificates with new ones regularly. After you enable auto sync for a certificate, Azure Spring Apps starts to sync your key vault for a new version regularly, usually every 24 hours. If a new version is available, Azure Spring Apps imports it and then reloads it for the various components that use the certificate, without causing any downtime. The following list shows the affected components:
+
+- App custom domain
+- [VMware Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md) custom domain
+- [API portal for VMware Tanzu](./how-to-use-enterprise-api-portal.md) custom domain
+- [VMware Tanzu Application Accelerator](./how-to-use-accelerator.md) custom domain
+- [Application Configuration Service for Tanzu](./how-to-enterprise-application-configuration-service.md)
+
+When Azure Spring Apps imports or reloads a certificate, an activity log is generated. To see the activity logs, navigate to your Azure Spring Apps instance in the Azure portal and select **Activity log** in the navigation pane.
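+
+If you prefer the Azure CLI, you can also list recent activity log entries for the resource group that contains your Azure Spring Apps instance. This is a sketch only; the resource group name is a placeholder, and `--offset 24h` limits the results to the last day.
+
+```azurecli
+# List activity log entries from the last 24 hours for the resource group
+az monitor activity-log list \
+    --resource-group <resource-group-name> \
+    --offset 24h \
+    --output table
+```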
+
+> [!NOTE]
+> The certificate auto sync feature works with private certificates and public certificates imported from Azure Key Vault. This feature is unavailable for content certificates, which the customer uploads.
+
+You can enable or disable the certificate auto sync feature when you import a certificate from your key vault to Azure Spring Apps. For more information, see the [Import certificate to Azure Spring Apps](#import-certificate-to-azure-spring-apps) section.
+
+You can also enable or disable this feature for a certificate that has already been imported to Azure Spring Apps.
+
+#### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to enable or disable auto sync for an imported certificate:
+
+1. Go to the list of **Private Key Certificates** or **Public Key Certificates**.
+
+1. Select the ellipsis (**...**) button after the **Auto sync** column, and then select either **Enable auto sync** or **Disable auto sync**.
+
+ :::image type="content" source="./media/how-to-custom-domain/edit-auto-sync.png" alt-text="Screenshot of the Azure portal that shows a certificate list with the ellipsis button menu open and the Enable auto sync option selected." lightbox="./media/how-to-custom-domain/edit-auto-sync.png":::
+
+#### [Azure CLI](#tab/Azure-CLI)
+
+Use the following command to enable auto sync for an imported certificate:
+
+```azurecli
+az spring certificate update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <cert-name> \
+ --enable-auto-sync true
+```
+
+Use the following command to disable auto sync for an imported certificate:
+
+```azurecli
+az spring certificate update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <cert-name> \
+ --enable-auto-sync false
+```
++ ## Add Custom Domain
spring-apps How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-tls-certificate.md
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-This article shows you how to use public certificates in Azure Spring Apps for your application. Your app may act as a client and access an external service that requires certificate authentication, or it may need to perform cryptographic tasks.
+This article shows you how to use public certificates in Azure Spring Apps for your application. Your app might act as a client and access an external service that requires certificate authentication, or it might need to perform cryptographic tasks.
When you let Azure Spring Apps manage your TLS/SSL certificates, you can maintain the certificates and your application code separately to safeguard your sensitive data. Your app code can access the public certificates you add to your Azure Spring Apps instance.
-> [!NOTE]
-> Azure CLI and Terraform support and samples will be coming soon to this article.
- ## Prerequisites - An application deployed to Azure Spring Apps. See [Quickstart: Deploy your first application in Azure Spring Apps](./quickstart.md), or use an existing app.
You can choose to import your certificate into your Azure Spring Apps instance f
You need to grant Azure Spring Apps access to your key vault before you import your certificate using these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Select **Key vaults**, then select the Key Vault you'll import your certificate from.
+1. Select **Key vaults**, then select the key vault that you want to import your certificate from.
1. In the left navigation pane, select **Access policies**, then select **Create**. 1. Select **Certificate permissions**, then select **Get** and **List**.
You need to grant Azure Spring Apps access to your key vault before you import y
After you grant access to your key vault, you can import your certificate using these steps: 1. Go to your service instance.+ 1. From the left navigation pane of your instance, select **TLS/SSL settings**.+ 1. Select **Import Key Vault Certificate** in the **Public Key Certificates** section.
-1. Select your Key Vault in **Key vault** and the certificate in **Certificate**, then **Select** and **Apply**.
-1. When you have successfully imported your certificate, you'll see it in the list of Public Key Certificates.
+
+1. Select your key vault in the **Key vaults** section, select your certificate in the **Certificate** section, and then select **Select**.
+
+1. Provide a value for **Certificate name**, select **Enable auto sync** if needed, and then select **Apply**. For more information, see the [Auto sync certificate](./how-to-custom-domain.md#auto-sync-certificate) section of [Map an existing custom domain to Azure Spring Apps](./how-to-custom-domain.md).
+
+After you've successfully imported your certificate, you see it in the list of Public Key Certificates.
> [!NOTE] > The Azure Key Vault and Azure Spring Apps instances should be in the same tenant.
You can import a certificate file stored locally using these steps:
1. Go to your service instance. 1. From the left navigation pane of your instance, select **TLS/SSL settings**. 1. Select **Upload public certificate** in the **Public Key Certificates** section.
-1. When you've successfully imported your certificate, you'll see it in the list of Public Key Certificates.
+
+After you've successfully imported your certificate, you see it in the list of Public Key Certificates.
## Load a certificate
X509Certificate cert = (X509Certificate) factory.generateCertificate(is);
### Load a certificate into the trust store
-For a Java application, you can choose **Load into trust store** for the selected certificate. The certificate will be automatically added to the Java default TrustStores to authenticate a server in SSL authentication.
+For a Java application, you can choose **Load into trust store** for the selected certificate. The certificate is automatically added to the Java default TrustStores to authenticate a server in SSL authentication.
The following log from your app shows that the certificate is successfully loaded.
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Title: Configure anonymous read access for containers and blobs description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container's anonymous access setting to make containers and blobs available for anonymous access.--++ Last updated 09/12/2023- ms.devlang: powershell, azurecli
storage Anonymous Read Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md
Title: Overview of remediating anonymous read access for blob data
description: Learn how to remediate anonymous read access to blob data for both Azure Resource Manager and classic storage accounts. -+ Last updated 09/12/2023-+
storage Anonymous Read Access Prevent Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md
Title: Remediate anonymous read access to blob data (classic deployments) description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous access to containers.--++ Last updated 09/12/2023-+ ms.devlang: powershell, azurecli
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
Title: Remediate anonymous read access to blob data (Azure Resource Manager deployments) description: Learn how to analyze current anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container.--++ Last updated 09/12/2023-+ ms.devlang: powershell, azurecli
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Title: Assign an Azure role for access to blob data description: Learn how to assign permissions for blob data to a Microsoft Entra security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Microsoft Entra ID.--++ Last updated 04/19/2022- ms.devlang: powershell, azurecli
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Title: Authorize access to blobs using Active Directory description: Authorize access to Azure blobs using Microsoft Entra ID. Assign Azure roles for access rights. Access data with a Microsoft Entra account.--++ Last updated 03/17/2023-+ # Authorize access to blobs using Microsoft Entra ID
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-cli.md
Title: Authorize access to blob data with Azure CLI description: Specify how to authorize data operations against blob data with the Azure CLI. You can authorize data operations using Microsoft Entra credentials, with the account access key, or with a shared access signature (SAS) token.--++ Last updated 07/12/2021-+ ms.devlang: azurecli
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
Title: Authorize access to blob data in the Azure portal description: When you access blob data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Microsoft Entra account or the storage account access key.--++ Last updated 12/10/2021-+
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Microsoft Entra credentials to access blob data description: PowerShell supports signing in with Microsoft Entra credentials to run commands on blob data in Azure Storage. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Microsoft Entra security principal.--++ Last updated 05/12/2022-+ ms.devlang: powershell
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
Title: Client-side encryption for blobs
description: The Blob Storage client library supports client-side encryption and integration with Azure Key Vault for users requiring encryption on the client. -+ Last updated 12/12/2022-+ ms.devlang: csharp
storage Encryption Customer Provided Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-customer-provided-keys.md
Title: Provide an encryption key on a request to Blob storage
description: Clients making requests against Azure Blob storage can provide an encryption key on a per-request basis. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations. -+ Last updated 05/09/2022 -+
storage Encryption Scope Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md
Title: Create and manage encryption scopes
description: Learn how to create an encryption scope to isolate blob data at the container or blob level. -+ Last updated 05/10/2023 -+ ms.devlang: powershell, azurecli
storage Encryption Scope Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md
Title: Encryption scopes for Blob storage
description: Encryption scopes provide the ability to manage encryption at the level of the container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. -+ Last updated 06/01/2023 -+
For more information about working with encryption scopes, see [Create and manag
By default, a storage account is encrypted with a key that is scoped to the entire storage account. When you define an encryption scope, you specify a key that may be scoped to a container or an individual blob. When the encryption scope is applied to a blob, the blob is encrypted with that key. When the encryption scope is applied to a container, it serves as the default scope for blobs in that container, so that all blobs that are uploaded to that container may be encrypted with the same key. The container can be configured to enforce the default encryption scope for all blobs in the container, or to permit an individual blob to be uploaded to the container with an encryption scope other than the default.
-Read operations on a blob that was created with an encryption scope happen transparently, so long as the encryption scope is not disabled.
+Read operations on a blob that was created with an encryption scope happen transparently, so long as the encryption scope isn't disabled.
### Key management
A storage account may have up to 10,000 encryption scopes that are protected wit
Infrastructure encryption in Azure Storage enables double encryption of data. With infrastructure encryption, data is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys.
-Infrastructure encryption is supported for an encryption scope, as well as at the level of the storage account. If infrastructure encryption is enabled for an account, then any encryption scope created on that account automatically uses infrastructure encryption. If infrastructure encryption is not enabled at the account level, then you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure encryption setting for an encryption scope cannot be changed after the scope is created.
+Infrastructure encryption is supported for an encryption scope, as well as at the level of the storage account. If infrastructure encryption is enabled for an account, then any encryption scope created on that account automatically uses infrastructure encryption. If infrastructure encryption isn't enabled at the account level, then you have the option to enable it for an encryption scope at the time that you create the scope. The infrastructure encryption setting for an encryption scope cannot be changed after the scope is created.
For more information about infrastructure encryption, see [Enable infrastructure encryption for double encryption of data](../common/infrastructure-encryption-enable.md).
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Title: Security recommendations for Blob storage description: Learn about security recommendations for Blob storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model.-+ Last updated 09/12/2023-+
storage Static Website Content Delivery Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/static-website-content-delivery-network.md
Title: Integrate a static website with Azure CDN description: Learn how to cache static website content from an Azure Storage account by using Azure Content Delivery Network (CDN).-+ -+ Last updated 04/07/2020
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Title: Actions and attributes for Azure role assignment conditions for Azure Blob Storage description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure Blob Storage. --++ Last updated 08/10/2023-+
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI - Azure ABAC" description: Add a role assignment condition to restrict access to blobs using Azure CLI and Azure attribute-based access control (Azure ABAC).--++ -+ Last updated 06/26/2023
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Title: Example Azure role assignment conditions for Blob Storage description: Example Azure role assignment conditions for Blob Storage.--++ - Last updated 05/09/2023
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC" description: Add a role assignment condition to restrict access to blobs using the Azure portal and Azure attribute-based access control (Azure ABAC).--++ - Last updated 03/15/2023
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell - Azure ABAC" description: Add a role assignment condition to restrict access to blobs using Azure PowerShell and Azure attribute-based access control (Azure ABAC).--++ -+ Last updated 03/15/2023
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
Title: Security considerations for Azure role assignment conditions in Azure Blob Storage description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC).--++ Last updated 05/09/2023-+
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Title: Authorize access to Azure Blob Storage using Azure role assignment conditions description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes.--++ Last updated 04/21/2023-
storage Storage Blob Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md
Title: Check the encryption status of a blob
description: Learn how to use Azure portal, PowerShell, or Azure CLI to check whether a given blob is encrypted. -+ Last updated 02/09/2023-+ ms.devlang: azurecli
storage Storage Blob Static Website Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-host.md
Title: 'Tutorial: Host a static website on Blob storage description: Learn how to configure a storage account for static website hosting, and deploy a static website to Azure Storage.-+ Last updated 11/04/2021-+ #Customer intent: I want to host files for a static website in Blob storage and access the website from an Azure endpoint.
storage Storage Blob Static Website How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website-how-to.md
Title: Host a static website in Azure Storage description: Learn how to serve static content (HTML, CSS, JavaScript, and image files) directly from a container in an Azure Storage GPv2 account.-+ -+ Last updated 04/19/2022
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-static-website.md
Title: Static website hosting in Azure Storage description: Azure Storage static website hosting, providing a cost-effective, scalable solution for hosting modern web applications.-+ -+ Last updated 07/24/2023
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
# Set or change a block blob's access tier with .NET + This article shows how to set or change the access tier for a block blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Use Access Tier Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md
# Set or change a block blob's access tier with Java + This article shows how to set or change the access tier for a block blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). ## Prerequisites
storage Storage Blob Use Access Tier Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md
# Set or change a block blob's access tier with JavaScript + This article shows how to set or change a blob's [access tier](access-tiers-overview.md) for block blobs with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob Use Access Tier Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md
# Set or change a block blob's access tier with Python + This article shows how to set or change the access tier for a block blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage). ## Prerequisites
storage Storage Blob Use Access Tier Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md
# Set or change a block blob's access tier with TypeScript + This article shows how to set or change a blob's [access tier](access-tiers-overview.md) with the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). ## Prerequisites
storage Storage Blob User Delegation Sas Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md
Title: Use Azure CLI to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Microsoft Entra credentials by using Azure CLI. --++ Last updated 12/18/2019-
storage Storage Blob User Delegation Sas Create Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md
description: Learn how to create a user delegation SAS for a blob with Microsoft Entra credentials by using the .NET client library for Blob Storage. -+ Last updated 06/22/2023- ms.devlang: csharp
storage Storage Blob User Delegation Sas Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md
description: Learn how to create a user delegation SAS for a blob with Microsoft Entra credentials by using the Azure Storage client library for Java. -+ Last updated 06/12/2023- ms.devlang: java
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
Title: Use PowerShell to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Microsoft Entra credentials by using PowerShell. --++ Last updated 12/18/2019-
storage Storage Blob User Delegation Sas Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md
description: Learn how to create a user delegation SAS for a blob with Microsoft Entra credentials by using the Python client library for Blob Storage. -+ Last updated 06/06/2023- ms.devlang: python
storage Storage Blobs Static Site Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-static-site-github-actions.md
Title: Use GitHub Actions to deploy a static site to Azure Storage description: Azure Storage static website hosting with GitHub Actions-+ -+ Last updated 01/24/2022
storage Storage Custom Domain Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-custom-domain-name.md
Title: Map a custom domain to an Azure Blob Storage endpoint description: Map a custom domain to a Blob Storage or web endpoint in an Azure storage account.-+ Last updated 02/12/2021-+
storage Authorization Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorization-resource-provider.md
Title: Use the Azure Storage resource provider to access management resources description: The Azure Storage resource provider is a service that provides access to management resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and delete resources such as storage accounts, private endpoints, and account access keys. --++ Last updated 12/12/2019-
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Title: Authorize operations for data access
description: Learn about the different ways to authorize access to data in Azure Storage. Azure Storage supports authorization with Microsoft Entra ID, Shared Key authorization, or shared access signatures (SAS), and also supports anonymous access to blobs. --++ Last updated 05/31/2023-
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Last updated 01/18/2023-+
storage Configure Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/configure-network-routing-preference.md
Title: Configure network routing preference
description: Configure network routing preference for your Azure storage account to specify how network traffic is routed to your account from clients over the Internet. -+ Last updated 03/17/2021-+
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Title: Configure cross-tenant customer-managed keys for an existing storage acco
description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account resides. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider. -+ Last updated 10/31/2022-+
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Title: Configure cross-tenant customer-managed keys for a new storage account
description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account will be created. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider. -+ Last updated 10/31/2022-+
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
Title: Configure customer-managed keys in the same tenant for an existing storag
description: Learn how to configure Azure Storage encryption with customer-managed keys for an existing storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault. -+ Last updated 06/07/2023-+
storage Customer Managed Keys Configure Key Vault Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault-hsm.md
Title: Configure encryption with customer-managed keys stored in Azure Key Vault
description: Learn how to configure Azure Storage encryption with customer-managed keys stored in Azure Key Vault Managed HSM by using Azure CLI. -+ Last updated 05/05/2022-+
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
Title: Configure customer-managed keys in the same tenant for a new storage acco
description: Learn how to configure Azure Storage encryption with customer-managed keys for a new storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault. -+ Last updated 03/23/2023-+
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Title: Customer-managed keys for account encryption
description: You can use your own encryption key to protect the data in your storage account. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Customer-managed keys offer greater flexibility to manage access controls. -+ Last updated 05/11/2023 -+
storage Geo Redundant Design Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/geo-redundant-design-legacy.md
Title: Use geo-redundancy to design highly available applications (.NET v11 SDK)
description: Learn how to use geo-redundant storage to design a highly available application using the .NET v11 SDK that is flexible enough to handle outages. Last updated: 08/23/2022 ms.devlang: csharp
storage Geo Redundant Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/geo-redundant-design.md
Title: Use geo-redundancy to design highly available applications
description: Learn how to use geo-redundant storage to design a highly available application that is flexible enough to handle outages. Last updated: 08/23/2022 ms.devlang: csharp
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
Title: Enable infrastructure encryption for double encryption of data
description: Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account or encryption scope is encrypted twice with two different encryption algorithms and two different keys. Last updated: 10/19/2022
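For reference, a minimal Azure CLI sketch of creating an account with infrastructure (double) encryption enabled; the account, resource group, location, and SKU values are placeholders, and infrastructure encryption is set when the account is created.
```azurecli
# Create a storage account with infrastructure (double) encryption enabled (placeholder names).
az storage account create \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --location eastus \
    --sku Standard_RAGRS \
    --kind StorageV2 \
    --require-infrastructure-encryption
```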
storage Last Sync Time Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/last-sync-time-get.md
Title: Check the Last Sync Time property for a storage account
description: Learn how to check the Last Sync Time property for a geo-replicated storage account. The Last Sync Time property indicates the last time at which all writes from the primary region were successfully written to the secondary region. Last updated: 07/20/2023
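As a quick reference, one way to read the Last Sync Time with Azure CLI is to expand the geo-replication stats on the account; the account and resource group names below are placeholders.
```azurecli
# Query the Last Sync Time of a geo-replicated account (placeholder names).
az storage account show \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime \
    --output tsv
```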
storage Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/network-routing-preference.md
Title: Network routing preference
description: Network routing preference enables you to specify how network traffic is routed to your account from clients over the internet. Last updated: 03/13/2023
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Title: Change how a storage account is replicated
description: Learn how to change how data in an existing storage account is replicated. Last updated: 09/21/2023
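For context, changing the replication setting is typically a one-line SKU update in Azure CLI; the names and target SKU below are illustrative.
```azurecli
# Change an existing account's redundancy, for example to geo-zone-redundant storage (placeholder names).
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --sku Standard_GZRS
```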
storage Redundancy Regions Gzrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-regions-gzrs.md
Title: List of Azure regions that support geo-zone-redundant storage (GZRS)
description: List of Azure regions that support geo-zone-redundant storage (GZRS). Last updated: 04/28/2023
storage Redundancy Regions Zrs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-regions-zrs.md
Title: List of Azure regions that support zone-redundant storage (ZRS)
description: List of Azure regions that support zone-redundant storage (ZRS). Last updated: 04/28/2023
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Title: Configure an expiration policy for shared access signatures (SAS)
description: Configure a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks. Last updated: 12/12/2022
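As a hedged sketch, the expiration policy is set as a recommended validity period on the account; the names below are placeholders and the period format is assumed to be days.hours:minutes:seconds.
```azurecli
# Set a SAS expiration policy of one day (placeholder names; assumed format <days>.<hours>:<minutes>:<seconds>).
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --sas-expiration-period "1.00:00:00"
```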
storage Security Restrict Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-restrict-copy-operations.md
Title: Permitted scope for copy operations (preview)
description: Learn how to use the "Permitted scope for copy operations (preview)" Azure storage account setting to limit the source accounts of copy operations to the same tenant or with private links to the same virtual network.
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Title: Prevent authorization with Shared Key
description: To require clients to use Microsoft Entra ID to authorize requests, you can disallow requests to the storage account that are authorized with Shared Key. Last updated: 06/06/2023 ms.devlang: azurecli
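For reference, disallowing Shared Key authorization is a single property update in Azure CLI (placeholder names below); clients must then authorize requests with Microsoft Entra ID.
```azurecli
# Disallow Shared Key authorization on the account (placeholder names).
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --allow-shared-key-access false
```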
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Title: Manage account access keys
description: Learn how to view, manage, and rotate your storage account access keys. Last updated: 03/22/2023
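As a quick reference, listing and rotating access keys with Azure CLI looks roughly like the following; the account and resource group names are placeholders.
```azurecli
# List the account access keys (placeholder names).
az storage account keys list \
    --account-name mystorageacct \
    --resource-group myresourcegroup

# Rotate (regenerate) the primary key.
az storage account keys renew \
    --account-name mystorageacct \
    --resource-group myresourcegroup \
    --key primary
```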
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Title: Configure a connection string
description: Configure a connection string for an Azure storage account. A connection string contains the information needed to authorize access to a storage account from your application at runtime using Shared Key authorization. Last updated: 01/24/2023
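For context, a connection string can be retrieved with Azure CLI rather than assembled by hand; the names below are placeholders.
```azurecli
# Retrieve the account's connection string (placeholder names).
az storage account show-connection-string \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --output tsv
```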
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Title: Azure storage disaster recovery planning and failover
description: Azure Storage supports account failover for geo-redundant storage accounts. Create a disaster recovery plan for your storage accounts if the endpoints in the primary region become unavailable. Last updated: 09/22/2023
storage Storage Encryption Key Model Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-encryption-key-model-get.md
Title: Determine which encryption key model is in use for the storage account
description: Use Azure portal, PowerShell, or Azure CLI to check how encryption keys are being managed for the storage account. Keys may be managed by Microsoft (the default), or by the customer. Customer-managed keys must be stored in Azure Key Vault. Last updated: 03/13/2020
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
Title: How Azure Storage account customer-managed failover works
description: Azure Storage supports account failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed failover to the secondary region if the primary endpoint becomes unavailable. Last updated: 09/22/2023
storage Storage Failover Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-private-endpoints.md
Title: Failover considerations for storage accounts with private endpoints
description: Learn how to architect highly available storage accounts using Private Endpoints. Last updated: 05/07/2021
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Title: Initiate a storage account failover
description: Learn how to initiate an account failover in the event that the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account. Last updated: 09/15/2023
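As a hedged sketch, a customer-managed failover can be initiated from Azure CLI roughly as shown below (placeholder names); the command may prompt for confirmation before the secondary region is promoted.
```azurecli
# Initiate account failover to the secondary region (placeholder names).
az storage account failover \
    --name mystorageacct \
    --resource-group myresourcegroup
```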
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks
description: Configure layered network security for your storage account by using the Azure Storage firewall. Last updated: 08/15/2023
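For reference, a minimal sketch of locking down the account firewall and then allowing a specific subnet; all names are placeholders, and the subnet is assumed to have the Microsoft.Storage service endpoint enabled.
```azurecli
# Deny traffic by default, then allow a specific virtual network subnet (placeholder names).
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --default-action Deny

az storage account network-rule add \
    --account-name mystorageacct \
    --resource-group myresourcegroup \
    --vnet-name myvnet \
    --subnet mysubnet
```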
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Title: Use private endpoints
description: Overview of private endpoints for secure access to storage accounts from virtual networks. Last updated: 06/22/2023
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Title: Data redundancy
description: Understand data redundancy in Azure Storage. Data in your Microsoft Azure Storage account is replicated for durability and high availability. Last updated: 09/06/2023
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-require-secure-transfer.md
Title: Require secure transfer to ensure secure connections
description: Learn how to require secure transfer for requests to Azure Storage. When you require secure transfer for a storage account, any requests originating from an insecure connection are rejected. Last updated: 06/01/2021
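As a quick reference, secure transfer is a boolean property on the account; the names below are placeholders.
```azurecli
# Require HTTPS for all requests to the account (placeholder names).
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --https-only true
```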
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Title: Grant limited access to data with shared access signatures (SAS)
description: Learn about using shared access signatures (SAS) to delegate access to Azure Storage resources, including blobs, queues, tables, and files. Last updated: 06/07/2023
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md
Title: Azure Storage encryption for data at rest
description: Azure Storage protects your data by automatically encrypting it before persisting it to the cloud. You can rely on Microsoft-managed keys for the encryption of the data in your storage account, or you can manage encryption with your own keys. Last updated: 02/09/2023
storage Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-client-version.md
Title: Configure Transport Layer Security (TLS) for a client application
description: Configure a client application to communicate with Azure Storage using a minimum version of Transport Layer Security (TLS). Last updated: 12/29/2022 ms.devlang: csharp
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
Title: Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account
description: Configure a storage account to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Storage. Last updated: 12/30/2022
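For reference, a minimal sketch of enforcing TLS 1.2 on an account with Azure CLI; the names are placeholders.
```azurecli
# Require clients to use at least TLS 1.2 (placeholder names).
az storage account update \
    --name mystorageacct \
    --resource-group myresourcegroup \
    --min-tls-version TLS1_2
```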