Updates from: 01/13/2024 02:11:12
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_
data: [DONE]
```
+> [!IMPORTANT]
+> When content filtering is triggered for a prompt and a `"status": 400` is received as part of the response, there may be a charge for this request because the prompt was evaluated by the service. [Charges will also occur](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) when a `"status":200` is received with `"finish_reason": "content_filter"`. In this case the prompt did not have any issues, but the completion generated by the model was detected to violate the content filtering rules, which results in the completion being filtered.
+ ## Best practices
+ As part of your application design, consider the following best practices to deliver a positive experience with your application while minimizing potential harms:
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
**<sup>2</sup>** GPT-4 Turbo with Vision Preview = `gpt-4` (vision-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **vision-preview**.
> [!CAUTION]
-> We don't recommend using these models in production. We will upgrade all deployments of these models to a future stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle.
+> We don't recommend using preview models in production. We will upgrade all deployments of preview models to a future stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle.
> [!NOTE]
> Regions where GPT-4 (0314) & (0613) are listed as available have access to both the 8K and 32K versions of the model
See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
| Model Availability | gpt-4 (0314) | gpt-4 (0613) | gpt-4 (1106-preview) | gpt-4 (vision-preview) |
|---|:---|:---|:---|:---|
-| Available to all subscriptions with Azure OpenAI access | | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | Switzerland North <br> West US |
-| Available to subscriptions with current access to the model version in the region | East US <br> France Central <br> South Central US <br> UK South | East US <br> East US 2 <br> Japan East <br> UK South | | Australia East <br>Sweden Central|
+| Available to all subscriptions with Azure OpenAI access | | Australia East <br> Canada East <br> France Central <br> Sweden Central <br> Switzerland North | Australia East <br> Canada East <br> East US 2 <br> France Central <br> Norway East <br> South India <br> Sweden Central <br> UK South <br> West US | Sweden Central <br> Switzerland North <br> West US |
+| Available to subscriptions with current access to the model version in the region | East US <br> France Central <br> South Central US <br> UK South | East US <br> East US 2 <br> Japan East <br> UK South | | Australia East |
### GPT-3.5 models
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 12/06/2023 Last updated : 01/12/2024
The default quota for models varies by model and region. Default quota limits ar
<tr> <td>gpt-4 (vision-preview)<br>GPT-4 Turbo with Vision</td> <td>Sweden Central, Switzerland North, Australia East, West US</td>
- <td>10 K</td>
+ <td>30 K</td>
</tr> <tr> <td rowspan="2">text-embedding-ada-002</td>
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
AKS clusters deployed using availability zones can distribute nodes across multi
If a single zone becomes unavailable, your applications continue to run on clusters configured to spread across multiple zones.
+> [!NOTE]
+> When implementing **availability zones with the [cluster autoscaler](./cluster-autoscaler-overview.md)**, we recommend using a single node pool for each zone. You can set the `--balance-similar-node-groups` parameter to `True` to maintain a balanced distribution of nodes across zones for your workloads during scale up operations. When this approach isn't implemented, scale down operations can disrupt the balance of nodes across zones.
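As a minimal sketch of that recommendation, the `balance-similar-node-groups` flag can be set through the cluster autoscaler profile with the Azure CLI; the resource group and cluster names below are placeholders:

```azurecli-interactive
# Placeholder names; substitute your own resource group and cluster.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile balance-similar-node-groups=true
```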
+ ## Create an AKS cluster across availability zones
+ When you create a cluster using the [az aks create][az-aks-create] command, the `--zones` parameter specifies the availability zones to deploy agent nodes into. The availability zones that the managed control plane components are deployed into are **not** controlled by this parameter. They are automatically spread across all availability zones (if present) in the region during cluster deployment.
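To illustrate the `--zones` parameter described above, here's a hedged sketch; the resource names are placeholders and the zone list assumes a region with three availability zones:

```azurecli-interactive
# Placeholder names; adjust the zones to what your region offers.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 3 \
    --zones 1 2 3 \
    --generate-ssh-keys
```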
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
-description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
+description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnets.
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net
## Overview of Overlay networking
-In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
+In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from subnets. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods, which provides connectivity performance between pods on par with VMs in a VNet. Workloads running within the pods are not even aware that network address manipulation is happening.
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
## IP address planning
-- **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnet has enough room to grow for future scaling. Keep in mind that clusters can't scale across subnets, but you can always add new node pools in another subnet within the same VNet for extra space. A `/24`subnet can fit up to 251 nodes since the first three IP addresses are reserved for management tasks.
+- **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnets have enough room to grow for future scaling. You can assign each node pool to a dedicated subnet. A `/24` subnet can fit up to 251 nodes since the first three IP addresses are reserved for management tasks.
- **Pods**: The Overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
- When planning IP address space for pods, consider the following factors:
  - The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
az aks create -n $clusterName -g $resourceGroup \
  --pod-cidr 192.168.0.0/16
```
+## Add a new nodepool to a dedicated subnet
+
+After you've created a cluster with Azure CNI Overlay, you can add another node pool and assign its nodes to a new subnet in the same VNet.
+This approach can be useful if you want to control the ingress or egress IPs of the host from or toward targets in the same VNet or peered VNets.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+resourceGroup="myResourceGroup"
+location="westcentralus"
+nodepoolName="newpool1"
+subscriptionId=$(az account show --query id -o tsv)
+vnetName="yourVnetName"
+subnetName="yourNewSubnetName"
+subnetResourceId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnetName/subnets/$subnetName"
+az aks nodepool add -g $resourceGroup --cluster-name $clusterName \
+ --name $nodepoolName --node-count 1 \
+ --mode system --vnet-subnet-id $subnetResourceId
+```
+ ## Upgrade an existing cluster to CNI Overlay
> [!NOTE]
aks Best Practices Performance Scale Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale-large.md
Keeping the above considerations in mind, customers are typically able to deploy
Always upgrade your Kubernetes clusters to the latest version. Newer versions contain many improvements that address performance and throttling issues. If you're using an upgraded version of Kubernetes and still see throttling due to the actual load or the number of clients in the subscription, you can try the following options:
* **Analyze errors using AKS Diagnose and Solve Problems**: You can use [AKS Diagnose and Solve Problems](./aks-diagnostics.md) to analyze errors, identify the root cause, and get resolution recommendations.
- * **Increase the Cluster Autoscaler scan interval**: If the diagnostic reports show that [Cluster Autoscaler throttling has been detected](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can [increase the scan interval](./cluster-autoscaler.md#change-the-cluster-autoscaler-settings) to reduce the number of calls to Virtual Machine Scale Sets from the Cluster Autoscaler.
+ * **Increase the Cluster Autoscaler scan interval**: If the diagnostic reports show that [Cluster Autoscaler throttling has been detected](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can [increase the scan interval](./cluster-autoscaler.md#update-the-cluster-autoscaler-settings) to reduce the number of calls to Virtual Machine Scale Sets from the Cluster Autoscaler.
* **Reconfigure third-party applications to make fewer calls**: If you filter by *user agents* in the ***View request rate and throttle details*** diagnostic and see that [a third-party application, such as a monitoring application, makes a large number of GET requests](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can change the settings of these applications to reduce the frequency of the GET calls. Make sure the application clients use exponential backoff when calling Azure APIs.
* **Split your clusters into different subscriptions or regions**: If you have a large number of clusters and node pools that use Virtual Machine Scale Sets, you can split them into different subscriptions or regions within the same subscription. Most Azure API limits are shared at the subscription-region level, so you can move or scale your clusters to different subscriptions or regions to get unblocked on Azure API throttling. This option is especially helpful if you expect your clusters to have high activity. There are no generic guidelines for these limits. If you want specific guidance, you can create a support ticket.
aks Best Practices Performance Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale.md
Implementing [vertical pod autoscaling](./vertical-pod-autoscaler.md) is useful
Implementing cluster autoscaling is useful if your existing nodes lack sufficient capacity, as it helps with scaling up and provisioning new nodes.
-When considering cluster autoscaling, the decision of when to remove a node involves a tradeoff between optimizing resource utilization and ensuring resource availability. Eliminating underutilized nodes enhances cluster utilization but might result in new workloads having to wait for resources to be provisioned before they can be deployed. It's important to find a balance between these two factors that aligns with your cluster and workload requirements and [configure the cluster autoscaler profile settings accordingly](./cluster-autoscaler.md#change-the-cluster-autoscaler-settings).
+When considering cluster autoscaling, the decision of when to remove a node involves a tradeoff between optimizing resource utilization and ensuring resource availability. Eliminating underutilized nodes enhances cluster utilization but might result in new workloads having to wait for resources to be provisioned before they can be deployed. It's important to find a balance between these two factors that aligns with your cluster and workload requirements and [configure the cluster autoscaler profile settings accordingly](./cluster-autoscaler.md#update-the-cluster-autoscaler-settings).
The Cluster Autoscaler profile settings apply universally to all autoscaler-enabled node pools in your cluster. This means that any scaling actions occurring in one autoscaler-enabled node pool might impact the autoscaling behavior in another node pool. It's important to apply consistent and synchronized profile settings across all relevant node pools to ensure that the autoscaler behaves as expected.
The following table provides a breakdown of suggested use cases for OS disks sup
#### IOPS and throughput
-Input/output operations per second (IOPS) refers to the number of read and write operations that a disk can perform in a second. Throughout refers to the amount of data that can be transferred in a given time period.
+Input/output operations per second (IOPS) refers to the number of read and write operations that a disk can perform in a second. Throughput refers to the amount of data that can be transferred in a given time period.
OS disks are responsible for storing the operating system and its associated files, and the VMs are responsible for running the applications. When selecting a VM, ensure the size and performance of the OS disk and VM SKU don't have a large discrepancy. A discrepancy in size or performance can cause performance issues and resource contention. For example, if the OS disk is significantly smaller than the VMs, it can limit the amount of space available for application data and cause the system to run out of disk space. If the OS disk has lower performance than the VMs, it can become a bottleneck and limit the overall performance of the system. Make sure the size and performance are balanced to ensure optimal performance in Kubernetes.
aks Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md
The following conceptual articles cover some of the fundamental features and com
## Next steps
-For guidance on a creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance].
+For guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
<!-- LINKS - internal -->
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Cluster Autoscaler Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler-overview.md
+
+ Title: Cluster autoscaling in Azure Kubernetes Service (AKS) overview
+
+description: Learn about cluster autoscaling in Azure Kubernetes Service (AKS) using the cluster autoscaler.
+ Last updated : 01/05/2024++
+# Cluster autoscaling in Azure Kubernetes Service (AKS) overview
+
+To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
+
+This article helps you understand how the cluster autoscaler works in AKS. It also provides guidance, best practices, and considerations when configuring the cluster autoscaler for your AKS workloads. If you want to enable, disable, or update the cluster autoscaler for your AKS workloads, see [Use the cluster autoscaler in AKS](./cluster-autoscaler.md).
+
+## About the cluster autoscaler
+
+Clusters often need a way to scale automatically to adjust to changing application demands, such as between workdays and evenings or weekends. AKS clusters can scale in the following ways:
+
+* The **cluster autoscaler** periodically checks for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. Manual scaling is disabled when you use the cluster autoscaler. For more information, see [How does scale up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work).
+* The **[Horizontal Pod Autoscaler][horizontal-pod-autoscaler]** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.
+* The **[Vertical Pod Autoscaler][vertical-pod-autoscaler]** automatically sets resource requests and limits on containers per workload based on past usage to ensure pods are scheduled onto nodes that have the required CPU and memory resources.
++
+It's a common practice to enable cluster autoscaler for nodes and either the Vertical Pod Autoscaler or Horizontal Pod Autoscaler for pods. When you enable the cluster autoscaler, it applies the specified scaling rules when the node pool size is lower than the minimum or greater than the maximum. The cluster autoscaler waits to take effect until a new node is needed in the node pool or until a node might be safely deleted from the current node pool. For more information, see [How does scale down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
+
+## Best practices and considerations
+
+* When implementing **availability zones with the cluster autoscaler**, we recommend using a single node pool for each zone. You can set the `--balance-similar-node-groups` parameter to `True` to maintain a balanced distribution of nodes across zones for your workloads during scale up operations. When this approach isn't implemented, scale down operations can disrupt the balance of nodes across zones.
+* For **clusters with more than 400 nodes**, we recommend using Azure CNI or Azure CNI Overlay.
+* To **effectively run workloads concurrently on both Spot and Fixed node pools**, consider using [*priority expanders*](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders). This approach allows you to schedule pods based on the priority of the node pool.
+* Exercise caution when **assigning CPU/Memory requests on pods**. The cluster autoscaler scales up based on pending pods rather than CPU/Memory pressure on nodes.
+* For **clusters concurrently hosting both long-running workloads, like web apps, and short/bursty job workloads**, we recommend separating them into distinct node pools with [Affinity Rules](./operator-best-practices-advanced-scheduler.md#node-affinity)/[expanders](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) or using [PriorityClass](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) to help prevent unnecessary node drain or scale down operations.
+* We **don't recommend making direct changes to nodes in autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, and system pods running on them.
+* Nodes don't scale up if pods have a PriorityClass value below -10. Priority -10 is reserved for [overprovisioning pods](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). For more information, see [Using the cluster autoscaler with Pod Priority and Preemption](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption).
+* **Don't combine other node autoscaling mechanisms**, such as Virtual Machine Scale Set autoscalers, with the cluster autoscaler.
+* The cluster autoscaler **might be unable to scale down if pods can't move, such as in the following situations**:
+ * A directly created pod not backed by a controller object, such as a Deployment or ReplicaSet.
+ * A pod disruption budget (PDB) that's too restrictive and doesn't allow the number of pods to fall below a certain threshold.
+ * A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node.
+ For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node).
+
+## Cluster autoscaler profile
+
+The [cluster autoscaler profile](./cluster-autoscaler.md#cluster-autoscaler-profile-settings) is a set of parameters that control the behavior of the cluster autoscaler. You can configure the cluster autoscaler profile when you create a cluster or update an existing cluster.
+
+### Optimizing the cluster autoscaler profile
+
+You should fine-tune the cluster autoscaler profile settings according to your specific workload scenarios while also considering tradeoffs between performance and cost. This section provides examples that demonstrate those tradeoffs.
+
+It's important to note that the cluster autoscaler profile settings are cluster-wide and applied to all autoscale-enabled node pools. Any scaling actions that take place in one node pool can affect the autoscaling behavior of other node pools, which can lead to unexpected results. Make sure you apply consistent and synchronized profile configurations across all relevant node pools to ensure you get your desired results.
+
+#### Example 1: Optimizing for performance
+
+For clusters that handle substantial and bursty workloads with a primary focus on performance, we recommend increasing the `scan-interval` and decreasing the `scale-down-utilization-threshold`. These settings help batch multiple scaling operations into a single call, optimizing scaling time and the utilization of compute read/write quotas. It also helps mitigate the risk of swift scale down operations on underutilized nodes, enhancing the pod scheduling efficiency.
+
+For clusters with daemonset pods, we recommend setting `ignore-daemonsets-utilization` to `true`, which effectively ignores node utilization by daemonset pods and minimizes unnecessary scale down operations.
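A sketch of what these performance-oriented settings might look like via the cluster autoscaler profile; the resource names and values are illustrative only, and `ignore-daemonsets-utilization` is listed as a preview setting:

```azurecli-interactive
# Illustrative values; tune for your own workload patterns.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile scan-interval=30s \
        scale-down-utilization-threshold=0.3 \
        ignore-daemonsets-utilization=true
```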
+
+#### Example 2: Optimizing for cost
+
+If you want a cost-optimized profile, we recommend setting the following parameter configurations:
+
+* Reduce `scale-down-unneeded-time`, which is the amount of time a node should be unneeded before it's eligible for scale down.
+* Reduce `scale-down-delay-after-add`, which is the amount of time to wait after a node is added before considering it for scale down.
+* Increase `scale-down-utilization-threshold`, which is the utilization threshold for removing nodes.
+* Increase `max-empty-bulk-delete`, which is the maximum number of nodes that can be deleted in a single call.
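A hedged sketch of a cost-oriented profile along these lines (placeholder names, example values only):

```azurecli-interactive
# Example values; adjust to your cost/performance tradeoff.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --cluster-autoscaler-profile scale-down-unneeded-time=5m \
        scale-down-delay-after-add=5m \
        scale-down-utilization-threshold=0.7 \
        max-empty-bulk-delete=20
```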
+
+## Common issues and mitigation recommendations
+
+### Not triggering scale up operations
+
+| Common causes | Mitigation recommendations |
+|--|--|
+| PersistentVolume node affinity conflicts, which can arise when using the cluster autoscaler with multiple availability zones or when a pod's or persistent volume's zone differs from the node's zone. | Use one node pool per availability zone and enable `--balance-similar-node-groups`. You can also set the [`volumeBindingMode` field to `WaitForFirstConsumer`](./azure-disk-csi.md#create-a-custom-storage-class) in the storage class to prevent the volume from being bound to a node until a pod using the volume is created. |
+| Taints and Tolerations/Node affinity conflicts | Assess the taints assigned to your nodes and review the tolerations defined in your pods. If necessary, make adjustments to the [taints and tolerations](./operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations) to ensure that your pods can be efficiently scheduled on your nodes. |
+
+### Scale up operation failures
+
+| Common causes | Mitigation recommendations |
+|--|--|
+| IP address exhaustion in the subnet | Add another subnet in the same virtual network and add another node pool into the new subnet. |
+| Core quota exhaustion | Approved core quota has been exhausted. [Request a quota increase](../quotas/quickstart-increase-quota-portal.md). The cluster autoscaler enters an [exponential backoff state](#node-pool-in-backoff) within the specific node group when it experiences multiple failed scale up attempts. |
+| Max size of node pool | Increase the max nodes on the node pool or create a new node pool. |
+| Requests/Calls exceeding the rate limit | See [429 Too Many Requests errors](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors). |
+
+### Scale down operation failures
+
+| Common causes | Mitigation recommendations |
+|--|--|
+| Pod preventing node drain/Unable to evict pod | • View [what types of pods can prevent scale down](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node). <br> • For pods using local storage, such as hostPath and emptyDir, set the cluster autoscaler profile flag `skip-nodes-with-local-storage` to `false`. <br> • In the pod specification, set the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation to `true`. <br> • Check your [PDB](https://kubernetes.io/docs/tasks/run-application/configure-pdb/), as it might be restrictive. |
+| Min size of node pool | Reduce the minimum size of the node pool. |
+| Requests/Calls exceeding the rate limit | See [429 Too Many Requests errors](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors). |
+| Write operations locked | Don't make any changes to the [fully managed AKS resource group](./cluster-configuration.md#fully-managed-resource-group-preview) (see [AKS support policies](./support-policies.md)). Remove or reset any [resource locks](../azure-resource-manager/management/lock-resources.md) you previously applied to the resource group. |
+
+### Other issues
+
+| Common causes | Mitigation recommendations |
+|--|--|
+| PriorityConfigMapNotMatchedGroup | Make sure that you add all the node groups requiring autoscaling to the [expander configuration file](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md#configuration). |
+
+### Node pool in backoff
+
+The node pool backoff state was introduced in version 0.6.2 and causes the cluster autoscaler to back off from scaling a node pool after a failure.
+
+Depending on how long the scaling operations have been experiencing failures, it may take up to 30 minutes before making another attempt. You can reset the node pool's backoff state by disabling and then re-enabling autoscaling.
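One way to perform that reset, sketched with the node pool autoscaler commands (placeholder names, assuming a single affected node pool):

```azurecli-interactive
# Disable autoscaling on the affected node pool to clear its backoff state...
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --disable-cluster-autoscaler

# ...then re-enable it with the desired limits.
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```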
+
+<!-- LINKS -->
+[vertical-pod-autoscaler]: vertical-pod-autoscaler.md
+[horizontal-pod-autoscaler]: concepts-scale.md#horizontal-pod-autoscaler
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS)
-description: Learn how to use the cluster autoscaler to automatically scale your Azure Kubernetes Service (AKS) clusters to meet application demands.
+description: Learn how to use the cluster autoscaler to automatically scale your Azure Kubernetes Service (AKS) workloads to meet application demands.
Previously updated : 09/26/2023 Last updated : 01/11/2024
-# Automatically scale a cluster to meet application demands on Azure Kubernetes Service (AKS)
+# Use the cluster autoscaler in Azure Kubernetes Service (AKS)
-To keep up with application demands in Azure Kubernetes Service (AKS), you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demand. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
+To keep up with application demands in AKS, you might need to adjust the number of nodes that run your workloads. The cluster autoscaler component watches for pods in your cluster that can't be scheduled because of resource constraints. When the cluster autoscaler detects issues, it scales up the number of nodes in the node pool to meet the application demands. It also regularly checks nodes for a lack of running pods and scales down the number of nodes as needed.
-This article shows you how to enable and manage the cluster autoscaler in an AKS cluster, which is based on the open source [Kubernetes][kubernetes-cluster-autoscaler] version.
+This article shows you how to enable and manage the cluster autoscaler in AKS, which is based on the [open-source Kubernetes version][kubernetes-cluster-autoscaler].
## Before you begin
This article requires Azure CLI version 2.0.76 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-## About the cluster autoscaler
+## Use the cluster autoscaler on an AKS cluster
-To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in the following ways:
-
-* The **cluster autoscaler** periodically checks for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work).
-* The **[Horizontal Pod Autoscaler][horizontal-pod-autoscaler]** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand.
-* **[Vertical Pod Autoscaler][vertical-pod-autoscaler]** (preview) automatically sets resource requests and limits on containers per workload based on past usage to ensure pods are scheduled onto nodes that have the required CPU and memory resources.
--
-The Horizontal Pod Autoscaler scales the number of pod replicas as needed, and the cluster autoscaler scales the number of nodes in a node pool as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity after a period of time. Any pods on a node removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.
-
-While Vertical Pod Autoscaler or Horizontal Pod Autoscaler can be used to automatically adjust the number of Kubernetes pods in a workload, the number of nodes also needs to be able to scale to meet the computational needs of the pods. The cluster autoscaler addresses that need, handling scale up and scale down of Kubernetes nodes. It is common practice to enable cluster autoscaler for nodes, and either Vertical Pod Autoscaler or Horizontal Pod Autoscalers for pods.
-
-The cluster autoscaler and Horizontal Pod Autoscaler can work together and are often both deployed in a cluster. When combined, the Horizontal Pod Autoscaler runs the number of pods required to meet application demand, and the cluster autoscaler runs the number of nodes required to support the scheduled pods.
-
-> [!NOTE]
-> Manual scaling is disabled when you use the cluster autoscaler. Let the cluster autoscaler determine the required number of nodes. If you want to manually scale your cluster, [disable the cluster autoscaler](#disable-the-cluster-autoscaler-on-a-cluster).
-
-With cluster autoscaler enabled, when the node pool size is lower than the minimum or greater than the maximum it applies the scaling rules. Next, the autoscaler waits to take effect until a new node is needed in the node pool or until a node might be safely deleted from the current node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
-
-The cluster autoscaler might be unable to scale down if pods can't move, such as in the following situations:
-
-* A directly created pod not backed by a controller object, such as a deployment or replica set.
-* A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to fall below a certain threshold.
-* A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node.
-
-For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?][autoscaler-scaledown]
-
-## Use the cluster autoscaler on your AKS cluster
-
-In this section, you deploy, upgrade, disable, or re-enable the cluster autoscaler on your cluster.
-
-The cluster autoscaler uses startup parameters for things like time intervals between scale events and resource thresholds. For more information on what parameters the cluster autoscaler uses, see [using the autoscaler profile](#use-the-cluster-autoscaler-profile).
+> [!IMPORTANT]
+> The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscaling. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group]
### Enable the cluster autoscaler on a new cluster
-> [!IMPORTANT]
-> The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group]
- 1. Create a resource group using the [`az group create`][az-group-create] command. ```azurecli-interactive
The cluster autoscaler uses startup parameters for things like time intervals be
### Enable the cluster autoscaler on an existing cluster
-> [!IMPORTANT]
-> The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings. For more information, see [Can I modify the AKS resources in the node resource group?][aks-faq-node-resource-group]
-
-#### [Azure CLI](#tab/azure-cli)
- * Update an existing cluster using the [`az aks update`][az-aks-update] command and enable and configure the cluster autoscaler on the node pool using the `--enable-cluster-autoscaler` parameter and specifying a node `--min-count` and `--max-count`. The following example command updates an existing AKS cluster to enable the cluster autoscaler on the node pool for the cluster and sets a minimum of one and maximum of three nodes: ```azurecli-interactive
The cluster autoscaler uses startup parameters for things like time intervals be
It takes a few minutes to update the cluster and configure the cluster autoscaler settings.
-#### [Portal](#tab/azure-portal)
-
-1. To enable cluster autoscaler on your existing cluster’s node pools, navigate to *Node pools* from your cluster's overview page in the Azure portal. Select the *scale method* for the node pool you’d like to adjust scaling settings for.
-
- :::image type="content" source="./media/cluster-autoscaler/main-blade-column-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The column for 'Scale method' is highlighted." lightbox="./media/cluster-autoscaler/main-blade-column.png":::
-
-1. From here, you can enable or disable autoscaling, adjust minimum and maximum node count, and learn more about your node pool’s size, capacity, and usage. Select *Apply* to save your changes.
-
- :::image type="content" source="./media/cluster-autoscaler/menu-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools is shown with the 'Scale node pool' menu expanded. The 'Apply' button is highlighted." lightbox="./media/cluster-autoscaler/menu.png":::
---

### Disable the cluster autoscaler on a cluster

* Disable the cluster autoscaler using the [`az aks update`][az-aks-update-preview] command and the `--disable-cluster-autoscaler` parameter.
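    For example, a minimal sketch with placeholder resource group and cluster names:

    ```azurecli-interactive
    az aks update \
        --resource-group myResourceGroup \
        --name myAKSCluster \
        --disable-cluster-autoscaler
    ```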
The cluster autoscaler uses startup parameters for things like time intervals be
Nodes aren't removed when the cluster autoscaler is disabled. > [!NOTE]
-> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, that feature continues to run with the cluster autoscaler disabled, but pods might end up unable to be scheduled if all node resources are in use.
+> You can manually scale your cluster after disabling the cluster autoscaler using the [`az aks scale`][az-aks-scale] command. If you use the horizontal pod autoscaler, it continues to run with the cluster autoscaler disabled, but pods might end up unable to be scheduled if all node resources are in use.
-### Re-enable a disabled cluster autoscaler
+### Re-enable the cluster autoscaler on a cluster
You can re-enable the cluster autoscaler on an existing cluster using the [`az aks update`][az-aks-update-preview] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
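For example, a hedged sketch using the minimum of one and maximum of three nodes mentioned earlier (placeholder names):

```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3
```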
-## Change the cluster autoscaler settings
+## Use the cluster autoscaler on node pools
-> [!IMPORTANT]
-> If you have multiple node pools in your AKS cluster, skip to the [autoscale with multiple agent pools section](#use-the-cluster-autoscaler-with-multiple-node-pools-enabled). Clusters with multiple agent pools require the `az aks nodepool` command instead of `az aks`.
+### Use the cluster autoscaler on multiple node pools
-In our example to enable cluster autoscaling, your cluster autoscaler's minimum node count was set to one and maximum node count was set to three. As your application demands change, you need to adjust the cluster autoscaler node count to scale efficiently.
+You can use the cluster autoscaler with [multiple node pools][aks-multiple-node-pools], enabling it on each individual node pool and passing unique autoscaling rules to each one.
-* Change the node count using the [`az aks update`][az-aks-update] command and update the cluster autoscaler using the `--update-cluster-autoscaler` parameter and specifying your updated node `--min-count` and `--max-count`.
+* Update the settings on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command.
```azurecli-interactive
- az aks update \
+ az aks nodepool update \
--resource-group myResourceGroup \
- --name myAKSCluster \
+ --cluster-name myAKSCluster \
+ --name nodepool1 \
    --update-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
    ```
-> [!NOTE]
-> The cluster autoscaler enforces the minimum count in cases where the actual count drops below the minimum due to external factors, such as during a spot eviction or when changing the minimum count value from the AKS API.
+### Disable the cluster autoscaler on a node pool
-Monitor the performance of your applications and services, and adjust the cluster autoscaler node counts to match the required performance.
+* Disable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and the `--disable-cluster-autoscaler` parameter.
-## Use the cluster autoscaler profile
+ ```azurecli-interactive
+ az aks nodepool update \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name nodepool1 \
+ --disable-cluster-autoscaler
+ ```
-You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale down event happens after nodes are under-utilized after 10 minutes. If you have workloads that run every 15 minutes, you might want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update:
+### Re-enable the cluster autoscaler on a node pool
-* Example profile update that scales after 15 minutes and changes after 10 minutes of idle use.
+You can re-enable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
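For example, mirroring the node pool commands above (placeholder names):

```azurecli-interactive
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```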
+
+> [!NOTE]
+> If you plan on using the cluster autoscaler with node pools that span multiple zones and leverage scheduling features related to zones, such as volume topological scheduling, we recommend you have one node pool per zone and enable `--balance-similar-node-groups` through the autoscaler profile. This ensures the autoscaler can successfully scale up and keep the sizes of the node pools balanced.
+
+## Update the cluster autoscaler settings
+
+As your application demands change, you might need to adjust the cluster autoscaler node count to scale efficiently.
+
+* Change the node count using the [`az aks update`][az-aks-update] command and update the cluster autoscaler using the `--update-cluster-autoscaler` parameter and specifying your updated node `--min-count` and `--max-count`.
```azurecli-interactive az aks update \
- -g learn-aks-cluster-scalability \
- -n learn-aks-cluster-scalability \
- --cluster-autoscaler-profile scan-interval=5s \
- scale-down-unready-time=10m \
- scale-down-delay-after-add=15m
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --update-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 5
```
-| Setting | Description | Default value |
-|-|||
-| scan-interval | How often cluster is reevaluated for scale up or down | 10 seconds |
-| scale-down-delay-after-add | How long after scale up that scale down evaluation resumes | 10 minutes |
-| scale-down-delay-after-delete | How long after node deletion that scale down evaluation resumes | scan-interval |
-| scale-down-delay-after-failure | How long after scale down failure that scale down evaluation resumes | 3 minutes |
-| scale-down-unneeded-time | How long a node should be unneeded before it's eligible for scale down | 10 minutes |
-| scale-down-unready-time | How long an unready node should be unneeded before it's eligible for scale down | 20 minutes |
-| ignore-daemonsets-utilization (Preview) | Whether DaemonSet pods will be ignored when calculating resource utilization for scaling down | false |
-| daemonset-eviction-for-empty-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from empty nodes | false |
-| daemonset-eviction-for-occupied-nodes (Preview) | Whether DaemonSet pods will be gracefully terminated from non-empty nodes | true |
-| scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, in which a node can be considered for scale down | 0.5 |
-| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
-| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
-| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
-| skip-nodes-with-local-storage | If true, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath | true |
-| skip-nodes-with-system-pods | If true, cluster autoscaler doesn't delete nodes with pods from kube-system (except for DaemonSet or mirror pods) | true |
-| max-empty-bulk-delete | Maximum number of empty nodes that can be deleted at the same time | 10 nodes |
-| new-pod-scale-up-delay | For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they're a certain age. | 0 seconds |
-| max-total-unready-percentage | Maximum percentage of unready nodes in the cluster. After this percentage is exceeded, CA halts operations | 45% |
-| max-node-provision-time | Maximum time the autoscaler waits for a node to be provisioned | 15 minutes |
-| ok-total-unready-count | Number of allowed unready nodes, irrespective of max-total-unready-percentage | Three nodes |
+> [!NOTE]
+> The cluster autoscaler enforces the minimum count in cases where the actual count drops below the minimum due to external factors, such as during a spot eviction or when changing the minimum count value from the AKS API.
-> [!IMPORTANT]
-> When using the autoscaler profile, keep the following information in mind:
->
-> * The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile.
-> * The cluster autoscaler profile requires Azure CLI version *2.11.1* or later. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-> * To access preview features use the aks-preview extension version 0.5.126 or later
+## Use the cluster autoscaler profile
+
+You can configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale down event happens after nodes are under-utilized after 10 minutes. If you have workloads that run every 15 minutes, you might want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings.
+> [!IMPORTANT]
+> The cluster autoscaler profile affects **all node pools** that use the cluster autoscaler. You can't set an autoscaler profile per node pool. When you set the profile, any existing node pools with the cluster autoscaler enabled immediately start using the profile.
+
+### Cluster autoscaler profile settings
+
+The following table lists the available settings for the cluster autoscaler profile:
+
+|Setting |Description |Default value |
+|--||--|
+| `scan-interval` | How often the cluster is reevaluated for scale up or down. | 10 seconds |
+| `scale-down-delay-after-add` | How long after scale up that scale down evaluation resumes. | 10 minutes |
+| `scale-down-delay-after-delete` | How long after node deletion that scale down evaluation resumes. | `scan-interval` |
+| `scale-down-delay-after-failure` | How long after scale down failure that scale down evaluation resumes. | Three minutes |
+| `scale-down-unneeded-time` | How long a node should be unneeded before it's eligible for scale down. | 10 minutes |
+| `scale-down-unready-time` | How long an unready node should be unneeded before it's eligible for scale down. | 20 minutes |
+| `ignore-daemonsets-utilization` (Preview) | Whether DaemonSet pods will be ignored when calculating resource utilization for scale down. | `false` |
+| `daemonset-eviction-for-empty-nodes` (Preview) | Whether DaemonSet pods will be gracefully terminated from empty nodes. | `false` |
+| `daemonset-eviction-for-occupied-nodes` (Preview) | Whether DaemonSet pods will be gracefully terminated from non-empty nodes. | `true` |
+| `scale-down-utilization-threshold` | Node utilization level, defined as sum of requested resources divided by capacity, in which a node can be considered for scale down. | 0.5 |
+| `max-graceful-termination-sec` | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node. | 600 seconds |
+| `balance-similar-node-groups` | Detects similar node pools and balances the number of nodes between them. | `false` |
+| `expander` | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to use in scale up. Possible values include `most-pods`, `random`, `least-waste`, and `priority`. | |
+| `skip-nodes-with-local-storage` | If `true`, cluster autoscaler doesn't delete nodes with pods with local storage, for example, EmptyDir or HostPath. | `true` |
+| `skip-nodes-with-system-pods` | If `true`, cluster autoscaler doesn't delete nodes with pods from kube-system (except for DaemonSet or mirror pods). | `true` |
+| `max-empty-bulk-delete` | Maximum number of empty nodes that can be deleted at the same time. | 10 nodes |
+| `new-pod-scale-up-delay` | For scenarios such as burst/batch scale where you don't want CA to act before the Kubernetes scheduler could schedule all the pods, you can tell CA to ignore unscheduled pods before they reach a certain age. | 0 seconds |
+| `max-total-unready-percentage` | Maximum percentage of unready nodes in the cluster. After this percentage is exceeded, CA halts operations. | 45% |
+| `max-node-provision-time` | Maximum time the autoscaler waits for a node to be provisioned. | 15 minutes |
+| `ok-total-unready-count` | Number of allowed unready nodes, irrespective of max-total-unready-percentage. | Three nodes |
### Set the cluster autoscaler profile on a new cluster
You can retrieve logs and status updates from the cluster autoscaler to help dia
### [Azure CLI](#tab/azure-cli)
-Use the following steps to configure logs to be pushed from the cluster autoscaler into Log Analytics:
- 1. Set up a rule for resource logs to push cluster autoscaler logs to Log Analytics using the [instructions here][aks-view-master-logs]. Make sure you check the box for `cluster-autoscaler` when selecting options for **Logs**. 2. Select the **Log** section on your cluster. 3. Enter the following example query into Log Analytics:
Use the following steps to configure logs to be pushed from the cluster autoscal
kubectl get configmap -n kube-system cluster-autoscaler-status -o yaml ```
-### [Portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
-1. Navigate to *Node pools* from your cluster's overview page in the Azure portal. Select any of the tiles for autoscale events, autoscale warnings, or scale-ups not triggered to get more details.
+* Navigate to *Node pools* from your cluster's overview page in the Azure portal. Select any of the tiles for autoscale events, autoscale warnings, or scale ups not triggered to get more details.
- :::image type="content" source="./media/cluster-autoscaler/main-blade-tiles-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The section displaying autoscaler events, warning, and scale-ups not triggered is highlighted." lightbox="./media/cluster-autoscaler/main-blade-tiles.png":::
+ :::image type="content" source="./media/cluster-autoscaler/main-blade-tiles-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's node pools. The section displaying autoscaler events, warning, and scale ups not triggered is highlighted." lightbox="./media/cluster-autoscaler/main-blade-tiles.png":::
-1. You’ll see a list of Kubernetes events filtered to `source: cluster-autoscaler` that have occurred within the last hour. With this information, you’ll be able to troubleshoot and diagnose any issues that might arise while scaling your nodes.
+ This shows a list of Kubernetes events filtered to `source: cluster-autoscaler` that have occurred within the last hour. You can use this information to troubleshoot and diagnose any issues that might arise while scaling your nodes.
:::image type="content" source="./media/cluster-autoscaler/events-inline.png" alt-text="Screenshot of the Azure portal page for a cluster's events. The filter for source is highlighted, showing 'source: cluster-autoscaler'." lightbox="./media/cluster-autoscaler/events.png":::
-To learn more about the autoscaler logs, see the [Kubernetes/autoscaler GitHub project FAQ][kubernetes-faq].
-
-## Use the cluster autoscaler with node pools
-
-### Use the cluster autoscaler with multiple node pools enabled
-
-You can use the cluster autoscaler with [multiple node pools][aks-multiple-node-pools] enabled. When using both features together, you can enable the cluster autoscaler on each individual node pool in the cluster and pass unique autoscaling rules to each node pool.
-
-* Update the settings on an existing node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command. The following command continues from the [previous steps](#enable-the-cluster-autoscaler-on-a-new-cluster) in this article:
-
- ```azurecli-interactive
- az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name nodepool1 \
- --update-cluster-autoscaler \
- --min-count 1 \
- --max-count 5
- ```
-
-### Disable the cluster autoscaler on a node pool
-
-* Disable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and the `--disable-cluster-autoscaler` parameter.
-
- ```azurecli-interactive
- az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name nodepool1 \
- --disable-cluster-autoscaler
- ```
-
-### Re-enable the cluster autoscaler on a node pool
-
-* Re-enable the cluster autoscaler on a node pool using the [`az aks nodepool update`][az-aks-nodepool-update] command and specifying the `--enable-cluster-autoscaler`, `--min-count`, and `--max-count` parameters.
-
- ```azurecli-interactive
- az aks nodepool update \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name nodepool1 \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 5
- ```
-
- > [!NOTE]
- > If you plan on using the cluster autoscaler with node pools that span multiple zones and leverage scheduling features related to zones, such as volume topological scheduling, we recommend you have one node pool per zone and enable the `--balance-similar-node-groups` through the autoscaler profile. This ensures the autoscaler can successfully scale up and keep the sizes of the node pools balanced.
-
-## Configure the horizontal pod autoscaler
-
-Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] provides resource utilization to Kubernetes. You can configure horizontal pod autoscaling through the `kubectl autoscale` command or through a manifest. For more information on using the horizontal pod autoscaler, see the [HorizontalPodAutoscaler walkthrough][kubernetes-hpa-walkthrough].
+For more information, see the [Kubernetes/autoscaler GitHub project FAQ][kubernetes-faq].
## Next steps
To further help improve cluster resource utilization and free up CPU and memory
[az-aks-update]: /cli/azure/aks#az-aks-update [az-aks-scale]: /cli/azure/aks#az-aks-scale [vertical-pod-autoscaler]: vertical-pod-autoscaler.md
-[horizontal-pod-autoscaler]:concepts-scale.md#horizontal-pod-autoscaler
[az-group-create]: /cli/azure/group#az_group_create <!-- LINKS - external --> [az-aks-update-preview]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview [az-aks-nodepool-update]: https://github.com/Azure/azure-cli-extensions/tree/master/src/aks-preview#enable-cluster-auto-scaler-for-a-node-pool
-[autoscaler-scaledown]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node
[kubernetes-faq]: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#ca-doesnt-work-but-it-used-to-work-yesterday-why
-[kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
-[kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
-[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
[kubernetes-cluster-autoscaler]: https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
Title: Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS) description: Learn how to migrate from in-tree persistent volume to the Container Storage Interface (CSI) driver in an Azure Kubernetes Service (AKS) cluster. Previously updated : 07/26/2023 Last updated : 01/11/2024
The following are important considerations to evaluate:
i=$((i + 1)) else PVC_CREATION_TIME=$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.metadata.creationTimestamp}')
- if [[ $PVC_CREATION_TIME > $STARTTIMESTAMP ]]; then
+      if [[ $PVC_CREATION_TIME > $STARTTIMESTAMP || $PVC_CREATION_TIME == $STARTTIMESTAMP ]]; then
if [[ $ENDTIMESTAMP > $PVC_CREATION_TIME ]]; then PV="$(kubectl get pvc $PVC -n $NAMESPACE -o jsonpath='{.spec.volumeName}')" RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
The following are important considerations to evaluate:
* `namespace` - The cluster namespace * `sourceStorageClass` - The in-tree storage driver-based StorageClass * `targetCSIStorageClass` - The CSI storage driver-based StorageClass, which can be either one of the default storage classes that have the provisioner set to **disk.csi.azure.com** or **file.csi.azure.com**. Or you can create a custom storage class as long as it is set to either one of those two provisioners.
- * `startTimeStamp` - Provide a start time in the format **yyyy-mm-ddthh:mm:ssz**.
+ * `startTimeStamp` - Provide a start time **before** the PVC creation time, in the format **yyyy-mm-ddthh:mm:ssz**.
* `endTimeStamp` - Provide an end time in the format **yyyy-mm-ddthh:mm:ssz**. ```bash
aks Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md
Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using the Bicep extensibility Kubernetes provider'
-description: Learn how to quickly create a Kubernetes cluster using the Bicep extensibility Kubernetes provider and deploy an application in Azure Kubernetes Service (AKS).
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Bicep extensibility Kubernetes provider'
+description: Learn how to quickly deploy a Kubernetes cluster using the Bicep extensibility Kubernetes provider and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 12/27/2023
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
Last updated : 01/11/2024
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Bicep extensibility Kubernetes provider (Preview)
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Bicep extensibility Kubernetes provider (preview)
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
> } > ```
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin * This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
To learn more about AKS and walk through a complete code-to-deployment example,
[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup [new-azresourcegroupdeployment]: /powershell/module/az.resources/new-azresourcegroupdeployment [az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using Bicep'
-description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS).
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep'
+description: Learn how to quickly deploy a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS).
Last updated 12/27/2023
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Bicep
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
:::image type="content" source="media/quick-kubernetes-deploy-bicep/aks-store-application.png" alt-text="Screenshot of browsing to Azure Store sample application." lightbox="media/quick-kubernetes-deploy-bicep/aks-store-application.png":::
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin * This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file). * Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
-* To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+* To deploy a Bicep file, you need write access on the resources you create and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to create a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
### Create an SSH key pair
To learn more about AKS and walk through a complete code-to-deployment example,
[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md [new-az-aks-cluster]: /powershell/module/az.aks/new-azakscluster [az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI'
-description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using Azure CLI.
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using Azure CLI.
Last updated 01/10/2024
-#Customer intent: As a developer or cluster operator, I want to create an AKS cluster and deploy an application so I can see how to run and monitor applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to deploy an AKS cluster and deploy an application so I can see how to run applications using the managed Kubernetes service in Azure.
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
- Deploy an AKS cluster using the Azure CLI. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. +
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
To learn more about AKS and walk through a complete code-to-deployment example,
[az-group-create]: /cli/azure/group#az-group-create [az-group-delete]: /cli/azure/group#az-group-delete [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
-[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal'
-description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal.
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using the Azure portal.
Previously updated : 12/27/2023 Last updated : 01/11/2024
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure portal
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
- Deploy an AKS cluster using the Azure portal. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. +
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
To learn more about AKS and walk through a complete code-to-deployment example,
[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md [preset-config]: ../quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal [intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure PowerShell'
-description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell.
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell.
Previously updated : 01/10/2024 Last updated : 01/11/2024
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure PowerShell
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
- Deploy an AKS cluster using Azure PowerShell. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. +
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
To learn more about AKS and walk through a complete code-to-deployment example,
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup [aks-tutorial]: ../tutorial-kubernetes-prepare-app.md [azure-resource-group]: ../../azure-resource-manager/management/overview.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using an ARM template'
-description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS).
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template'
+description: Learn how to quickly deploy a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 12/27/2023 Last updated : 01/11/2024
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
:::image type="content" source="media/quick-kubernetes-deploy-rm-template/aks-store-application.png" alt-text="Screenshot of browsing to Azure Store sample application." lightbox="media/quick-kubernetes-deploy-rm-template/aks-store-application.png":::
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin * This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
To learn more about AKS and walk through a complete code-to-deployment example,
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Kubernetes Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md
Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using Terraform'
-description: Learn how to quickly create a Kubernetes cluster using Terraform and deploy an application in Azure Kubernetes Service (AKS).
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Terraform'
+description: Learn how to quickly deploy a Kubernetes cluster using Terraform and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 12/27/2023 Last updated : 01/11/2024 content_well_notification: - AI-contribution
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
-# Quickstart: Create an Azure Kubernetes Service (AKS) cluster using Terraform
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Terraform
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
:::image type="content" source="media/quick-kubernetes-deploy-terraform/aks-store-application.png" alt-text="Screenshot of browsing to Azure Store sample application." lightbox="media/quick-kubernetes-deploy-terraform/aks-store-application.png":::
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin * This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
To learn more about AKS and walk through a complete code-to-deployment example,
[kubernetes-concepts]: ../concepts-clusters-workloads.md [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
<!-- LINKS - External -->
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI
-description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using Azure CLI.
+ Title: Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using Azure CLI.
Previously updated : 01/09/2024
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
Last updated : 01/11/2024
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
-# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI
+# Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using Azure CLI
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you use Azure CLI to deploy an AKS cluster that runs Windows Server containers. You also deploy an ASP.NET sample application in a Windows Server container to the cluster. +
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin
-This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- This article requires version 2.0.64 or later of the Azure CLI. If you are using Azure Cloud Shell, then the latest version is already installed.
+- This quickstart requires version 2.0.64 or later of the Azure CLI. If you are using Azure Cloud Shell, then the latest version is already installed.
- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account](/cli/azure/account) command.
To learn more about AKS, and to walk through a complete code-to-deployment examp
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster [az-provider-show]: /cli/azure/provider#az_provider_show
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Windows Container Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure portal
-description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure portal.
+ Title: Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure portal
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using the Azure portal.
Previously updated : 12/27/2023
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
Last updated : 01/11/2024
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
-# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure portal
+# Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure portal
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you deploy an AKS cluster that runs Windows Server containers using the Azure portal. You also deploy an ASP.NET sample application in a Windows Server container to the cluster. +
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin
-This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](/azure/cloud-shell/overview).
To learn more about AKS, and to walk through a complete code-to-deployment examp
[kubernetes-service]: ../concepts-network.md#services [preset-config]: ../quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal [import-azakscredential]: /powershell/module/az.aks/import-azakscredential
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell
-description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell.
+ Title: Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell
+description: Learn how to quickly deploy a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell.
Previously updated : 01/09/2024 Last updated : 01/11/2024
-#Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
+#Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
-# Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell
+# Deploy a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you use Azure PowerShell to deploy an AKS cluster that runs Windows Server containers. You also deploy an ASP.NET sample application in a Windows Server container to the cluster. +
+> [!NOTE]
+> To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.
+ ## Before you begin
-This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
To learn more about AKS, and to walk through a complete code-to-deployment examp
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [new-azaksnodepool]: /powershell/module/az.aks/new-azaksnodepool
+[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster
aks Manage Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md
As your application workload demands change, you may need to scale the number of
AKS offers a separate feature to automatically scale node pools with a feature called the [cluster autoscaler](cluster-autoscaler.md). You can enable this feature with unique minimum and maximum scale counts per node pool.
-For more information, see [use the cluster autoscaler](cluster-autoscaler.md#use-the-cluster-autoscaler-with-multiple-node-pools-enabled).
+For more information, see [use the cluster autoscaler](cluster-autoscaler.md#use-the-cluster-autoscaler-on-multiple-node-pools).
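As a quick illustration, here's a minimal sketch of enabling the cluster autoscaler on a single node pool with its own scale range, assuming the placeholder resource group, cluster, and node pool names used in the AKS quickstarts:

```azurecli-interactive
# Enable the cluster autoscaler on one node pool and give it a unique minimum and maximum count.
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```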
## Associate capacity reservation groups to node pools (preview)
aks Open Ai Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md
Now that the application is deployed, you can deploy the Python-based microservi
memory: 50Mi limits: cpu: 30m
- memory: 65Mi
+ memory: 85Mi
apiVersion: v1 kind: Service
aks Open Ai Secure Access Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-secure-access-quickstart.md
To use Microsoft Entra Workload ID on AKS, you need to make a few changes to the
memory: 50Mi limits: cpu: 30m
- memory: 65Mi
+ memory: 85Mi
EOF ```
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
The following example increases the number of nodes to three in the Kubernetes c
-You can also autoscale the nodes in your cluster. For more information, see [Use the cluster autoscaler with node pools](./cluster-autoscaler.md#use-the-cluster-autoscaler-with-node-pools).
+You can also autoscale the nodes in your cluster. For more information, see [Use the cluster autoscaler with node pools](./cluster-autoscaler.md#use-the-cluster-autoscaler-on-node-pools).
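For example, the following sketch enables the cluster autoscaler on an existing cluster's default node pool; the resource group and cluster names are placeholders from the tutorial series:

```azurecli-interactive
# Let AKS scale the node count between one and three nodes based on demand.
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3
```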
## Next steps
aks Windows Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-best-practices.md
Last updated 10/27/2023
# Best practices for Windows containers on Azure Kubernetes Service (AKS)
-In AKS, you can create node pools that run Linux or Windows Server as the operating system (OS) on the nodes. Windows Server nodes can run native Windows container applications, such as .NET Framework. The Linux OS and Windows OS have different container support and configuration considerations. For more information, see [Windows container considerations in Kubernetes][windows-vs-linux].
+In AKS, you can create node pools that run Linux or Windows Server as the operating system (OS) on the nodes. Windows Server nodes can run native Windows container applications, such as .NET Framework. The Linux OS and Windows OS have different container support and configuration considerations. For more information, see [Windows container considerations in Kubernetes][windows-vs-linux]. To learn more about how various industries are using Windows containers on AKS, see [Windows AKS customer stories](./windows-aks-customer-stories.md).
This article outlines best practices for running Windows containers on AKS.
You might want to containerize existing applications and run them using Windows
AKS uses Windows Server 2019 and Windows Server 2022 as the host OS versions and only supports process isolation. AKS doesn't support container images built by other versions of Windows Server. For more information, see [Windows container version compatibility](/virtualization/windowscontainers/deploy-containers/version-compatibility).
-Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information, see the [AKS release notes][aks-release-notes].
+Windows Server 2022 is the default OS for Kubernetes version 1.25 and later. Windows Server 2019 will retire after Kubernetes version 1.32 reaches end of service and won't be supported in future releases. For more information, see the [AKS release notes][aks-release-notes].
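As an illustration, here's a minimal sketch of adding a Windows Server 2022 node pool to an existing cluster; the resource group, cluster, and node pool names are placeholders (Windows node pool names are limited to six characters):

```azurecli-interactive
# Add a Windows node pool that explicitly uses the Windows Server 2022 OS SKU.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --os-sku Windows2022 \
    --node-count 1
```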
## Networking
aks Windows Vs Linux Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-vs-linux-containers.md
Title: Windows container considerations in Kubernetes
+ Title: Windows container considerations in Azure Kubernetes Service
-description: See the Windows container considerations in Kubernetes.
+description: See the Windows container considerations with Azure Kubernetes Service (AKS).
Previously updated : 10/05/2023 Last updated : 12/13/2023
-# Windows container considerations in Kubernetes
+# Windows container considerations with Azure Kubernetes Service
-When you create deployments that use Windows Server containers on Azure Kubernetes Service (AKS), there are a few differences relative to Linux deployments you should keep in mind. For a detailed comparison of the differences between Windows and Linux in upstream Kubernetes, please see [Windows containers in Kubernetes](https://kubernetes.io/docs/concepts/windows/intro/).
+When you create deployments that use Windows Server containers on Azure Kubernetes Service (AKS), there are a few differences relative to Linux deployments you should keep in mind. For a detailed comparison of the differences between Windows and Linux in upstream Kubernetes, see [Windows containers in Kubernetes](https://kubernetes.io/docs/concepts/windows/intro/).
Some of the major differences include:
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
Namespaces other than the target aren't preserved on export. While you can impor
### Multiple endpoints WSDL files can define multiple services and endpoints (ports) by one or more `wsdl:service` and `wsdl:port` elements. However, the API Management gateway is able to import and proxy requests to only a single service and endpoint. If multiple services or endpoints are defined in the WSDL file, identify the target service name and endpoint when importing the API by using the [wsdlSelector](/rest/api/apimanagement/apis/create-or-update#wsdlselector) property.
+> [!TIP]
+> If you want to load-balance requests across multiple services and endpoints, consider configuring a [load-balanced backend pool](backends.md#load-balanced-pool-preview).
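As an illustration only, the following sketch imports a WSDL through the REST API by using `az rest` and pins the API to one service and endpoint with `wsdlSelector`. The subscription, resource group, service and endpoint names, WSDL URL, and API version are placeholders you'd replace with your own values.

```azurecli-interactive
# Import a SOAP API and select a single wsdl:service and wsdl:port from the WSDL file.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-name>/apis/soap-api?api-version=2022-08-01" \
  --body '{
    "properties": {
      "format": "wsdl-link",
      "value": "https://example.com/service.wsdl",
      "path": "soap",
      "apiType": "soap",
      "wsdlSelector": {
        "wsdlServiceName": "MyService",
        "wsdlEndpointName": "MyEndpoint"
      }
    }
  }'
```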
+ ### Arrays SOAP-to-REST transformation supports only wrapped arrays shown in the example below:
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
Title: Azure API Management backends | Microsoft Docs
-description: Learn about custom backends in API Management
+description: Learn about custom backends in Azure API Management
documentationcenter: ''
editor: ''
Previously updated : 08/16/2023 Last updated : 01/09/2024
API Management also supports using other Azure resources as an API backend, such
* A [Service Fabric cluster](how-to-configure-service-fabric-backend.md). * A custom service.
-API Management supports custom backends so you can manage the backend services of your API. Use custom backends, for example, to authorize the credentials of requests to the backend service. Configure and manage custom backends in the Azure portal, or using Azure APIs or tools.
+API Management supports custom backends so you can manage the backend services of your API. Use custom backends for one or more of the following:
-After creating a backend, you can reference the backend in your APIs. Use the [`set-backend-service`](set-backend-service-policy.md) policy to direct an incoming API request to the custom backend. If you already configured a backend web service for an API, you can use the `set-backend-service` policy to redirect the request to a custom backend instead of the default backend web service configured for that API.
+* Authorize the credentials of requests to the backend service
+* Protect your backend from too many requests
+* Route or load-balance requests to multiple backends
+
+Configure and manage custom backends in the Azure portal, or using Azure APIs or tools.
## Benefits of backends
A custom backend has several benefits, including:
* Easily used by configuring a transformation policy on an existing API. * Takes advantage of API Management functionality to maintain secrets in Azure Key Vault if [named values](api-management-howto-properties.md) are configured for header or query parameter authentication.
+## Reference backend using set-backend-service policy
+
+After creating a backend, you can reference the backend in your APIs. Use the [`set-backend-service`](set-backend-service-policy.md) policy to direct an incoming API request to the custom backend. If you already configured a backend web service for an API, you can use the `set-backend-service` policy to redirect the request to a custom backend instead of the default backend web service configured for that API. For example:
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <set-backend-service backend-id="myBackend" />
+ </inbound>
+ [...]
+</policies>
+```
+
+You can use conditional logic with the `set-backend-service` policy to change the effective backend based on location, the gateway that was called, or other expressions.
+
+For example, here is a policy to route traffic to another backend based on the gateway that was called:
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <choose>
+        <when condition="@(context.Deployment.Gateway.Id == &quot;factory-gateway&quot;)">
+ <set-backend-service backend-id="backend-on-prem" />
+ </when>
+ <when condition="@(context.Deployment.Gateway.IsManaged == false)">
+ <set-backend-service backend-id="self-hosted-backend" />
+ </when>
+ <otherwise />
+ </choose>
+ </inbound>
+ [...]
+</policies>
+```
++ ## Circuit breaker (preview) Starting in API version 2023-03-01 preview, API Management exposes a [circuit breaker](/rest/api/apimanagement/current-preview/backend/create-or-update?tabs=HTTP#backendcircuitbreaker) property in the backend resource to protect a backend service from being overwhelmed by too many requests.
The backend circuit breaker is an implementation of the [circuit breaker pattern
### Example
-Use the API Management REST API or a Bicep or ARM template to configure a circuit breaker in a backend. In the following example, the circuit breaker trips when there are three or more `5xx` status codes indicating server errors in a day. The circuit breaker resets after one hour.
+Use the API Management [REST API](/rest/api/apimanagement/backend) or a Bicep or ARM template to configure a circuit breaker in a backend. In the following example, the circuit breaker in *myBackend* in the API Management instance *myAPIM* trips when there are three or more `5xx` status codes indicating server errors in a day. The circuit breaker resets after one hour.
#### [Bicep](#tab/bicep)
-Include a snippet similar to the following in your Bicep template:
+Include a snippet similar to the following in your Bicep template for a backend resource with a circuit breaker:
```bicep resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-03-01-preview' = {
- name: 'myBackend'
- parent: resourceSymbolicName
+ name: 'myAPIM/myBackend'
properties: { url: 'https://mybackend.com' protocol: 'http'
resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-03-01-previ
'Server errors' ] interval: 'P1D'
- percentage: int
statusCodeRanges: [ { min: 500
resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-03-01-previ
} ] }
- }
-[...]
-}
+ }
+ }
``` #### [ARM](#tab/arm)
-Include a JSON snippet similar to the following in your ARM template:
+Include a JSON snippet similar to the following in your ARM template for a backend resource with a circuit breaker:
```JSON { "type": "Microsoft.ApiManagement/service/backends", "apiVersion": "2023-03-01-preview",
- "name": "myBackend",
+ "name": "myAPIM/myBackend",
"properties": { "url": "https://mybackend.com", "protocol": "http",
Include a JSON snippet similar to the following in your ARM template:
] } }
-[...]
} ```
+## Load-balanced pool (preview)
+
+Starting in API version 2023-05-01 preview, API Management supports backend *pools*, which you can use when you want to implement multiple backends for an API and load-balance requests across those backends. Currently, the backend pool supports round-robin load balancing.
+
+Use a backend pool for scenarios such as the following:
+
+* Spread the load to multiple backends, which may have individual backend circuit breakers.
+* Shift the load from one set of backends to another for upgrade (blue-green deployment).
+To create a backend pool, set the `type` property of the backend to `pool` and specify a list of backends that make up the pool.
+
+> [!NOTE]
+> Currently, you can only include single backends in a backend pool. You can't add a backend of type `pool` to another backend pool.
+
+### Example
+
+Use the API Management [REST API](/rest/api/apimanagement/backend) or a Bicep or ARM template to configure a backend pool. In the following example, the backend *myBackendPool* in the API Management instance *myAPIM* is configured as a backend pool. The example backends in the pool are named *backend-1* and *backend-2*.
+
+#### [Bicep](#tab/bicep)
+
+Include a snippet similar to the following in your Bicep template for a backend resource with a load-balanced pool:
+
+```bicep
+resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-05-01-preview' = {
+ name: 'myAPIM/myBackendPool'
+ properties: {
+ description: 'Load balancer for multiple backends'
+ type: 'Pool'
+ protocol: 'http'
+ url: 'http://unused'
+    pool: {
+      services: [
+        {
+          id: '/backends/backend-1'
+        }
+        {
+          id: '/backends/backend-2'
+        }
+      ]
+ }
+ }
+}
+```
+#### [ARM](#tab/arm)
+
+Include a JSON snippet similar to the following in your ARM template for a backend resource with a load-balanced pool:
+
+```json
+{
+ "type": "Microsoft.ApiManagement/service/backends",
+ "apiVersion": "2023-05-01-preview",
+ "name": "myAPIM/myBackendPool",
+ "properties": {
+ "description": "Load balancer for multiple backends",
+ "type": "Pool",
+ "protocol": "http",
+ "url": "http://unused",
+ "pool": {
+ "services": [
+ {
+ "id": "/backends/backend-1"
+ },
+ {
+ "id": "/backends/backend-2"
+ }
+ ]
+ }
+ }
+}
+```
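If you prefer to call the REST API directly rather than deploy a template, here's a hedged sketch using `az rest` with the same example values; the subscription ID and resource group are placeholders.

```azurecli-interactive
# Create or update the backend pool by calling the Backend - Create Or Update REST API.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/myAPIM/backends/myBackendPool?api-version=2023-05-01-preview" \
  --body '{
    "properties": {
      "description": "Load balancer for multiple backends",
      "type": "Pool",
      "protocol": "http",
      "url": "http://unused",
      "pool": {
        "services": [
          { "id": "/backends/backend-1" },
          { "id": "/backends/backend-2" }
        ]
      }
    }
  }'
```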
++ ## Limitation For **Developer** and **Premium** tiers, an API Management instance deployed in an [internal virtual network](api-management-using-with-internal-vnet.md) can throw HTTP 500 `BackendConnectionFailure` errors when the gateway endpoint URL and backend URL are the same. If you encounter this limitation, follow the instructions in the [Self-Chained API Management request limitation in internal virtual network mode](https://techcommunity.microsoft.com/t5/azure-paas-blog/self-chained-apim-request-limitation-in-internal-virtual-network/ba-p/1940417) article in the Tech Community blog.
-## Next steps
+## Related content
* Set up a [Service Fabric backend](how-to-configure-service-fabric-backend.md) using the Azure portal.
-* Backends can also be configured using the API Management [REST API](/rest/api/apimanagement), [Azure PowerShell](/powershell/module/az.apimanagement/new-azapimanagementbackend), or [Azure Resource Manager templates](../service-fabric/service-fabric-tutorial-deploy-api-management.md).
+
app-service Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arm-template.md
ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 03/10/2022 Last updated : 12/20/2023
-zone_pivot_groups: app-service-platform-windows-linux
+zone_pivot_groups: app-service-platform-windows-linux-windows-container
adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B
Get started with [Azure App Service](overview.md) by deploying an app to the clo
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+Use the following button to deploy on **Windows**:
+
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows%2Fazuredeploy.json)
Use the following button to deploy on **Linux**: [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-linux%2Fazuredeploy.json)
+Use the following button to deploy on **Windows container**:
-Use the following button to deploy on **Windows**:
-
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows%2Fazuredeploy.json)
+[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows-container%2Fazuredeploy.json)
## Prerequisites
This template contains several parameters that are predefined for your convenien
::: zone-end
+The template used in this quickstart is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/app-service-docs-windows-container/). It deploys an App Service plan and an App Service app on a Windows container.
++
+Two Azure resources are defined in the template:
+
+* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms): create an App Service plan.
+* [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites): create an App Service app.
+
+This template contains several parameters that are predefined for your convenience. See the table below for parameter defaults and their descriptions:
+| Parameters | Type | Default value | Description |
+|---|---|---|---|
+| webAppName | string | "webApp-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App name |
+| appServicePlanName | string | "webAppPlan-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App Service Plan name |
+| location | string | "[[resourceGroup().location](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)]" | App region |
+| skuTier | string | "P1v3" | Instance size ([View available SKUs](configure-custom-container.md?tabs=debian&pivots=container-windows#customize-container-memory)) |
+| appSettings | string | "[{"name": "PORT","value": "8080"}]" | App Service listening port. Needs to be 8080. |
+| kind | string | "windows" | App kind. Needs to be "windows" for a Windows container app. |
+| hyperv | string | "true" | Hyper-V isolation. Needs to be "true" for a Windows container app. |
+| windowsFxVersion | string | "DOCKER&#124;mcr.microsoft.com/dotnet/samples:aspnetapp" | Container image to run |
+ ## Deploy the template Azure CLI is used here to deploy the template. You can also use the Azure portal, Azure PowerShell, and REST API. To learn other deployment methods, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md).
Run the code below to create a Python app on Linux.
```azurecli-interactive az group create --name myResourceGroup --location "southcentralus" &&
-az deployment group create --resource-group myResourceGroup --parameters webAppName="<app-name>" linuxFxVersion="PYTHON|3.7" \
+az deployment group create --resource-group myResourceGroup --parameters webAppName="<app-name>" linuxFxVersion="PYTHON|3.9" \
--template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/app-service-docs-linux/azuredeploy.json" ```
To deploy a different language stack, update `linuxFxVersion` with appropriate v
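For illustration only, a hedged variation of the same deployment that swaps in a different stack. The template URI and parameters are the ones shown above; the `NODE|18-lts` runtime value is an assumption about one supported stack.

```azurecli-interactive
# Sketch: deploy the same Linux quickstart template with a Node.js runtime instead of Python.
# "NODE|18-lts" is an illustrative linuxFxVersion value; substitute any stack the template supports.
az deployment group create --resource-group myResourceGroup \
  --parameters webAppName="<app-name>" linuxFxVersion="NODE|18-lts" \
  --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/app-service-docs-linux/azuredeploy.json"
```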
::: zone-end
+Run the code below to deploy a [.NET app](https://mcr.microsoft.com/product/dotnet/samples/tags) on a Windows container.
+
+```azurecli-interactive
+az group create --name myResourceGroup --location "southcentralus" &&
+az deployment group create --resource-group myResourceGroup \
+--parameters webAppName="<app-name>" \
+--template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/app-service-docs-windows-container/azuredeploy.json"
+```
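Once the deployment completes, a quick hedged check of the app's state and default URL; the resource names below are the same placeholders used above.

```azurecli-interactive
# Sketch: confirm the new app is running and retrieve its default host name.
az webapp show --name <app-name> --resource-group myResourceGroup \
  --query "{host: defaultHostName, state: state}" --output table
```

Browsing to the returned host name should show the sample container's landing page once the image has been pulled and started.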
> [!NOTE] > You can find more [Azure App Service template samples here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Sites).
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
The JSON key-values you created should look like this in App Configuration:
:::image type="content" source="./media/create-json-settings.png" alt-text="Screenshot that shows the Config store containing JSON key-values.":::
+To check this, open your App Configuration resource in the Azure portal and go to **Configuration explorer**.
+ ## Export JSON key-values to a file One of the major benefits of using JSON key-values is the ability to preserve the original data type of your values while exporting. If a key-value in App Configuration doesn't have JSON content type, its value will be treated as a string.
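As a hedged illustration of that behavior, the Azure CLI can create a key-value with the JSON content type and then export it while preserving the original types; the store name, key, and file path below are placeholders.

```azurecli-interactive
# Set a key-value whose content type marks it as JSON.
az appconfig kv set --name <store-name> --key "Settings:Timeouts" \
  --value '{"connect": 5, "read": 30}' --content-type "application/json" --yes

# Export to a file; JSON key-values keep their types, while values without the JSON content type are exported as strings.
az appconfig kv export --name <store-name> --destination file \
  --path ./settings.json --format json --yes
```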
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
In order to use Arc resource bridge in a region, Arc resource bridge and the Arc
Arc resource bridge supports the following Azure regions:

* East US
* East US 2
* West US 2
* West US 3
* Central US
* North Central US
* South Central US
* West Europe
* North Europe
* UK South
* UK West
* Sweden Central
* Canada Central
* Australia East
* Japan East
* Southeast Asia
* East Asia
* Central India
### Regional resiliency
Arc resource bridge typically releases a new version on a monthly cadence, at th
* Learn how [Azure Arc-enabled SCVMM extends Azure's governance and management capabilities to System Center managed infrastructure](../system-center-virtual-machine-manager/overview.md). * Learn about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines). * Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.+
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
The following table lists some of the known errors and suggestions on how to tro
|--|||| |Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is unreachable.` |Can't reach `login.windows.net` endpoint | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID. | |Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is Forbidden`. |Proxy or firewall is blocking access to `login.windows.net` endpoint. | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID.|
-|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp lookup login.windows.net: no such host`. | Group Policy Object *Computer Configuration\ Administrative Templates\ System\ User Profiles\ Delete user profiles older than a specified number of days on system restart* is enabled. | Verify the GPO is enabled and targeting the affected machine. See footnote <sup>[1](#footnote1)</sup> for further details. |
|Failed to acquire authorization token from SPN |`Failed to execute the refresh request. Error = 'Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/token?api-version=1.0: Forbidden'` |Proxy or firewall is blocking access to `login.windows.net` endpoint. |Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID. | |Failed to acquire authorization token from SPN |`Invalid client secret is provided` |Wrong or invalid service principal secret. |Verify the service principal secret. | | Failed to acquire authorization token from SPN |`Application with identifier 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' wasn't found in the directory 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant` |Incorrect service principal and/or Tenant ID. |Verify the service principal and/or the tenant ID.|
The following table lists some of the known errors and suggestions on how to tro
|Failed to AzcmagentConnect ARM resource |`The subscription isn't registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers aren't registered. |Register the [resource providers](prerequisites.md#azure-resource-providers). | |Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Azure Resource Manager. |
-<a name="footnote1"></a><sup>1</sup>If this GPO is enabled and applies to machines with the Connected Machine agent, it deletes the user profile associated with the built-in account specified for the *himds* service. As a result, it also deletes the authentication certificate used to communicate with the service that is cached in the local certificate store for 30 days. Before the 30-day limit, an attempt is made to renew the certificate. To resolve this issue, follow the steps to [disconnect the agent](azcmagent-disconnect.md) and then re-register it with the service running `azcmagent connect`.
- ## Next steps If you don't see your problem here or you can't resolve your issue, try one of the following channels for more support:
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
If you're unable to enable this service offering, review the resource providers
## ESU patches issues
-Ensure that both the licensing package and servicing stack update (SSU) are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking).
-
-If installing the Extended Security Update enabled by Azure Arc fails with errors such as "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12029)" or "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12002)", there is a known remediation approach:
+### ESU prerequisites
-1. Download this [intermediate CA published by Microsoft](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt).
-1. Install the downloaded certificate as Local Computer under `Intermediate Certificate Authorities\Certificates`. Use the following command to install the certificate correctly:
+Ensure that both the licensing package and servicing stack update (SSU) are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking).
- `certutil -addstore CA 'Microsoft Azure TLS Issuing CA 01 - xsign.crt'`
-1. Install security updates. If it fails, reboot the machine and install security updates again.
+### Error: Trying to check IMDS again (HRESULT 12002)
-If you're working with Azure Government Cloud, use the following instructions instead of those above:
+If installing the Extended Security Update enabled by Azure Arc fails with errors such as "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12029)" or "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12002)", you may need to update the intermediate certificate authorities trusted by your computer using one of the following two methods:
-1. Download this [intermediate CA published by Microsoft](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt).
+1. Configure your network firewall and/or proxy server to allow access from the Windows Server 2012 (R2) machines to `https://microsoft.com/pkiops/certs`. This will allow the machine to automatically retrieve updated intermediate certificates as required and is Microsoft's preferred approach.
+1. Download all intermediate CAs from a machine with internet access, copy them to each Windows Server 2012 (R2) machine, and import them to the machine's intermediate certificate authority store:
+ 1. Download the 4 intermediate CA certificates:
+ 1. [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt)
+ 1. [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt)
+ 1. [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005%20-%20xsign.crt)
+ 1. [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006%20-%20xsign.crt)
+ 1. Copy the certificate files to your Windows Server 2012 (R2) machine.
+ 1. Run the following commands in an elevated command prompt or PowerShell session to add the certificates to the "Intermediate Certificate Authorities" store for the local computer. The command should be run from the same directory as the certificate files. The commands are idempotent and won't make any changes if you've already imported the certificate:
-1. Install the downloaded certificate as Local Computer under `Intermediate Certificate Authorities\Certificates`. Use the following command to install the certificate correctly:
+ ```powershell
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 01 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 02 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 05 - xsign.crt"
+ certutil -addstore CA "Microsoft Azure TLS Issuing CA 06 - xsign.crt"
+ ```
- `certutil -addstore CA 'Microsoft Azure TLS Issuing CA 02 - xsign.crt'`
+After allowing the servers to reach the PKI URL or manually importing the intermediate certificates, try installing the Extended Security Updates again using Windows Update or your preferred patch management software. You may need to reboot your computer for the changes to take effect.
-1. Install security updates. If it fails, reboot the machine and install security updates again.
+### Error: Not eligible (HRESULT 1633)
If you encounter the error "ESU: not eligible HRESULT_FROM_WIN32(1633)", follow these steps:
If you encounter the error "ESU: not eligible HRESULT_FROM_WIN32(1633)", follow
`Restart-Service himds` If you have other issues receiving ESUs after successfully enrolling the server through Arc-enabled servers, or you need additional information related to issues affecting ESU deployment, see [Troubleshoot issues in ESU](/troubleshoot/windows-client/windows-7-eos-faq/troubleshoot-extended-security-updates-issues).-
azure-cache-for-redis Cache Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md
Previously updated : 12/15/2023 Last updated : 01/12/2024
You can restrict public access to the private endpoint of your cache by disablin
## Scope of availability
-|Tier | Basic, Standard, Premium |Enterprise, Enterprise Flash |
-||||
-|Available | Yes | Yes |
+|Tier | Basic, Standard, Premium |Enterprise, Enterprise Flash |
+| |::|::|
+|Available | Yes | Yes |
## Prerequisites
You can restrict public access to the private endpoint of your cache by disablin
> [!IMPORTANT] > When using private link, you cannot export or import data to a storage account that has firewall enabled unless you're using [managed identity to authenticate to the storage account](cache-managed-identity.md).
-> For more information, see [How to export if I have firewall enabled on my storage account?](cache-how-to-import-export-data.md#how-to-export-if-i-have-firewall-enabled-on-my-storage-account)
+> For more information, see [How to export if I have firewall enabled on my storage account?](cache-how-to-import-export-data.md#how-to-export-if-i-have-firewall-enabled-on-my-storage-account)
> ## Create a private endpoint with a new Azure Cache for Redis instance
az network private-endpoint delete --name MyPrivateEndpoint --resource-group MyR
### How do I connect to my cache with private endpoint?
-For **Basic, Standard, and Premium tier** caches, your application should connect to `<cachename>.redis.cache.windows.net` on port `6380`. A private DNS zone, named `*.privatelink.redis.cache.windows.net`, is automatically created in your subscription. The private DNS zone is vital for establishing the TLS connection with the private endpoint. We recommend avoiding the use of `<cachename>.privatelink.redis.cache.windows.net` in configuration or connection string.
+For **Basic, Standard, and Premium tier** caches, your application should connect to `<cachename>.redis.cache.windows.net` on port `6380`. A private DNS zone, named `*.privatelink.redis.cache.windows.net`, is automatically created in your subscription. The private DNS zone is vital for establishing the TLS connection with the private endpoint. We recommend avoiding the use of `<cachename>.privatelink.redis.cache.windows.net` in configuration or connection string.
-For **Enterprise and Enterprise Flash** tier caches, your application should connect to `<cachename>.<region>.redisenterprise.cache.azure.net` on port `10000`.
+For **Enterprise and Enterprise Flash** tier caches, your application should connect to `<cachename>.<region>.redisenterprise.cache.azure.net` on port `10000`.
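A hedged way to confirm the host name and TLS port your client should use on Basic, Standard, or Premium tiers is to query the cache resource; names are placeholders, and Enterprise tier caches are managed through a different command group.

```azurecli-interactive
# Sketch: look up the connection host name and SSL port for a Basic/Standard/Premium cache.
az redis show --name <cache-name> --resource-group <resource-group> \
  --query "{host: hostName, sslPort: sslPort}" --output table

# Retrieve an access key to build the connection string.
az redis list-keys --name <cache-name> --resource-group <resource-group>
```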
For more information, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md). ### Why can't I connect to a private endpoint? -- Private endpoints can't be used with your cache instance if your cache is already using the VNet injection network connection method.-- You have a limit of one private link for clustered caches. For all other caches, your limit is 100 private links.-- You try to [persist data to a storage account](cache-how-to-premium-persistence.md) with firewall rules and you're not using managed identity to connect to the storage account.
+- Private endpoints can't be used with your cache instance if your cache is already a VNet injected cache.
+
+- On Premium tier caches, you have a limit of one private link for clustered caches. Enterprise and Enterprise Flash tier caches do not have this limitation for clustered caches. For all other caches, your limit is 100 private links.
+
+- Trying to [persist data to a storage account](cache-how-to-premium-persistence.md) that has firewall rules applied might prevent you from creating the private link.
+ - You might not connect to your private endpoint if your cache instance is using an [unsupported feature](#what-features-arent-supported-with-private-endpoints). ### What features aren't supported with private endpoints? - Trying to connect from the Azure portal console is an unsupported scenario where you see a connection failure.-- Private links can't be added to Premium tier caches that are already geo-replicated. To add a private link to a cache using [passive geo-replication](cache-how-to-geo-replication.md): 1. Unlink the geo-replication. 2. Add a Private Link. 3. Last, relink the geo-replication.+
+- Private links can't be added to caches that are already using [passive geo-replication](cache-how-to-geo-replication.md) in the Premium tier. To add a private link to a geo-replicated cache: 1. Unlink the geo-replication. 2. Add a private link. 3. Relink the geo-replication. (Enterprise tier caches using [active geo-replication](cache-how-to-active-geo-replication.md) don't have this restriction.)
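For the Premium tier sequence above, a rough Azure CLI sketch, assuming `az redis server-link` manages the passive geo-replication link; all names are placeholders.

```azurecli-interactive
# 1. Unlink passive geo-replication from the primary cache.
az redis server-link delete --name <primary-cache> --resource-group <resource-group> \
  --linked-server-name <secondary-cache>

# 2. Add the private endpoint to the cache (see the creation steps earlier in the article).

# 3. Re-create the geo-replication link once the private endpoint is in place.
az redis server-link create --name <primary-cache> --resource-group <resource-group> \
  --replication-role Secondary --server-to-link <secondary-cache>
```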
### How do I verify if my private endpoint is configured correctly?
You can also change the value through a RESTful API PATCH request. For example,
} ```+ For more information, see [Redis - Update](/rest/api/redis/Redis/Update?tabs=HTTP). ### How can I migrate my VNet injected cache to a Private Link cache?
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
- devx-track-extended-java - devx-track-python - ignite-2023 Previously updated : 11/08/2023 Last updated : 12/28/2023 # App settings reference for Azure Functions
When using app settings, you should be aware of the following considerations:
} ````
- In this article, only double-underscores are used, since they're supported on both operating systems.
+ In this article, only double-underscores are used, since they're supported on both operating systems. Most of the settings that support managed identity connections use double-underscores.
+ When Functions runs locally, app settings are specified in the `Values` collection in the [local.settings.json](functions-develop-local.md#local-settings-file).
Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECT
The connection string for Application Insights. Don't use both `APPINSIGHTS_INSTRUMENTATIONKEY` and `APPLICATIONINSIGHTS_CONNECTION_STRING`. While the use of `APPLICATIONINSIGHTS_CONNECTION_STRING` is recommended in all cases, it's required in the following cases: + When your function app requires the added customizations supported by using the connection string.
-+ When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint.
++ When your Application Insights instance runs in a sovereign cloud, which requires a custom endpoint. For more information, see [Connection strings](../azure-monitor/app/sdk-connection-string.md).
For more information, see [Connection strings](../azure-monitor/app/sdk-connecti
## AZURE_FUNCTION_PROXY_DISABLE_LOCAL_CALL
-By default, [Functions proxies](functions-proxies.md) use a shortcut to send API calls from proxies directly to functions in the same function app. This shortcut is used instead of creating a new HTTP request. This setting allows you to disable that shortcut behavior.
+> [!IMPORTANT]
+> Azure Functions proxies is a legacy feature for [versions 1.x through 3.x](functions-versions.md) of the Azure Functions runtime. For more information about legacy support in version 4.x, see [Functions proxies](functions-proxies.md).
+
+By default, Functions proxies use a shortcut to send API calls from proxies directly to functions in the same function app. This shortcut is used instead of creating a new HTTP request. This setting allows you to disable that shortcut behavior.
|Key|Value|Description| |-|-|-|
By default, [Functions proxies](functions-proxies.md) use a shortcut to send API
## AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES
+> [!IMPORTANT]
+> Azure Functions proxies is a legacy feature for [versions 1.x through 3.x](functions-versions.md) of the Azure Functions runtime. For more information about legacy support in version 4.x, see [Functions proxies](functions-proxies.md).
+ This setting controls whether the characters `%2F` are decoded as slashes in route parameters when they're inserted into the backend URL. |Key|Value|Description|
When `AZURE_FUNCTION_PROXY_BACKEND_URL_DECODE_SLASHES` is set to `true`, the URL
## AZURE_FUNCTIONS_ENVIRONMENT
-In version 2.x and later versions of the Functions runtime, configures app behavior based on the runtime environment. This value is read during initialization, and can be set to any value. Only the values of `Development`, `Staging`, and `Production` are honored by the runtime. When this application setting isn't present when running in Azure, the environment is assumed to be `Production`. Use this setting instead of `ASPNETCORE_ENVIRONMENT` if you need to change the runtime environment in Azure to something other than `Production`. The Azure Functions Core Tools set `AZURE_FUNCTIONS_ENVIRONMENT` to `Development` when running on a local computer, and this setting can't be overridden in the local.settings.json file. To learn more, see [Environment-based Startup class and methods](/aspnet/core/fundamentals/environments#environment-based-startup-class-and-methods).
+Configures the runtime [hosting environment](/dotnet/api/microsoft.extensions.hosting.environments) of the function app when running in Azure. This value is read during initialization, and only these values are honored by the runtime:
+
+| Value | Description |
+| | |
+| `Production` | Represents a production environment, with reduced logging and full performance optimizations. This is the default when `AZURE_FUNCTIONS_ENVIRONMENT` either isn't set or is set to an unsupported value. |
+| `Staging` | Represents a staging environment, such as when running in a [staging slot](functions-deployment-slots.md). |
+| `Development` | A development environment supports more verbose logging and reduces performance optimizations. The Azure Functions Core Tools sets `AZURE_FUNCTIONS_ENVIRONMENT` to `Development` when running on your local computer. This setting can't be overridden in the local.settings.json file. |
+
+Use this setting instead of `ASPNETCORE_ENVIRONMENT` when you need to change the runtime environment in Azure to something other than `Production`. For more information, see [Environment-based Startup class and methods](/aspnet/core/fundamentals/environments#environments).
+
+This setting isn't available in version 1.x of the Functions runtime.
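For example, a hedged sketch of moving a deployed app out of the default environment with the Azure CLI; the app and resource group names are placeholders.

```azurecli-interactive
# Set the hosting environment for a function app running in Azure, for example one used as a staging slot.
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> \
  --settings "AZURE_FUNCTIONS_ENVIRONMENT=Staging"
```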
## AzureFunctionsJobHost__\*
For more information, see [Host ID considerations](storage-considerations.md#hos
## AzureWebJobsDashboard
-Optional storage account connection string for storing logs and displaying them in the **Monitor** tab in the portal. This setting is only valid for apps that target version 1.x of the Azure Functions runtime. The storage account must be a general-purpose one that supports blobs, queues, and tables. To learn more, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
+_This setting is deprecated and is only supported when running on version 1.x of the Azure Functions runtime._
+
+Optional storage account connection string for storing logs and displaying them in the **Monitor** tab in the portal. The storage account must be a general-purpose one that supports blobs, queues, and tables. To learn more, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
|Key|Sample value| ||| |AzureWebJobsDashboard|`DefaultEndpointsProtocol=https;AccountName=...`|
-> [!NOTE]
-> For better performance and experience, runtime version 2.x and later versions use APPINSIGHTS_INSTRUMENTATIONKEY and App Insights for monitoring instead of `AzureWebJobsDashboard`.
- ## AzureWebJobsDisableHomepage A value of `true` disables the default landing page that is shown for the root URL of a function app. The default value is `false`.
To learn more, see [Secret repositories](security-concepts.md#secret-repositorie
## AzureWebJobsStorage
-The Azure Functions runtime uses this storage account connection string for normal operation. Some uses of this storage account include key management, timer trigger management, and Event Hubs checkpoints. The storage account must be a general-purpose one that supports blobs, queues, and tables. For more information, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
+Specifies the connection string for an Azure Storage account that the Functions runtime uses for normal operations. Some uses of this storage account by Functions include key management, timer trigger management, and Event Hubs checkpoints. The storage account must be a general-purpose one that supports blobs, queues, and tables. For more information, see [Storage account requirements](storage-considerations.md#storage-account-requirements).
|Key|Sample value| ||| |AzureWebJobsStorage|`DefaultEndpointsProtocol=https;AccountName=...`|
+Instead of a connection string, you can use an identity-based connection for this storage account. For more information, see [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).
+
+## AzureWebJobsStorage__accountName
+
+When using an identity-based storage connection, sets the account name of the storage account instead of using the connection string in `AzureWebJobsStorage`. This syntax is unique to `AzureWebJobsStorage` and can't be used for other identity-based connections.
+
+|Key|Sample value|
+|||
+|AzureWebJobsStorage__accountName|`<STORAGE_ACCOUNT_NAME>`|
+
+For sovereign clouds or when using a custom DNS, you must instead use the service-specific `AzureWebJobsStorage__*ServiceUri` settings.
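A hedged sketch of switching host storage to the identity-based form with the Azure CLI; names are placeholders, and the app's identity still needs the appropriate storage data-plane roles as described in [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).

```azurecli-interactive
# Add the identity-based account name setting.
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> \
  --settings "AzureWebJobsStorage__accountName=<storage-account-name>"

# Remove the old connection string so the host uses the identity-based connection instead.
az functionapp config appsettings delete --name <function-app-name> --resource-group <resource-group> \
  --setting-names AzureWebJobsStorage
```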
+
+## AzureWebJobsStorage__blobServiceUri
+
+When using an identity-based storage connection, sets the data plane URI of the blob service of the storage account.
+
+|Key|Sample value|
+|||
+|AzureWebJobsStorage__blobServiceUri|`https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net`|
+
+Use this setting instead of `AzureWebJobsStorage__accountName` in sovereign clouds or when using a custom DNS. For more information, see [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).
+
+## AzureWebJobsStorage__queueServiceUri
+
+When using an identity-based storage connection, sets the data plane URI of the queue service of the storage account.
+
+|Key|Sample value|
+|||
+|AzureWebJobsStorage__queueServiceUri|`https://<STORAGE_ACCOUNT_NAME>.queue.core.windows.net`|
+
+Use this setting instead of `AzureWebJobsStorage__accountName` in sovereign clouds or when using a custom DNS. For more information, see [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).
+
+## AzureWebJobsStorage__tableServiceUri
+
+When using an identity-based storage connection, sets the data plane URI of the table service of the storage account.
+
+|Key|Sample value|
+|||
+|AzureWebJobsStorage__tableServiceUri|`https://<STORAGE_ACCOUNT_NAME>.table.core.windows.net`|
+
+Use this setting instead of `AzureWebJobsStorage__accountName` in sovereign clouds or when using a custom DNS. For more information, see [Connecting to host storage with an identity](functions-reference.md#connecting-to-host-storage-with-an-identity).
+ ## AzureWebJobs_TypeScriptPath Path to the compiler used for TypeScript. Allows you to override the default if you need to.
Path to the compiler used for TypeScript. Allows you to override the default if
||| |AzureWebJobs_TypeScriptPath|`%HOME%\typescript`|
+## DOCKER_REGISTRY_SERVER_PASSWORD
+
+Indicates the password used to access a private container registry. This setting is only required when deploying your containerized function app from a private container registry. For more information, see [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md#custom-containers).
+
+## DOCKER_REGISTRY_SERVER_URL
+
+Indicates the URL of a private container registry. This setting is only required when deploying your containerized function app from a private container registry. For more information, see [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md#custom-containers).
+
+## DOCKER_REGISTRY_SERVER_USERNAME
+
+Indicates the account used to access a private container registry. This setting is only required when deploying your containerized function app from a private container registry. For more information, see [Environment variables and app settings in Azure App Service](../app-service/reference-app-settings.md#custom-containers).
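A hedged example of supplying all three private registry settings together; the registry URL and credentials are placeholders.

```azurecli-interactive
# Configure the function app to pull its container image from a private registry.
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> \
  --settings "DOCKER_REGISTRY_SERVER_URL=https://<registry-name>.azurecr.io" \
             "DOCKER_REGISTRY_SERVER_USERNAME=<registry-username>" \
             "DOCKER_REGISTRY_SERVER_PASSWORD=<registry-password>"
```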
+ ## DOCKER_SHM_SIZE Sets the shared memory size (in bytes) when the Python worker is using shared memory. To learn more, see [Shared memory](functions-reference-python.md#shared-memory).
Indicates whether the [Oryx build system](https://github.com/microsoft/Oryx) is
## FUNCTION\_APP\_EDIT\_MODE
-Dictates whether editing in the Azure portal is enabled. Valid values are `readwrite` and `readonly`.
+Indicates whether you're able to edit your function app in the Azure portal. Valid values are `readwrite` and `readonly`.
|Key|Sample value| ||| |FUNCTION\_APP\_EDIT\_MODE|`readonly`|
+The value is set by the runtime based on the language stack and deployment status of your function app. For more information, see [Development limitations in the Azure portal](functions-how-to-use-azure-function-app-settings.md#development-limitations-in-the-azure-portal).
+ ## FUNCTIONS\_EXTENSION\_VERSION The version of the Functions runtime that hosts your function app. A tilde (`~`) with major version means use the latest version of that major version (for example, `~3`). When new versions for the same major version are available, they're automatically installed in the function app. To pin the app to a specific version, use the full version number (for example, `3.0.12345`). Default is `~3`. A value of `~1` pins your app to version 1.x of the runtime. For more information, see [Azure Functions runtime versions overview](functions-versions.md). A value of `~4` means that your app runs on version 4.x of the runtime.
For Node.js v18 or lower, the app setting can be used and the default behavior d
## FUNCTIONS\_V2\_COMPATIBILITY\_MODE
-This setting enables your function app to run in a version 2.x compatible mode on the version 3.x runtime. Use this setting only if encountering issues after upgrading your function app from version 2.x to 3.x of the runtime.
- >[!IMPORTANT]
-> This setting is intended only as a short-term workaround while you update your app to run correctly on version 3.x. This setting is supported as long as the [2.x runtime is supported](functions-versions.md). If you encounter issues that prevent your app from running on version 3.x without using this setting, please [report your issue](https://github.com/Azure/azure-functions-host/issues/new?template=Bug_report.md).
-
-You must also set [FUNCTIONS\_EXTENSION\_VERSION](functions-app-settings.md#functions_extension_version) to `~3`.
-
-|Key|Sample value|
-|||
-|FUNCTIONS\_V2\_COMPATIBILITY\_MODE|`true`|
+> This setting is no longer supported. It was originally provided to enable a short-term workaround for apps that targeted the v2.x runtime to be able to instead run on the v3.x runtime while it was still supported. Except for legacy apps that run on version 1.x, all function apps must run on version 4.x of the Functions runtime: `FUNCTIONS_EXTENSION_VERSION=~4`. For more information, see [Azure Functions runtime versions overview](functions-versions.md).
## FUNCTIONS\_REQUEST\_BODY\_SIZE\_LIMIT
Overrides the default limit on the body size of requests sent to HTTP endpoints.
## FUNCTIONS\_WORKER\_PROCESS\_COUNT
-Specifies the maximum number of language worker processes, with a default value of `1`. The maximum value allowed is `10`. Function invocations are evenly distributed among language worker processes. Language worker processes are spawned every 10 seconds until the count set by FUNCTIONS\_WORKER\_PROCESS\_COUNT is reached. Using multiple language worker processes isn't the same as [scaling](functions-scale.md). Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations. This setting applies to all language runtimes, except for .NET running in process (`dotnet`).
+Specifies the maximum number of language worker processes, with a default value of `1`. The maximum value allowed is `10`. Function invocations are evenly distributed among language worker processes. Language worker processes are spawned every 10 seconds until the count set by `FUNCTIONS_WORKER_PROCESS_COUNT` is reached. Using multiple language worker processes isn't the same as [scaling](functions-scale.md). Consider using this setting when your workload has a mix of CPU-bound and I/O-bound invocations. This setting applies to all language runtimes, except for .NET running in process (`FUNCTIONS_WORKER_RUNTIME=dotnet`).
|Key|Sample value| |||
Specifies the maximum number of language worker processes, with a default value
## FUNCTIONS\_WORKER\_RUNTIME
-The language worker runtime to load in the function app. This corresponds to the language being used in your application (for example, `dotnet`). Starting with version 2.x of the Azure Functions runtime, a given function app can only support a single language.
+The language or language stack of the worker runtime to load in the function app. This corresponds to the language being used in your application (for example, `python`). Starting with version 2.x of the Azure Functions runtime, a given function app can only support a single language.
|Key|Sample value| |||
The language worker runtime to load in the function app. This corresponds to th
Valid values:
-| Value | Language |
+| Value | Language/language stack |
||| | `dotnet` | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# (script)](functions-reference-csharp.md) | | `dotnet-isolated` | [C# (isolated worker process)](dotnet-isolated-process-guide.md) |
Connection string for storage account where the function app code and configurat
||| |WEBSITE_CONTENTAZUREFILECONNECTIONSTRING|`DefaultEndpointsProtocol=https;AccountName=...`|
-This setting is required for Consumption plan apps on Windows and for Elastic Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
+This setting is required for Consumption and Elastic Premium plan apps running on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
Changing or removing this setting can cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+Azure Files doesn't support using managed identity when accessing the file share. For more information, see [Azure Files supported authentication scenarios](../storage/files/storage-files-active-directory-overview.md#supported-authentication-scenarios).
+ ## WEBSITE\_CONTENTOVERVNET A value of `1` enables your function app to scale when you have your storage account restricted to a virtual network. You should enable this setting when restricting your storage account to a virtual network. To learn more, see [Restrict your storage account to a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network).
Supported on [Premium](functions-premium-plan.md) and [Dedicated (App Service) p
## WEBSITE\_CONTENTSHARE
-The file path to the function app code and configuration in an event-driven scaling plans. Used with WEBSITE_CONTENTAZUREFILECONNECTIONSTRING. Default is a unique string generated by the runtime that begins with the function app name. For more information, see [Storage account connection setting](storage-considerations.md#storage-account-connection-setting).
+The name of the file share that Functions uses to store function app code and configuration files. This content is required by event-driven scaling plans. Used with `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`. Default is a unique string generated by the runtime, which begins with the function app name. For more information, see [Storage account connection setting](storage-considerations.md#storage-account-connection-setting).
|Key|Sample value| |||
The file path to the function app code and configuration in an event-driven scal
This setting is required for Consumption and Premium plan apps on both Windows and Linux. It's not required for Dedicated plan apps, which aren't dynamically scaled by Functions.
-Changing or removing this setting can cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
+The share is created when your function app is created. Changing or removing this setting can cause your function app to not start. To learn more, see [this troubleshooting article](functions-recover-storage-account.md#storage-account-application-settings-were-deleted).
-The following considerations apply when using an Azure Resource Manager (ARM) template to create a function app during deployment:
+The following considerations apply when using an Azure Resource Manager (ARM) template or Bicep file to create a function app during deployment:
+ When you don't set a `WEBSITE_CONTENTSHARE` value for the main function app or any apps in slots, unique share values are generated for you. Not setting `WEBSITE_CONTENTSHARE` _is the recommended approach_ for an ARM template deployment. + There are scenarios where you must set the `WEBSITE_CONTENTSHARE` value to a predefined share, such as when you [use a secured storage account in a virtual network](configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network). In this case, you must set a unique share name for the main function app and the app for each deployment slot.
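For the predefined-share scenario in the second bullet, a rough sketch of pre-creating the share and pinning the setting; all names are placeholders, and this isn't the article's exact procedure.

```azurecli-interactive
# Pre-create the content file share in the secured storage account.
az storage share-rm create --storage-account <storage-account-name> --name <content-share-name>

# Pin the function app (and, separately, each slot) to its own predefined share.
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> \
  --settings "WEBSITE_CONTENTSHARE=<content-share-name>"
```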
Valid values are either a URL that resolves to the location of a deployment pack
## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
-The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have extra validation checks to ensure that the app can be properly started. Creation of application settings fail when the function app can't properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
+The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have extra validation checks to ensure that the app can be properly started. Creation of application settings fails when the function app can't properly call out to the downstream storage account or key vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation takes place.
|Key|Sample value| |||
Indicates whether all outbound traffic from the app is routed through the virtua
||| |WEBSITE\_VNET\_ROUTE\_ALL|`1`|
+## WEBSITES_ENABLE_APP_SERVICE_STORAGE
+
+Indicates whether the `/home` directory is shared across scaled instances, with a default value of `true`. Set this to `false` when deploying your function app in a container.
+ ## App Service site settings Some configurations must be maintained at the App Service level as site settings, such as language versions. These settings are managed in the portal, by using REST APIs, or by using Azure CLI or Azure PowerShell. The following are site settings that could be required, depending on your runtime language, OS, and versions:
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
You must consider these limitations when developing your functions in the [Azure
+ Python in-portal editing is only supported when running in the Consumption plan. + In-portal editing is currently only supported for functions that were created or last modified in the portal. + When you deploy code to a function app from outside the portal, you can no longer edit any of the code for that function app in the portal. In this case, just continue using [local development](functions-develop-local.md).
-+ For compiled C# functions, Java functions, and some Python functions, you can create the function app in the portal. However, you must create the functions code project locally and then publish it to Azure.
++ For compiled C# functions, Java functions, and some Python functions, you can create the function app and related resources in the portal. However, you must create the functions code project locally and then publish it to Azure. When possible, you should develop your functions locally and publish your code project to a function app in Azure. For more information, see [Code and test Azure Functions locally](functions-develop-local.md).
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
These application settings are required for container deployments:
Keep these considerations in mind when working with site and application settings using Bicep files or ARM templates: :::zone pivot="consumption-plan,premium-plan,dedicated-plan"
-+ There are important considerations for using [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) in an automated deployment.
++ There are important considerations for when you should set `WEBSITE_CONTENTSHARE` in an automated deployment. For detailed guidance, see the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) reference. ::: zone-end :::zone pivot="container-apps,azure-arc,premium-plan,dedicated-plan" + For container deployments, also set [`WEBSITES_ENABLE_APP_SERVICE_STORAGE`](../app-service/reference-app-settings.md#custom-containers) to `false`, since your app content is provided in the container itself.
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
Here's an example of `local.settings.json` properties required for identity-base
#### Connecting to host storage with an identity
-The Azure Functions host uses the `AzureWebJobsStorage` connection for core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This connection can also be configured to use an identity.
+The Azure Functions host uses the storage connection set in [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage) to enable core behaviors such as coordinating singleton execution of timer triggers and default app key storage. This connection can also be configured to use an identity.
> [!CAUTION] > Other components in Functions rely on `AzureWebJobsStorage` for default behaviors. You should not move it to an identity-based connection if you are using older versions of extensions that do not support this type of connection, including triggers and bindings for Azure Blobs, Event Hubs, and Durable Functions. Similarly, `AzureWebJobsStorage` is used for deployment artifacts when using server-side build in Linux Consumption, and if you enable this, you will need to deploy via [an external deployment package](run-functions-from-deployment-package.md). >
-> In addition, some apps reuse `AzureWebJobsStorage` for other storage connections in their triggers, bindings, and/or function code. Make sure that all uses of `AzureWebJobsStorage` are able to use the identity-based connection format before changing this connection from a connection string.
+> In addition, your function app might be reusing `AzureWebJobsStorage` for other storage connections in its triggers, bindings, and/or function code. Make sure that all uses of `AzureWebJobsStorage` are able to use the identity-based connection format before changing this connection from a connection string.
To use an identity-based connection for `AzureWebJobsStorage`, configure the following app settings:
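Beyond the app settings themselves, the host's identity also needs data-plane access to the storage account. A hedged sketch of one such role assignment, assuming a system-assigned identity and using Storage Blob Data Owner purely as an illustrative role choice:

```azurecli-interactive
# Look up the function app's system-assigned identity and the storage account's resource ID.
principalId=$(az functionapp identity show --name <function-app-name> --resource-group <resource-group> --query principalId --output tsv)
storageId=$(az storage account show --name <storage-account-name> --resource-group <resource-group> --query id --output tsv)

# Grant blob data access; queue or table roles may also be needed depending on the features in use.
az role assignment create --assignee "$principalId" --role "Storage Blob Data Owner" --scope "$storageId"
```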
azure-monitor Workbooks Jsonpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-jsonpath.md
In this example, the JSON object represents a store's inventory. We're going to
:::image type="content" source="media/workbooks-jsonpath/query-jsonpath.png" alt-text="Screenshot that shows editing a query item with JSON data source and JSON path result format.":::
-## Use regular expressions to covert values
+## Use regular expressions to convert values
You may have some data that isn't in a standard format. To use that data effectively, you would want to convert that data into a standard format.
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
| Azure NetApp Files backup | Public preview | No | | Azure NetApp Files large volumes | Public preview | No | | Edit network features for existing volumes | Public preview | No |
-| Standard storage with cool access in Azure NetApp Files | Public preview | No |
+| Standard storage with cool access in Azure NetApp Files | Public preview | Public preview [(in select regions)](cool-access-introduction.md#supported-regions) |
## Portal access
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* Switzerland North * Switzerland West * UAE North
+* US Gov Arizona
* West US ## Effects of cool access on data
azure-netapp-files Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/tools-reference.md
+
+ Title: Azure NetApp Files tools
+description: Learn about the tools available to you to maximize your experience and savings with Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 01/12/2023+++
+# Azure NetApp Files tools
+
+Azure NetApp Files offers [multiple tools](https://azure.github.io/azure-netapp-files/) to estimate costs, understand features and availability, and monitor your Azure NetApp Files deployment.
+
+* [**Azure NetApp Files Performance Calculator**](https://aka.ms/anfcalc)
+
+ The Azure NetApp Files Performance Calculator enables you to easily calculate the performance and estimated cost of a volume based on the size and service level or performance requirements. It also helps you estimate backup and replication costs.
+
+* [**Azure NetApp Files datastore for Azure VMware Solution TCO Estimator**](https://aka.ms/anfavscalc)
+
+ This tool assists you with sizing an Azure VMware Solution. In the Estimator, provide the details of your current VMware environment to learn how much you can save by using Azure NetApp Files datastores.
+
+* [**SAP on Azure NetApp Files Sizing Estimator**](https://aka.ms/anfsapcalc)
+
+ This comprehensive tool estimates the infrastructure costs of an SAP HANA on Azure NetApp Files landscape. The estimate includes primary storage, backup, and replication costs.
+
+* [**Azure NetApp Files Standard storage with cool access cost savings estimator**](https://aka.ms/anfcoolaccesscalc)
+
+ Standard storage with cool access enables you to transparently move infrequently accessed data to less expensive storage. This cost savings estimator helps you understand how much money you can save by enabling Standard storage with cool access.
+
+* [**Azure NetApp Files Region and Feature Map**](https://aka.ms/anfmap)
+
+ Use this interactive map to understand which regions support Azure NetApp Files and its numerous features.
+
+* [**Azure NetApp Files on YouTube**](https://www.youtube.com/@azurenetappfiles)
+
+ Learn about the latest features in Azure NetApp Files and watch detailed how-to videos on the Azure NetApp Files YouTube channel.
+
+* [**ANFCapacityManager**](https://github.com/ANFTechTeam/ANFCapacityManager)
+
+ ANFCapacityManager is an Azure logic application that automatically creates metric alert rules in Azure Monitor to notify you when volumes are approaching their capacity. Optionally, it can increase the volumes' sizes automatically to keep your applications online.
+
+* [**ANFHealthCheck**](https://github.com/seanluce/ANFHealthCheck)
+
+ ANFHealthCheck is a PowerShell runbook that generates artful HTML reports of your entire Azure NetApp Files landscape. Optionally, it can automatically reduce over-sized volumes and capacity pools to reduce your TCO.
azure-web-pubsub Reference Protobuf Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-protobuf-reliable-webpubsub-subprotocol.md
For example, in JavaScript, you can create a Reliable PubSub WebSocket client wi
var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1'); ```
-To correctly use `json.reliable.webpubsub.azure.v1` subprotocol, the client must follow the [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection, publisher and subscriber.
+To correctly use the `protobuf.reliable.webpubsub.azure.v1` subprotocol, the client must follow [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection, publishers, and subscribers.
> [!NOTE] > Currently, the Web PubSub service supports only [proto3](https://developers.google.com/protocol-buffers/docs/proto3).
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
This article outlines how to use Copy activity to copy data from and to Microsof
This Microsoft Fabric Lakehouse connector is supported for the following capabilities:
-| Supported capabilities|IR |
-|| --|
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |
+| Supported capabilities|IR | Managed private endpoint|
+|| --| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |- |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ |
+|[Delete activity](delete-activity.md)|&#9312; &#9313;|✓ |
*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
This Microsoft Fabric Lakehouse connector is supported for the following capabil
Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI.
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+1. Browse to the **Manage** tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
# [Azure Data Factory](#tab/data-factory)
The following properties are supported for Microsoft Fabric Lakehouse Table data
```json {
-    "name": "LakehouseTableDataset",
-    "properties": {
-        "type": "LakehouseTable",
-        "linkedServiceName": {
-            "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
-            "type": "LinkedServiceReference"
-        },
-        "typeProperties": {
+ "name": "LakehouseTableDataset",
+ "properties": {
+ "type": "LakehouseTable",
+ "linkedServiceName": {
+ "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "typeProperties": {
"table": "<table_name>"
-        },
-        "schema": [< physical schema, optional, retrievable during authoring >]
-    }
+ },
+ "schema": [< physical schema, optional, retrievable during authoring >]
+ }
} ```
The following properties are supported in the Mapping Data Flows **sink** sectio
| Update method | When you select "Allow insert" alone or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected a Merge is performed, where rows are inserted/deleted/upserted/updated as per the Row Policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable | | Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you might notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true | | Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to reorganize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
-| Merge Schema | Merge schema option allows schema evolution, that is, any columns that are present in the current incoming stream but not in the target Delta table is automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
+| Merge Schema | Merge schema option allows schema evolution, that is, any columns that are present in the current incoming stream but not in the target Delta table is automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
**Example: Microsoft Fabric Lakehouse Table sink** ``` sink(allowSchemaDrift: true,
-ΓÇ» ΓÇ» validateSchema: false,
-ΓÇ» ΓÇ» input(
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» CustomerID as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» NameStyle as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» Title as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» FirstName as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» MiddleName as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» LastName as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» Suffix as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» CompanyName as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» SalesPerson as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» EmailAddress as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» Phone as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» PasswordHash as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» PasswordSalt as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» rowguid as string,
-ΓÇ» ΓÇ» ΓÇ» ΓÇ» ModifiedDate as string
-ΓÇ» ΓÇ» ),
-ΓÇ» ΓÇ» deletable:false,
-ΓÇ» ΓÇ» insertable:true,
-ΓÇ» ΓÇ» updateable:false,
-ΓÇ» ΓÇ» upsertable:false,
-ΓÇ» ΓÇ» optimizedWrite: true,
-ΓÇ» ΓÇ» mergeSchema: true,
-ΓÇ» ΓÇ» autoCompact: true,
-ΓÇ» ΓÇ» skipDuplicateMapInputs: true,
-ΓÇ» ΓÇ» skipDuplicateMapOutputs: true) ~> CustomerTable
+ validateSchema: false,
+ input(
+ CustomerID as string,
+ NameStyle as string,
+ Title as string,
+ FirstName as string,
+ MiddleName as string,
+ LastName as string,
+ Suffix as string,
+ CompanyName as string,
+ SalesPerson as string,
+ EmailAddress as string,
+ Phone as string,
+ PasswordHash as string,
+ PasswordSalt as string,
+ rowguid as string,
+ ModifiedDate as string
+ ),
+ deletable:false,
+ insertable:true,
+ updateable:false,
+ upsertable:false,
+ optimizedWrite: true,
+ mergeSchema: true,
+ autoCompact: true,
+ skipDuplicateMapInputs: true,
+ skipDuplicateMapOutputs: true) ~> CustomerTable
```
data-factory Control Flow Get Metadata Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-get-metadata-activity.md
The Get Metadata activity takes a dataset as an input and returns metadata infor
| [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md) | √/√ | √/√ | √ | x/x | √/√ | √ | x | √ | √ | √/√ |
| [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | √/√ | √/√ | √ | x/x | √/√ | √ | √ | √ | √ | √/√ |
| [Azure Files](connector-azure-file-storage.md) | √/√ | √/√ | √ | √/√ | √/√ | √ | x | √ | √ | √/√ |
+| [Microsoft Fabric Lakehouse](connector-microsoft-fabric-lakehouse.md) | √/√ | √/√ | √ | x/x | √/√ | √ | √ | √ | √ | √/√ |
| [File system](connector-file-system.md) | √/√ | √/√ | √ | √/√ | √/√ | √ | x | √ | √ | √/√ |
| [SFTP](connector-sftp.md) | √/√ | √/√ | √ | x/x | √/√ | √ | x | √ | √ | √/√ |
| [FTP](connector-ftp.md) | √/√ | √/√ | √ | x/x | x/x | √ | x | √ | √ | √/√ |
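Because Microsoft Fabric Lakehouse now appears in the connector matrix above, a Get Metadata activity can point at a Lakehouse dataset. The following is only a minimal sketch; the activity name, dataset name, and the requested field list are illustrative assumptions rather than values from the connector article.

```json
{
    "name": "GetLakehouseFileMetadata",
    "type": "GetMetadata",
    "typeProperties": {
        "dataset": {
            "referenceName": "<Microsoft Fabric Lakehouse Files dataset name>",
            "type": "DatasetReference"
        },
        "fieldList": [
            "itemName",
            "lastModified",
            "childItems"
        ]
    }
}
```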
data-factory Delete Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/delete-activity.md
Last updated 07/17/2023
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-You can use the Delete Activity in Azure Data Factory to delete files or folders from on-premises storage stores or cloud storage stores. Use this activity to clean up or archive files when they are no longer needed.
+You can use the Delete Activity in Azure Data Factory to delete files or folders from on-premises storage stores or cloud storage stores. Use this activity to clean up or archive files when they're no longer needed.
> [!WARNING] > Deleted files or folders cannot be restored (unless the storage has soft-delete enabled). Be cautious when using the Delete activity to delete files or folders.
Here are some recommendations for using the Delete activity:
- Make sure that the service has write permissions to delete folders or files from the storage store. -- Make sure you are not deleting files that are being written at the same time.
+- Make sure you aren't deleting files that are being written at the same time.
-- If you want to delete files or folder from an on-premises system, make sure you are using a self-hosted integration runtime with a version greater than 3.14.
+- If you want to delete files or folders from an on-premises system, make sure you're using a self-hosted integration runtime with a version greater than 3.14.
## Supported data stores -- [Azure Blob storage](connector-azure-blob-storage.md)-- [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md)-- [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md)-- [Azure Files](connector-azure-file-storage.md)-- [File System](connector-file-system.md)-- [FTP](connector-ftp.md)-- [SFTP](connector-sftp.md)-- [Amazon S3](connector-amazon-simple-storage-service.md)-- [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md)-- [Google Cloud Storage](connector-google-cloud-storage.md)-- [Oracle Cloud Storage](connector-oracle-cloud-storage.md)-- [HDFS](connector-hdfs.md)
+- [Azure Blob storage](connector-azure-blob-storage.md)
+- [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md)
+- [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md)
+- [Azure Files](connector-azure-file-storage.md)
+- [File System](connector-file-system.md)
+- [FTP](connector-ftp.md)
+- [SFTP](connector-sftp.md)
+- [Microsoft Fabric Lakehouse](connector-microsoft-fabric-lakehouse.md)
+- [Amazon S3](connector-amazon-simple-storage-service.md)
+- [Amazon S3 Compatible Storage](connector-amazon-s3-compatible-storage.md)
+- [Google Cloud Storage](connector-google-cloud-storage.md)
+- [Oracle Cloud Storage](connector-oracle-cloud-storage.md)
+- [HDFS](connector-hdfs.md)
## Create a Delete activity with UI

To use a Delete activity in a pipeline, complete the following steps:

1. Search for _Delete_ in the pipeline Activities pane, and drag a Delete activity to the pipeline canvas.
-1. Select the new Delete activity on the canvas if it is not already selected, and its **Source** tab, to edit its details.
+1. Select the new Delete activity on the canvas if it isn't already selected, and then select its **Source** tab to edit its details.
:::image type="content" source="media/delete-activity/delete-activity.png" alt-text="Shows the UI for a Delete activity.":::
To use a Delete activity in a pipeline, complete the following steps:
| dataset | Provides the dataset reference to determine which files or folders are to be deleted. | Yes |
| recursive | Indicates whether the files are deleted recursively from the subfolders or only from the specified folder. | No. The default is `false`. |
| maxConcurrentConnections | The number of connections used to connect to the storage store concurrently for deleting folders or files. | No. The default is `1`. |
-| enablelogging | Indicates whether you need to record the folder or file names that have been deleted. If true, you need to further provide a storage account to save the log file, so that you can track the behaviors of the Delete activity by reading the log file. | No |
-| logStorageSettings | Only applicable when enablelogging = true.<br/><br/>A group of storage properties that can be specified where you want to save the log file containing the folder or file names that have been deleted by the Delete activity. | No |
-| linkedServiceName | Only applicable when enablelogging = true.<br/><br/>The linked service of [Azure Storage](connector-azure-blob-storage.md#linked-service-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#linked-service-properties), or [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#linked-service-properties) to store the log file that contains the folder or file names that have been deleted by the Delete activity. Be aware it must be configured with the same type of Integration Runtime from the one used by delete activity to delete files. | No |
-| path | Only applicable when enablelogging = true.<br/><br/>The path to save the log file in your storage account. If you do not provide a path, the service creates a container for you. | No |
+| enablelogging | Indicates whether you need to record the deleted folder or file names. If true, you need to further provide a storage account to save the log file, so that you can track the behaviors of the Delete activity by reading the log file. | No |
+| logStorageSettings | Only applicable when enablelogging = true.<br/><br/>A group of storage properties that specify where to save the log file containing the folder or file names deleted by the Delete activity. | No |
+| linkedServiceName | Only applicable when enablelogging = true.<br/><br/>The linked service of [Azure Storage](connector-azure-blob-storage.md#linked-service-properties), [Azure Data Lake Storage Gen1](connector-azure-data-lake-store.md#linked-service-properties), or [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#linked-service-properties) to store the log file that contains the folder or file names deleted by the Delete activity. Be aware that it must be configured with the same type of integration runtime as the one used by the Delete activity to delete files. | No |
+| path | Only applicable when enablelogging = true.<br/><br/>The path to save the log file in your storage account. If you don't provide a path, the service creates a container for you. | No |
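To show how these properties fit together, here's a minimal sketch of a Delete activity definition with logging enabled. The names and the log path are placeholders, and the exact nesting of connector-specific store settings can vary, so treat this as illustrative rather than a complete definition.

```json
{
    "name": "DeleteOldFiles",
    "type": "Delete",
    "typeProperties": {
        "dataset": {
            "referenceName": "<dataset name>",
            "type": "DatasetReference"
        },
        "recursive": true,
        "maxConcurrentConnections": 1,
        "enableLogging": true,
        "logStorageSettings": {
            "linkedServiceName": {
                "referenceName": "<Azure Storage linked service name>",
                "type": "LinkedServiceReference"
            },
            "path": "<container/folder for the log file>"
        }
    }
}
```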
## Monitoring
The store has the following folder structure:
Root/<br/>&nbsp;&nbsp;&nbsp;&nbsp;Folder_A_1/<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;1.txt<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;2.txt<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;Folder_A_2/<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;4.txt<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;5.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Folder_B_1/<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;6.txt<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;7.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Folder_B_2/<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;8.txt
-Now you are using the Delete activity to delete folder or files by the combination of different property value from the dataset and the Delete activity:
+Now you're using the Delete activity to delete folders or files by combining different property values from the dataset and the Delete activity:
| folderPath | fileName | recursive | Output | |: |: |: |: |
Now you are using the Delete activity to delete folder or files by the combinati
### Periodically clean up the time-partitioned folder or files
-You can create a pipeline to periodically clean up the time partitioned folder or files. For example, the folder structure is similar as: `/mycontainer/2018/12/14/*.csv`. You can leverage the service system variable from schedule trigger to identify which folder or files should be deleted in each pipeline run.
+You can create a pipeline to periodically clean up the time-partitioned folder or files. For example, the folder structure is similar to `/mycontainer/2018/12/14/*.csv`. You can use the service system variable from the schedule trigger to identify which folder or files should be deleted in each pipeline run.
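As a quick illustration before the full sample pipeline that follows, the dataset's folder path can be derived from the schedule trigger's scheduled time. The parameter name and date format here are assumptions for a daily partition layout like the one above.

```json
"dataset": {
    "referenceName": "<time-partitioned dataset name>",
    "type": "DatasetReference",
    "parameters": {
        "folderPath": "@concat('mycontainer/', formatDateTime(trigger().scheduledTime, 'yyyy/MM/dd'))"
    }
}
```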
#### Sample pipeline
You can create a pipeline to periodically clean up the time partitioned folder o
### Clean up the expired files that were last modified before 2018.1.1
-You can create a pipeline to clean up the old or expired files by leveraging file attribute filter: ΓÇ£LastModifiedΓÇ¥ in dataset.
+You can create a pipeline to clean up old or expired files by using the file attribute filter "LastModified" in the dataset.
#### Sample pipeline
You can create a pipeline to clean up the old or expired files by leveraging fil
### Move files by chaining the Copy activity and the Delete activity
-You can move a file by using a Copy activity to copy a file and then a Delete activity to delete a file in a pipeline. When you want to move multiple files, you can use the GetMetadata activity + Filter activity + Foreach activity + Copy activity + Delete activity as in the following sample.
+You can move a file by using a Copy activity to copy a file and then a Delete activity to delete a file in a pipeline. When you want to move multiple files, you can use the GetMetadata activity + Filter activity + Foreach activity + Copy activity + Delete activity as in the following sample.
> [!NOTE]
> If you want to move an entire folder by defining a dataset that contains only a folder path, and then using a Copy activity and a Delete activity that reference the same dataset, be very careful. You must ensure that there **will not** be any new files arriving in the folder between the copy operation and the delete operation. If new files arrive in the folder after the Copy activity has completed the copy job but before the Delete activity starts, the Delete activity might delete a newly arrived file that has **not** yet been copied to the destination, because it deletes the entire folder.
You can also get the template to move files from [here](solution-template-move-f
## Known limitation -- Delete activity does not support deleting list of folders described by wildcard.
+- The Delete activity doesn't support deleting a list of folders described by a wildcard.
-- When using file attribute filter in delete activity: modifiedDatetimeStart and modifiedDatetimeEnd to select files to be deleted, make sure to set "wildcardFileName": "*" in delete activity as well.
+- When you use the file attribute filters modifiedDatetimeStart and modifiedDatetimeEnd in the Delete activity to select files to be deleted, make sure to also set "wildcardFileName": "*" in the Delete activity.
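For example, here's a sketch of Delete activity store settings that combine the last-modified filter with the required wildcard; the store settings type and the dates are placeholders for illustration.

```json
"storeSettings": {
    "type": "AzureBlobStorageReadSettings",
    "recursive": true,
    "wildcardFileName": "*",
    "modifiedDatetimeStart": "2017-01-01T00:00:00Z",
    "modifiedDatetimeEnd": "2018-01-01T00:00:00Z"
}
```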
## Related content Learn more about moving files in Azure Data Factory and Synapse pipelines. -- [Copy Data tool](copy-data-tool.md)
+- [Copy Data tool](copy-data-tool.md)
data-factory Format Delta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/format-delta.md
The below table lists the properties supported by a delta source. You can edit t
| - | -- | -- | -- | - |
| Format | Format must be `delta` | yes | `delta` | format |
| File system | The container/file system of the delta lake | yes | String | fileSystem |
-| Folder path | The direct of the delta lake | yes | String | folderPath |
+| Folder path | The directory of the delta lake | yes | String | folderPath |
| Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType |
| Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressionType` is specified. | `Optimal` or `Fastest` | compressionLevel |
| Time travel | Choose whether to query an older snapshot of a delta table | no | Query by timestamp: Timestamp <br> Query by version: Integer | timestampAsOf <br> versionAsOf |
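As a rough data flow script sketch that maps to the source properties above (the output stream name, paths, and the timestamp are assumptions, and only a subset of the listed properties is shown):

```
source(allowSchemaDrift: true,
    validateSchema: false,
    format: 'delta',
    fileSystem: 'mycontainer',
    folderPath: 'delta/sales',
    timestampAsOf: '2023-12-01T00:00:00.000Z') ~> DeltaSource
```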
The below table lists the properties supported by a delta sink. You can edit the
| - | -- | -- | -- | - |
| Format | Format must be `delta` | yes | `delta` | format |
| File system | The container/file system of the delta lake | yes | String | fileSystem |
-| Folder path | The direct of the delta lake | yes | String | folderPath |
+| Folder path | The directory of the delta lake | yes | String | folderPath |
| Compression type | The compression type of the delta table | no | `bzip2`<br>`gzip`<br>`deflate`<br>`ZipDeflate`<br>`snappy`<br>`lz4` | compressionType |
| Compression level | Choose whether the compression completes as quickly as possible or if the resulting file should be optimally compressed. | required if `compressionType` is specified. | `Optimal` or `Fastest` | compressionLevel |
| Vacuum | Deletes files older than the specified duration that are no longer relevant to the current table version. When a value of 0 or less is specified, the vacuum operation isn't performed. | yes | Integer | vacuum |
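And a corresponding sink sketch using the script property names from the sink table (the incoming and output stream names, paths, and the vacuum value are placeholders, not values from the article):

```
DeltaSource sink(allowSchemaDrift: true,
    validateSchema: false,
    format: 'delta',
    fileSystem: 'mycontainer',
    folderPath: 'delta/sales_curated',
    insertable: true,
    updateable: false,
    deletable: false,
    upsertable: false,
    vacuum: 0) ~> DeltaSink
```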
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/09/2024 Last updated : 01/11/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [Four new recommendations for Azure Stack HCI resource type](#four-new-recommendations-for-azure-stack-hci-resource-type) | January 11, 2024 | February 2024 |
| [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 |
| [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 |
| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Four new recommendations for Azure Stack HCI resource type
+
+**Announcement date: January 11, 2024**
+
+**Estimated date for change: February 2024**
+
+Azure Stack HCI is set to be a new resource type that can be managed through Microsoft Defender for Cloud. We're adding four recommendations that are specific to the HCI resource type:
+
+| Recommendation | Description | Severity |
+|-|-|-|
+| Azure Stack HCI servers should meet Secured-core requirements | Ensure that all Azure Stack HCI servers meet the Secured-core requirements. (Related policy: [Guest Configuration extension should be installed on machines - Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/6c99f570-2ce7-46bc-8175-cde013df43bc)) | Low |
+| Azure Stack HCI servers should have consistently enforced application control policies | At a minimum, apply the Microsoft WDAC base policy in enforced mode on all Azure Stack HCI servers. Applied Windows Defender Application Control (WDAC) policies must be consistent across servers in the same cluster. | High |
+| Azure Stack HCI systems should have encrypted volumes | Use BitLocker to encrypt the OS and data volumes on Azure Stack HCI systems | High |
+| Host and VM networking should be protected on Azure Stack HCI systems | Protect data on the Azure Stack HCI host's network and on virtual machine network connections. | Low |
+ ## Defender for Servers built-in vulnerability assessment (Qualys) retirement path **Announcement date: January 9, 2024** **Estimated date for change: May 2024**
-The Defender for Servers built-in vulnerability assessment solution powered by Qualys is on a retirement path which is estimated to complete on **May 1st, 2024**. If you are currently using the vulnerability assessment solution powered by Qualys, you should plan your [transition to the integrated Microsoft defender vulnerability management solution](how-to-transition-to-built-in.md).
+The Defender for Servers built-in vulnerability assessment solution powered by Qualys is on a retirement path that is estimated to complete on **May 1st, 2024**. If you're currently using the vulnerability assessment solution powered by Qualys, you should plan your [transition to the integrated Microsoft Defender vulnerability management solution](how-to-transition-to-built-in.md).
For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, you can read [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
You can also check out the [common questions about the transition to Microsoft D
**Estimated date for change: March 2024**
-The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**. If you are currently using container vulnerability assessment powered by Qualys, start planning your transition to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md).
+The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**. If you're currently using container vulnerability assessment powered by Qualys, start planning your transition to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md).
For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
When you're creating a network connection, you must choose the Active Directory
To learn more about native Microsoft Entra join and Microsoft Entra hybrid join, see [Plan your Microsoft Entra device deployment](../active-directory/devices/plan-device-deployment.md).
-The virtual network specified in a network connection also determines the region for a dev box. You can create multiple network connections based on the regions where you support developers. You can then use those connections when you're creating dev box pools to ensure that dev box users create dev boxes in a region close to them. Using a region close to the dev box user provides the best experience.
+
+## Azure regions for Dev Box
+
+Before setting up Dev Box, you need to choose the best regions for your organization. Check [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=dev-box) and [Azure geographies](https://azure.microsoft.com/explore/global-infrastructure/geographies/#choose-your-region) to help you decide on the regions you use. If the region you prefer isn't available for Dev Box, choose a region within 500 miles.
+
+You specify a region for your dev center and projects. Typically, these resources are in the same region as your main office or IT management center.
+
+The region of the virtual network specified in a network connection determines the region for a dev box. You can create multiple network connections based on the regions where you support developers. You can then use those connections when you're creating dev box pools to ensure that dev box users create dev boxes in a region close to them. Using a region close to the dev box user provides the best experience.
## Dev box pool
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
Previously updated : 12/20/2023 Last updated : 01/12/2024 #Customer intent: As a platform engineer, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region.
Microsoft Dev Box requires a configured and working Active Directory join, which
> [!NOTE] > Microsoft Dev Box automatically creates a resource group for each network connection, which holds the network interface cards (NICs) that use the virtual network assigned to the network connection. The resource group has a fixed name based on the name and region of the network connection. You can't change the name of the resource group, or specify an existing resource group.
-## Attach a network connection to a dev center
-
-You need to attach a network connection to a dev center before you can use it in projects to create dev box pools.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box, enter **dev centers**. In the list of results, select **Dev centers**.
-
-1. Select the dev center that you created, and then select **Networking**.
-
-1. Select **+ Add**.
-
-1. On the **Add network connection** pane, select the network connection that you created earlier, and then select **Add**.
-
- :::image type="content" source="./media/how-to-manage-network-connection/add-network-connection.png" alt-text="Screenshot that shows the pane for adding a network connection." lightbox="./media/how-to-manage-network-connection/add-network-connection.png":::
-
-After you attach a network connection, the Azure portal runs several health checks on the network. You can view the status of the checks on the resource overview page.
--
-You can add network connections that pass all health checks to a dev center and use them to create dev box pools. Dev boxes within dev box pools are created and domain joined in the location of the virtual network assigned to the network connection.
-
-To resolve any errors, see [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection).
-
-## Remove a network connection from a dev center
-
-You can remove a network connection from a dev center if you no longer want to use it to connect to network resources. Network connections can't be removed if one or more dev box pools are using them.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box, enter **dev centers**. In the list of results, select **Dev centers**.
-
-1. Select the dev center that you created, and then select **Networking**.
-
-1. Select the network connection that you want to remove, and then select **Remove**.
-
- :::image type="content" source="./media/how-to-manage-network-connection/remove-network-connection.png" alt-text="Screenshot that shows the Remove button on the network connection page.":::
-
-1. Review the warning message, and then select **OK**.
-
-The network connection is no longer available for use in the dev center.
## Related content
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
Title: Manage a dev center
+ Title: Manage a Microsoft Dev Box dev center
description: Microsoft Dev Box dev centers help you manage dev box resources, grouping projects with similar settings. Learn how to create, delete, and manage dev centers. Previously updated : 04/25/2023 Last updated : 01/12/2024 #Customer intent: As a platform engineer, I want to be able to manage dev centers so that I can manage my Microsoft Dev Box implementation.
In this article, you learn how to manage a dev center in Microsoft Dev Box by using the Azure portal.
-Development teams vary in the way they function and might have different needs. A dev center helps you manage these scenarios by enabling you to group similar sets of projects together and apply similar settings.
+Development teams vary in the way they function and can have different needs. A dev center helps you manage these scenarios by enabling you to group similar sets of projects together and apply similar settings.
## Permissions To manage a dev center, you need the following permissions:
-|Action|Permissions required|
-|--|--|
-|Create or delete a dev center|Owner or Contributor permissions on an Azure subscription or a specific resource group.|
-|Manage a dev center|Owner or Contributor role, or specific Write permission to the dev center.|
-|Attach or remove a network connection|Network Contributor permissions on an existing network connection (Owner or Contributor).|
+| Action | Permissions required |
+|||
+| _Create or delete a dev center_ | Owner or Contributor permissions on an Azure subscription or a specific resource group. |
+| _Manage a dev center_ | Owner or Contributor role, or specific Write permission to the dev center. |
+| _Attach or remove a network connection_ | Network Contributor permissions on an existing network connection (Owner or Contributor). |
## Create a dev center
To create a dev center in the Azure portal:
1. In the search box, enter **dev centers**. In the search results, select **Dev centers** from the **Services** list.
- :::image type="content" source="./media/how-to-manage-dev-center/search-dev-center.png" alt-text="Screenshot that shows the search box and list of services on the Azure portal.":::
+ :::image type="content" source="./media/how-to-manage-dev-center/search-dev-center.png" alt-text="Screenshot that shows the Azure portal with the search box and the result for dev centers." lightbox="./media/how-to-manage-dev-center/search-dev-center.png":::
1. On the **Dev centers** page, select **Create**.
- :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center.png" alt-text="Screenshot that shows the Create button on the page for dev centers.":::
+ :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center.png" alt-text="Screenshot that shows the Azure portal with the Create button on the page for dev centers." lightbox="./media/how-to-manage-dev-center/create-dev-center.png":::
1. On the **Create a dev center** pane, on the **Basics** tab, enter the following values:
- |Name|Value|
- |-|-|
- |**Subscription**|Select the subscription in which you want to create the dev center.|
- |**ResourceGroup**|Select an existing resource group, or select **Create new** and then enter a name for the new resource group.|
- |**Name**|Enter a name for the dev center.|
- |**Location**|Select the location or region where you want to create the dev center.|
+ | Setting | Value |
+ |||
+ | **Subscription** | Select the subscription in which you want to create the dev center. |
+ | **ResourceGroup** | Select an existing resource group, or select **Create new** and then enter a name for the new resource group. |
+ | **Name** | Enter a name for your dev center. |
+ | **Location** | Select the location or region where you want the dev center to be created. |
- :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-basics.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a dev center.":::
+ :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-basics.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a dev center." lightbox="./media/how-to-manage-dev-center/create-dev-center-basics.png":::
- For a list of supported Azure locations with capacity, see [Frequently asked questions about Microsoft Dev Box](https://aka.ms/devbox_acom).
+ For a list of the currently supported Azure locations with capacity, see [Frequently asked questions about Microsoft Dev Box](https://aka.ms/devbox_acom).
1. (Optional) On the **Tags** tab, enter a name/value pair that you want to assign.
- :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-tags.png" alt-text="Screenshot that shows the Tags tab on the page for creating a dev center.":::
+ :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-tags.png" alt-text="Screenshot that shows the Tags tab on the page for creating a dev center." lightbox="./media/how-to-manage-dev-center/create-dev-center-tags.png":::
1. Select **Review + Create**.
To create a dev center in the Azure portal:
1. Monitor the progress of the dev center creation from any page in the Azure portal by opening the **Notifications** pane.
- :::image type="content" source="./media/how-to-manage-dev-center/azure-notifications.png" alt-text="Screenshot that shows the Notifications pane in the Azure portal.":::
+ :::image type="content" source="./media/how-to-manage-dev-center/azure-notifications.png" alt-text="Screenshot that shows the Notifications pane in the Azure portal." lightbox="./media/how-to-manage-dev-center/azure-notifications.png":::
-1. When the deployment is complete, select **Go to resource** and confirm that the dev center appears on the **Dev centers** page.
+1. When the deployment completes, select **Go to resource**. Confirm that the dev center page appears.
## Delete a dev center
You might choose to delete a dev center to reflect organizational or workload ch
A dev center can't be deleted while any projects are associated with it. You must delete the projects before you can delete the dev center.
-Attached network connections and their associated virtual networks are not deleted when you delete a dev center.
+Attached network connections and their associated virtual networks aren't deleted when you delete a dev center.
When you're ready to delete your dev center, follow these steps:
When you're ready to delete your dev center, follow these steps:
1. Select **Delete**.
- :::image type="content" source="./media/how-to-manage-dev-center/delete-dev-center.png" alt-text="Screenshot of the Delete button on the page for a dev center.":::
+ :::image type="content" source="./media/how-to-manage-dev-center/delete-dev-center.png" alt-text="Screenshot of the Delete button on the page for a dev center." lightbox="./media/how-to-manage-dev-center/delete-dev-center.png":::
1. In the confirmation message, select **OK**.
-## Attach a network connection
-
-You can attach existing network connections to a dev center. You must attach a network connection to a dev center before you can use it in projects to create dev box pools.
-
-Network connections enable dev boxes to connect to existing virtual networks. The location, or Azure region, of the network connection determines where associated dev boxes are hosted.
-
-To attach a network connection to a dev center in Microsoft Dev Box:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box, enter **dev centers**. In the list of results, select **Dev centers**.
-
-1. Select the dev center that you want to attach the network connection to, and then select **Networking**.
-
-1. Select **+ Add**.
-
-1. On the **Add network connection** pane, select the network connection that you created earlier, and then select **Add**.
-
-## Remove a network connection
-
-You can remove network connections from dev centers. Network connections can't be removed if one or more dev box pools are using them. When you remove a network connection, it's no longer available for use in dev box pools within the dev center.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box, enter **dev centers**. In the list of results, select **Dev centers**.
-
-1. Select the dev center that you want to detach the network connection from, and then select **Networking**.
-
-1. Select the network connection that you want to detach, and then select **Remove**.
-
-1. In the confirmation message, select **OK**.
## Assign permissions for users You can assign multiple users permissions to a dev center to help with administrative tasks. You can assign users or groups to the following built-in roles:
-|**Role**|**Description**|
-|--|--|
-|**Owner**|Grants full access to manage all resources, including the ability to assign roles in Azure role-based access control (RBAC).|
-|**Contributor**|Grants full access to manage all resources, but doesn't allow the user to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.|
-|**Reader**|Grants the ability to view all resources, but doesn't allow the user to make any changes.|
+- **Owner**: Grants full access to manage all resources, including the ability to assign roles in Azure role-based access control (RBAC).
+- **Contributor**: Grants full access to manage all resources, but doesn't allow the user to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries.
+- **Reader**: Grants the ability to view all resources, but doesn't allow the user to make any changes.
To make role assignments:
To make role assignments:
1. Select **Add** > **Add role assignment**.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Assign a role by configuring the following settings. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
| Setting | Value |
- | | |
+ |||
| **Role** | Select **Owner**, **Contributor**, or **Reader**. | | **Assign access to** | Select **User, group, or service principal**. | | **Members** | Select the users or groups that you want to be able to access the dev center. |
To make role assignments:
## Related content - [Provide access to projects for project admins](./how-to-project-admin.md)-- [2. Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box How To Request Quota Increase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-request-quota-increase.md
Title: Request a quota limit increase for Dev Box resources
+ Title: Request a quota limit increase for Dev Box resources
description: Learn how to request a quota increase to expand the number of dev box resources you can use in your subscription. Request an increase for dev box cores and other resources. Previously updated : 08/22/2023 Last updated : 01/11/2024 # Request a quota limit increase for Microsoft Dev Box resources
-This article describes how to submit a support request for increasing the number of resources for Microsoft Dev Box in your Azure subscription.
+This article describes how to submit a support request to increase the number of resources for Microsoft Dev Box in your Azure subscription.
-To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota.
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a _quota_.
-There are different types of quota limits that you might encounter, depending on the resource type. For example:
+There are different types of quota limits that you might encounter, depending on the resource type. Here are some examples:
-**Developer portal**
+- There are limits on the number of vCPUs available for dev boxes. You might encounter this quota error in the Microsoft **[developer portal](https://aka.ms/devbox-portal)** during dev box creation.
+- There are limits for dev centers, network connections, and dev box definitions. You can find information about these limits through the **Azure portal**.
-Dev Box vCPU - you might encounter this quota error in the [developer portal](https://aka.ms/devbox-portal) during dev box creation.
+When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Microsoft Dev Box team to ensure your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
-**Azure portal**
--- Dev Centers-- Network connections -- Dev Box Definitions -
-When you reach the limit for a resource in your subscription, you can request a limit increase (sometimes called a capacity increase, or a quota increase) to extend the number of resources available. The request process allows the Microsoft Dev Box team to ensure that your subscription isn't involved in any cases of fraud or unintentional, sudden large-scale deployments.
-
-The time it takes to increase your quota varies depending on the VM size, region, and number of resources requested. You won't have to go through the process of requesting extra capacity often. To ensure you have the resources you require when you need them, you should:
+The time it takes to increase your quota varies depending on the virtual machine size, region, and number of resources requested. You don't have to go through the process of requesting extra capacity often. To ensure you have the resources you require when you need them, you should:
- Request capacity as far in advance as possible. - If possible, be flexible on the region where you're requesting capacity. - Recognize that capacity remains assigned for the lifetime of a subscription. When dev box resources are deleted, the capacity remains assigned to the subscription. - Request extra capacity only if you need more than is already assigned to your subscription. -- Make incremental requests for VM cores rather than making large, bulk requests. Break requests for large numbers of cores into smaller requests for extra flexibility in how those requests are fulfilled.
+- Make incremental requests for virtual machine cores rather than making large, bulk requests. Break requests for large numbers of cores into smaller requests for extra flexibility in how those requests are fulfilled.
Learn more about the general [process for creating Azure support requests](../azure-portal/supportability/how-to-create-azure-support-request.md).
Submitting a support request for an increase in quota is quicker if you gather t
- **Determine your current quota usage**
- For each of your subscriptions, you can check your current usage of each Deployment Environments resource type in each region. Determine your current usage by following these steps: [Determine usage and quota](./how-to-determine-your-quota-usage.md).
+ For each of your subscriptions, you can check your current usage of each Microsoft Dev Box resource type in each region. Determine your current usage by following the steps in [Determine usage and quota](./how-to-determine-your-quota-usage.md).
- **Determine the region for the additional quota**
- Dev Box resources can exist in many regions. You can choose to deploy resources in multiple regions close to your dev box users. For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
+ Dev Box resources can exist in many regions. You can choose to deploy resources in multiple regions located near to your dev box users. For more information about Azure regions, how they relate to global geographies, and which services are available in each region, see [Azure global infrastructure](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/).
-- **Choose the quota type of the additional quota.**
+- **Choose the quota type of the additional quota**
The following Dev Box resources are limited by subscription. You can request an increase in the number of resources for each of these types.
Submitting a support request for an increase in quota is quicker if you gather t
When you want to increase the number of dev boxes available to your developers, you should request an increase in the number of Dev Box general cores.
-## Submit a new support request
+## Initiate a support request
-Start the process of requesting a limit increase by opening **Support + troubleshooting** from the right of the toolbar.
+Azure presents two ways to get you the right help and assist you with submitting a request for support:
+- The **Support + troubleshooting** feature available on the toolbar
+- The **Help + support** page available on the Azure portal menu
-Azure presents two different ways to get you the right help and support. When **Support + troubleshooting** opens, you see either:
-- A question asking **How can we help you?**-- A classic style support request form
+The **Support + troubleshooting** feature uses questions like **How can we help you?** to guide you through the process.
-From the following tabs, select the style appropriate for your experience and use the steps to request a limit increase:
+Both the **Support + troubleshooting** feature and the **Help + support** page help you fill out and submit a classic style support request form.
-#### [Question style](#tab/Questions/)
+To begin the process, choose the tab that offers the input style that's appropriate for your experience, then follow the steps to request a quota limit increase.
-1. On the Azure portal home page, select Support & troubleshooting from the top right.
-
- :::image type="content" source="media/how-to-request-quota-increase/help-support-question.png" alt-text="Screenshot showing the How can we help you question." lightbox="media/how-to-request-quota-increase/help-support-question.png":::
+# [**Support + troubleshooting** (questions)](#tab/Questions/)
+
+1. On the Azure portal home page, select the **Support + Troubleshooting** icon (question mark) on the toolbar.
-1. In the **How can we help you?** box, enter *quota limit*, and then select **Go**.
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-question.png" alt-text="Screenshot showing the How can we help question view for the Support plus troubleshooting feature." lightbox="media/how-to-request-quota-increase/help-support-question.png":::
+
+1. In the **How can we help you?** box, enter **quota limit**, and then select **Go**. The view updates to show the **Current selection** section.
- :::image type="content" source="media/how-to-request-quota-increase/help-support-quota-limit.png" alt-text="Screenshot showing the How can we help you question and quota limit answer." lightbox="media/how-to-request-quota-increase/help-support-quota-limit.png":::
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-quota-limit.png" alt-text="Screenshot showing the How can we help question and quota limit answer with the Current selection section." lightbox="media/how-to-request-quota-increase/help-support-quota-limit.png":::
-1. From the **Which service are you having an issue with?** list, select **Service and subscription limits (quotas)**, and then select **Next**.
+1. In the **Which service are you having an issue with?** dropdown list, select **Service and subscription limits (quotas)**.
- :::image type="content" source="media/how-to-request-quota-increase/help-support-service-list.png" alt-text="Screenshot showing the Service and subscription limits (quotas) item." lightbox="media/how-to-request-quota-increase/help-support-service-list.png":::
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-service-list.png" alt-text="Screenshot showing the open dropdown list for the Which service are you having an issue with field." lightbox="media/how-to-request-quota-increase/help-support-service-list.png":::
-1. In the Service and subscription limits (quotas) section, select **Create a support request**.
+1. Confirm your choice for the **Current selection** and then select **Next**. The view updates to include an option to create a support request for quotas.
- :::image type="content" source="media/how-to-request-quota-increase/help-support-result.png" alt-text="Screenshot showing the Create a support request button." lightbox="media/how-to-request-quota-increase/help-support-result.png":::
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-service-list-next.png" alt-text="Screenshot showing the Service and subscription limits (quotas) item selected and the Next button highlighted." lightbox="media/how-to-request-quota-increase/help-support-service-list-next.png":::
-1. On the **New support request** page, enter the following information, and then select **Next**.
+1. In the **Service and subscription limits (quotas)** section, select **Create a support request**.
- | Name | Value |
- | -- | - |
- | **Issue type** | *Service and subscription limits (quotas)* |
- | **Subscription** | Select the subscription to which the request applies. |
- | **Quota type** | *Microsoft Dev Box* |
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-result.png" alt-text="Screenshot showing the Service and subscription limits (quotas) section and the Create a support request button highlighted." lightbox="media/how-to-request-quota-increase/help-support-result.png":::
-1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
-
- :::image type="content" source="media/how-to-request-quota-increase/enter-details.png" alt-text="Screenshot of the New support request page, highlighting Enter details." lightbox="media/how-to-request-quota-increase/enter-details.png":::
+The **New support request** page opens. Continue to the [following section](#describe-the-requested-quota-increase) to fill out the support request form.
-1. In **Quota details**, enter the following information, and then select **Next**.
-
- | Name | Value |
- | -- | - |
- | **Region** | Select the **Region** in which you want to increase your quota. |
- | **Quota type** | When you select a Region, Azure displays your current usage and your current for all quota types. </br> Select the **Quota type** that you want to increase. |
- | **New total limit** | Enter the new total limit that you want to request. |
- | **Is it a limit decrease?** | Select **Yes** or **No**. |
- | **Additional information** | Enter any extra information about your request. |
+# [**Help + support**](#tab/AzureADJoin/)
+
+1. On the Azure portal home page, expand the Azure portal menu, and select **Help + support**.
- :::image type="content" source="media/how-to-request-quota-increase/quota-details.png" alt-text="Screenshot of the Quota details pane." lightbox="media/how-to-request-quota-increase/quota-details.png":::
+ :::image type="content" source="./media/how-to-request-quota-increase/help-plus-support-portal.png" alt-text="Screenshot of the Azure portal menu on the home page and the Help plus support option selected." lightbox="./media/how-to-request-quota-increase/help-plus-support-portal.png":::
-1. Select **Save and continue**.
+1. On the **Help + support** page, select **Create a support request**.
-#### [Classic style](#tab/AzureADJoin/)
+ :::image type="content" source="./media/how-to-request-quota-increase/create-support-request.png" alt-text="Screenshot of the Help plus support page and the Create a support request highlighted." lightbox="./media/how-to-request-quota-increase/create-support-request.png":::
-1. On the Azure portal home page, select Support & troubleshooting from the top right, and then select **Help + support**.
+The **New support request** page opens. Continue to the [following section](#describe-the-requested-quota-increase) to fill out the support request form.
- :::image type="content" source="./media/how-to-request-quota-increase/submit-new-request.png" alt-text="Screenshot of the Azure portal home page, highlighting the Request core limit increase button." lightbox="./media/how-to-request-quota-increase/submit-new-request.png":::
+
-1. On the **Help + support** page, select **Create a support request**.
+## Describe the requested quota increase
- :::image type="content" source="./media/how-to-request-quota-increase/create-support-request.png" alt-text="Screenshot of the Help + support page, highlighting Create a support request." lightbox="./media/how-to-request-quota-increase/create-support-request.png":::
+Follow these steps to describe your requested quota increase and fill out the support form.
-1. On the **New support request** page, enter the following information, and then select **Next**.
+1. On the **New support request** page, on the **1. Problem description** tab, configure the following settings, and then select **Next**.
- | Name | Value |
- | -- | - |
- | **Issue type** | *Service and subscription limits (quotas)* |
- | **Subscription** | Select the subscription to which the request applies. |
- | **Quota type** | *Microsoft Dev Box* |
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-request-problem.png" alt-text="Screenshot showing the problem description tab for a new support request with the required fields highlighted." lightbox="media/how-to-request-quota-increase/help-support-request-problem.png":::
-1. On the **Additional details** tab, in the **Problem details** section, select **Enter details**.
-
- :::image type="content" source="media/how-to-request-quota-increase/enter-details.png" alt-text="Screenshot of the New support request page, highlighting Enter details." lightbox="media/how-to-request-quota-increase/enter-details.png":::
+ | Setting | Value |
+ |||
+ | **Issue type** | Select **Service and subscription limits (quotas)**. |
+ | **Subscription** | Select the subscription to which the request applies. |
+ | **Quota type** | Select **Microsoft Dev Box**. |
+
+ After you select **Next**, the tool skips the **2. Recommended solution** tab and opens the **3. Additional details** tab. This tab contains four sections: **Problem details**, **Advanced diagnostic information**, **Support method**, and **Contact information**.
-1. In **Quota details**, enter the following information, and then select **Next**.
+1. On the **3. Additional details** tab, in the **Problem details** section, select **Enter details**. The **Quota details** pane opens.
- | Name | Value |
- | -- | - |
- | **Region** | Select the **Region** in which you want to increase your quota. |
- | **Quota type** | When you select a Region, Azure displays your current usage and your current for all quota types. </br> Select the **Quota type** that you want to increase. |
- | **New total limit** | Enter the new total limit that you want to request. |
- | **Is it a limit decrease?** | Select **Yes** or **No**. |
- | **Additional information** | Enter any extra information about your request. |
+ :::image type="content" source="media/how-to-request-quota-increase/help-support-request-additional-details.png" alt-text="Screenshot showing the additional details tab for a new support request with the Enter details link highlighted." lightbox="media/how-to-request-quota-increase/help-support-request-enter-details.png":::
- :::image type="content" source="media/how-to-request-quota-increase/quota-details.png" alt-text="Screenshot of the Quota details pane." lightbox="media/how-to-request-quota-increase/quota-details.png":::
+1. In the **Quota details** pane, configure the following settings:
+
+ | Setting | Value |
+ |||
+ | **Region** | Select the **Region** in which you want to increase your quota. |
+ | **Quota type** | When you select a **Region**, Azure updates the view to display your current usage and current limit for all quota types. After the view updates, set the **Quota type** field to the quota that you want to increase. |
+ | **New total limit** | Enter the new total limit that you want to request. |
+ | **Is it a limit decrease?** | Select **Yes** or **No**. |
+ | **Additional information** | Enter any extra information about your request. |
-1. Select **Save and continue**.
+ :::image type="content" source="media/how-to-request-quota-increase/quota-details.png" alt-text="Screenshot of the Quota details pane showing current usage and current limit for all quota types for a specific region." lightbox="media/how-to-request-quota-increase/quota-details.png":::
-
+1. Select **Save and Continue**.
## Complete the support request
-To complete the support request, enter the following information:
+To complete the support request form, configure the remaining settings. When you're ready, review your information and submit the request.
+
+1. On the **Additional details** tab, in the **Advanced diagnostic information** section, configure the following setting:
+
+ | Setting | Value |
+ |||
+ | **Allow collection of advanced diagnostic information** | Select **Yes** (Recommended) or **No**. |
-1. Complete the remainder of the support request **Additional details** tab using the following information:
+ :::image type="content" source="media/how-to-request-quota-increase/request-advanced-diagnostics-info.png" alt-text="Screenshot showing the Advanced diagnostic information section for a new support request." lightbox="media/how-to-request-quota-increase/request-advanced-diagnostics-info.png":::
- ### Advanced diagnostic information
+1. In the **Support method** section, configure the following settings:
- |Name |Value |
- |||
- |**Allow collection of advanced diagnostic information**|Select yes or no.|
+ | Setting | Value |
+ |||
+ | **Support plan** | Select your support plan. |
+ | **Severity** | Select the severity of the issue. |
+ | **Preferred contact method** | Select **Email** or **Phone**. |
+ | **Your availability** | Enter your availability. |
+ | **Support language** | Select your language preference. |
- ### Support method
+ :::image type="content" source="media/how-to-request-quota-increase/request-support-method.png" alt-text="Screenshot showing the Support method section for a new support request." lightbox="media/how-to-request-quota-increase/request-support-method.png":::
- |Name |Value |
- |||
- |**Support plan**|Select your support plan.|
- |**Severity**|Select the severity of the issue.|
- |**Preferred contact method**|Select email or phone.|
- |**Your availability**|Enter your availability.|
- |**Support language**|Select your language preference.|
+1. In the **Contact info** section, configure the following settings:
- ### Contact information
+ | Setting | Value |
+ |||
+ | **First name** | Enter your first name. |
+ | **Last name** | Enter your last name. |
+ | **Email** | Enter your contact email. |
+ | **Additional email for notification** | Enter an email for notifications. |
+ | **Phone** | Enter your contact phone number. |
+ | **Country/region** | Enter your location. |
+ | **Save contact changes for future support requests.** | Select the check box to save changes. |
- |Name |Value |
- |||
- |**First name**|Enter your first name.|
- |**Last name**|Enter your last name.|
- |**Email**|Enter your contact email.|
- |**Additional email for notification**|Enter an email for notifications.|
- |**Phone**|Enter your contact phone number.|
- |**Country/region**|Enter your location.|
- |**Save contact changes for future support requests.**|Select the check box to save changes.|
+ :::image type="content" source="media/how-to-request-quota-increase/request-contact-info.png" alt-text="Screenshot showing the Contact info section for a new support request and the Next button." lightbox="media/how-to-request-quota-increase/request-contact-info.png":::
1. Select **Next**.
-1. On the **Review + create** tab, review the information, and then select **Create**.
+1. On the **4. Review + create** tab, review your information. When you're ready to submit the request, select **Create**.
## Related content
-- To learn how to check your quota usage, see [Determine usage and quota](./how-to-determine-your-quota-usage.md).
-- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits)
+- Check your quota usage by [determining usage and quota](./how-to-determine-your-quota-usage.md)
+- Check the default quota for each resource type by subscription type with [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits)
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
To complete this quickstart, you need:
Microsoft Dev Box enables you to create cloud-hosted developer workstations in a self-service way. You can create and manage dev boxes by using the developer portal.
-Depending on the project configuration and your permissions, you have access to different projects and associated dev box configurations.
+Depending on the project configuration and your permissions, you have access to different projects and associated dev box configurations. If you have a choice of projects and dev box pools, select the project and dev box pool that best fit your needs. For example, you might choose a project with a dev box pool located in a region close to you to minimize latency.
To create a dev box in the Microsoft Dev Box developer portal:
dns Private Resolver Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-architecture.md
Title: Private resolver architecture
-description: Configure the Azure DNS Private Resolver for a centralized or non-centralized architecture
+description: Configure the Azure DNS Private Resolver for a centralized or noncentralized architecture
Previously updated : 03/28/2023 Last updated : 01/10/2024 #Customer intent: As an administrator, I want to optimize the DNS resolver configuration in my network.
This article discusses two architectural design options that are available to re
## Distributed DNS architecture
-Consider the following hub and spoke VNet topology in Azure with a private resolver located in the hub and a ruleset link to the spoke VNet:
+Consider the following hub and spoke VNet topology in Azure with a private resolver located in the hub and a ruleset link to the spoke VNet. Both the hub and the spoke use Azure-provided DNS in their VNet settings:
![Hub and spoke with ruleset diagram.](./media/private-resolver-architecture/hub-and-spoke-ruleset.png)
Consider the following hub and spoke VNet topology in Azure with a private resol
**DNS resolution in the hub VNet**: The virtual network link from the private zone to the Hub VNet enables resources inside the hub VNet to automatically resolve DNS records in **azure.contoso.com** using Azure-provided DNS ([168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md)). All other namespaces are also resolved using Azure-provided DNS. The hub VNet doesn't use ruleset rules to resolve DNS names because it isn't linked to the ruleset. To use forwarding rules in the hub VNet, create and link another ruleset to the Hub VNet.
-**DNS resolution in the spoke VNet**: The virtual network link from the ruleset to the spoke VNet enables the spoke VNet to resolve **azure.contoso.com** using the configured forwarding rule. A link from the private zone to the spoke VNet isn't required here. The spoke VNet sends queries for **azure.contoso.com** to the hub's inbound endpoint. Other namespaces are also resolved for the spoke VNet using the linked ruleset if rules for those names are configured in a rule. DNS queries that don't match a ruleset rule use Azure-provided DNS.
+**DNS resolution in the spoke VNet**: The virtual network link from the ruleset to the spoke VNet enables the spoke VNet to resolve **azure.contoso.com** using the configured forwarding rule. A link from the private zone to the spoke VNet isn't required here. The spoke VNet sends queries for **azure.contoso.com** to the hub's inbound endpoint via Azure-provided DNS because there is a rule matching this domain name in the linked ruleset. Queries for other namespaces can also be forwarded by configuring additional rules. DNS queries that don't match a ruleset rule are not forwarded and are resolved using Azure-provided DNS.
> [!IMPORTANT] > In this example configuration, the hub VNet must be linked to the private zone, but must **not** be linked to a forwarding ruleset with an inbound endpoint forwarding rule. Linking a forwarding ruleset that contains a rule with the inbound endpoint as a destination to the same VNet where the inbound endpoint is provisioned can cause DNS resolution loops. ## Centralized DNS architecture
-Consider the following hub and spoke VNet topology with an inbound endpoint provisioned as custom DNS in the spoke VNet:
+Consider the following hub and spoke VNet topology with an inbound endpoint provisioned as custom DNS in the spoke VNet. The spoke VNet uses a Custom DNS setting of 10.10.0.4, corresponding to the Hub's private resolver inbound endpoint:
![Hub and spoke with custom DNS diagram.](./media/private-resolver-architecture/hub-and-spoke-custom-dns.png)
Consider the following hub and spoke VNet topology with an inbound endpoint prov
- The DNS forwarding ruleset is linked to the hub VNet. - A ruleset rule **is not configured** to forward queries for the private zone to the inbound endpoint.
-**DNS resolution in the hub VNet**: The virtual network link from the private zone to the Hub VNet enables resources inside the hub VNet to automatically resolve DNS records in **azure.contoso.com** using Azure-provided DNS ([168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md)). If configured, ruleset rules determine how DNS names are resolved. Namespaces that don't match a ruleset rule are resolved using Azure-provided DNS.
+**DNS resolution in the hub VNet**: The virtual network link from the private zone to the Hub VNet enables resources inside the hub VNet to automatically resolve DNS records in **azure.contoso.com** using Azure-provided DNS ([168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md)). If configured, ruleset rules determine how DNS names are forwarded and resolved. Namespaces that don't match a ruleset rule are resolved without forwarding using Azure-provided DNS.
**DNS resolution in the spoke VNet**: In this example, the spoke VNet sends all of its DNS traffic to the inbound endpoint in the Hub VNet. Since **azure.contoso.com** has a virtual network link to the Hub VNet, all resources in the Hub can resolve **azure.contoso.com**, including the inbound endpoint (10.10.0.4). Thus, the spoke uses the hub inbound endpoint to resolve the private zone. Other DNS names are resolved for the spoke VNet according to rules provisioned in a forwarding ruleset, if they exist.
energy-data-services Concepts Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-authentication.md
Title: Authentication concepts in Microsoft Azure Data Manager for Energy
-description: This article describes the various concepts regarding the authentication in Azure Data Manager for Energy.
+description: This article describes various concepts of authentication in Azure Data Manager for Energy.
Last updated 02/10/2023
-# Authentication concepts in Azure Data Manager for Energy
-Authentication confirms the identity of the users. The access flows can be user triggered, system triggered, or system API communication. In this article, you learn about service principals and authorization token.
+# Authentication concepts in Azure Data Manager for Energy
+
+Authentication confirms the identity of users. The access flows can be user triggered, system triggered, or system API communication. In this article, you learn about service principals and authorization tokens.
## Service principals
-In the Azure Data Manager for Energy instance,
-1. No Service Principals are created.
-2. The app-id is used for API access. The same app-id is used to provision ADME instance.
-3. The app-id doesn't have access to infrastructure resources.
-4. The app-id also gets added as OWNER to all OSDU groups by default.
-5. For service-to-service (S2S) communication, ADME uses MSI (Microsoft Service Identity).
-
-In the OSDU instance,
-1. Terraform scripts create two Service Principals:
- 1. The first Service Principal is used for API access. It can also manage infrastructure resources.
- 2. The second Service Principal is used for service-to-service (S2S) communications.
-
-## Generate authorization token
-You can generate the authorization token using the steps outlined in [Generate auth token](how-to-generate-auth-token.md).
+
+In an Azure Data Manager for Energy instance:
+
+- No service principals are created.
+- The app ID is used for API access. The same app ID is used to provision an Azure Data Manager for Energy instance.
+- The app ID doesn't have access to infrastructure resources.
+- The app ID also gets added as OWNER to all OSDU groups by default.
+- For service-to-service communication, Azure Data Manager for Energy uses Managed Service Identity.
+
+In an OSDU instance:
+
+- Terraform scripts create two service principals:
+ - The first service principal is used for API access. It can also manage infrastructure resources.
+ - The second service principal is used for service-to-service communications.
+
+## Generate an authorization token
+
+To generate the authorization token, follow the steps in [Generate auth token](how-to-generate-auth-token.md).
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Title: Entitlement concepts in Microsoft Azure Data Manager for Energy
-description: This article describes the various concepts regarding the entitlement service in Azure Data Manager for Energy.
+ Title: Entitlement concepts in Azure Data Manager for Energy
+description: This article describes various concepts of the entitlement service in Azure Data Manager for Energy.
# Entitlement service
-Access management is a critical function for any service or resource. The entitlement service lets you control who can use your Azure Data Manager for Energy, what they can see or change, and which services or data they can use.
+Access management is a critical function for any service or resource. The entitlement service lets you control who can use your Azure Data Manager for Energy instance, what they can see or change, and which services or data they can use.
## OSDU groups structure and naming
-The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for a given data partition in your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions. All group identifiers (emails) are of form `{groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}`.
+The entitlement service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services or data sources for a specific data partition in your Azure Data Manager for Energy instance. Users added to a specific group obtain the associated permissions. All group identifiers (emails) are of the form `{groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}`.
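For illustration, in a hypothetical data partition named `dp1` that uses the `dataservices.energy` domain shown in the examples later in this document, the three group types described below resolve to addresses like the following (the partition name and group choices are examples only):

```bash
# Hypothetical group emails for a data partition named "dp1" (illustration only).
DATA_GROUP="data.welldb.viewers@dp1.dataservices.energy"       # data group: viewer access to data records
SERVICE_GROUP="service.storage.user@dp1.dataservices.energy"   # service group: permission to call the Storage service APIs
USER_GROUP="users.datalake.viewers@dp1.dataservices.energy"    # user group: hierarchical grouping of users
```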
-Please note that different groups and associated user entitlements need to be set for every **new data partition** even in the same Azure Data Manager for Energy instance.
+Different groups and associated user entitlements must be set for every *new data partition*, even in the same Azure Data Manager for Energy instance.
-The entitlements service enables three use cases for authorization:
+The entitlement service enables three use cases for authorization:
-1. **Data groups** are used to enable authorization for data.
- 1. The data groups start with the word "data." such as data.welldb.viewers and data.welldb.owners.
- 2. Individual users are added to the data groups which are added in the ACL of individual data records to enable `viewer` and `owner` access of the data once the data has been loaded in the system.
- 3. To `upload` the data, you need to have entitlements of various OSDU services which are used during ingestion process. The combination of OSDU services depends on the method of ingestion. E.g., for manifest ingestion, refer [this](concepts-manifest-ingestion.md) to understand the OSDU services APIs used. The user **need not be part of the ACL** to upload the data.
-
-2. **Service groups** are used to enable authorization for services.
- 1. The service groups start with the word "service." such as service.storage.user and service.storage.admin.
- 2. The service groups are **predefined** when OSDU services are provisioned in each data partition of Azure Data Manager for Energy instance.
- 3. These groups enable `viewer`, `editor`, and `admin` access to call the OSDU APIs corresponding to the OSDU services.
-
-3. **User groups** are used for hierarchical grouping of user and service groups.
- 1. The service groups start with the word "users." such as users.datalake.viewers and users.datalake.editors.
- 2. Some user groups are created by default when a data partition is provisioned. Details of these groups and their hierarchy scope are in [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
- 3. There's one exception of this group naming rule for "users" group. It gets created when a new data partition is provisioned and its name follows the pattern of `users@{partition}.{domain}`. It has the list of all the users with any type of access in a given data partition. Before adding a new user to any entitlement groups, you need to add the new user to the `users@{partition}.{domain}` group as well.
+- **Data groups** are used to enable authorization for data.
+ - The data groups start with the word "data," such as `data.welldb.viewers` and `data.welldb.owners`.
+ - Individual users are added to the data groups, which are added in the ACL of individual data records to enable `viewer` and `owner` access of the data after the data is loaded in the system.
+  - To `upload` the data, you need to have entitlements of various OSDU services, which are used during the ingestion process. The combination of OSDU services depends on the method of ingestion. For example, for manifest ingestion, see [Manifest-based ingestion concepts](concepts-manifest-ingestion.md) to understand which OSDU service APIs are used. The user *doesn't need to be part of the ACL* to upload the data.
+- **Service groups** are used to enable authorization for services.
+ - The service groups start with the word "service," such as `service.storage.user` and `service.storage.admin`.
+ - The service groups are *predefined* when OSDU services are provisioned in each data partition of the Azure Data Manager for Energy instance.
+ - These groups enable `viewer`, `editor`, and `admin` access to call the OSDU APIs corresponding to the OSDU services.
+- **User groups** are used for hierarchical grouping of user and service groups.
+ - The service groups start with the word "users," such as `users.datalake.viewers` and `users.datalake.editors`.
+ - Some user groups are created by default when a data partition is provisioned. For information on these groups and their hierarchy scope, see [Bootstrapped OSDU entitlement groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
+ - There's one exception of this group naming rule for the "users" group. It gets created when a new data partition is provisioned and its name follows the pattern of `users@{partition}.{domain}`. It has the list of all the users with any type of access in a specific data partition. Before you add a new user to any entitlement groups, you also need to add the new user to the `users@{partition}.{domain}` group.
-Individual users can be added to a `user group`. The `user group` is then added to a `data group`. The data group is added to the ACL of the data record. It enables abstraction for the data groups since individual users need not be added one by one to the data group and instead can be added to the `user group`. This `user group` can then be used repeatedly for multiple `data groups`. The nested structure thus helps provide scalability to manage memberships in OSDU.
+You can add individual users to a `user group`. The `user group` is then added to a `data group`. The data group is added to the ACL of the data record. It enables abstraction for the data groups because individual users don't need to be added one by one to the data group. Instead, you can add users to the `user group`. Then you can use the `user group` repeatedly for multiple `data groups`. The nested structure helps provide scalability to manage memberships in OSDU.
## Users
-For each OSDU group, you can either add a user as an OWNER or a MEMBER.
-1. If you're an OWNER of an OSDU group, then you can add or remove the members of that group or delete the group.
-2. If you're a MEMBER of an OSDU group, you can view, edit, or delete the service or data depending on the scope of the OSDU group. For example, if you're a MEMBER of service.legal.editor OSDU group, you can call the APIs to change the legal service.
+For each OSDU group, you can add a user as either an OWNER or a MEMBER:
+
+- If you're an OWNER of an OSDU group, you can add or remove the members of that group or delete the group.
+- If you're a MEMBER of an OSDU group, you can view, edit, or delete the service or data depending on the scope of the OSDU group. For example, if you're a MEMBER of the `service.legal.editor` OSDU group, you can call the APIs to change the legal service.
+ > [!NOTE]
-> Do not delete the OWNER of a group unless there is another OWNER to manage the users.
+> Don't delete the OWNER of a group unless there's another OWNER to manage the users.
## Entitlement APIs
-A full list of entitlements API endpoints can be found in [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). A few illustrations of how to use Entitlement APIs are available in the [How to manage users](how-to-manage-users.md).
+For a full list of Entitlement API endpoints, see [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). A few illustrations of how to use Entitlement APIs are available in [Manage users](how-to-manage-users.md).
+ > [!NOTE]
-> The OSDU documentation refers to V1 endpoints, but the scripts noted in this documentation refer to V2 endpoints, which work and have been successfully validated.
+> The OSDU documentation refers to v1 endpoints, but the scripts noted in this documentation refer to v2 endpoints, which work and have been successfully validated.
OSDU&trade; is a trademark of The Open Group. ## Next steps
-As the next step, you can do the following:
-- [How to manager users](how-to-manage-users.md)
-- [How to manage legal tags](how-to-manage-legal-tags.md)
-- [How to manage ACLs](how-to-manage-acls.md)
-You can also ingest data into your Azure Data Manager for Energy instance with
+For the next step, see:
+
+- [Manage users](how-to-manage-users.md)
+- [Manage legal tags](how-to-manage-legal-tags.md)
+- [Manage ACLs](how-to-manage-acls.md)
+
+You can also ingest data into your Azure Data Manager for Energy instance:
+ - [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md) - [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md)
energy-data-services How To Generate Auth Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-auth-token.md
Title: How to generate a refresh token for Microsoft Azure Data Manager for Energy
-description: This article describes how to generate an auth token
+ Title: Generate a refresh token for Azure Data Manager for Energy
+description: This article describes how to generate an auth token.
Last updated 01/03/2024
-#Customer intent: As a developer, I want to learn how to generate an auth token
+#Customer intent: As a developer, I want to learn how to generate an auth token.
-# How to generate auth token
+# Generate an auth token
-In this article, you learn how to generate the service principal auth token, user's auth token and user's refresh token.
+In this article, you learn how to generate the service principal auth token, a user's auth token, and a user's refresh token.
## Register your app with Microsoft Entra ID
-1. To provision the Azure Data Manager for Energy platform, you must register your app in the [Azure portal app registration page](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. For steps on how to configure, see [Register your app documentation](../active-directory/develop/quickstart-register-app.md#register-an-application).
-2. In the app overview section, if there's no redirect URIs specified, you can add a platform, select "Web", add `http://localhost:8080`, and select save.
-
-3. Fetch the `redirect-uri` (or reply URL) for your app to receive responses from Microsoft Entra ID.
+1. To provision the Azure Data Manager for Energy platform, you must register your app on the [Azure portal app registration page](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. For steps on how to configure, see [Register your app documentation](../active-directory/develop/quickstart-register-app.md#register-an-application).
+1. In the app overview section, if there are no redirect URIs specified, you can select **Add a platform** > **Web**, add `http://localhost:8080`, and select **Save**.
+
+ :::image type="content" source="media/how-to-generate-auth-token/app-registration-uri.png" alt-text="Screenshot that shows adding the URI to the app.":::
+1. Fetch the `redirect-uri` (or reply URL) for your app to receive responses from Microsoft Entra ID.
## Fetch parameters
-You can also find the parameters once the app is registered on the Azure portal.
-#### Find `tenant-id`
-1. Navigate to the Microsoft Entra account for your organization. You can search for "Microsoft Entra ID" in the Azure portal's search bar.
-2. Locate `tenant-id` under the basic information section in the *Overview* tab.
-3. Copy the `tenant-id` and paste it into an editor to be used later.
+You can also find the parameters after the app is registered on the Azure portal.
+
+### Find tenant-id
+
+1. Go to the Microsoft Entra account for your organization. You can search for **Microsoft Entra ID** in the Azure portal's search bar.
+1. On the **Overview** tab, under the **Basic information** section, find **Tenant ID**.
+1. Copy the `tenant-id` value and paste it into an editor to be used later.
+ :::image type="content" source="media/how-to-generate-auth-token/azure-active-directory.png" alt-text="Screenshot that shows searching for Microsoft Entra ID.":::
+ :::image type="content" source="media/how-to-generate-auth-token/tenant-id.png" alt-text="Screenshot that shows finding the tenant ID.":::
-#### Find `client-id`
-It's the same value that you use to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). It is often referred to as `app-id`.
+### Find client-id
-1. Find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy *Overview* page.
-2. Copy the `client-id` and paste it into an editor to be used later.
-3. Currently, one Azure Data Manager for Energy instance allows one app-id to be as associated with one instance.
+A `client-id` is the same value that you use to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). It's often referred to as `app-id`.
-> [!IMPORTANT]
-> The 'client-id' that is passed as values in the entitlement API calls needs to be the same that was used for provisioning your Azure Data Manager for the Energy instance.
+1. Go to the Azure Data Manager for Energy **Overview** page. On the **Essentials** pane, find **client ID**.
+1. Copy the `client-id` value and paste it into an editor to be used later.
+1. Currently, only one `app-id` can be associated with a given Azure Data Manager for Energy instance.
+ > [!IMPORTANT]
+ > The `client-id` that's passed as a value in the Entitlement API calls needs to be the same one that was used for provisioning your Azure Data Manager for Energy instance.
-#### Find `client-secret`
-A `client-secret` is a string value your app can use in place of a certificate to identify itself. It's sometimes referred to as an application password.
+ :::image type="content" source="media/how-to-generate-auth-token/client-id-or-app-id.png" alt-text="Screenshot that shows finding the client ID for your registered app.":::
-1. Navigate to *App Registrations*.
-2. Open 'Certificates & secrets' under the *Manage* section.
-3. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy instance.
-4. Add one now by clicking on *New Client Secret*.
-5. Record the `secret's value` for later use in your client application code.
-6. The access token of the `app-id` and client secret has the Infra Admin access to the instance.
+### Find client-secret
+
+A `client-secret` is a string value your app can use in place of a certificate to identify itself. It's sometimes referred to as an application password.
+
+1. Go to **App registrations**.
+1. Under the **Manage** section, select **Certificates & secrets**.
+1. Select **New client secret** to create a client secret for the client ID that you used to create your Azure Data Manager for Energy instance.
+1. Record the secret's **Value** for later use in your client application code.
+
+ The access token of the `app-id` and `client-secret` has the infrastructure administrator access to the instance.
-> [!CAUTION]
-> Don't forget to record the secret's value. This secret value is never displayed again after you leave this page of 'client secret' creation.
+ > [!CAUTION]
+ > Don't forget to record the secret's value. This secret value is never displayed again after you leave this page for client secret creation.
+ :::image type="content" source="media/how-to-generate-auth-token/client-secret.png" alt-text="Screenshot that shows finding the client secret.":::
-#### Find the `URL` for your Azure Data Manager for Energy instance
-1. Create [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
-2. Navigate to your Azure Data Manager for Energy *Overview* page on the Azure portal.
-3. Copy the URI from the essentials pane.
+### Find the URL for your Azure Data Manager for Energy instance
+1. Create an [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
+1. Go to your Azure Data Manager for Energy **Overview** page on the Azure portal.
+1. On the **Essentials** pane, copy the URI.
+
+ :::image type="content" source="media/how-to-generate-auth-token/endpoint-url.png" alt-text="Screenshot that shows finding the URI for the Azure Data Manager for Energy instance.":::
+
+### Find data-partition-id
-#### Find the `data-partition-id`
You have two ways to get the list of data partitions in your Azure Data Manager for Energy instance.
-- One option is to navigate the *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy UI.
+- **Option 1**: Under the **Advanced** section of your Azure Data Manager for Energy UI, go to the **Data Partitions** menu item.
+
+ :::image type="content" source="media/how-to-generate-auth-token/data-partition-id.png" alt-text="Screenshot that shows finding the data-partition-id from the Azure Data Manager for Energy instance.":::
-- Another option is to click on the *view* below the *data partitions* field in the essentials pane of your Azure Data Manager for Energy *Overview* page.
+- **Option 2**: On the **Essentials** pane of your Azure Data Manager for Energy **Overview** page, underneath the **Data Partitions** field, select **view**.
+ :::image type="content" source="media/how-to-generate-auth-token/data-partition-id-second-option.png" alt-text="Screenshot that shows finding the data-partition-id from the Azure Data Manager for Energy instance Overview page.":::
+ :::image type="content" source="media/how-to-generate-auth-token/data-partition-id-second-option-step-2.png" alt-text="Screenshot that shows finding the data-partition-id from the Azure Data Manager for Energy instance Overview page with the data partitions.":::
-## Generate client-id auth token
+## Generate the client-id auth token
+
+Run the following curl command in Azure Cloud Shell after you replace the placeholder values with the corresponding values found in the previous steps. The access token in the response is the `client-id` auth token.
-Run the below curl command in Azure Cloud Bash after replacing the placeholder values with the corresponding values found earlier in the above steps. The access token in the response is the client-id auth token.
-
**Request format** ```bash
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa
} ```
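If you're scripting these calls, a minimal sketch for capturing the token in a shell variable might look like the following. It assumes the standard client credentials grant against the same `/oauth2/v2.0/token` endpoint, a scope of `{client-id}/.default`, and that the `jq` utility is available (as it is in Azure Cloud Shell); the variable names are placeholders for the values you collected earlier.

```bash
# Sketch: request a client-credentials token and keep only the access_token field.
# TENANT_ID, CLIENT_ID, and CLIENT_SECRET are placeholders for your own values.
ACCESS_TOKEN=$(curl -s --request POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "client_id=${CLIENT_ID}" \
  --data-urlencode "client_secret=${CLIENT_SECRET}" \
  --data-urlencode "scope=${CLIENT_ID}/.default" \
  | jq -r '.access_token')

# Print only a short prefix so the full token isn't echoed to the terminal.
echo "${ACCESS_TOKEN:0:20}..."
```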
-## Generate user auth token
-Generating a user's auth token is a two step process.
+## Generate the user auth token
-### Get authorization code
-The first step to getting an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Microsoft Entra ID signs the user in and requests their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Microsoft Entra ID returns an `authorization_code` to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token.
+Generating a user's auth token is a two-step process.
-1. After replacing the parameters, you can paste the request in the URL of any browser and hit enter.
-2. It asks you to log in to your Azure portal if not logged in already.
-3. You might see 'can't reach this page' error in the browser. You can ignore that.
-
+### Get the authorization code
+
+The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Microsoft Entra ID signs the user in and requests their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Microsoft Entra ID returns an authorization code to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token.
+
+1. After you replace the parameters, you can paste the request into the URL bar of any browser and select Enter.
+1. Sign in to your Azure portal if you aren't signed in already.
+1. You might see the "Hmmm...can't reach this page" error message in the browser. You can ignore it.
+
+ :::image type="content" source="media/how-to-generate-auth-token/localhost-redirection-error.png" alt-text="Screenshot of localhost redirection.":::
+
+1. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
+1. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`.
+1. Keep this authorization code handy for future use.
-4. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
-5. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`
-6. This is the `authorization_code` to keep handy for future use.
-
#### Request format+ ```bash https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize?client_id={client-id} &response_type=code
The first step to getting an access token for many OpenID Connect (OIDC) and OAu
| Parameter | Required? | Description | | | | |
-|tenant-id|Required|Name of your Microsoft Entra tenant|
+|tenant-id|Required|Name of your Microsoft Entra tenant.|
| client-id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com). | | response_type |Required |The response type, which must include `code` for the authorization code flow. You can receive an ID token if you include it in the response type, such as `code+id_token`, and in this case, the scope needs to include `openid`.|
-| redirect_uri |Required |The redirect URI of your app, where your app sends and receives the authentication responses. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
-| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application needs a *refresh token* for extended access to resources. The client-id indicates the token issued are intended for use by Azure AD B2C registered client. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. |
+| redirect_uri |Required |The redirect URI of your app, where your app sends and receives the authentication responses. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL encoded. |
+| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application needs a *refresh token* for extended access to resources. The client ID indicates the token issued is intended for use by an Azure Active Directory B2C registered client. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. |
| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
-| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
+| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used to prevent cross-site request forgery (CSRF) attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
#### Sample response+ ```bash http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345&session.... ```
-> [!NOTE]
-> The browser may say that the site can't be reached, but it should still have the authorization code in the URL bar.
+
+> [!NOTE]
+> The browser might say that the site can't be reached, but it should still have the authorization code in the URL bar.
|Parameter| Description| | | |
-|code|The authorization_code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization_codes are short lived, typically they expire after about 10 minutes.|
-|state|If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. This check helps to detect [Cross-Site Request Forgery (CSRF) attacks](https://tools.ietf.org/html/rfc6749#section-10.12) against the client.|
-|session_state|A unique value that identifies the current user session. This value is a GUID, but should be treated as an opaque value that is passed without examination.|
+|code|The authorization code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short lived. Typically, they expire after about 10 minutes.|
+|state|If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. This check helps to detect [CSRF attacks](https://tools.ietf.org/html/rfc6749#section-10.12) against the client.|
+|session_state|A unique value that identifies the current user session. This value is a GUID, but it should be treated as an opaque value that's passed without examination.|
> [!WARNING]
-> Running the URL in Postman won't work as it requires extra configuration for token retrieval.
+> Running the URL in Postman won't work because it requires extra configuration for token retrieval.
-### Get an auth token and refresh token
-The second step is to get the auth token and refresh token. Your app uses the `authorization_code` received in the previous step to request an access token by sending a POST request to the `/token` endpoint.
+### Get an auth token and a refresh token
+
+The second step is to get the auth token and the refresh token. Your app uses the authorization code received in the previous step to request an access token by sending a POST request to the `/token` endpoint.
#### Request format
The second step is to get the auth token and refresh token. Your app uses the `a
&grant_type=authorization_code &client_secret={client-secret}' 'https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token' ```+ |Parameter |Required |Description | ||||
-|tenant | Required | The {tenant-id} value in the path of the request can be used to control who can sign into the application.|
-|client_id | Required | The application ID assigned to your app upon registration |
-|scope | Required | A space-separated list of scopes. The scopes that your app requests in this leg must be equivalent to or a subset of the scopes that it requested in the first (authorization) leg. If the scopes specified in this request span multiple resource server, then the v2.0 endpoint returns a token for the resource specified in the first scope. |
-|code |Required |The authorization_code that you acquired in the first step of the flow. |
-|redirect_uri | Required |The same redirect_uri value that was used to acquire the authorization_code. |
-|grant_type | Required | Must be authorization_code for the authorization code flow. |
-|client_secret | Required | The client secret that you created in the app registration portal for your app. It shouldn't be used in a native app, because client_secrets can't be reliably stored on devices. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side.|
+|tenant | Required | The `{tenant-id}` value in the path of the request can be used to control who can sign in to the application.|
+|client_id | Required | The application ID assigned to your app upon registration. |
+|scope | Required | A space-separated list of scopes. The scopes that your app requests in this leg must be equivalent to or a subset of the scopes that it requested in the first (authorization) leg. If the scopes specified in this request span multiple resource servers, the v2.0 endpoint returns a token for the resource specified in the first scope. |
+|code |Required |The authorization code that you acquired in the first step of the flow. |
+|redirect_uri | Required |The same redirect URI value that was used to acquire the authorization code. |
+|grant_type | Required | Must be `authorization_code` for the authorization code flow. |
+|client_secret | Required | The client secret that you created in the app registration portal for your app. It shouldn't be used in a native app because client secrets can't be reliably stored on devices. It's required for web apps and web APIs, which have the ability to store the client secret securely on the server side.|
#### Sample response
The second step is to get the auth token and refresh token. Your app uses the `a
|Parameter | Description | ||| |token_type |Indicates the token type value. The only type that Microsoft Entra ID supports is Bearer. |
-|scope |A space separated list of the Microsoft Graph permissions that the access_token is valid for. |
+|scope |A space-separated list of the Microsoft Graph permissions that the access token is valid for. |
|expires_in |How long the access token is valid (in seconds). | |access_token |The requested access token. Your app can use this token to call Microsoft Graph. |
-|refresh_token |An OAuth 2.0 refresh token. Your app can use this token to acquire extra access tokens after the current access token expires. Refresh tokens are long-lived, and can be used to retain access to resources for extended periods of time.|
-
-For more information on generating user access token and using refresh token to generate new access token, see the [Generate refresh tokens](/graph/auth-v2-user#2-get-authorization).
--
+|refresh_token |An OAuth 2.0 refresh token. Your app can use this token to acquire extra access tokens after the current access token expires. Refresh tokens are long-lived and can be used to retain access to resources for extended periods of time.|
+For more information on generating a user access token and using a refresh token to generate a new access token, see [Generate refresh tokens](/graph/auth-v2-user#2-get-authorization).
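For example, a minimal sketch of redeeming a refresh token for a new access token follows. It uses the standard Microsoft identity platform v2.0 refresh token grant; the placeholder values and the scope (mirroring the scopes requested earlier) are assumptions you should adapt to your own app.

```bash
# Sketch: exchange a previously issued refresh token for a new access token (and a new refresh token).
# Replace {tenant-id}, {client-id}, {client-secret}, and {refresh-token} with your own values.
curl --request POST "https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token" \
  --data-urlencode "grant_type=refresh_token" \
  --data-urlencode "client_id={client-id}" \
  --data-urlencode "client_secret={client-secret}" \
  --data-urlencode "refresh_token={refresh-token}" \
  --data-urlencode "scope={client-id}/.default openid profile offline_access"
```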
OSDU&trade; is a trademark of The Open Group. ## Next steps
-To learn more about how to use the generated refresh token, follow the section below:
+
+To learn more about how to use the generated refresh token, see:
> [!div class="nextstepaction"] > [How to convert segy to ovds](how-to-convert-segy-to-zgy.md)
energy-data-services How To Manage Acls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-acls.md
Title: How to manage ACLs in Microsoft Azure Data Manager for Energy
-description: This article describes how to manage ACLs in Azure Data Manager for Energy
+ Title: Manage ACLs in Azure Data Manager for Energy
+description: This article describes how to manage ACLs in Azure Data Manager for Energy.
Last updated 12/11/2023
-# How to manage ACLs of the data record
+# Manage ACLs of the data record
+ In this article, you learn how to add or remove ACLs from the data record in your Azure Data Manager for Energy instance. ## Create a record with ACLs
curl --location --request PUT 'https://osdu-ship.msft-osdu-test.org/api/storage/
``` **Sample response**+ ```JSON { "recordCount": 1,
curl --location --request PUT 'https://osdu-ship.msft-osdu-test.org/api/storage/
] } ```
-Keep the recordId from the response handy for future references.
+
+Keep the record ID from the response handy for future reference.
## Get created record with ACLs
curl --location 'https://osdu-ship.msft-osdu-test.org/api/storage/v2/records/ope
``` ## Delete ACLs from the data record
-1. The first `/acl/owners/0` operation removes ACL from 0th position in the array of ACL.
-2. When you delete the first with this operation, the system deletes the first entry. Thus, the previous second entry becomes the first entry.
-3. The second `/acl/owners/0` operation tries to remove the second entry.
+
+The first `/acl/owners/0` operation removes the ACL entry at position 0 of the `acl.owners` array. After that entry is deleted, the entry that was originally second shifts into position 0. The second `/acl/owners/0` operation then removes that entry, so the two operations together remove the first two owners.
**Request format**
curl --location --request PATCH 'https://osdu-ship.msft-osdu-test.org/api/storag
} ``` -
-If you delete the last owner ACL from the data record, you get the error
+If you delete the last owner ACL from the data record, you get the following error.
**Sample response**
If you delete the last owner ACL from the data record, you get the error
``` ## Next steps
-After you have added ACLs to the data records, you can do the following:
-- [How to manage legal tags](how-to-manage-legal-tags.md)-- [How to manage users](how-to-manage-users.md)
-You can also ingest data into your Azure Data Manager for Energy instance with
+After you add ACLs to the data records, you can:
+
+- [Manage legal tags](how-to-manage-legal-tags.md)
+- [Manage users](how-to-manage-users.md)
+
+You can also ingest data into your Azure Data Manager for Energy instance:
+ - [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md) - [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md)
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
Title: How to manage users in Microsoft Azure Data Manager for Energy
-description: This article describes how to manage users in Azure Data Manager for Energy
+ Title: Manage users in Azure Data Manager for Energy
+description: This article describes how to manage users in Azure Data Manager for Energy.
Last updated 08/19/2022
-# How to manage users
-In this article, you learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU groups, see [entitlement services](concepts-entitlements.md).
+# Manage users in Azure Data Manager for Energy
+In this article, you learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU groups, see [Entitlement services](concepts-entitlements.md).
## Prerequisites
-1. Create an Azure Data Manager for Energy instance using the tutorial at [How to create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
-2. Get various parameters of your instance such as client-id, client-secret, etc. using the tutorial at [How to generate auth token](how-to-generate-auth-token.md).
-3. Generate the service principal access token needed to call the Entitlements APIs using the tutorial at [How to generate auth token](how-to-generate-auth-token.md).
-4. Keep all these parameter values handy as they are needed for executing different user management requests via the Entitlements API.
+- Create an Azure Data Manager for Energy instance. See [How to create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
+- Get various parameters of your instance, such as `client-id` and `client-secret`. See [How to generate auth token](how-to-generate-auth-token.md).
+- Generate the service principal access token that's needed to call the Entitlement APIs. See [How to generate auth token](how-to-generate-auth-token.md).
+- Keep all the parameter values handy. They're needed to run different user management requests via the Entitlements API.
## Fetch OID
-`object-id` (OID) is the Microsoft Entra user Object ID.
-1. Find the 'object-id' (OID) of the user(s) first. If you are managing an application's access, you must find and use the application ID (or client ID) instead of the OID.
-2. Input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance.
+The object ID (OID) is the unique identifier that Microsoft Entra ID assigns to a user.
+1. Find the OID of the users first. If you're managing an application's access, you must find and use the application ID (or client ID) instead of the OID.
+1. Input the OID of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance.
+ :::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot that shows finding the object ID from Microsoft Entra ID.":::
-## First time addition of users in a new data partition
-1. In order to add first admin to a new data partition of Azure Data Manager for Energy instance, use the access token of the `client-id` that was used to provision the instance.
-2. Get the `client-id` access token using [Generate client-id access token](how-to-generate-auth-token.md#generate-client-id-auth-token).
-3. If you try to directly use your own access token for adding entitlements, it results in 401 error. The client-id access token must be used to add first set of users in the system and those users (with admin access) can then manage more users with their own access token.
-4. Use the client-id access token to do these three steps using the commands outlined in the following sections:
+ :::image type="content" source="media/how-to-manage-users/profile-object-id.png" alt-text="Screenshot that shows finding the OID from the profile.":::
+
+## First-time addition of users in a new data partition
+
+1. To add the first admin to a new data partition of an Azure Data Manager for Energy instance, use the access token of the `client-id` (app ID) that was used to provision the instance.
+1. Get the `client-id` access token by using [Generate client-id access token](how-to-generate-auth-token.md#generate-the-client-id-auth-token).
+
+ If you try to directly use your own access token for adding entitlements, it results in a 401 error. The `client-id` access token must be used to add the first set of users in the system. Those users (with admin access) can then manage more users with their own access token.
+1. Use the `client-id` access token to do the following steps by using the commands outlined in the following sections:
1. Add the user to the `users@<data-partition-id>.<domain>` OSDU group. 2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group.
-5. The user becomes the admin of the data partition. The admin can then add or remove more users to the required entitlement groups:
- 1. Get admin's auth token using [Generate user access token](how-to-generate-auth-token.md#generate-user-auth-token) using the same client-id and client-secret.
- 2. Get the OSDU group such as `service.legal.editor@<data-partition-id>.<domain>` you want to add more users to using the admin's access token.
- 3. Add more users to that OSDU group using the admin's access token.
+1. The user becomes the admin of the data partition. The admin can then add or remove more users to the required entitlement groups:
+ 1. Get the admin's auth token by using [Generate user access token](how-to-generate-auth-token.md#generate-the-user-auth-token) and by using the same `client-id` and `client-secret` values.
+ 1. Get the OSDU group, such as `service.legal.editor@<data-partition-id>.<domain>`, to which you want to add more users by using the admin's access token.
+ 1. Add more users to that OSDU group by using the admin's access token.
## Get the list of all available groups in a data partition
-Run the below curl command in Azure Cloud Bash to get all the groups that are available for you or you have access to in the given data partition of Azure Data Manager for the Energy instance.
+Run the following curl command in Azure Cloud Shell to get all the groups that are available for you or that you have access to in the specific data partition of the Azure Data Manager for Energy instance.
```bash curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \
Run the below curl command in Azure Cloud Bash to get all the groups that are av
## Add users to an OSDU group in a data partition
-1. Run the below curl command in Azure Cloud Bash to add the user(s) to the "Users" group using the Entitlement service.
-2. The value to be sent for the param `email` is the `Object_ID` (OID) of the user and not the user's email.
-
-```bash
- curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
- --header 'data-partition-id: <data-partition-id>' \
- --header 'Authorization: Bearer <access_token>' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "email": "<Object_ID>",
- "role": "MEMBER"
- }'
-```
-
-**Sample request for `users` OSDU group**
-
-Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1"
-
-```bash
- curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/users@medstest-dp1.dataservices.energy/members' \
- --header 'data-partition-id: medstest-dp1' \
- --header 'Authorization: Bearer abcdefgh123456.............' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "role": "MEMBER"
- }'
-```
-
-**Sample Response**
-
-```JSON
- {
- "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "role": "MEMBER"
- }
-```
-**Sample request for `legal service editor` OSDU group**
-```bash
- curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.legal.editor@medstest-dp1.dataservices.energy/members' \
- --header 'data-partition-id: medstest-dp1' \
- --header 'Authorization: Bearer abcdefgh123456.............' \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "role": "MEMBER"
- }'
-```
-
-> [!IMPORTANT]
-> The app-id is the default OWNER of all the groups.
+1. Run the following curl command in Azure Cloud Shell to add the users to the users group by using the entitlement service.
+1. The value to be sent for the parameter `email` is the OID of the user and not the user's email address.
+
+ ```bash
+ curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "<Object_ID>",
+ "role": "MEMBER"
+ }'
+ ```
+
+ **Sample request for users OSDU group**
+
+ Consider an Azure Data Manager for Energy instance named `medstest` with a data partition named `dp1`.
+
+ ```bash
+ curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/users@medstest-dp1.dataservices.energy/members' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }'
+ ```
+
+ **Sample response**
+
+ ```JSON
+ {
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }
+ ```
+
+ **Sample request for legal service editor OSDU group**
+
+ ```bash
+ curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.legal.editor@medstest-dp1.dataservices.energy/members' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "role": "MEMBER"
+ }'
+ ```
+
+ > [!IMPORTANT]
+ > The app ID is the default OWNER of all the groups.
+
+ :::image type="content" source="media/how-to-manage-users/appid.png" alt-text="Screenshot that shows the app ID in Microsoft Entra ID.":::
## Get OSDU groups for a given user in a data partition
-1. Run the below curl command in Azure Cloud Bash to get all the groups associated with the user.
-
-```bash
- curl --location --request GET 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
- --header 'data-partition-id: <data-partition-id>' \
- --header 'Authorization: Bearer <access_token>'
-```
-
-**Sample request**
-
-Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1"
-
-```bash
- curl --location --request GET 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \
- --header 'data-partition-id: medstest-dp1' \
- --header 'Authorization: Bearer abcdefgh123456.............'
-```
-**Sample response**
-
-```JSON
- {
- "desId": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "memberEmail": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
- "groups": [
+1. Run the following curl command in Azure Cloud Shell to get all the groups associated with the user.
+
+ ```bash
+ curl --location --request GET 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>'
+ ```
+
+ **Sample request**
+
+ Consider an Azure Data Manager for Energy instance named `medstest` with a data partition named `dp1`.
+
+ ```bash
+ curl --location --request GET 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX/groups?type=none' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............'
+ ```
+
+ **Sample response**
+
+ ```JSON
{
- "name": "users",
- "description": "Datalake users",
- "email": "users@medstest-dp1.dataservices.energy"
- },
- {
- "name": "service.search.user",
- "description": "Datalake Search users",
- "email": "service.search.user@medstest-dp1.dataservices.energy"
+ "desId": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "memberEmail": "90e0d063-2f8e-4244-860a-XXXXXXXXXX",
+ "groups": [
+ {
+ "name": "users",
+ "description": "Datalake users",
+ "email": "users@medstest-dp1.dataservices.energy"
+ },
+ {
+ "name": "service.search.user",
+ "description": "Datalake Search users",
+ "email": "service.search.user@medstest-dp1.dataservices.energy"
+ }
+ ]
}
- ]
- }
-```
-
-## Delete OSDU groups of a given user in a data partition
-
-1. Run the below curl command in Azure Cloud Bash to delete a given user from a given data partition.
-2. **DO NOT** delete the OWNER of a group unless you have another OWNER who can manage users in that group.
-
-```bash
- curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \
- --header 'data-partition-id: <data-partition-id>' \
- --header 'Authorization: Bearer <access_token>'
-```
-
-**Sample request**
-
-Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1"
-
-```bash
- curl --location --request DELETE 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \
- --header 'data-partition-id: medstest-dp1' \
- --header 'Authorization: Bearer abcdefgh123456.............'
-```
-
-**Sample response**
-No output for a successful response
+ ```
+
+## Delete OSDU groups of a specific user in a data partition
+
+1. Run the following curl command in Azure Cloud Shell to delete a specific user from a specific data partition.
+1. *Do not* delete the OWNER of a group unless you have another OWNER who can manage users in that group.
+
+ ```bash
+ curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \
+ --header 'data-partition-id: <data-partition-id>' \
+ --header 'Authorization: Bearer <access_token>'
+ ```
+
+ **Sample request**
+
+ Consider an Azure Data Manager for Energy instance named `medstest` with a data partition named `dp1`.
+
+ ```bash
+ curl --location --request DELETE 'https://medstest.energy.azure.com/api/entitlements/v2/members/90e0d063-2f8e-4244-860a-XXXXXXXXXX' \
+ --header 'data-partition-id: medstest-dp1' \
+ --header 'Authorization: Bearer abcdefgh123456.............'
+ ```
+
+ **Sample response**
+
+ No output for a successful response.
+
+## Next steps
+After you add users to the groups, you can:
+- [Manage legal tags](how-to-manage-legal-tags.md)
+- [Manage ACLs](how-to-manage-acls.md)
-## Next steps
-After you have added users to the groups, you can do the following:
-- [How to manage legal tags](how-to-manage-legal-tags.md)
-- [How to manage ACLs](how-to-manage-acls.md)
+You can also ingest data into your Azure Data Manager for Energy instance:
-You can also ingest data into your Azure Data Manager for Energy instance with
- [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md)
- [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md)
event-hubs Event Hubs Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-overview.md
The capture feature is included in the premium tier so there is no additional ch
Capture doesn't consume egress quota as it is billed separately. ## Integration with Event Grid
-You can create an Azure Event Grid subscription with an Event Hubs namespace as its source. The following tutorial shows you how to create an Event Grid subscription with an event hub as a source and an Azure Functions app as a sink: [Process and migrate captured Event Hubs data to a Azure Synapse Analytics using Event Grid and Azure Functions](store-captured-data-data-warehouse.md).
+You can create an Azure Event Grid subscription with an Event Hubs namespace as its source. The following tutorial shows you how to create an Event Grid subscription with an event hub as a source and an Azure Functions app as a sink: [Process and migrate captured Event Hubs data to Azure Synapse Analytics using Event Grid and Azure Functions](store-captured-data-data-warehouse.md).
## Explore captured files To learn how to explore captured Avro files, see [Explore captured Avro files](explore-captured-avro-files.md).
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
ErGwScale is available in preview in the following regions:
* Australia East * France Central
+* Italy North
* North Europe
+* Norway East
* Sweden Central
+* UAE North
* West US 3 ### Autoscaling vs. fixed scale unit
expressroute Expressroute Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md
No. You can purchase a private connection of any speed from your service provide
### Is it possible to use more bandwidth than I procured for my ExpressRoute circuit?
-Yes, you can use up to two times the bandwidth limit you procured by using the bandwidth available on the secondary connection of your ExpressRoute circuit. The built-in redundancy of your circuit is configured using primary and secondary connections, each of the procured bandwidth, to two Microsoft Enterprise Edge routers (MSEEs). The bandwidth available through your secondary connection can be used for more traffic if necessary. Since the secondary connection is meant for redundancy, it isn't guaranteed and shouldn't be used for extra traffic for a sustained period of time. To learn more about how to use both connections to transmit traffic, see [use AS PATH prepending](expressroute-optimize-routing.md#solution-use-as-path-prepending).
+Yes, you can use up to two times the bandwidth limit you procured by spreading the traffic across both links of your ExpressRoute circuit and thereby using the redundant bandwidth available. The built-in redundancy of your circuit is configured using redundant links, each with procured bandwidth, to two Microsoft Enterprise Edge routers (MSEEs). The bandwidth available through your secondary link can be used for more traffic if necessary. Since the second link is meant for redundancy, it isn't guaranteed and shouldn't be used for extra traffic for a sustained period of time. To learn more about how to use both connections to transmit traffic, see [use AS PATH prepending](expressroute-optimize-routing.md#solution-use-as-path-prepending).
-If you plan to use only your primary connection to transmit traffic, the bandwidth for the connection is fixed, and attempting to oversubscribe it results in increased packet drops. If traffic flows through an ExpressRoute Gateway, the bandwidth for the Gateway SKU is fixed and not burstable. For the bandwidth of each Gateway SKU, see [About ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md#aggthroughput).
+If you plan to use only your primary link to transmit traffic, the bandwidth for the connection is fixed, and attempting to oversubscribe it results in increased packet drops. If traffic flows through an ExpressRoute Gateway, the bandwidth for the Gateway SKU is fixed and not burstable. For the bandwidth of each Gateway SKU, see [About ExpressRoute virtual network gateways](expressroute-about-virtual-network-gateways.md#aggthroughput).
### If I pay for unlimited data, do I get unlimited egress data transfer for services accessed over Microsoft peering?
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
Previously updated : 11/06/2023 Last updated : 01/12/2024
The following table shows locations by service provider. If you want to view ava
| **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** |Supported |Supported | Doha<br/>Doha2<br/>London2<br/>Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** |Supported |Supported | Melbourne<br/>Sydney | | **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** |Supported |Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2<br/>Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Paris2<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC |
-| **[Orange Poland](https://www.orange.pl/duze-firmy)** | Supported | Supported | Warsaw |
+| **[Orange Poland](https://www.orange.pl/duze-firmy/rozwiazania-chmurowe)** | Supported | Supported | Warsaw |
| **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 |
expressroute Provider Rate Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/provider-rate-limit.md
+
+ Title: About rate limiting for ExpressRoute circuits over service provider ports
+
+description: This document discusses how rate limiting works for ExpressRoute circuits over service provider ports. You'll also learn how to monitor the throughput and traffic drop due to rate limiting.
++++ Last updated : 01/12/2024+++
+# About rate limiting for ExpressRoute circuits over service provider ports
+
+This article discusses how rate limiting works for ExpressRoute circuits created over service provider ports. You'll also learn how to monitor throughput and traffic drops caused by rate limiting.
+
+## How does rate limiting work over an ExpressRoute circuit?
+
+An ExpressRoute circuit consists of two links that connect the customer/provider edge to the Microsoft Enterprise Edge (MSEE) routers. If your circuit bandwidth is 1 Gbps and you distribute your traffic evenly across both links, you can achieve a maximum throughput of 2 Gbps (two times 1 Gbps). Rate limiting restricts your throughput to the configured bandwidth if you exceed it on either link. The ExpressRoute circuit SLA is only guaranteed for the bandwidth that you configured. For example, if you purchased a 1-Gbps circuit, your SLA is for a maximum throughput of 1 Gbps.
++
+## How can I determine what my circuit throughput is?
+
+You can monitor the ingress and egress throughput of your ExpressRoute circuit for both links through the Azure portal using ExpressRoute circuit metrics. For ingress, select `BitsInPerSecond` and for egress, select `BitsOutPerSecond`. The following screenshot shows the ExpressRoute circuit metrics for ingress and egress throughput.
++
+## How can I identify if traffic is being dropped due to rate limiting?
+
+You can monitor the traffic that's dropped due to rate limiting through the Azure portal by using the ExpressRoute circuit QoS metrics. For ingress, select `DroppedInBitsPerSecond`, and for egress, select `DroppedOutBitsPerSecond`. The following screenshot shows the ExpressRoute circuit QoS metrics for dropped ingress and egress traffic.
++
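If you prefer scripting to the portal, a minimal Azure CLI sketch that queries the same metric names mentioned above is shown below. The circuit resource ID, interval, and aggregation are illustrative assumptions, not values from this article.

```bash
# Sketch only: query ExpressRoute circuit throughput and drop metrics.
# <circuit-resource-id> is a placeholder for the circuit's full Azure resource ID.
az monitor metrics list \
  --resource "<circuit-resource-id>" \
  --metric "BitsInPerSecond" "BitsOutPerSecond" \
  --interval PT5M \
  --aggregation Average

# The same pattern applies to the QoS drop metrics.
az monitor metrics list \
  --resource "<circuit-resource-id>" \
  --metric "DroppedInBitsPerSecond" "DroppedOutBitsPerSecond" \
  --interval PT5M \
  --aggregation Average
```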
+## How can I increase my circuit bandwidth?
+
+You can seamlessly increase your circuit bandwidth through the Azure portal. For more information, see [About upgrading ExpressRoute circuit bandwidth](about-upgrade-circuit-bandwidth.md).
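As a hedged sketch, the same change can be scripted with the Azure CLI; the circuit name, resource group, and target bandwidth below are placeholder assumptions, and the linked article remains the authoritative procedure.

```bash
# Sketch only: increase the circuit bandwidth (value in Mbps).
# <circuit-name> and <resource-group> are placeholders; 2000 is an example target.
az network express-route update \
  --name "<circuit-name>" \
  --resource-group "<resource-group>" \
  --bandwidth 2000
```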
+
+## What are the causes of traffic drop when the throughput is below the configured bandwidth?
+
+ExpressRoute circuit throughput is monitored at an aggregate level every few minutes, while rate limiting is enforced at a granular, millisecond level. Therefore, occasional traffic bursts exceeding the configured bandwidth might not be detected by the throughput monitoring. However, rate limiting is still enforced, and the excess traffic is dropped.
+
+## Next steps
+
+For more frequently asked questions, see [ExpressRoute FAQ](expressroute-faqs.md).
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-dotnet.md
- Title: "Quickstart: Your first .NET query"
-description: In this quickstart, you follow the steps to enable the Resource Graph NuGet packages for .NET and run your first query.
Previously updated : 01/20/2023----
-# Quickstart: Run your first Resource Graph query using .NET
-
-The first step to using Azure Resource Graph is to check that the required NuGet packages are installed. This quickstart walks you through the process of adding the packages to your .NET application.
-
-At the end of this process, you'll have added the packages to your .NET application and run your first Resource Graph query.
-
-## Prerequisites
--- [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download/dotnet)-- An Azure subscription. If you don't have an Azure subscription, create a
- [free](https://azure.microsoft.com/free/dotnet/) account before you begin.
-- An Azure service principal, including the _clientId_ and _clientSecret_. If you don't have a
- service principal for use with Resource Graph or want to create a new one, see
- [Azure management libraries for .NET authentication](/dotnet/azure/sdk/authentication#mgmt-auth).
- Skip the step to install the NuGet packages, as we'll do that in the next steps.
-
-## Create the Resource Graph project
-
-To enable .NET to query Azure Resource Graph, create a new console application and install the
-required packages.
-
-1. Create a new .NET console application named "argQuery":
-
- ```dotnetcli
- dotnet new console --name "argQuery"
- ```
-
-1. Change directories into the new project folder. Install the packages for the Azure Resource Graph and Azure Identity client libraries:
-
- ```dotnetcli
- dotnet add package Azure.ResourceManager.ResourceGraph
- dotnet add package Azure.Identity
- ```
-
-1. Replace the default `Program.cs` with the following code and save the updated file:
-
- ```csharp
- using Azure.Identity;
- using Azure.ResourceManager;
- using Azure.ResourceManager.ResourceGraph;
- using Azure.ResourceManager.ResourceGraph.Models;
-
- string strTenant = args[0];
- string strClientId = args[1];
- string strClientSecret = args[2];
- string strQuery = args[3];
-
- var client = new ArmClient(
- new ClientSecretCredential(strTenant, strClientId, strClientSecret));
- var tenant = client.GetTenants().First();
- //Console.WriteLine($"{tenant.Id} {tenant.HasData}");
- var queryContent = new ResourceQueryContent(strQuery);
- var response = tenant.GetResources(queryContent);
- var result = response.Value;
- Console.WriteLine($"Count: {result.Data.ToString()}");
- ```
-
- > [!NOTE]
- > This code creates a tenant-based query. To limit the query to a
- > [management group](../management-groups/overview.md) or subscription, set the
- > `ManagementGroups` or `Subscriptions` property on the `QueryRequest` object.
-
-1. Build and publish the `argQuery` console application:
-
- ```dotnetcli
- dotnet build
- dotnet publish -o {run-folder}
- ```
-
-## Run your first Resource Graph query
-
-With the .NET console application built and published, it's time to try out a simple
-tenant-based Resource Graph query. The query returns the first five Azure resources with the
-**Name** and **Resource Type** of each resource.
-
-In each call to `argQuery`, replace the variables with your own values:
--- `{tenantId}` - Replace with your tenant ID-- `{clientId}` - Replace with the client ID of your service principal-- `{clientSecret}` - Replace with the client secret of your service principal-
-1. Change directories to the `{run-folder}` you defined with the earlier `dotnet publish` command.
-
-1. Run your first Azure Resource Graph query using the compiled .NET console application:
-
- ```bash
- argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | limit 5"
- ```
-
- > [!NOTE]
- > As this query example does not provide a sort modifier such as `order by`, running this query
- > many times is likely to yield a different set of resources per request.
-
-1. Change the final parameter to `argQuery.exe` and change the query to `order by` the **Name**
- property:
-
- ```bash
- argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | limit 5 | order by name asc"
- ```
-
- > [!NOTE]
- > Just as with the first query, running this query multiple times is likely to yield a different
- > set of resources per request. The order of the query commands is important. In this example,
- > the `order by` comes after the `limit`. This command order first limits the query results and
- > then orders them.
-
-1. Change the final parameter to `argQuery.exe` and change the query to first `order by` the
- **Name** property and then `limit` to the top five results:
-
- ```bash
- argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | order by name asc | limit 5"
- ```
-
-When the final query is run several times, assuming that nothing in your environment is changing,
-the results returned are consistent and ordered by the **Name** property, but still limited to the
-top five results.
-
-## Clean up resources
-
-If you wish to remove the .NET console application and installed packages, you can do so by
-deleting the `argQuery` project folder.
-
-## Next steps
-
-In this quickstart, you've created a .NET console application with the required Resource Graph
-packages and run your first query. To learn more about the Resource Graph language, continue to the
-query language details page.
-
-> [!div class="nextstepaction"]
-> [Get more information about the query language](./concepts/query-language.md)
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
A test contains a test plan, which describes the steps to invoke the application
Azure Load Testing supports all communication protocols that JMeter supports, not only HTTP-based endpoints. For example, you might want to read from or write to a database or message queue in the test script.
+Azure Load Testing doesn't currently support testing frameworks other than Apache JMeter.
+ The test also specifies the configuration settings for running the load test: - [Load test parameters](./how-to-parameterize-load-tests.md), such as environment variables, secrets, and certificates.
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
By [using JMeter plugins](./how-to-use-jmeter-plugins.md) in your test script, y
With the quick test experience you can [test a single URL-based HTTP endpoint](./quickstart-create-and-run-load-test.md). By [uploading a JMeter script](how-to-create-and-run-load-test-with-jmeter-script.md), you can use all JMeter-supported communication protocols.
+Azure Load Testing doesn't currently support testing frameworks other than Apache JMeter.
+ ## Identify performance bottlenecks by using high-scale load tests Performance problems often remain undetected until an application is under load. You can start a high-scale load test in the Azure portal to learn sooner how your application behaves under stress. While the test is running, the Azure Load Testing dashboard provides a live update of the client and server-side metrics.
notification-hubs Export Modify Registrations Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/export-modify-registrations-bulk.md
This section assumes you have the following entities:
An input file contains a list of registrations serialized in XML, one per row. Using the Azure SDK, the following code example shows how to serialize the registrations and upload them to blob container: ```csharp
-private static void SerializeToBlob(BlobContainerClient container, RegistrationDescription[] descriptions)
+private static async Task SerializeToBlobAsync(BlobContainerClient container, RegistrationDescription[] descriptions)
{ StringBuilder builder = new StringBuilder(); foreach (var registrationDescription in descriptions)
private static void SerializeToBlob(BlobContainerClient container, RegistrationD
var inputBlob = container.GetBlobClient(INPUT_FILE_NAME); using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(builder.ToString()))) {
- inputBlob.UploadAsync(stream);
+ await inputBlob.UploadAsync(stream);
} } ```
static Uri GetOutputDirectoryUrl(BlobContainerClient container)
BlobSasBuilder builder = new BlobSasBuilder(BlobSasPermissions.All, DateTime.UtcNow.AddDays(1)); return container.GenerateSasUri(builder); }
-
-
-
static Uri GetInputFileUrl(BlobContainerClient container, string filePath) { Console.WriteLine(container.CanGenerateSasUri); BlobSasBuilder builder = new BlobSasBuilder(BlobSasPermissions.Read, DateTime.UtcNow.AddDays(1)); return container.GenerateSasUri(builder);- } ```
With the two input and output URLs, you can now start the batch job.
```csharp NotificationHubClient client = NotificationHubClient.CreateClientFromConnectionString(CONNECTION_STRING, HUB_NAME);
-var createTask = client.SubmitNotificationHubJobAsync(
-new NotificationHubJob {
- JobType = NotificationHubJobType.ImportCreateRegistrations,
- OutputContainerUri = outputContainerSasUri,
- ImportFileUri = inputFileSasUri
- }
-);
-createTask.Wait();
+var job = await client.SubmitNotificationHubJobAsync(
+ new NotificationHubJob {
+ JobType = NotificationHubJobType.ImportCreateRegistrations,
+ OutputContainerUri = outputContainerSasUri,
+ ImportFileUri = inputFileSasUri
+ }
+ );
-var job = createTask.Result;
long i = 10; while (i > 0 && job.Status != NotificationHubJobStatus.Completed) {
- var getJobTask = client.GetNotificationHubJobAsync(job.JobId);
- getJobTask.Wait();
- job = getJobTask.Result;
- Thread.Sleep(1000);
+ job = await client.GetNotificationHubJobAsync(job.JobId);
+ await Task.Delay(1000);
i--; } ```
The following sample code imports registrations into a notification hub.
```csharp using Microsoft.Azure.NotificationHubs;
-using Microsoft.WindowsAzure.Storage;
-using Microsoft.WindowsAzure.Storage.Blob;
-using System;
-using System.Collections.Generic;
-using System.Globalization;
-using System.IO;
-using System.Linq;
-using System.Runtime.Serialization;
+using Azure.Storage.Blobs;
+using Azure.Storage.Sas;
using System.Text;
-using System.Threading;
-using System.Threading.Tasks;
-using System.Xml;
namespace ConsoleApplication1 {
namespace ConsoleApplication1
private static string STORAGE_ACCOUNT_CONNECTIONSTRING = "connectionstring"; private static string CONTAINER_NAME = "containername";
- static void Main(string[] args)
+ static async Task Main(string[] args)
{ var descriptions = new[] {
namespace ConsoleApplication1
new MpnsRegistrationDescription(@"http://dm2.notify.live.net/throttledthirdparty/01.00/12G9Ed13dLb5RbCii5fWzpFpAgAAAAADAQAAAAQUZm52OkJCMjg1QTg1QkZDMdUxREQFBlVTTkMwMQ"), };
-// Get a reference to a container named "sample-container" and then create it
+ // Get a reference to a container named "sample-container" and then create it
BlobContainerClient container = new BlobContainerClient(STORAGE_ACCOUNT_CONNECTIONSTRING, CONTAINER_NAME);
- container.CreateIfNotExistsAsync();
+ await container.CreateIfNotExistsAsync();
- SerializeToBlob(container, descriptions);
+ await SerializeToBlobAsync(container, descriptions);
// TODO then create Sas var outputContainerSasUri = GetOutputDirectoryUrl(container);
- BlobContainerClient inputfilecontainer = new BlobContainerClient(STORAGE_ACCOUNT_CONNECTIONSTRING, STORAGE_ACCOUNT_CONNECTIONSTRING + "/" + INPUT_FILE_NAME);
+ BlobContainerClient inputcontainer = new BlobContainerClient(STORAGE_ACCOUNT_CONNECTIONSTRING, CONTAINER_NAME); // container that holds the uploaded input file
var inputFileSasUri = GetInputFileUrl(inputcontainer, INPUT_FILE_NAME); // Import this file NotificationHubClient client = NotificationHubClient.CreateClientFromConnectionString(CONNECTION_STRING, HUB_NAME);
- var createTask = client.SubmitNotificationHubJobAsync(
+ var job = await client.SubmitNotificationHubJobAsync(
new NotificationHubJob { JobType = NotificationHubJobType.ImportCreateRegistrations, OutputContainerUri = outputContainerSasUri, ImportFileUri = inputFileSasUri } );
- createTask.Wait();
- var job = createTask.Result;
long i = 10; while (i > 0 && job.Status != NotificationHubJobStatus.Completed) {
- var getJobTask = client.GetNotificationHubJobAsync(job.JobId);
- getJobTask.Wait();
- job = getJobTask.Result;
- Thread.Sleep(1000);
+ job = await client.GetNotificationHubJobAsync(job.JobId);
+ await Task.Delay(1000);
i--; } }
- private static void SerializeToBlob(BlobContainerClient container, RegistrationDescription[] descriptions)
+ private static async Task SerializeToBlobAsync(BlobContainerClient container, RegistrationDescription[] descriptions)
{ StringBuilder builder = new StringBuilder(); foreach (var registrationDescription in descriptions)
namespace ConsoleApplication1
var inputBlob = container.GetBlobClient(INPUT_FILE_NAME); using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(builder.ToString()))) {
- inputBlob.UploadAsync(stream);
+ await inputBlob.UploadAsync(stream);
} }
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
Promotion of replicas can be done in two distinct manners:
**Promote to primary server (preview)**
-This action elevates a replica to the role of the primary server. In the process, the current primary server is demoted to a replica role, swapping their roles. For a successful promotion, it's necessary to have a [virtual endpoint](#virtual-endpoints-preview) configured for both the current primary as the writer endpoint, and the replica intended for promotion as the reader endpoint. The promotion will only be successful if the targeted replica is included in the reader endpoint configuration, or if a reader virtual endpoint has yet to be established.
+This action elevates a replica to the role of the primary server. In the process, the current primary server is demoted to a replica role, swapping their roles. For a successful promotion, it's necessary to have a [virtual endpoint](#virtual-endpoints-preview) configured for both the current primary as the writer endpoint, and the replica intended for promotion as the reader endpoint. The promotion will only be successful if the targeted replica is included in the reader endpoint configuration.
The diagram below illustrates the configuration of the servers prior to the promotion and the resulting state after the promotion operation has been successfully completed.
Read replicas are treated as separate servers in terms of control plane configur
The promote operation won't carry over specific configurations and parameters. Here are some of the notable ones: - **PgBouncer**: [The built-in PgBouncer](concepts-pgbouncer.md) connection pooler's settings and status aren't replicated during the promotion process. If PgBouncer was enabled on the primary but not on the replica, it will remain disabled on the replica after promotion. Should you want PgBouncer on the newly promoted server, you must enable it either prior to or following the promotion action.-- **Geo-redundant backup storage**: Geo-backup settings aren't transferred. Since replicas can't have geo-backup enabled, the promoted primary (formerly the replica) won't have it post-promotion. The feature can only be activated at the server's creation time.
+- **Geo-redundant backup storage**: Geo-backup settings aren't transferred. Since replicas can't have geo-backup enabled, the promoted primary (formerly the replica) won't have it post-promotion. The feature can only be activated when a standard (non-replica) server is created.
- **Server Parameters**: If their values differ on the primary and read replica, they won't be changed during promotion. It's essential to note that parameters influencing shared memory size must have the same values on both the primary and replicas. This requirement is detailed in the [Server parameters](#server-parameters) section. - **Microsoft Entra authentication**: If the primary had [Microsoft Entra authentication](concepts-azure-ad-authentication.md) configured, but the replica was set up with PostgreSQL authentication, then after promotion, the replica won't automatically switch to Microsoft Entra authentication. It retains the PostgreSQL authentication. Users need to manually configure Microsoft Entra authentication on the promoted replica either before or after the promotion process. - **High Availability (HA)**: Should you require [HA](concepts-high-availability.md) after the promotion, it must be configured on the freshly promoted primary server, following the role reversal.
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table to define the packet core instance
## Collect UE usage tracking values
-If you want to configure UE usage tracking for your site, collect all the values in the following table to define the packet core instance's associated Event Hubs instance.
+If you want to configure UE usage tracking for your site, collect all the values in the following table to define the packet core instance's associated Event Hubs instance. See [Monitor UE usage with Event Hubs](ue-usage-event-hub.md) for more information.
> [!NOTE] > You must already have an [Azure Event Hubs instance](/azure/event-hubs) with an associated user assigned managed identity with the **Resource Policy Contributor** role before you can collect the information in the following table.
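A hedged Azure CLI sketch of setting up those prerequisites follows; every name, the region, and the role scope are illustrative assumptions rather than values from this article.

```bash
# Sketch only: create an Event Hubs namespace and event hub, a user-assigned managed identity,
# and grant the identity the Resource Policy Contributor role on the event hub. All names are placeholders.
az eventhubs namespace create --resource-group <rg> --name <eh-namespace> --location <region>
az eventhubs eventhub create --resource-group <rg> --namespace-name <eh-namespace> --name <eh-name>
az identity create --resource-group <rg> --name <identity-name>

principalId=$(az identity show --resource-group <rg> --name <identity-name> --query principalId -o tsv)
eventHubId=$(az eventhubs eventhub show --resource-group <rg> --namespace-name <eh-namespace> --name <eh-name> --query id -o tsv)

az role assignment create --assignee "$principalId" --role "Resource Policy Contributor" --scope "$eventHubId"
```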
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
DNS allows the translation between human-readable domain names and their associa
## Prepare your networks For each site you're deploying, do the following.
- - Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).
- - For every network where you decided not to enable NAPT (as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools)), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
+
+- Ensure you have at least one network switch with at least three ports available. You'll connect each Azure Stack Edge Pro device to the switch(es) in the same site as part of the instructions in [Order and set up your Azure Stack Edge Pro device(s)](#order-and-set-up-your-azure-stack-edge-pro-devices).
+- For every network where you decided not to enable NAPT (as described in [Allocate user equipment (UE) IP address pools](#allocate-user-equipment-ue-ip-address-pools)), configure the data network to route traffic destined for the UE IP address pools via the IP address you allocated to the packet core instance's user plane interface on the data network.
### Configure ports for local access
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
> If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource. - Ensure **AKS-HCI** is selected in the **Platform** field.
+ - If required, you can disable automatic backhauling of edge network function logs to Microsoft support using the checkbox.
:::zone pivot="ase-pro-gpu"
In this step, you'll create the mobile network site resource representing the ph
> [!NOTE] > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs), **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs), or **ASE N2/S1-MME virtual subnet** and **ASE N3/S1-U virtual subnet** (if this site will support both 4G and 5G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
-8. If you want to enable UE Metric monitoring, use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
- 9. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-gpu#collect-data-network-values) to fill out the fields. Note the following: - **ASE N6 virtual subnet** (if this site will support 5G UEs), **ASE SGi virtual subnet** (if this site will support 4G UEs), or **ASE N6/SGi virtual subnet** (if this site will support combined 4G and 5G UEs) must match the corresponding virtual network name on port 5 or 6 on your Azure Stack Edge Pro device. - If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
In this step, you'll create the mobile network site resource representing the ph
> [!NOTE] > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs), **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs), or **ASE N2/S1-MME virtual subnet** and **ASE N3/S1-U virtual subnet** (if this site will support both 4G and 5G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro device.
-8. If you want to enable UE Metric monitoring, select **Enable** from the **UE Metric monitoring** dropdown. Use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
- 9. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-2#collect-data-network-values) to fill out the fields. Note the following: - **ASE N6 virtual subnet** (if this site will support 5G UEs), **ASE SGi virtual subnet** (if this site will support 4G UEs), or **ASE N6/SGi virtual subnet** (if this site will support combined 4G and 5G UEs) must match the corresponding virtual network name on port 3 or 4 on your Azure Stack Edge Pro device. - If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
In this step, you'll create the mobile network site resource representing the ph
:::zone-end 10. Repeat the previous step for each additional data network you want to configure.
-11. If you decided you want to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
+
+8. Go to the **Diagnostics** tab. If you want to enable UE Metric monitoring, select **Enable** from the **UE Metric monitoring** dropdown. Use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
+
+1. If you decided you want to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
If you decided not to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificates for this site, you can skip this step. 1. Select **+ Add** to configure a user assigned managed identity. 1. In the **Select Managed Identity** side panel: - Select the **Subscription** from the dropdown. - Select the **Managed identity** from the dropdown.
-12. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate at this stage, you can skip this step.
+1. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate at this stage, you can skip this step.
1. Under **Provide custom HTTPS certificate?**, select **Yes**. 1. Use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
If you want to modify a packet core instance's local access configuration, follo
## Prerequisites -- If you want to make changes to the packet core configuration or access network, refer to [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) and [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to collect the new values and make sure they're in the correct format.
+- If you want to make changes to the packet core configuration or access network, refer to [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) and [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to collect the new values and make sure they're in the correct format. If you want to enable UE usage monitoring, refer to [Collect UE usage tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values).
> [!NOTE] > You can't update a packet core instance's **Technology type** or **Version** field.
The following changes will trigger the packet core to reinstall, during which yo
The following changes require you to manually perform a reinstall, during which your service will be unavailable for up to two hours, before they take effect: - Changing access network configuration.
+- Enabling [monitoring UE usage with Event Hubs](ue-usage-event-hub.md).
If you're making any of these changes to a healthy packet core instance, we recommend running this process during a maintenance window to minimize the impact on your service. Changes not listed here should not trigger a service interruption, but we recommend using a maintenance window in case of misconfiguration.
To modify the packet core and/or access network configuration:
- Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) for the top-level configuration values. - Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the configuration values under **Access network**.
- - If you want to enable UE Metric monitoring, use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
- > [!NOTE]
- > You must reinstall the packet core control pane** in order to use UE Metric monitoring if it was not already configured.
+ - If you want to enable UE usage monitoring, use the information collected in [Collect UE usage tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
1. Choose the next step: - If you've finished modifying the packet core instance, go to [Submit and verify changes](#submit-and-verify-changes). - If you want to configure a new or existing data network and attach it to the packet core instance, go to [Attach a data network](#attach-a-data-network).
private-5g-core Ue Usage Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ue-usage-event-hub.md
You can monitor UE usage based on the monitoring data generated by Azure Event H
## Configure UE usage monitoring
-UE usage monitoring can be configured during [site creation](create-a-site.md) or at a later stage by [modifying your site](modify-packet-core.md).
+UE usage monitoring can be configured during [site creation](create-a-site.md) or at a later stage by [modifying the packet core](modify-packet-core.md).
-Once Event Hubs is receiving data from your AP5GC deployment you can write an application, using SDKs [such as .NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send?tabs=passwordless%2Croles-azure-portal), to consume event data and produce useful metric data.
+Once Event Hubs is receiving data from your AP5GC deployment you can write an application, using SDKs such as [.NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send?tabs=passwordless%2Croles-azure-portal), to consume event data and produce useful metric data.
## Reported UE usage data
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
Previously, packet capture could only be performed from edge sites, requiring lo
**Date available:** December 22, 2023
-The new Edge Log Backhaul feature provides Microsoft support personnel with easy access to customer network function logs to help them troubleshoot and find root cause for customer issues.
+The new Edge Log Backhaul feature provides Microsoft support personnel with easy access to customer network function logs to help them troubleshoot and find root cause for customer issues. This is enabled by default. To disable this feature, [modify the packet core configuration](modify-packet-core.md).
## October 2023 ### Packet core 2310
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
Read the following SAP Notes and papers first:
- [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341) - [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491) - [Configure SAP HANA scale-up system replication in a Pacemaker cluster when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571)-- [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/us/media/tr-4746.pdf) - [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) ## Overview
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
A debug session is a cached indexer and skillset execution, scoped to a single d
+ An Azure Storage account, used to save session state.
-+ A **Storage Blob Data Contributor** role assignment in Azure Storage if you're using managed identities.
++ A **Storage Blob Data Contributor** role assignment in Azure Storage if you're using a system managed identity (see the sketch after these prerequisites). Otherwise, plan on using a full access connection string for the debug session connection to Azure Storage.
+ If the Azure Storage account is behind a firewall, configure it to [allow search service access](search-indexer-howto-access-ip-restricted.md).
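For the system managed identity path, a minimal sketch of the role assignment with the Azure CLI might look like this; the principal ID and storage account scope are placeholders, not values from this article.

```bash
# Sketch only: grant the search service's system-assigned identity access to blob data.
# <search-service-principal-id>, <sub-id>, <rg>, and <storage-account> are placeholders.
az role assignment create \
  --assignee "<search-service-principal-id>" \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```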
Debug sessions work with all generally available [indexer data sources](search-d
+ For the SQL API of Azure Cosmos DB, if a partitioned collection was previously non-partitioned, the debug session won't find the document.
-+ For custom skills, you can't use a *user-assigned managed identity* to connect over a private endpoint in a debug session, but a system managed identity is supported. For more information, see [Connect a search service to other Azure resources using a managed identity](search-howto-managed-identities-data-sources.md).
++ For custom skills, a user-assigned managed identity isn't supported for a debug session connection to Azure Storage. As stated in the prerequisites, you can use a system managed identity, or specify a full access connection string that includes a key. For more information, see [Connect a search service to other Azure resources using a managed identity](search-howto-managed-identities-data-sources.md). ## Create a debug session
The debug session begins by executing the indexer and skillset on the selected d
A debug session can be canceled while it's executing using the **Cancel** button. If you hit the **Cancel** button you should be able to analyze partial results.
-It is expected for a debug session to take longer to execute than the indexer since it goes through extra processing.
+It's expected for a debug session to take longer to execute than the indexer since it goes through extra processing.
## Start with errors and warnings
search Search Blob Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-blob-metadata-properties.md
- ignite-2023 Previously updated : 02/08/2023 Last updated : 01/11/2024 # Content metadata properties used in Azure AI Search
Azure AI Search supports blob indexing and SharePoint document indexing for the
## Properties by document format
-The following table summarizes processing done for each document format, and describes the metadata properties extracted by a blob indexer and the SharePoint indexer.
+The following table summarizes processing for each document format, and describes the metadata properties extracted by a blob indexer and the SharePoint indexer.
| Document format / content type | Extracted metadata | Processing details | | | | |
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
Last updated 12/12/2023
# Create an Azure AI Search service in the portal
-[**Azure AI Search**](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps.
+[**Azure AI Search**](search-what-is-azure-search.md) is an Azure resource used for adding a full text search experience to custom apps or for providing enterprise information retrieval for chat-style search solutions.
-If you have an Azure subscription, including a [trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), you can create a search service for free. Free services have limitations, but you can complete all of the quickstarts and most tutorials.
+If you have an Azure subscription, including a [trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), you can create a search service for free. Free services have limitations, but you can complete all of the quickstarts and most tutorials, except for those featuring semantic ranking (it requires a billable service).
-The easiest way to create search service is using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md#create-or-delete-a-service), [Azure CLI](search-manage-azure-cli.md#create-or-delete-a-service), the [Management REST API](search-manage-rest.md#create-or-update-a-service), an [Azure Resource Manager service template](search-get-started-arm.md), or a [Bicep file](search-get-started-bicep.md).
+The easiest way to create a service is using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md#create-or-delete-a-service), [Azure CLI](search-manage-azure-cli.md#create-or-delete-a-service), the [Management REST API](search-manage-rest.md#create-or-update-a-service), an [Azure Resource Manager service template](search-get-started-arm.md), a [Bicep file](search-get-started-bicep.md), or [Terraform](search-get-started-terraform.md).
[![Animated GIF](./media/search-create-service-portal/AnimatedGif-AzureSearch-small.gif)](./media/search-create-service-portal/AnimatedGif-AzureSearch.gif#lightbox)
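As an illustration of the scripted alternatives listed above, a minimal Azure CLI sketch follows; the service name, resource group, and tier are placeholder assumptions, and the portal steps in this article remain the primary path.

```bash
# Sketch only: create a search service with the Azure CLI instead of the portal.
# The resource group must already exist; all names are placeholders.
az search service create \
  --name <search-service-name> \
  --resource-group <resource-group> \
  --sku Basic \
  --partition-count 1 \
  --replica-count 1
```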
The following service properties are fixed for the lifetime of the service. Beca
## Subscribe (free or paid)
-To try search for free, [open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for non-production applications. If you decide you would like to continue using the service for a production application, create a new search service on a billable tier.
+To try search for free, [open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for non-production applications. If you want to move forward with a production application, create a new search service on a billable tier.
-Alternatively, you can use free credits to try out paid Azure services, which means you can create your search service at **Basic** or above to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
+Alternatively, you can use free credits to try out paid Azure services. With this approach, you can create your search service at **Basic** or above to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
Paid (or billable) search occurs when you choose a billable tier (Basic or above) when creating the resource on a billable Azure subscription.
Although most customers use just one service, service redundancy might be necess
+ [Business continuity and disaster recovery (BCDR)](../availability-zones/cross-region-replication-azure.md). Azure AI Search doesn't provide instant failover in the event of an outage.
-+ [Multi-tenant architectures](search-modeling-multitenant-saas-applications.md) sometimes call for two or more services.
++ [Multitenant architectures](search-modeling-multitenant-saas-applications.md) sometimes call for two or more services. + Globally deployed applications might require search services in each geography to minimize latency.
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
Title: Data sources gallery
-description: Lists all of the supported data sources for importing into an Azure AI Search index.
+description: Lists data source connectors for importing into an Azure AI Search index.
- ignite-2023 layout: LandingPage Previously updated : 10/17/2022 Last updated : 01/11/2024 # Data sources gallery
-Find a data connector from Microsoft or a partner to simplify data ingestion into a search index. This article has the following sections:
+Find a data connector from Microsoft or a partner that works with [an indexer](search-indexer-overview.md) to simplify data ingestion into a search index. This article has the following sections:
+ [Generally available data sources by Azure AI Search](#ga) + [Preview data sources by Azure AI Search](#preview)
BA Insight's SharePoint Connector allows you to connect to SharePoint 2019, fetc
by [Accenture](https://www.accenture.com)
-The SharePoint connector will crawl content from any SharePoint site collection URL. The connector will retrieve Sites, Lists, Folders, List Items and Attachments, as well as other pages (in .aspx format). Supports SharePoint running in the Microsoft O365 offering.
+The SharePoint connector will crawl content from any SharePoint site collection URL. The connector will retrieve Sites, Lists, Folders, List Items and Attachments, as well as other pages (in .aspx format). Supports SharePoint running in the Microsoft 365 offering.
[More details](https://contentanalytics.digital.accenture.com/display/aspire40/SharePoint+Online+Connector)
search Search Howto Move Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-move-across-regions.md
Title: How to move your service resource across regions
+ Title: Move a search service across regions
-description: This article will show you how to move your Azure AI Search resources from one region to another in the Azure cloud.
+description: Learn how to move your Azure AI Search resources from one region to another in the Azure cloud.
- subject-moving-resources - ignite-2023 Previously updated : 01/30/2023 Last updated : 01/11/2024 # Move your Azure AI Search service to another Azure region
-Occasionally, customers ask about moving a search service to another region. Currently, there is no built-in mechanism or tooling to help with that task, but this article can help you understand the manual steps for recreating indexes and other objects on a new search service in a different region.
+Occasionally, customers ask about moving a search service to another region. Currently, there's no built-in mechanism or tooling to help with that task, but this article can help you understand the manual steps for recreating indexes and other objects on a new search service in a different region.
> [!NOTE] > In the Azure portal, all services have an **Export template** command. In the case of Azure AI Search, this command produces a basic definition of a service (name, location, tier, replica, and partition count), but does not recognize the content of your service, nor does it carry over keys, roles, or logs. Although the command exists, we don't recommend using it for moving a search service.
Occasionally, customers ask about moving a search service to another region. Cur
1. Identify dependencies and related services to understand the full impact of relocating a service, in case you need to move more than just Azure AI Search.
- Azure Storage is used for logging, creating a knowledge store, and is a commonly used external data source for AI enrichment and indexing. Azure AI services is a dependency in AI enrichment. Both Azure AI services and your search service are required to be in the same region if you are using AI enrichment.
+ Azure Storage is used for logging, creating a knowledge store, and is a commonly used external data source for AI enrichment and indexing. Azure AI services are used to power built-in skills during AI enrichment. Both Azure AI services and your search service are required to be in the same region if you're using AI enrichment.
1. Create an inventory of all objects on the service so that you know what to move: indexes, synonym maps, indexers, data sources, skillsets. If you enabled logging, create and archive any reports you might need for a historical record.
-1. Check pricing and availability in the new region to ensure availability of Azure AI Search plus any related services in the new region. The majority of features are available in all regions, but some preview features have restricted availability.
+1. Check pricing and availability in the new region to ensure availability of Azure AI Search plus any related services in the new region. Most features are available in all regions, but some preview features have restricted availability.
-1. Create a service in the new region and republish from source code any existing indexes, synonym maps, indexers, data sources, and skillsets. Remember that service names must be unique so you cannot reuse the existing name. Check each skillset to see if connections to Azure AI services are still valid in terms of the same-region requirement. Also, if knowledge stores are created, check the connection strings for Azure Storage if you are using a different service.
+1. Create a service in the new region and republish from source code any existing indexes, synonym maps, indexers, data sources, and skillsets. Remember that service names must be unique so you can't reuse the existing name. Check each skillset to see if connections to Azure AI services are still valid in terms of the same-region requirement. Also, if knowledge stores are created, check the connection strings for Azure Storage if you're using a different service.
1. Reload indexes and knowledge stores, if applicable. You'll either use application code to push JSON data into an index, or rerun indexers to pull documents in from external sources.
-1. Enable logging, and if you are using them, re-create security roles.
+1. Enable logging, and if you're using them, re-create security roles.
1. Update client applications and test suites to use the new service name and API keys, and test all applications.
search Search Indexer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-troubleshooting.md
Title: Indexer troubleshooting guidance
+ Title: Indexer troubleshooting
-description: This article provides indexer problem and resolution guidance for cases when no error messages are returned from the service search.
+description: Provides indexer problem and resolution guidance for cases when no error messages are returned from the service search.
- ignite-2023 Previously updated : 04/04/2023 Last updated : 01/11/2024 # Indexer troubleshooting guidance for Azure AI Search
-Occasionally, indexers run into problems and there is no error to help with diagnosis. This article covers problems and potential resolutions when indexer results are unexpected and there is limited information to go on. If you have an error to investigate, see [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md) instead.
+Occasionally, indexers run into problems that don't produce errors or that occur on other Azure services, such as during authentication or when connecting. This article focuses on troubleshooting indexer problems when there are no messages to guide you. It also provides troubleshooting for errors that come from non-search resources used during indexing.
+> [!NOTE]
+> If you have an Azure AI Search error to investigate, see [Troubleshooting common indexer errors and warnings](cognitive-search-common-errors-warnings.md) instead.
<a name="connection-errors"></a> ## Troubleshoot connections to restricted resources
-For data sources that are secured by Azure network security mechanisms, indexers have a limited set of options for making the connection. Currently, indexers can access restricted data sources [behind an IP firewall](search-indexer-howto-access-ip-restricted.md) or on a virtual network through a [private endpoint](search-indexer-howto-access-private.md).
+For data sources under Azure network security, indexers are limited in how they make the connection. Currently, indexers can access restricted data sources [behind an IP firewall](search-indexer-howto-access-ip-restricted.md) or on a virtual network through a [private endpoint](search-indexer-howto-access-private.md) using a shared private link.
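As a rough sketch, a shared private link is created on the search service through the Management REST API. The resource names and API version below are placeholders, and `groupId` depends on the target resource type:

```http
PUT https://management.azure.com/subscriptions/{{subscription-id}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources/blob-shared-link?api-version=2023-11-01
Content-Type: application/json

{
  "properties": {
    "privateLinkResourceId": "/subscriptions/{{subscription-id}}/resourceGroups/{{resource-group}}/providers/Microsoft.Storage/storageAccounts/{{storage-account}}",
    "groupId": "blob",
    "requestMessage": "Search indexer access to blob data"
  }
}
```

The owner of the target resource then approves the private endpoint connection before the indexer can connect.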
### Firewall rules
-Azure Storage, Azure Cosmos DB and Azure SQL provide a configurable firewall. There's no specific error message when the firewall is enabled. Typically, firewall errors are generic. Some common errors include:
+Azure Storage, Azure Cosmos DB, and Azure SQL provide a configurable firewall. There's no specific error message when the firewall blocks a request. Typically, firewall errors are generic. Some common errors include:
+ * `The remote server returned an error: (403) Forbidden` * `This request is not authorized to perform this operation` * `Credentials provided in the connection string are invalid or have expired` - There are two options for allowing indexers to access these resources in such an instance:
-* Disable the firewall, by allowing access from **All Networks** (if feasible).
+* Configure an inbound rule for the IP address of your search service and the IP address range of the `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags). For details on configuring IP address range restrictions for each data source type, see the following links:
-* Alternatively, you can allow access for the IP address of your search service and the IP address range of `AzureCognitiveSearch` [service tag](../virtual-network/service-tags-overview.md#available-service-tags) in the firewall rules of your resource (IP address range restriction).
+ * [Azure Storage](../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range)
+ * [Azure Cosmos DB](../cosmos-db/how-to-configure-firewall.md)
+ * [Azure SQL](/azure/azure-sql/database/firewall-configure#create-and-manage-ip-firewall-rules)
-Details for configuring IP address range restrictions for each data source type can be found from the following links:
+* As a last resort or as a temporary measure, disable the firewall by allowing access from **All Networks**.
-* [Azure Storage](../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range)
+**Limitation**: IP address range restrictions only work if your search service and your storage account are in different regions.
-* [Azure Cosmos DB](../storage/common/storage-network-security.md#grant-access-from-an-internet-ip-range)
+In addition to data retrieval, indexers also send outbound requests through skillsets and [custom skills](cognitive-search-custom-skill-web-api.md). For custom skills based on an Azure function, be aware that Azure functions also have [IP address restrictions](/azure/azure-functions/ip-addresses#ip-address-restrictions). The list of IP addresses to allow through for custom skill execution includes the IP address of your search service and the IP address range of the `AzureCognitiveSearch` service tag.
-* [Azure SQL](/azure/azure-sql/database/firewall-configure#create-and-manage-ip-firewall-rules)
+### Network security group (NSG) rules
-**Limitation**: IP address range restrictions only work if your search service, and your storage account are in different regions.
+When an indexer accesses data on a SQL managed instance, or when an Azure VM is used as the web service URI for a [custom skill](cognitive-search-custom-skill-web-api.md), the network security group determines whether requests are allowed in.
-Azure functions (that could be used as a [Custom Web Api skill](cognitive-search-custom-skill-web-api.md)) also support [IP address restrictions](../azure-functions/ip-addresses.md#ip-address-restrictions). The list of IP addresses to configure would be the IP address of your search service and the IP address range of `AzureCognitiveSearch` service tag.
+For external resources residing on a virtual network, [configure inbound NSG rules](/azure/virtual-network/manage-network-security-group#work-with-security-rules) for the `AzureCognitiveSearch` service tag.
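As an illustrative sketch, an inbound NSG rule for the service tag might look like the following ARM fragment. The rule name, priority, and destination port (1433 for SQL Server on an Azure VM) are assumptions you adapt to your resource:

```json
{
  "name": "allow-azure-cognitive-search",
  "properties": {
    "priority": 200,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "AzureCognitiveSearch",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "VirtualNetwork",
    "destinationPortRange": "1433"
  }
}
```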
For more information about connecting to a virtual machine, see [Configure a connection to SQL Server on an Azure VM](search-howto-connecting-azure-sql-iaas-to-azure-search-using-indexers.md).
-### Configure network security group (NSG) rules
-
-When accessing data in a SQL managed instance, or when an Azure VM is used as the web service URI for a [Custom Web Api skill](cognitive-search-custom-skill-web-api.md), customers need not be concerned with specific IP addresses.
-
-In such cases, the Azure VM, or the SQL managed instance can be configured to reside within a virtual network. Then a network security group can be configured to filter the type of network traffic that can flow in and out of the virtual network subnets and network interfaces.
-
-The `AzureCognitiveSearch` service tag can be directly used in the inbound [NSG rules](../virtual-network/manage-network-security-group.md#work-with-security-rules) without needing to look up its IP address range.
-
-More details for accessing data in a SQL managed instance are outlined [here](search-howto-connecting-azure-sql-mi-to-azure-search-using-indexers.md).
- ### Network errors Usually, network errors are generic. Some common errors include:+ * `A network-related or instance-specific error occurred while establishing a connection to the server` * `The server was not found or was not accessible` * `Verify that the instance name is correct and that the source is configured to allow remote connections`
-When you are receiving any of those errors:
+When you receive any of those errors:
* Make sure your source is accessible by trying to connect to it directly and not through the search service
-* Check your source in the Azure portal for any current errors or outages
+* Check your resource in the Azure portal for any current errors or outages
* Check for any network outages in [Azure Status](https://azure.status.microsoft/status)
-* Check you are using public DNS for name resolution and not an [Azure Private DNS](../dns/private-dns-overview.md)
-
+* Verify you're using a public DNS for name resolution and not an [Azure Private DNS](/azure/dns/private-dns-overview)
## Azure SQL Database serverless indexing (error code 40613) If your SQL database is on a [serverless compute tier](/azure/azure-sql/database/serverless-tier-overview), make sure that the database is running (and not paused) when the indexer connects to it.
-If the database is paused, the first login from your search service is expected to auto-resume the database, but returning an error stating that the database is unavailable with error code 40613. After the database is running, retry the login to establish connectivity.
+If the database is paused, the first sign-in from your search service is expected to auto-resume the database, but the connection attempt itself can fail with an error stating that the database is unavailable, giving error code 40613. After the database is running, retry the sign-in to establish connectivity.
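For example, after the database resumes, you can retry the connection on demand by rerunning the indexer with the Run Indexer call (placeholders follow the conventions used elsewhere in this article):

```http
POST https://[service name].search.windows.net/indexers/[indexer name]/run?api-version=2023-11-01
api-key: [admin key]
```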
<a name='azure-active-directory-conditional-access-policies'></a> ## Microsoft Entra Conditional Access policies
-When creating a SharePoint indexer, you will go through a step that requires you to sign in to your Microsoft Entra app after providing a device code. If you receive a message that says `"Your sign-in was successful but your admin requires the device requesting access to be managed"` the indexer is likely being blocked from accessing the SharePoint document library due to a [Conditional Access](../active-directory/conditional-access/overview.md) policy.
+When you create a SharePoint indexer, there's a step requiring you to sign in to your Microsoft Entra app after providing a device code. If you receive a message that says `"Your sign-in was successful but your admin requires the device requesting access to be managed"`, the indexer is probably blocked from the SharePoint document library by a [Conditional Access](../active-directory/conditional-access/overview.md) policy.
+
+To update the policy and allow indexer access to the document library:
-To update the policy to allow the indexer access to the document library, follow the below steps:
+1. Open the Azure portal and search for **Microsoft Entra Conditional Access**.
-1. Open the Azure portal and search **Microsoft Entra Conditional Access**, then select **Policies** on the left menu. If you don't have access to view this page, you need to either find someone who has access or get access.
+1. Select **Policies** on the left menu. If you don't have access to view this page, you need to either find someone who has access or get access.
1. Determine which policy is blocking the SharePoint indexer from accessing the document library. The policy that might be blocking the indexer includes the user account that you used to authenticate during the indexer creation step in the **Users and groups** section. The policy also might have **Conditions** that:+ * Restrict **Windows** platforms. * Restrict **Mobile apps and desktop clients**. * Have **Device state** configured to **Yes**.
-1. Once you've confirmed there is a policy that is blocking the indexer, you next need to make an exemption for the indexer. Retrieve the search service IP address.
+1. Once you've confirmed which policy is blocking the indexer, make an exemption for the indexer. Start by retrieving the search service IP address.
- 1. Obtain the fully qualified domain name (FQDN) of your search service. The FQDN looks like `<search-service-name>.search.windows.net`. You can find out the FQDN by looking up your search service on the Azure portal.
+ First, obtain the fully qualified domain name (FQDN) of your search service. The FQDN looks like `<your-search-service-name>.search.windows.net`. You can find the FQDN in the Azure portal.
- ![Obtain service FQDN](media\search-indexer-howto-secure-access\search-service-portal.png "Obtain service FQDN")
+ ![Obtain service FQDN](media\search-indexer-howto-secure-access\search-service-portal.png "Obtain service FQDN")
- The IP address of the search service can be obtained by performing a `nslookup` (or a `ping`) of the FQDN. In the following example, you would add "150.0.0.1" to an inbound rule on the Azure Storage firewall. It might take up to 15 minutes after the firewall settings have been updated for the search service indexer to be able to access the Azure Storage account.
+ Now that you have the FQDN, get the IP address of the search service by performing a `nslookup` (or a `ping`) of the FQDN. In the following example, you would add "150.0.0.1" to an inbound rule on the Azure Storage firewall. It might take up to 15 minutes after the firewall settings have been updated for the search service indexer to be able to access the Azure Storage account.
```azurepowershell- nslookup contoso.search.windows.net Server: server.example.org Address: 10.50.10.50
To update the policy to allow the indexer access to the document library, follow
1. Get the IP address ranges for the indexer execution environment for your region.
- Extra IP addresses are used for requests that originate from the indexer's [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment). You can get this IP address range from the service tag.
+ Extra IP addresses are used for requests that originate from the indexer's [multitenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment). You can get this IP address range from the service tag.
The IP address ranges for the `AzureCognitiveSearch` service tag can be either obtained via the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or the [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
- For this walkthrough, assuming the search service is the Azure Public cloud, the [Azure Public JSON file](https://www.microsoft.com/download/details.aspx?id=56519) should be downloaded.
+ For this exercise, assuming the search service is the Azure Public cloud, the [Azure Public JSON file](https://www.microsoft.com/download/details.aspx?id=56519) should be downloaded.
![Download JSON file](media\search-indexer-troubleshooting\service-tag.png "Download JSON file")
- From the JSON file, assuming the search service is in West Central US, the list of IP addresses for the multi-tenant indexer execution environment are listed below.
+ From the JSON file, assuming the search service is in West Central US, the list of IP addresses for the multitenant indexer execution environment are listed below.
```json {
To update the policy to allow the indexer access to the document library, follow
``` 1. Back on the Conditional Access page in Azure portal, select **Named locations** from the menu on the left, then select **+ IP ranges location**. Give your new named location a name and add the IP ranges for your search service and indexer execution environments that you collected in the last two steps.
- * For your search service IP address, you may need to add "/32" to the end of the IP address since it only accepts valid IP ranges.
+ * For your search service IP address, you might need to add "/32" to the end of the IP address since it only accepts valid IP ranges.
* Remember that for the indexer execution environment IP ranges, you only need to add the IP ranges for the region that your search service is in.
-1. Exclude the new Named location from the policy.
+1. Exclude the new Named location from the policy:
+ 1. Select **Policies** on the left menu. 1. Select the policy that is blocking the indexer. 1. Select **Conditions**.
To update the policy to allow the indexer access to the document library, follow
1. Wait a few minutes for the policy to update and enforce the new policy rules.
-1. Attempt to create the indexer again
+1. Attempt to create the indexer again:
+ 1. Send an update request for the data source object that you created. 1. Resend the indexer create request. Use the new code to sign in, then send another indexer creation request. ## Indexing unsupported document types
-If you are indexing content from Azure Blob Storage, and the container includes blobs of an [unsupported content type](search-howto-indexing-azure-blob-storage.md#SupportedFormats), the indexer skips that document. In other cases, there may be problems with individual documents.
+If you're indexing content from Azure Blob Storage, and the container includes blobs of an [unsupported content type](search-howto-indexing-azure-blob-storage.md#SupportedFormats), the indexer skips that document. In other cases, there might be problems with individual documents.
-You can [set configuration options](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) to allow indexer processing to continue in the event of problems with individual documents.
+In this situation, you can [set configuration options](search-howto-indexing-azure-blob-storage.md#DealingWithErrors) to allow indexer processing to continue in the event of problems with individual documents.
```http
-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30
+PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2023-11-01
Content-Type: application/json api-key: [admin key]
Indexers extract documents or rows from an external [data source](/rest/api/sear
* The document was updated after the indexer was run. If your indexer is on a [schedule](/rest/api/searchservice/create-indexer#indexer-schedule), it eventually reruns and picks up the document. * The indexer timed out before the document could be ingested. There are [maximum processing time limits](search-limits-quotas-capacity.md#indexer-limits) after which no documents are processed. You can check indexer status in the portal or by calling [Get Indexer Status (REST API)](/rest/api/searchservice/get-indexer-status). * [Field mappings](/rest/api/searchservice/create-indexer#fieldmappings) or [AI enrichment](./cognitive-search-concept-intro.md) have changed the document and its articulation in the search index is different from what you expect.
-* [Change tracking](/rest/api/searchservice/create-data-source#data-change-detection-policies) values are erroneous or prerequisites are missing. If your high watermark value is a date set to a future time, then any documents that have a date less than this are skipped by the indexer. You can understand your indexer's change tracking state using the 'initialTrackingState' and 'finalTrackingState' fields in the [indexer status](/rest/api/searchservice/get-indexer-status#indexer-execution-result). Indexers for Azure SQL and MySQL must have an index on the high water mark column of the source table, or queries used by the indexer may time out.
+* [Change tracking](/rest/api/searchservice/create-data-source#data-change-detection-policies) values are erroneous or prerequisites are missing. If your high water mark value is a date set to a future time, then any documents that have an earlier date are skipped by the indexer. You can determine your indexer's change tracking state using the 'initialTrackingState' and 'finalTrackingState' fields in the [indexer status](/rest/api/searchservice/get-indexer-status#indexer-execution-result). Indexers for Azure SQL and MySQL must have an index on the high water mark column of the source table, or queries used by the indexer might time out. A sketch of a high water mark policy follows the tip below.
> [!TIP] > If documents are missing, check the [query](/rest/api/searchservice/search-documents) you are using to make sure it isn't excluding the document in question. To query for a specific document, use the [Lookup Document REST API](/rest/api/searchservice/lookup-document).
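To illustrate the change tracking prerequisite, here's a sketch of a data source with a high water mark policy. The names and column are hypothetical:

```json
{
  "name": "hotels-sql-ds",
  "type": "azuresql",
  "credentials": { "connectionString": "<connection string>" },
  "container": { "name": "Hotels" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    "highWaterMarkColumnName": "LastUpdated"
  }
}
```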
api-key: [admin key]
Azure AI Search has an implicit dependency on Azure Cosmos DB indexing. If you turn off automatic indexing in Azure Cosmos DB, Azure AI Search returns a successful state, but fails to index container contents. For instructions on how to check settings and turn on indexing, see [Manage indexing in Azure Cosmos DB](../cosmos-db/how-to-manage-indexing-policy.md#use-the-azure-portal).
+## Document count discrepancy between the data source and index
-## Indexer reflects a different document count than data source or index
-
-Indexer may show a different document count than either the data source, the index or count in your code, depending on specific circumstances. Here are some possible causes of why this behavior may occur:
+An indexer might show a different document count than either the data source, the index itself, or count in your code. Here are some possible reasons why this behavior can occur:
-- The indexer has a Deleted Document Policy. The deleted documents get counted on the indexer end if they are indexed before they get deleted.-- If the ID column in the data source is not unique. This applies to data sources that have the concept of columns, such as Azure Cosmos DB.-- If the data source definition has a different query than the one you are using to estimate the number of records. In example, in your data base you are querying all your data base record count, while in the data source definition query you may be selecting just a subset of records to index.-- The counts are being checked in different intervals for each component of the pipeline: data source, indexer and index.-- The index may take some minutes to show the real document count.
+- The index can lag in showing the real document count, especially in the portal.
+- The indexer has a Deleted Document Policy. The deleted documents get counted by the indexer if the documents are indexed before they get deleted.
+- The ID column in the data source isn't unique. This applies to data sources that have the concept of columns, such as Azure Cosmos DB.
+- The data source definition has a different query than the one you're using to estimate the number of records. For example, in your database, you're counting all of the records, while the data source definition query selects just a subset of records to index.
+- The counts are being checked at different intervals for each component of the pipeline: data source, indexer and index.
- The data source has a file that's mapped to many documents. This condition can occur when [indexing blobs](search-howto-index-json-blobs.md) and "parsingMode" is set to **`jsonArray`** or **`jsonLines`**.-- Due to [documents processed multiple times](#documents-processed-multiple-times).
-
## Documents processed multiple times
-Indexers use a conservative buffering strategy to ensure that every new and changed document in the data source is picked up during indexing. In certain situations, these buffers can overlap, causing an indexer to index a document two or more times resulting in the processed documents count to be more than actual number of documents in the data source. This behavior does **not** affect the data stored in the index, such as duplicating documents, only that it may take longer to reach eventual consistency. This condition can be especially prevalent if any of the following criteria are true:
+Indexers use a conservative buffering strategy to ensure that every new and changed document in the data source is picked up during indexing. In certain situations, these buffers can overlap, causing an indexer to index a document two or more times, resulting in a processed documents count that's higher than the actual number of documents in the data source. This behavior does **not** affect the data stored in the index (documents aren't duplicated); it only means that it can take longer to reach eventual consistency. This condition is especially prevalent if any of the following criteria are true:
- On-demand indexer requests are issued in quick succession - The data source's topology includes multiple replicas and partitions (one such example is discussed [here](../cosmos-db/consistency-levels.md)) - The data source is an Azure SQL database and the column chosen as "high water mark" is of type `datetime2`
-Indexers are not intended to be invoked multiple times in quick succession. If you need updates quickly, the supported approach is to push updates to the index while simultaneously updating the data source. For on-demand processing, we recommend that you pace your requests in five-minute intervals or more, and run the indexer on a schedule.
+Indexers aren't intended to be invoked multiple times in quick succession. If you need updates quickly, the supported approach is to push updates to the index while simultaneously updating the data source. For on-demand processing, we recommend that you pace your requests in five-minute intervals or more, and run the indexer on a schedule.
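For example, a scheduled indexer carries a `schedule` property in its definition. The names and the two-hour interval below are illustrative:

```json
{
  "name": "hotels-indexer",
  "dataSourceName": "hotels-sql-ds",
  "targetIndexName": "hotels",
  "schedule": {
    "interval": "PT2H",
    "startTime": "2024-01-01T00:00:00Z"
  }
}
```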
### Example of duplicate document processing with 30 second buffer
Conditions under which a document is processed twice is explained in the followi
| 00:01:42 | Indexer processes `doc2` for the fourth time | 00:01:32 | | | 00:01:43 | Indexer ends | 00:01:40 | Notice this indexer execution started more than 30 seconds after the last write to the data source and also processed `doc2`. This is the expected behavior because if all indexer executions before 00:01:35 are eliminated, this becomes the first and only execution to process `doc1` and `doc2`. |
-In practice, this scenario only happens when on-demand indexers are manually invoked within minutes of each other, for certain data sources. It may result in mismatched numbers (like the indexer processed 345 documents total according to the indexer execution stats, but there are 340 documents in the data source and index) or potentially increased billing if you are running the same skills for the same document multiple times. Running an indexer using a schedule is the preferred recommendation.
+In practice, this scenario only happens when on-demand indexers are manually invoked within minutes of each other, for certain data sources. It can result in mismatched numbers (like the indexer processed 345 documents total according to the indexer execution stats, but there are 340 documents in the data source and index) or potentially increased billing if you're running the same skills for the same document multiple times. Running the indexer on a schedule is the recommended approach.
## Indexing documents with sensitivity labels
search Search Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-language-support.md
- ignite-2023 Previously updated : 01/18/2023 Last updated : 01/11/2024 # Create an index for multiple languages in Azure AI Search
-A multilingual search application is one that provides a search experience in the user's own language. [Language support](index-add-language-analyzers.md#supported-language-analyzers) is enabled through a language analyzer assigned to string field. Azure AI Search supports Microsoft and Lucene analyzers. The language analyzer determines the linguistic rules by which content is tokenized. By default, the search engine uses Standard Lucene, which is language agnostic. If testing shows that the default analyzer is insufficient, replace it with a language analyzer.
+If you have strings in multiple languages, you can attach [language analyzers](index-add-language-analyzers.md#supported-language-analyzers) that analyze strings using linguistic rules of a specific language during indexing and query execution. With a language analyzer, you get better handling of character variations, punctuation, and word root forms.
-In Azure AI Search, the two patterns for supporting a multi-lingual audience include:
+Azure AI Search supports Microsoft and Lucene analyzers. By default, the search engine uses Standard Lucene, which is language agnostic. If testing indicates that the default analyzer is insufficient, replace it with a language analyzer.
+
+In Azure AI Search, the two patterns for supporting multiple languages include:
+ Create language-specific indexes where all of the alphanumeric content is in the same language, and all searchable string fields are attributed to use the same [language analyzer](index-add-language-analyzers.md). + Create a blended index with language-specific versions of each field (for example, description_en, description_fr, description_ko), and then constrain full text search to just those fields at query time. This approach is useful for scenarios where language variants are only needed on a few fields, like a description.
-This article focuses on best practices for defining and querying language specific fields in a blended index. The steps you'll implement include:
+This article focuses on steps and best practices for configuring and querying language-specific fields in a blended index:
> [!div class="checklist"]
-> * Define a string field for each language variant.
-> * Set a language analyzer on each field.
-> * On the query request, set the `searchFields` parameter to specific fields, and then use `select` to return just those fields that have compatible content.
+> + Define a string field for each language variant.
+> + Set a language analyzer on each field.
+> + On the query request, set the `searchFields` parameter to specific fields, and then use `select` to return just those fields that have compatible content.
+
+> [!NOTE]
+> If you're using large language models in a retrieval augmented generation (RAG) pattern, you can engineer the prompt to return translated strings. That scenario is out of scope for this article.
## Prerequisites
-Language analysis applies to fields of type `Edm.String` that are `searchable`, and that contain localized text. If you also need text translation, review the next section to see if AI enrichment fits your scenario.
+Language analysis applies to fields of type `Edm.String` that are `searchable`, and that contain localized text. If you also need text translation, review the next section to see if AI enrichment meets your needs.
-Non-string fields and non-searchable string fields don't undergo lexical analysis and aren't tokenized. Instead, they are stored and returned verbatim.
+Non-string fields and non-searchable string fields don't undergo lexical analysis and aren't tokenized. Instead, they're stored and returned verbatim.
## Add text translation
-This article assumes you have translated strings in place. If that's not the case, you can attach Azure AI services to an [enrichment pipeline](cognitive-search-concept-intro.md), invoking text translation during data ingestion. Text translation takes a dependency on the indexer feature and Azure AI services, but all setup is done within Azure AI Search.
+This article assumes translated strings already exist. If that's not the case, you can attach Azure AI services to an [enrichment pipeline](cognitive-search-concept-intro.md), invoking text translation during indexing. Text translation takes a dependency on the indexer feature and Azure AI services, but all setup is done within Azure AI Search.
To add text translation, follow these steps:
To add text translation, follow these steps:
In Azure AI Search, queries target a single index. Developers who want to provide language-specific strings in a single search experience typically define dedicated fields to store the values: one field for English strings, one for French, and so on.
-The "analyzer" property on a field definition is used to set the [language analyzer](index-add-language-analyzers.md). It will be used for both indexing and query execution.
+The `analyzer` property on a field definition is used to set the [language analyzer](index-add-language-analyzers.md). It's used for both indexing and query execution.
```JSON {
The "analyzer" property on a field definition is used to set the [language analy
"retrievable": true, "searchable": true, "analyzer": "fr.microsoft"
- },
+ }
+ ]
+}
``` ## Build and load an index
Parameters on the query are used to limit search to specific fields and then tri
| Parameters | Purpose | |--|--|
-| **searchFields** | Limits full text search to the list of named fields. |
-| **$select** | Trims the response to include only the fields you specify. By default, all retrievable fields are returned. The **$select** parameter lets you choose which ones to return. |
+| `searchFields` | Limits full text search to the list of named fields. |
+| `select` | Trims the response to include only the fields you specify. By default, all retrievable fields are returned. The `select` parameter lets you choose which ones to return. |
-Given a goal of constraining search to fields containing French strings, you would use **searchFields** to target the query at fields containing strings in that language.
+Given a goal of constraining search to fields containing French strings, you would use `searchFields` to target the query at fields containing strings in that language.
-Specifying the analyzer on a query request isn't necessary. A language analyzer on the field definition will always be used during query processing. For queries that specify multiple fields invoking different language analyzers, the terms or phrases will be processed independently by the assigned analyzers for each field.
+Specifying the analyzer on a query request isn't necessary. A language analyzer on the field definition determines text analysis during query execution. For queries that specify multiple fields, each invoking different language analyzers, the terms or phrases are processed independently by the assigned analyzer for each field.
-By default, a search returns all fields that are marked as retrievable. As such, you might want to exclude fields that don't conform to the language-specific search experience you want to provide. Specifically, if you limited search to a field with French strings, you probably want to exclude fields with English strings from your results. Using the **$select** query parameter gives you control over which fields are returned to the calling application.
+By default, a search returns all fields that are marked as retrievable. As such, you might want to exclude fields that don't conform to the language-specific search experience you want to provide. Specifically, if you limited search to a field with French strings, you probably want to exclude fields with English strings from your results. Using the `select` query parameter gives you control over which fields are returned to the calling application.
#### Example in REST
private static void RunQueries(SearchClient srchclient)
## Boost language-specific fields
-Sometimes the language of the agent issuing a query isn't known, in which case the query can be issued against all fields simultaneously. IA preference for results in a certain language can be defined using [scoring profiles](index-add-scoring-profiles.md). In the example below, matches found in the description in French will be scored higher relative to matches in other languages:
+Sometimes the language of the agent issuing a query isn't known, in which case the query can be issued against all fields simultaneously. A preference for results in a certain language can be defined using [scoring profiles](index-add-scoring-profiles.md). In the example below, matches found in the French description are scored higher relative to matches in other languages:
```JSON "scoringProfiles": [
Sometimes the language of the agent issuing a query isn't known, in which case t
You would then include the scoring profile in the search request: ```http
-POST /indexes/hotels/docs/search?api-version=2020-06-30
+POST /indexes/hotels/docs/search?api-version=2023-11-01
{ "search": "pets allowed", "searchFields": "Tags, Description_fr",
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md
- ignite-2023 Previously updated : 12/01/2022 Last updated : 01/11/2024 # Plan and manage costs of an Azure AI Search service
-This article explains the billing model and billable events of Azure AI Search, and provides direction for managing the costs.
+This article explains the billing model and billable events of Azure AI Search, and provides guidance for managing the costs.
As a first step, estimate your baseline costs by using the Azure pricing calculator. Alternatively, estimated costs and tier comparisons can also be found in the [Select a pricing tier](search-create-service-portal.md#choose-a-tier) page when creating a service.
Billing is based on capacity (SUs) and the costs of running premium features, su
<sup>1</sup> Applies only if you use or enable the feature.
-<sup>2</sup> In an [indexer configuration](/rest/api/searchservice/create-indexer#indexer-parameters), "imageAction" is the parameter that triggers image extraction. If "imageAction" is set to "none" (the default), you won't be charged for image extraction. Costs are incurred when "imageAction" parameter is set *and* you include OCR, Image Analysis, or Document Extraction in a skillset.
+<sup>2</sup> In an [indexer configuration](/rest/api/searchservice/create-indexer#indexer-parameters), `imageAction` is the parameter that triggers image extraction. If `imageAction` is set to "none" (the default), you won't be charged for image extraction. Costs are incurred when the `imageAction` parameter is set *and* you include OCR, Image Analysis, or Document Extraction in a skillset.
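For reference, the following sketch shows where `imageAction` sits in an indexer definition. Only the relevant parameters are shown and the value is illustrative:

```json
{
  "parameters": {
    "configuration": {
      "dataToExtract": "contentAndMetadata",
      "imageAction": "generateNormalizedImages"
    }
  }
}
```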
-There is no meter on the number of queries, query responses, or documents ingested, although [service limits](search-limits-quotas-capacity.md) do apply at each tier.
+You aren't billed on the number of full text or vector queries, query responses, or documents ingested, although [service limits](search-limits-quotas-capacity.md) do apply at each tier.
Data traffic might also incur networking costs. See the [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/).
-Several premium features such as [knowledge store](knowledge-store-concept-intro.md), [Debug Sessions](cognitive-search-debug-session.md), and [enrichment cache](cognitive-search-incremental-indexing-conceptual.md) have a dependency on Azure Storage. The meters for Azure Storage apply in this case, and the associated storage costs of using these features will be included in the Azure Storage bill.
+Several premium features such as [knowledge store](knowledge-store-concept-intro.md), [debug sessions](cognitive-search-debug-session.md), and [enrichment cache](cognitive-search-incremental-indexing-conceptual.md) have a dependency on Azure Storage. The meters for Azure Storage apply in this case, and the associated storage costs of using these features are included in the Azure Storage bill.
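As a sketch, the enrichment cache is enabled through the indexer's `cache` property, which points at your Azure Storage account. The indexer name and connection string are placeholders:

```json
{
  "name": "hotels-skillset-indexer",
  "cache": {
    "storageConnectionString": "<storage connection string>",
    "enableReprocessing": true
  }
}
```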
[Customer-managed keys](search-security-manage-encryption-keys.md) provide double encryption of sensitive content. This feature requires a billable [Azure Key Vault](https://azure.microsoft.com/pricing/details/key-vault/).
-Skillsets can include [billable built-in skills](cognitive-search-predefined-skills.md), non-billable built-in utility skills, and custom skills. Non-billable utility skills include Conditional, Shaper, Text Merge, Text Split. There is no billing impact when using them, no Azure AI services key requirement, and no 20 document limit.
+Skillsets can include [billable built-in skills](cognitive-search-predefined-skills.md), non-billable built-in utility skills, and custom skills. Non-billable utility skills include Conditional, Shaper, Text Merge, and Text Split. You aren't charged for using them. There's no API key requirement, and no 20 document limit.
-A custom skill is functionality you provide. The cost of using a custom skill depends entirely on whether custom code is calling other metered services. There is no Azure AI services key requirement and no 20 document limit on custom skills.
+A custom skill is functionality you provide. The cost of using a custom skill depends entirely on whether custom code is calling other billable services. There's no API key requirement and no 20 document limit on custom skills.
## Monitor costs
Follow these guidelines to minimize costs of an Azure AI Search solution.
1. Consider [Azure Web App](../app-service/overview.md) for your front-end application so that requests and responses stay within the data center boundary.
-1. If you're using [AI enrichment](cognitive-search-concept-intro.md), there is an extra charge for blob storage, but the cumulative cost goes down if you enable [enrichment caching](cognitive-search-incremental-indexing-conceptual.md).
+1. If you're using [AI enrichment](cognitive-search-concept-intro.md), there's an extra charge for blob storage, but the cumulative cost goes down if you enable [enrichment caching](cognitive-search-incremental-indexing-conceptual.md).
## Create budgets
Search runs as a continuous service. Dedicated resources are always operational,
**Can I change the billing rate (tier) of an existing search service?**
-In-place upgrade or downgrade is not supported. Changing a service tier requires provisioning a new service at the desired tier.
+In-place upgrade or downgrade isn't supported. Changing a service tier requires provisioning a new service at the desired tier.
## Next steps + Learn more on how pricing works with Azure AI Search. See [Azure AI Search pricing page](https://azure.microsoft.com/pricing/details/search/). + Learn more about [replicas and partitions](search-sku-tier.md).
-+ Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
++ Learn [how to optimize your cloud investment with Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). + Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). + Learn about how to [prevent unexpected costs](../cost-management-billing/cost-management-billing-overview.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). + Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
search Search Synonyms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synonyms.md
Title: Synonyms for query expansion over a search index
+ Title: Synonyms for query expansion
-description: Create a synonym map to expand the scope of a search query on an Azure AI Search index. Scope is broadened to include equivalent terms you provide in a list.
+description: Create a synonym map to expand the scope of a search query over an Azure AI Search index. Scope is broadened to include equivalent terms you provide in the synonym map.
- ignite-2023 Previously updated : 09/12/2022 Last updated : 01/12/2024 + # Synonyms in Azure AI Search
-Within a search service, synonym maps are a global resource that associate equivalent terms, expanding the scope of a query without the user having to actually provide the term. For example, assuming "dog", "canine", and "puppy" are mapped synonyms, a query on "canine" will match on a document containing "dog".
+On a search service, synonym maps are a global resource that associate equivalent terms, expanding the scope of a query without the user having to actually provide the term. For example, assuming "dog", "canine", and "puppy" are mapped synonyms, a query on "canine" will match on a document containing "dog".
## Create synonyms A synonym map is an asset that can be created once and used by many indexes. The [service tier](search-limits-quotas-capacity.md#synonym-limits) determines how many synonym maps you can create, ranging from three synonym maps for Free and Basic tiers, up to 20 for the Standard tiers.
-You might create multiple synonym maps for different languages, such as English and French versions, or lexicons if your content includes technical or obscure terminology. Although you can create multiple synonym maps in your search service, within an index, a field definition can only have one synonym map assignment.
+You might create multiple synonym maps for different languages, such as English and French versions, or lexicons if your content includes technical jargon, slang, or obscure terminology. Although you can create multiple synonym maps in your search service, within an index, a field definition can only have one synonym map assignment.
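Assignment happens in the index definition: a searchable string field references the map by name in its `synonymMaps` property, as in this sketch (the field name is hypothetical):

```json
{
  "name": "country",
  "type": "Edm.String",
  "searchable": true,
  "synonymMaps": ["geo-synonyms"]
}
```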
A synonym map consists of name, format, and rules that function as synonym map entries. The only format that is supported is `solr`, and the `solr` format determines rule construction. ```http
-POST /synonymmaps?api-version=2020-06-30
+POST /synonymmaps?api-version=2023-11-01
{ "name": "geo-synonyms", "format": "solr",
Mapping rules adhere to the open-source synonym filter specification of Apache S
Each rule must be delimited by the new line character (`\n`). You can define up to 5,000 rules per synonym map in a free service and 20,000 rules per map in other tiers. Each rule can have up to 20 expansions (or items in a rule). For more information, see [Synonym limits](search-limits-quotas-capacity.md#synonym-limits).
-Query parsers will lower-case any upper or mixed case terms, but if you want to preserve special characters in the string, such as a comma or dash, add the appropriate escape characters when creating the synonym map.
+Query parsers automatically lower-case any upper or mixed case terms, but if you want to preserve special characters in the string, such as a comma or dash, add the appropriate escape characters when creating the synonym map.
### Equivalency rules
-Rules for equivalent terms are comma-delimited within the same rule. In the first example, a query on `USA` will expand to `USA` OR `"United States"` OR `"United States of America"`. Notice that if you want to match on a phrase, the query itself must be a quote-enclosed phrase query.
+Rules for equivalent terms are comma-delimited within the same rule. In the first example, a query on `USA` expands to `USA` OR `"United States"` OR `"United States of America"`. Notice that if you want to match on a phrase, the query itself must be a quote-enclosed phrase query.
-In the equivalence case, a query for `dog` will expand the query to also include `puppy` and `canine`.
+In the equivalence case, a query for `dog` expands the query to also include `puppy` and `canine`.
```json {
In the equivalence case, a query for `dog` will expand the query to also include
### Explicit mapping
-Rules for an explicit mapping are denoted by an arrow `=>`. When specified, a term sequence of a search query that matches the left-hand side of `=>` will be replaced with the alternatives on the right-hand side at query time.
+Rules for an explicit mapping are denoted by an arrow `=>`. When specified, a term sequence of a search query that matches the left-hand side of `=>` is replaced with the alternatives on the right-hand side at query time.
-In the explicit case, a query for `Washington`, `Wash.` or `WA` will be rewritten as `WA`, and the query engine will only look for matches on the term `WA`. Explicit mapping only applies in the direction specified, and doesn't rewrite the query `WA` to `Washington` in this case.
+In the explicit case, a query for `Washington`, `Wash.` or `WA` is rewritten as `WA`, and the query engine only looks for matches on the term `WA`. Explicit mapping only applies in the direction specified, and doesn't rewrite the query `WA` to `Washington` in this case.
```json {
The following example shows an example of how to escape a character with a backs
```json {
-"format": "solr",
-"synonyms": "WA\, USA, WA, Washington\n"
+ "format": "solr",
+ "synonyms": "WA\, USA, WA, Washington\n"
} ```
Since the backslash is itself a special character in other languages like JSON a
```json {
-"format":"solr",
-"synonyms": "WA\\, USA, WA, Washington"
+ "format":"solr",
+ "synonyms": "WA\\, USA, WA, Washington"
} ```
search Service Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-configure-firewall.md
- ignite-2023 Previously updated : 02/08/2023 Last updated : 01/11/2024 # Configure an IP firewall for Azure AI Search
-Azure AI Search supports IP rules for inbound access through a firewall, similar to the IP rules you'll find in an Azure virtual network security group. By applying IP rules, you can restrict search service access to an approved set of machines and cloud services. Access to data stored in your search service from the approved sets of machines and services will still require the caller to present a valid authorization token.
+Azure AI Search supports IP rules for inbound access through a firewall, similar to the IP rules found in an Azure virtual network security group. By applying IP rules, you can restrict service access to an approved set of devices and cloud services. An IP rule only allows the request through. Access to data and operations will still require the caller to present a valid authorization token.
You can set IP rules in the Azure portal, as described in this article, or use the [Management REST API](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).
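For example, with the Management REST API, IP rules are part of the service's `networkRuleSet`. The subscription, resource group, addresses, and API version below are placeholders:

```http
PATCH https://management.azure.com/subscriptions/{{subscription-id}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{service-name}}?api-version=2023-11-01
Content-Type: application/json

{
  "properties": {
    "networkRuleSet": {
      "ipRules": [
        { "value": "203.0.113.15" },
        { "value": "198.51.100.0/24" }
      ]
    }
  }
}
```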
You can set IP rules in the Azure portal, as described in this article, or use t
:::image type="content" source="media/service-configure-firewall/azure-portal-firewall.png" alt-text="Screenshot showing how to configure the IP firewall in the Azure portal." border="true":::
- The Azure portal provides the ability to specify IP addresses and IP address ranges in the CIDR format. An example of CIDR notation is 8.8.8.0/24, which represents the IPs that range from 8.8.8.0 to 8.8.8.255.
+ The Azure portal supports IP addresses and IP address ranges in the CIDR format. An example of CIDR notation is 8.8.8.0/24, which represents the IPs that range from 8.8.8.0 to 8.8.8.255.
1. Select **Add your client IP address** under **Firewall** to create an inbound rule for the IP address of your system.
When requests originate from IP addresses that aren't in the allowed list, a gen
## Allow access from the Azure portal IP address
-When IP rules are configured, some features of the Azure portal are disabled. You'll be able to view and manage service level information, but portal access to indexes, indexers, and other top-level resources is restricted. You can restore portal access to the full range of search service operations by allowing access from the portal IP address and your client IP address.
+When IP rules are configured, some features of the Azure portal are disabled. You can view and manage service level information, but portal access to indexes, indexers, and other top-level resources is restricted. You can restore portal access to the full range of search service operations by allowing access from the portal IP address and your client IP address.
To get the portal's IP address, perform `nslookup` (or `ping`) on `stamp2.ext.search.windows.net`, which is the domain of the traffic manager. For nslookup, the IP address is visible in the "Non-authoritative answer" portion of the response.
-In the following example, the IP address that you should copy is "52.252.175.48".
+In the following example, the IP address that you should copy is `52.252.175.48`.
```bash $ nslookup stamp2.ext.search.windows.net
Aliases: stamp2.ext.search.windows.net
azspncuux.management.search.windows.net ```
-Services in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region.
+When services run in different regions, they connect to different traffic managers. Regardless of the domain name, the IP address returned from the ping is the correct one to use when defining an inbound firewall rule for the Azure portal in your region.
-For ping, the request will time out, but the IP address will be visible in the response. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
+For ping, the request will time out, but the IP address is visible in the response. For example, in the message `"Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]"`, the IP address is `52.252.175.48`.
Providing IP addresses for clients ensures that the request isn't rejected outright, but for successful access to content and operations, authorization is also necessary. Use one of the following methodologies to authenticate your request:
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
description: Determine the level of support for each storage account feature giv
Previously updated : 11/28/2023 Last updated : 01/11/2024
The following table describes whether a feature is supported in a standard gener
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json)<sup>3</sup> | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Microsoft Entra security. <sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
+<sup>3</sup> Storage Analytics metrics is retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
+ ## Premium block blob accounts The following table describes whether a feature is supported in a premium block blob account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
The following table describes whether a feature is supported in a premium block
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24;| &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json)<sup>3</sup> | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Microsoft Entra security. <sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
+<sup>3</sup> Storage Analytics metrics is retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
+ ## See also - [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Previously updated : 08/17/2023 Last updated : 01/11/2024
To decide on the best access tier for your needs, it can be helpful to determine
### Monitoring existing storage accounts
-To monitor your existing storage accounts and gather this data, you can make use of Azure Storage Analytics, which performs logging and provides metrics data for a storage account. Storage Analytics can store metrics that include aggregated transaction statistics and capacity data about requests to the storage service for GPv1, GPv2, and Blob storage account types. This data is stored in well-known tables in the same storage account.
+To monitor your existing storage accounts and gather this data, you can make use of storage metrics in Azure Monitor. Azure Monitor stores metrics that include aggregated transaction statistics and capacity data about requests to the storage service. Azure Storage sends metric data to the Azure Monitor back end. Azure Monitor provides a unified monitoring experience that includes data from the Azure portal as well as data that is ingested. For more information, see any of these articles:
-For more information, see [About Storage Analytics Metrics](../blobs/monitor-blob-storage.md) and [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema)
-
-> [!NOTE]
-> Blob storage accounts expose the Table service endpoint only for storing and accessing the metrics data for that account.
-
-To monitor the storage consumption for Blob storage, you need to enable the capacity metrics.
-With this enabled, capacity data is recorded daily for a storage account's Blob service and recorded as a table entry that is written to the *$MetricsCapacityBlob* table within the same storage account.
-
-To monitor data access patterns for Blob storage, you need to enable the hourly transaction metrics from the API. With hourly transaction metrics enabled, per API transactions are aggregated every hour, and recorded as a table entry that is written to the *$MetricsHourPrimaryTransactionsBlob* table within the same storage account. The *$MetricsHourSecondaryTransactionsBlob* table records the transactions to the secondary endpoint when using RA-GRS storage accounts.
-
-> [!NOTE]
-> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process isn't applicable. The capacity data doesn't differentiate block blobs from other types, and doesn't give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill.
-
-To get a good approximation of your data consumption and access pattern, we recommend you choose a retention period for the metrics that is representative of your regular usage and extrapolate. One option is to retain the metrics data for seven days and collect the data every week, for analysis at the end of the month. Another option is to retain the metrics data for the last 30 days and collect and analyze the data at the end of the 30-day period.
-
-For details on enabling, collecting, and viewing metrics data, see [Storage analytics metrics](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
-
-> [!NOTE]
-> Storing, accessing, and downloading analytics data is also charged just like regular user data.
-
-### Utilizing usage metrics to estimate costs
-
-#### Capacity costs
-
-The latest entry in the capacity metrics table *$MetricsCapacityBlob* with the row key *'data'* shows the storage capacity consumed by user data. The latest entry in the capacity metrics table *$MetricsCapacityBlob* with the row key *'analytics'* shows the storage capacity consumed by the analytics logs.
-
-This total capacity consumed by both user data and analytics logs (if enabled) can then be used to estimate the cost of storing data in the storage account. The same method can also be used for estimating storage costs in GPv1 storage accounts.
-
-#### Transaction costs
-
-The sum of *'TotalBillableRequests'*, across all entries for an API in the transaction metrics table indicates the total number of transactions for that particular API. *For example*, the total number of *'GetBlob'* transactions in a given period can be calculated by the sum of total billable requests for all entries with the row key *'user;GetBlob'*.
-
-In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they're priced differently.
--- Write transactions such as *'PutBlob'*, *'PutBlock'*, *'PutBlockList'*, *'AppendBlock'*, *'ListBlobs'*, *'ListContainers'*, *'CreateContainer'*, *'SnapshotBlob'*, and *'CopyBlob'*.-- Delete transactions such as *'DeleteBlob'* and *'DeleteContainer'*.-- All other transactions.-
-In order to estimate transaction costs for GPv1 storage accounts, you need to aggregate all transactions irrespective of the operation/API.
-
-#### Data access and geo-replication data transfer costs
-
-While storage analytics doesn't provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. Similarly the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes.
+- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+- [Monitoring Azure Files](../files/storage-files-monitoring.md)
+- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
In order to estimate the data access costs for Blob storage accounts, you need to break down the transactions into two groups. -- The amount of data retrieved from the storage account can be estimated by looking at the sum of *'TotalEgress'* for primarily the *'GetBlob'* and *'CopyBlob'* operations.
+- The amount of data retrieved from the storage account can be estimated by looking at the sum of the *'Egress'* metric for primarily the *'GetBlob'* and *'CopyBlob'* operations.
+
+- The amount of data written to the storage account can be estimated by looking at the sum of the *'Ingress'* metric for primarily the *'PutBlob'*, *'PutBlock'*, *'CopyBlob'*, and *'AppendBlock'* operations.
-- The amount of data written to the storage account can be estimated by looking at the sum of *'TotalIngress'* for primarily the *'PutBlob'*, *'PutBlock'*, *'CopyBlob'* and *'AppendBlock'* operations.
+To determine the price of each operation against the blob storage service, see [Map each REST operation to a price](../blobs/map-rest-apis-transaction-categories.md).
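As a rough illustration of pulling those numbers, the *Egress* and *Ingress* metrics can be queried with the Azure CLI. This is a sketch only; the resource ID is a placeholder and the metric names should be verified against the Azure Monitor reference for Blob storage:

```bash
# Placeholder resource ID for the blob service of a storage account.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"

# Total egress (data retrieved) and ingress (data written) over the last day, in hourly buckets.
az monitor metrics list \
  --resource "$RESOURCE_ID" \
  --metric "Egress" "Ingress" \
  --offset 1d \
  --interval PT1H \
  --aggregation Total
```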
The cost of geo-replication data transfer for Blob storage accounts can also be calculated by using the estimate for the amount of data written when using a GRS or RA-GRS storage account.
storage Storage Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics.md
Title: Use Azure Storage analytics to collect logs and metrics data
-description: Storage Analytics enables you to track metrics data for all storage services, and to collect logs for Blob, Queue, and Table storage.
+ Title: Use Azure Storage analytics to collect logs data
+description: Storage Analytics enables you to collect logs for Blob, Queue, and Table storage.
Previously updated : 03/03/2017 Last updated : 01/11/2024
# Storage Analytics
-Azure Storage Analytics performs logging and provides metrics data for a storage account. You can use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.
+Azure Storage Analytics performs logging for a storage account. You can use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.
+
+> [!NOTE]
+> Storage Analytics supports only logs. Storage Analytics metrics are retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json). While Storage Analytics logs are still supported, we recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. To learn more, see any of the following articles:
+>
+> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
+> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
To use Storage Analytics, you must enable it individually for each service you want to monitor. You can enable it from the [Azure portal](https://portal.azure.com). For details, see [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the [Set Blob Service Properties](/rest/api/storageservices/set-blob-service-properties), [Set Queue Service Properties](/rest/api/storageservices/set-queue-service-properties), [Set Table Service Properties](/rest/api/storageservices/set-table-service-properties), and [Set File Service Properties](/rest/api/storageservices/Get-File-Service-Properties) operations to enable Storage Analytics for each service.
-The aggregated data is stored in a well-known blob (for logging) and in well-known tables (for metrics), which may be accessed using the Blob service and Table service APIs.
+The aggregated log data is stored in a well-known blob, which may be accessed using the Blob service and Table service APIs.
Storage Analytics has a 20 TB limit on the amount of stored data that is independent of the total limit for your storage account. For more information about storage account limits, see [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md).
For an in-depth guide on using Storage Analytics and other tools to identify, di
## Billing for Storage Analytics
-All metrics data is written by the services of a storage account. As a result, each write operation performed by Storage Analytics is billable. Additionally, the amount of storage used by metrics data is also billable.
-
-The following actions performed by Storage Analytics are billable:
--- Requests to create blobs for logging.-- Requests to create table entities for metrics.
+The amount of storage used by logs data is billable. You're also billed for requests to create blobs for logging.
-If you have configured a data retention policy, you can reduce the spending by deleting old logging and metrics data. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
+If you have configured a data retention policy, you can reduce the spending by deleting old log data. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
### Understanding billable requests
-Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. Similarly, Storage Analytics stores metrics for both a service and the API operations of that service, including the percentages and count of certain status messages. Together, these features can help you analyze your billable requests, make improvements on your application, and diagnose issues with requests to your services. For more information about billing, see [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
+Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. See [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
-When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your logs and metrics data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
+When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your log data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
## Next steps - [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md)-- [Storage Analytics Metrics](storage-analytics-metrics.md) - [Storage Analytics Logging](storage-analytics-logging.md)
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
You can access resources in a storage account by any language that can make HTTP
### Azure Storage data API and library references - [Azure Storage REST API](/rest/api/storageservices/)-- [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage)-- [Azure Storage client library for Java/Android](/java/api/overview/azure/storage)-- [Azure Storage client library for Node.js](../blobs/reference.md#javascript-client-libraries)-- [Azure Storage client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob)-- [Azure Storage client library for C++](https://github.com/Azure/azure-storage-cpp)
+- [Azure Storage client libraries for .NET](/dotnet/api/overview/azure/storage)
+- [Azure Storage client libraries for Java](/java/api/overview/azure/storage)
+- [Azure Storage client libraries for JavaScript](/javascript/api/overview/azure/storage)
+- [Azure Storage client libraries for Python](/python/api/overview/azure/storage)
+- [Azure Storage client libraries for Go](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/)
+- [Azure Storage client libraries for C++](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/storage)
### Azure Storage management API and library references
You can access resources in a storage account by any language that can make HTTP
### Azure Storage data movement API -- [Storage Data Movement Client Library for .NET](/dotnet/api/microsoft.azure.storage.datamovement)
+- [Storage Data Movement Client Library for .NET](storage-use-data-movement-library.md)
### Tools and utilities - [Azure PowerShell Cmdlets for Storage](/powershell/module/az.storage) - [Azure CLI Cmdlets for Storage](/cli/azure/storage)-- [AzCopy Command-Line Utility](https://aka.ms/downloadazcopy)
+- [AzCopy Command-Line Utility](storage-use-azcopy-v10.md)
- [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) is a free, standalone app from Microsoft that enables you to work visually with Azure Storage data on Windows, macOS, and Linux. - [Azure Resource Manager templates for Azure Storage](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Storage)
storage Storage Metrics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-metrics-migration.md
description: Learn how to transition from Storage Analytics metrics (classic met
Previously updated : 01/03/2024 Last updated : 01/11/2024
# Transition to metrics in Azure Monitor
-On **January 9, 2024** Storage Analytics metrics, also referred to as *classic metrics* will be retired. If you use classic metrics, make sure to transition to metrics in Azure Monitor prior to that date. This article helps you make the transition.
+On **January 9, 2024**, Storage Analytics metrics, also referred to as *classic metrics*, were retired. If you used classic metrics, this article helps you transition to metrics in Azure Monitor.
## Steps to complete the transition
storage File Sync Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-introduction.md
# What is Azure File Sync?
-Azure File Sync enables centralizing your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of a Windows file server. While some users may opt to keep a full copy of their data locally, Azure File Sync additionally has the ability to transform Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.
+Azure File Sync enables you to centralize your organization's file shares in Azure Files, while keeping the flexibility, performance, and compatibility of a Windows file server. While some users might opt to keep a full copy of their data locally, Azure File Sync additionally has the ability to transform Windows Server into a quick cache of your Azure file share. You can use any protocol that's available on Windows Server to access your data locally, including SMB, NFS, and FTPS. You can have as many caches as you need across the world.
## Videos
Azure File Sync is ideal for distributed access scenarios. For each of your offi
### Business continuity and disaster recovery
-Azure File Sync is backed by Azure Files, which offers several redundancy options for highly available storage. Because Azure contains resilient copies of your data, your local server becomes a disposable caching device, and recovering from a failed server can be done by adding a new server to your Azure File Sync deployment. Rather than restoring from a local backup, you provision another Windows Server, install the Azure File Sync agent on it, and then add it to your Azure File Sync deployment. Azure File Sync downloads your file namespace before downloading data, so that your server can be up and running as soon as possible. For even faster recovery, you can have a warm stand by server as part of your deployment, or you can use Azure File Sync with Windows Clustering.
+Azure File Sync is backed by Azure Files, which offers several redundancy options for highly available storage. Because Azure contains resilient copies of your data, your local server becomes a disposable caching device. You can recover from a failed server by adding a new server to your Azure File Sync deployment. Rather than restoring from a local backup, you provision another Windows Server, install the Azure File Sync agent on it, and then add it to your Azure File Sync deployment. Azure File Sync downloads your file namespace before downloading data, so that your server can be up and running as soon as possible. For even faster recovery, you can have a warm standby server as part of your deployment, or you can use Azure File Sync with Windows Clustering.
### Cloud-side backup
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 11/3/2023 Last updated : 1/11/2024
The following Azure File Sync agent versions are supported:
| V15.2 Release - [KB5013875](https://support.microsoft.com/topic/9159eee2-3d16-4523-ade4-1bac78469280)| 15.2.0.0 | November 21, 2022 | Supported - Agent version will expire on March 19, 2024 | | V15.1 Release - [KB5003883](https://support.microsoft.com/topic/45761295-d49a-431e-98ec-4fb3329b0544)| 15.1.0.0 | September 19, 2022 | Supported - Agent version will expire on March 19, 2024 | | V15 Release - [KB5003882](https://support.microsoft.com/topic/2f93053f-869b-4782-a832-e3c772a64a2d)| 15.0.0.0 | March 30, 2022 | Supported - Agent version will expire on March 19, 2024 |
-| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported - Agent version will expire on January 23, 2024 |
-| V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported - Agent version will expire on January 23, 2024 |
+| V14.1 Release - [KB5001873](https://support.microsoft.com/topic/d06b8723-c4cf-4c64-b7ec-3f6635e044c5)| 14.1.0.0 | December 1, 2021 | Supported - Agent version will expire on February 8, 2024 |
+| V14 Release - [KB5001872](https://support.microsoft.com/topic/92290aa1-75de-400f-9442-499c44c92a81)| 14.0.0.0 | October 29, 2021 | Supported - Agent version will expire on February 8, 2024 |
## Unsupported versions The following Azure File Sync agent versions have expired and are no longer supported:
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md
# What is Azure Files?+ Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api). Azure file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from Linux clients. Additionally, SMB Azure file shares can be cached on Windows servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used. Here are some videos on common use cases for Azure Files:+ * [Replace your file server with a serverless Azure file share](https://youtu.be/H04e9AgbcSc) * [Getting started with FSLogix profile containers on Azure Files in Azure Virtual Desktop leveraging AD authentication](https://www.youtube.com/embed/9S5A1IJqfOQ) To get started using Azure Files, see [Quickstart: Create and use an Azure file share](storage-how-to-use-files-portal.md). ## Why Azure Files is useful+ Azure file shares can be used to: * **Replace or supplement on-premises file servers**:
- Azure Files can be used to replace or supplement traditional on-premises file servers or network-attached storage (NAS) devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file shares wherever they are in the world. SMB Azure file shares can also be replicated with Azure File Sync to Windows servers, either on-premises or in the cloud, for performance and distributed caching of the data. With [Azure Files AD Authentication](storage-files-active-directory-overview.md), SMB Azure file shares can work with Active Directory Domain Services (AD DS) hosted on-premises for access control.
+ Azure Files can be used to replace or supplement traditional on-premises file servers or network-attached storage (NAS) devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file shares wherever they are in the world. SMB Azure file shares can also be replicated with Azure File Sync to Windows servers, either on-premises or in the cloud, for performance and distributed caching of the data. With [Azure Files AD Authentication](storage-files-active-directory-overview.md), SMB Azure file shares can work with Active Directory Domain Services (AD DS) hosted on-premises for access control.
* **"Lift and shift" applications**:
- Azure Files makes it easy to "lift and shift" applications to the cloud that expect a file share to store file application or user data. Azure Files enables both the "classic" lift and shift scenario, where both the application and its data are moved to Azure, and the "hybrid" lift and shift scenario, where the application data is moved to Azure Files, and the application continues to run on-premises.
+ Azure Files makes it easy to "lift and shift" applications to the cloud that expect a file share to store file application or user data. Azure Files enables both the "classic" lift and shift scenario, where both the application and its data are moved to Azure, and the "hybrid" lift and shift scenario, where the application data is moved to Azure Files, and the application continues to run on-premises.
* **Simplify cloud development**: Azure Files can also be used to simplify new cloud development projects. For example:
Azure file shares can be used to:
Azure file shares can be used as persistent volumes for stateful containers. Containers deliver "build once, run anywhere" capabilities that enable developers to accelerate innovation. For the containers that access raw data at every start, a shared file system is required to allow these containers to access the file system no matter which instance they run on. ## Key benefits
-* **Easy to use**. When an Azure file share is mounted on your computer, you don't need to do anything special to access the data: just navigate to the path where the file share is mounted and open/modify a file.
-* **Shared access**. Azure file shares support the industry standard SMB and NFS protocols, meaning you can seamlessly replace your on-premises file shares with Azure file shares without worrying about application compatibility. Being able to share a file system across multiple machines, applications, and application instances is a significant advantage for applications that need shareability.
+
+* **Easy to use**. When an Azure file share is mounted on your computer, you don't need to do anything special to access the data: just navigate to the path where the file share is mounted and open/modify a file.
+* **Shared access**. Azure file shares support the industry standard SMB and NFS protocols, meaning you can seamlessly replace your on-premises file shares with Azure file shares without worrying about application compatibility. Being able to share a file system across multiple machines, applications, and application instances is a significant advantage for applications that need shareability.
* **Fully managed**. Azure file shares can be created without the need to manage hardware or an OS. This means you don't have to deal with patching the server OS with critical security upgrades or replacing faulty hard disks.
-* **Scripting and tooling**. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
-* **Resiliency**. Azure Files has been built from the ground up to be always available. Replacing on-premises file shares with Azure Files means you no longer have to wake up to deal with local power outages or network issues.
+* **Scripting and tooling**. PowerShell cmdlets and Azure CLI can be used to create, mount, and manage Azure file shares as part of the administration of Azure applications. You can create and manage Azure file shares using Azure portal and Azure Storage Explorer.
+* **Resiliency**. Azure Files has been built from the ground up to be always available. Replacing on-premises file shares with Azure Files means you no longer have to wake up to deal with local power outages or network issues.
* **Familiar programmability**. Applications running in Azure can access data in the share via file [system I/O APIs](/dotnet/api/system.io.file). Developers can therefore leverage their existing code and skills to migrate existing applications. In addition to System IO APIs, you can use [Azure Storage Client Libraries](/previous-versions/azure/dn261237(v=azure.100)) or the [Azure Files REST API](/rest/api/storageservices/file-service-rest-api). ## Training
For guidance on architecting solutions on Azure Files using established patterns
- [Azure files accessed on-premises and secured by AD DS](/azure/architecture/example-scenario/hybrid/azure-files-on-premises-authentication) ## Case studies+ * Organizations across the world are leveraging Azure Files and Azure File Sync to optimize file access and storage. [Check out their case studies here](azure-files-case-study.md). ## Next Steps+ * [Plan for an Azure Files deployment](storage-files-planning.md) * [Create Azure file Share](storage-how-to-create-file-share.md) * [Connect and mount an SMB share on Windows](storage-how-to-use-files-windows.md)
virtual-desktop Azure Ad Joined Session Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-ad-joined-session-hosts.md
Title: Deploy Microsoft Entra joined VMs in Azure Virtual Desktop - Azure
-description: How to configure and deploy Microsoft Entra joined VMs in Azure Virtual Desktop.
+ Title: Microsoft Entra joined session hosts in Azure Virtual Desktop
+description: Learn about using Microsoft Entra joined session hosts in Azure Virtual Desktop.
Last updated 11/14/2023
-# Deploy Microsoft Entra joined virtual machines in Azure Virtual Desktop
+# Microsoft Entra joined session hosts in Azure Virtual Desktop
This article will walk you through the process of deploying and accessing Microsoft Entra joined virtual machines in Azure Virtual Desktop. Microsoft Entra joined VMs remove the need to have line-of-sight from the VM to an on-premises or virtualized Active Directory Domain Controller (DC) or to deploy Microsoft Entra Domain Services. In some cases, it can remove the need for a DC entirely, simplifying the deployment and management of the environment. These VMs can also be automatically enrolled in Intune for ease of management.
virtual-desktop Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/cli-powershell.md
Previously updated : 02/01/2023 Last updated : 01/08/2024 # Use Azure CLI and Azure PowerShell with Azure Virtual Desktop
Now that you know how to use Azure CLI and Azure PowerShell with Azure Virtual D
- [Create an Azure Virtual Desktop host pool with PowerShell or the Azure CLI](create-host-pools-powershell.md) - [Manage application groups using PowerShell or the Azure CLI](manage-app-groups-powershell.md)
+- For the full PowerShell reference documentation, see [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization).
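The equivalent Azure CLI commands live in the `desktopvirtualization` extension. As a hypothetical sketch (the extension name, command group, and resource group below are assumptions to verify against the current CLI reference):

```bash
# Install the Azure Virtual Desktop CLI extension, then list host pools in a resource group.
az extension add --name desktopvirtualization
az desktopvirtualization hostpool list --resource-group my-avd-rg --output table
```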
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
Next, create a new capacity pool:
- For **Service level**, select your desired value from the drop-down menu. We recommend **Premium** for most environments. >[!NOTE] >The Premium setting provides the minimum throughput available for a Premium Service level, which is 256 MBps. You may need to adjust this throughput for a production environment. Final throughput is based on the relationship described in [Throughput limits](../azure-netapp-files/azure-netapp-files-service-levels.md).
- - For **Size (TiB)**, enter the capacity pool size that best fits your needs. The minimum size is 2 TiB.
+ - For **Size (TiB)**, enter the capacity pool size that best fits your needs.
5. When you're finished, select **OK**.
virtual-desktop Custom Image Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/custom-image-templates.md
description: Learn about custom image templates in Azure Virtual Desktop, where
Previously updated : 09/08/2023 Last updated : 01/05/2024 # Custom image templates in Azure Virtual Desktop
There are two parts to creating a custom image:
A custom image template is a JSON file that contains your choices of source image, distribution targets, build properties, and customizations. Azure Image Builder uses this template to create a custom image, which you can use as the source image for your session hosts when creating or updating a host pool. When creating the image, Azure Image Builder also takes care of generalizing the image with sysprep.
-Custom images can be stored in [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md) or as a [managed image](../virtual-machines/windows/capture-image-resource.md), or both. Azure Compute Gallery allows you to manage region replication, versioning, and sharing of custom images.
+Custom images can be stored in [Azure Compute Gallery](../virtual-machines/azure-compute-gallery.md) or as a [managed image](../virtual-machines/windows/capture-image-resource.md), or both. Azure Compute Gallery allows you to manage region replication, versioning, and sharing of custom images. See [Create a legacy managed image of a generalized VM in Azure](../virtual-machines/capture-image-resource.md) to review limitations for managed images.
The source image must be [supported for Azure Virtual Desktop](prerequisites.md#operating-systems-and-licenses) and can be from:
virtual-desktop Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/terminology.md
The following table goes into more detail about the differences between each typ
|Feature|Personal host pools|Pooled host pools| |||| |Load balancing| User sessions are always load balanced to the session host the user is assigned to. If the user isn't currently assigned to a session host, the user session is load balanced to the next available session host in the host pool. | User sessions are load balanced to session hosts in the host pool based on user session count. You can choose which [load balancing algorithm](host-pool-load-balancing.md) to use: breadth-first or depth-first. |
-|Maximum session limit| One. | As configured by the **Max session limit** value of the properties of a host pool. Under high concurrent connection load when multiple users connect to the host pool at the same time, the number of sessions created on a session host can exceed the maximum session limit. |
-|User assignment process| Users can either be directly assigned to session hosts or be automatically assigned to the first available session host. Users always have sessions on the session hosts they are assigned to. | Users aren't assigned to session hosts. After a user signs out and signs back in, their user session might get load balanced to a different session host. |
+|Maximum session limit| One. | As configured by the [maximum session limit](configure-host-pool-load-balancing.md#configure-breadth-first-load-balancing) value of the properties of a host pool. Under high concurrent connection load when multiple users connect to the host pool at the same time, the number of sessions created on a session host can exceed the maximum session limit. |
+|User assignment process| Users can either be directly assigned to session hosts or be automatically assigned to the first available session host. Users always have sessions on the session hosts they are assigned to. | Users aren't assigned to session hosts. After a user signs out and signs back in, their user session might get load balanced to a different session host. To learn more, see [Configure personal desktop assignment](configure-host-pool-personal-desktop-assignment-type.md). |
|Scaling| [Autoscale](autoscale-scaling-plan.md) for personal host pools starts session host virtual machines according to schedule or using Start VM on Connect and then deallocates/hibernates session host virtual machines based on the user session state (log off/disconnect). | [Autoscale](autoscale-scaling-plan.md) for pooled host pools turns VMs on and off based on the capacity thresholds and schedules the customer defines. | |Windows Updates|Updated with Windows Updates, [Microsoft Configuration Manager (ConfigMgr)](configure-automatic-updates.md), or other software distribution configuration tools.|Updated by redeploying session hosts from updated images instead of traditional updates.| |User data| Each user only ever uses one session host, so they can store their user profile data on the operating system (OS) disk of the VM. | Users can connect to different session hosts every time they connect, so they should store their user profile data in [FSLogix](/fslogix/configure-profile-container-tutorial). | ### Validation environment
-You can set a host pool to be a *validation environment*. Validation environments let you monitor service updates before the service applies them to your production or non-validation environment. Without a validation environment, you may not discover changes that introduce errors, which could result in downtime for users in your production environment.
+You can set a host pool to be a [validation environment](configure-validation-environment.md). Validation environments let you monitor service updates before the service applies them to your production or non-validation environment. Without a validation environment, you may not discover changes that introduce errors, which could result in downtime for users in your production environment.
To ensure your apps work with the latest updates, the validation environment should be as similar to host pools in your non-validation environment as possible. Users should connect as frequently to the validation environment as they do to the production environment. If you have automated testing on your host pool, you should include automated testing on the validation environment. ## Application groups
-An application group is a logical grouping of applications installed on session hosts in the host pool.
+An [application group](deploy-azure-virtual-desktop.md#create-an-application-group) is a logical grouping of applications installed on session hosts in the host pool.
An application group can be one of two types:
To publish resources to users, you must assign them to application groups. When
## Workspaces
-A workspace is a logical grouping of application groups in Azure Virtual Desktop. Each Azure Virtual Desktop application group must be associated with a workspace for users to see the desktops and applications published to them.
+A [workspace](deploy-azure-virtual-desktop.md#create-a-workspace) is a logical grouping of application groups in Azure Virtual Desktop. Each Azure Virtual Desktop application group must be associated with a workspace for users to see the desktops and applications published to them.
## End users
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
This article assumes that you're familiar with:
- [Modifying](virtual-machine-scale-sets-upgrade-policy.md) Virtual Machine Scale Sets > [!CAUTION]
-> Application Health Extension expects to receive a consistent probe response at the configured port `tcp` or request path `http/https` in order to label a VM as *Healthy*. If no application is running on the VM, or you're unable to configure a probe response, your VM is going to show up as *Unhealthy*.
+> Application Health Extension expects to receive a consistent probe response at the configured port `tcp` or request path `http/https` in order to label a VM as *Healthy*. If no application is running on the VM, or you're unable to configure a probe response, your VM is going to show up as *Unhealthy* (Binary Health States) or *Unknown* (Rich Health States).
> [!NOTE] > Only one source of health monitoring can be used for a Virtual Machine Scale Set, either an Application Health Extension or a Health Probe. If you have both options enabled, you will need to remove one before using orchestration services like Instance Repairs or Automatic OS Upgrades.
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
```json { "name": "myHealthExtension",
+ "location": "<location>",
"properties": { "publisher": "Microsoft.ManagedServices", "type": "ApplicationHealthWindows",
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
```json { "name": "myHealthExtension",
+ "location": "<location>",
"properties": { "publisher": "Microsoft.ManagedServices", "type": "ApplicationHealthWindows",
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption.md
Title: Server-side encryption of Azure managed disks description: Azure Storage protects your data by encrypting it at rest before persisting it to Storage clusters. You can use customer-managed keys to manage encryption with your own keys, or you can rely on Microsoft-managed keys for the encryption of your managed disks. Previously updated : 12/13/2023 Last updated : 01/11/2024
To revoke access to customer-managed keys, see [Azure Key Vault PowerShell](/pow
#### Automatic key rotation of customer-managed keys
-If you're using customer-managed keys, you should enable automatic key rotation to the latest key version. Automatic key rotation helps ensure your keys are secure. A disk references a key via its disk encryption set. When you enable automatic rotation for a disk encryption set, the system will automatically update all managed disks, snapshots, and images referencing the disk encryption set to use the new version of the key within one hour. To learn how to enable customer-managed keys with automatic key rotation, see [Set up an Azure Key Vault and DiskEncryptionSet with automatic key rotation](windows/disks-enable-customer-managed-keys-powershell.md#set-up-an-azure-key-vault-and-diskencryptionset-optionally-with-automatic-key-rotation).
+Generally, if you're using customer-managed keys, you should enable automatic key rotation to the latest key version. Automatic key rotation helps ensure your keys are secure. A disk references a key via its disk encryption set. When you enable automatic rotation for a disk encryption set, the system will automatically update all managed disks, snapshots, and images referencing the disk encryption set to use the new version of the key within one hour. To learn how to enable customer-managed keys with automatic key rotation, see [Set up an Azure Key Vault and DiskEncryptionSet with automatic key rotation](windows/disks-enable-customer-managed-keys-powershell.md#set-up-an-azure-key-vault-and-diskencryptionset-optionally-with-automatic-key-rotation).
> [!NOTE] > Virtual Machines aren't rebooted during automatic key rotation.
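For illustration, automatic rotation can also be enabled on an existing disk encryption set from the Azure CLI. This is a sketch with placeholder names; confirm the `--enable-auto-key-rotation` flag against the current `az disk-encryption-set` reference:

```bash
# Opt an existing disk encryption set into automatic key rotation (placeholder names).
az disk-encryption-set update \
  --name myDiskEncryptionSet \
  --resource-group myResourceGroup \
  --enable-auto-key-rotation true
```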
virtual-machines Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/health-extension.md
+
+ Title: Use Application Health extension with Azure Virtual Machines
+description: Learn how to use the Application Health extension to monitor the health of your applications deployed on Azure virtual machines.
+++++ Last updated : 12/15/2023++
+# Using Application Health extension with Azure Virtual Machines
+Monitoring your application health provides an important signal for managing your VMs. Azure Virtual Machines provides support for [Automatic VM Guest Patching](../automatic-vm-guest-patching.md), which relies on health monitoring of the individual instances to safely update your VMs.
+
+This article describes how you can use the two types of Application Health extension, **Binary Health States** or **Rich Health States**, to monitor the health of your applications deployed on Azure virtual machines.
+
+Application health monitoring is also available on virtual machine scale sets and helps enable functionalities such as [Rolling Upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-policy.md), [Automatic OS-Image Upgrades](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md), and [Automatic Instance Repairs](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md). To experience these capabilities with the added benefits of scale, availability, and flexibility on scale sets, you can [attach your VM to an existing scale set](../../virtual-machine-scale-sets/virtual-machine-scale-sets-attach-detach-vm.md) or [create a new scale set](../../virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md).
+
+## Prerequisites
+
+This article assumes that you're familiar with [Azure virtual machine extensions](overview.md).
+
+> [!CAUTION]
+> Application Health Extension expects to receive a consistent probe response at the configured port `tcp` or request path `http/https` in order to label a VM as *Healthy*. If no application is running on the VM, or you're unable to configure a probe response, your VM is going to show up as *Unhealthy* (Binary Health States) or *Unknown* (Rich Health States).
+
+## When to use the Application Health extension
+Application Health Extension reports on application health from inside the Virtual Machine. The extension probes on a local application endpoint and updates the health status based on TCP/HTTP(S) responses received from the application. This health status is used by Azure to monitor and detect patching failures during [Automatic VM Guest Patching](../automatic-vm-guest-patching.md).
+
+The extension reports health from within a VM and can be used in situations where an external probe such as the [Azure Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md) can't be used.
+
+Application health is a customer-provided signal on the status of your application running inside the VM. Application health is different from [resource health](../../service-health/resource-health-overview.md), which is a platform-provided signal used to report service-level events impacting the performance of your VM.
+
+## Binary versus Rich Health States
+
+The Application Health extension has two options available: **Binary Health States** and **Rich Health States**. The following table highlights some key differences between the two options. See the end of this section for general recommendations.
+
+| Features | Binary Health States | Rich Health States |
+| -- | -- | |
+| Available Health States | Two available states: *Healthy*, *Unhealthy* | Four available states: *Healthy*, *Unhealthy*, *Initializing*, *Unknown*<sup>1</sup> |
+| Sending Health Signals | Health signals are sent through HTTP/HTTPS response codes or TCP connections. | Health signals on HTTP/HTTPS protocol are sent through the probe response code and response body. Health signals through TCP protocol remain unchanged from Binary Health States. |
+| Identifying *Unhealthy* Instances | Instances automatically fall into *Unhealthy* state if a *Healthy* signal isn't received from the application. An *Unhealthy* instance can indicate either an issue with the extension configuration (for example, unreachable endpoint) or an issue with the application (for example, non-200 status code). | Instances only go into an *Unhealthy* state if the application emits an *Unhealthy* probe response. Users are responsible for implementing custom logic to identify and flag instances with *Unhealthy* applications<sup>2</sup>. Instances with incorrect extension settings (for example, unreachable endpoint) or invalid health probe responses will fall under the *Unknown* state<sup>2</sup>. |
+| *Initializing* state for newly created instances | *Initializing* state isn't available. Newly created instances may take some time before settling into a steady state. | *Initializing* state allows newly created instances to settle into a steady Health State before surfacing the health state as _Healthy_, _Unhealthy_, or _Unknown_. |
+| HTTP/HTTPS protocol | Supported | Supported |
+| TCP protocol | Supported | Limited Support – *Unknown* state is unavailable on TCP protocol. See [Rich Health States protocol table](#rich-health-states) for Health State behaviors on TCP. |
+
+<sup>1</sup> The *Unknown* state is unavailable on TCP protocol.
+<sup>2</sup> Only applicable for HTTP/HTTPS protocol. TCP protocol follows the same process of identifying *Unhealthy* instances as in Binary Health States.
+
+Use **Binary Health States** if:
+- You're not interested in configuring custom logic to identify and flag an unhealthy instance
+- You don't require an *initializing* grace period for newly created instances
+
+Use **Rich Health States** if:
+- You send health signals through HTTP/HTTPS protocol and can submit health information through the probe response body
+- You would like to use custom logic to identify and mark unhealthy instances
+- You would like to set an *initializing* grace period allowing newly created instances to settle into a steady health state
+
+## Binary Health States
+
+Binary Health State reporting contains two Health States, *Healthy* and *Unhealthy*. The following tables provide a brief description for how the Health States are configured.
+
+**HTTP/HTTPS Protocol**
+
+| Protocol | Health State | Description |
+| -- | | -- |
+| http/https | Healthy | To send a *Healthy* signal, the application is expected to return a 200 response code. |
+| http/https | Unhealthy | The instance is marked as *Unhealthy* if a 200 response code isn't received from the application. |
+
+**TCP Protocol**
+
+| Protocol | Health State | Description |
+| -- | | -- |
+| TCP | Healthy | To send a *Healthy* signal, a successful handshake must be made with the provided application endpoint. |
+| TCP | Unhealthy | The instance is marked as *Unhealthy* if a failed or incomplete handshake occurred with the provided application endpoint. |
+
+Some common scenarios that result in an *Unhealthy* state include:
+- When the application endpoint returns a non-200 status code
+- When there's no application endpoint configured inside the virtual machine to provide application health status
+- When the application endpoint is incorrectly configured
+- When the application endpoint isn't reachable
+
+## Rich Health States
+
+Rich Health States reporting contains four Health States, *Initializing*, *Healthy*, *Unhealthy*, and *Unknown*. The following tables provide a brief description for how each Health State is configured.
+
+**HTTP/HTTPS Protocol**
+
+| Protocol | Health State | Description |
+| -- | | -- |
+| http/https | Healthy | To send a *Healthy* signal, the application is expected to return a probe response with: **Probe Response Code**: Status 2xx, **Probe Response Body**: `{"ApplicationHealthState": "Healthy"}` |
+| http/https | Unhealthy | To send an *Unhealthy* signal, the application is expected to return a probe response with: **Probe Response Code**: Status 2xx, **Probe Response Body**: `{"ApplicationHealthState": "Unhealthy"}` |
+| http/https | Initializing | The instance automatically enters an *Initializing* state at extension start time. For more information, see [Initializing state](#initializing-state). |
+| http/https | Unknown | An *Unknown* state may occur in the following scenarios: when a non-2xx status code is returned by the application, when the probe request times out, when the application endpoint is unreachable or incorrectly configured, when a missing or invalid value is provided for `ApplicationHealthState` in the response body, or when the grace period expires. For more information, see [Unknown state](#unknown-state). |
+
+**TCP Protocol**
+
+| Protocol | Health State | Description |
+| -- | | -- |
+| TCP | Healthy | To send a *Healthy* signal, a successful handshake must be made with the provided application endpoint. |
+| TCP | Unhealthy | The instance is marked as *Unhealthy* if a failed or incomplete handshake occurred with the provided application endpoint. |
+| TCP | Initializing | The instance automatically enters an *Initializing* state at extension start time. For more information, see [Initializing state](#initializing-state). |
+
+## Initializing state
+
+This state only applies to Rich Health States. The *Initializing* state only occurs once at extension start time and can be configured by the extension settings `gracePeriod` and `numberOfProbes`.
+
+At extension startup, the application health remains in the *Initializing* state until one of two scenarios occurs:
+- The same Health State (*Healthy* or *Unhealthy*) is reported a consecutive number of times, as configured through `numberOfProbes`
+- The `gracePeriod` expires
+
+If the same Health State (*Healthy* or *Unhealthy*) is reported consecutively, the application health transitions out of the *Initializing* state and into the reported Health State (*Healthy* or *Unhealthy*).
+
+### Example
+
+If `numberOfProbes` = 3, that would mean:
+- To transition from *Initializing* to *Healthy* state: Application health extension must receive three consecutive *Healthy* signals via HTTP/HTTPS or TCP protocol
+- To transition from *Initializing* to *Unhealthy* state: Application health extension must receive three consecutive *Unhealthy* signals via HTTP/HTTPS or TCP protocol
+
+If the `gracePeriod` expires before a consecutive health status is reported by the application, the instance health is determined as follows:
+- HTTP/HTTPS protocol: The application health transitions from *Initializing* to *Unknown*
+- TCP protocol: The application health transitions from *Initializing* to *Unhealthy*
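+
+Putting the example above into extension settings, the following Rich Health States (version 2.0) fragment is a sketch that probes every 5 seconds and requires three consecutive identical signals, or waits out the 600-second grace period, before leaving the *Initializing* state. The `/healthEndpoint` path and port 80 are reused from the illustrative deployment examples later in this article.
+
+```json
+{
+    "protocol": "http",
+    "port": 80,
+    "requestPath": "/healthEndpoint",
+    "intervalInSeconds": 5,
+    "numberOfProbes": 3,
+    "gracePeriod": 600
+}
+```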
+
+## Unknown state
+
+The *Unknown* state only applies to Rich Health States. This state is only reported for `http` or `https` probes and occurs in the following scenarios:
+- When a non-2xx status code is returned by the application
+- When the probe request times out
+- When the application endpoint is unreachable or incorrectly configured
+- When a missing or invalid value is provided for `ApplicationHealthState` in the response body
+- When the grace period expires
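+
+For example, a probe response body with a value other than `Healthy` or `Unhealthy` (the value `Busy` below is purely illustrative) counts as an invalid `ApplicationHealthState` and causes the instance to be reported as *Unknown*:
+
+```json
+{
+    "ApplicationHealthState": "Busy"
+}
+```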
+
+## Extension schema for Binary Health States
+
+The following JSON shows the schema for the Application Health extension. At a minimum, the extension requires a `tcp`, `http`, or `https` probe request with an associated port (for `tcp`) or request path (for `http` and `https`).
+
+```json
+{
+ "type": "extensions",
+ "name": "HealthExtension",
+ "apiVersion": "2018-10-01",
+ "location": "<location>",
+ "properties": {
+ "publisher": "Microsoft.ManagedServices",
+ "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "1.0",
+ "settings": {
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>",
+ "intervalInSeconds": 5,
+ "numberOfProbes": 1
+ }
+ }
+}
+```
+
+### Property values
+
+| Name | Value / Example | Data Type |
+| - | | |
+| apiVersion | `2018-10-01` or above | date |
+| publisher | `Microsoft.ManagedServices` | string |
+| type | `ApplicationHealthLinux` (Linux), `ApplicationHealthWindows` (Windows) | string |
+| typeHandlerVersion | `1.0` | string |
+
+### Settings
+
+| Name | Value / Example | Data Type |
+| - | | |
+| protocol | `http` or `https` or `tcp` | string |
+| port | Optional when protocol is `http` or `https`, mandatory when protocol is `tcp` | int |
+| requestPath | Mandatory when protocol is `http` or `https`, not allowed when protocol is `tcp` | string |
+| intervalInSeconds | Optional, default is 5 seconds. This setting is the interval between each health probe. For example, if intervalInSeconds == 5, a probe is sent to the local application endpoint once every 5 seconds. | int |
+| numberOfProbes | Optional, default is 1. This setting is the number of consecutive probes required for the health status to change. For example, if numberOfProbes == 3, you need 3 consecutive "Healthy" signals to change the health status from "Unhealthy" to "Healthy". The same requirement applies to change the health status to "Unhealthy". | int |
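+
+For example, the following settings sketch (port and probe counts are illustrative) configures a TCP probe, where `port` is mandatory and `requestPath` is omitted because it isn't allowed for the `tcp` protocol:
+
+```json
+{
+    "protocol": "tcp",
+    "port": 8080,
+    "intervalInSeconds": 5,
+    "numberOfProbes": 3
+}
+```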
+
+## Extension schema for Rich Health States
+
+The following JSON shows the schema for the Rich Health States extension. At a minimum, the extension requires an `http` or `https` probe request with an associated request path. TCP probes (with an associated port) are also supported, but they can't set the `ApplicationHealthState` through the probe response body and don't have access to the *Unknown* state.
+
+```json
+{
+ "type": "extensions",
+ "name": "HealthExtension",
+ "apiVersion": "2018-10-01",
+ "location": "<location>",
+ "properties": {
+ "publisher": "Microsoft.ManagedServices",
+ "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "2.0",
+ "settings": {
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>",
+ "intervalInSeconds": 5,
+ "numberOfProbes": 1,
+ "gracePeriod": 600
+ }
+ }
+}
+```
+
+### Property values
+
+| Name | Value / Example | Data Type |
+| - | | |
+| apiVersion | `2018-10-01` or above | date |
+| publisher | `Microsoft.ManagedServices` | string |
+| type | `ApplicationHealthLinux` (Linux), `ApplicationHealthWindows` (Windows) | string |
+| typeHandlerVersion | `2.0` | string |
+
+### Settings
+
+| Name | Value / Example | Data Type |
+| - | | |
+| protocol | `http` or `https` or `tcp` | string |
+| port | Optional when protocol is `http` or `https`, mandatory when protocol is `tcp` | int |
+| requestPath | Mandatory when protocol is `http` or `https`, not allowed when protocol is `tcp` | string |
+| intervalInSeconds | Optional, default is 5 seconds. This setting is the interval between each health probe. For example, if intervalInSeconds == 5, a probe is sent to the local application endpoint once every 5 seconds. | int |
+| numberOfProbes | Optional, default is 1. This setting is the number of consecutive probes required for the health status to change. For example, if numberOfProbes == 3, you need 3 consecutive "Healthy" signals to change the health status from "Unhealthy"/"Unknown" to "Healthy". The same requirement applies to change the health status to "Unhealthy" or "Unknown". | int |
+| gracePeriod | Optional, default = `intervalInSeconds` * `numberOfProbes`; maximum grace period is 7200 seconds | int |
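+
+To illustrate the default `gracePeriod` calculation from the table above, the following settings sketch (protocol, port, and path are illustrative) omits `gracePeriod`; with `intervalInSeconds` of 10 and `numberOfProbes` of 3, the grace period would default to 10 * 3 = 30 seconds:
+
+```json
+{
+    "protocol": "https",
+    "port": 443,
+    "requestPath": "/healthEndpoint",
+    "intervalInSeconds": 10,
+    "numberOfProbes": 3
+}
+```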
+
+## Deploy the Application Health extension
+There are multiple ways of deploying the Application Health extension to your VMs as detailed in the following examples.
+
+### Binary Health States
+
+# [REST API](#tab/rest-api)
+
+The following example adds the Application Health extension named *myHealthExtension* to a Windows-based virtual machine.
+
+You can also use this example to change an existing extension from Rich Health States to Binary Health by making a PATCH call instead of a PUT.
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/extensions/myHealthExtension?api-version=2018-10-01`
+```
+
+```json
+{
+ "name": "myHealthExtension",
+ "location": "<location>",
+ "properties": {
+ "publisher": "Microsoft.ManagedServices",
+ "type": "ApplicationHealthWindows",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "1.0",
+ "settings": {
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>"
+ }
+ }
+}
+```
+Use `PATCH` to edit an already deployed extension.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Use the [Set-AzVmExtension](/powershell/module/az.compute/set-azvmextension) cmdlet to add or update the Application Health extension on your virtual machine.
+
+The following example adds the Application Health extension to a Windows-based virtual machine.
+
+You can also use this example to change an existing extension from Rich Health States to Binary Health.
+
+```azurepowershell-interactive
+# Define the Application Health extension properties
+$publicConfig = @{"protocol" = "http"; "port" = 80; "requestPath" = "/healthEndpoint"};
+
+# Add the Application Health extension to the virtual machine
+Set-AzVMExtension -Name "myHealthExtension" `
+ -ResourceGroupName "<myResourceGroup>" `
+ -VMName "<myVM>" `
+ -Publisher "Microsoft.ManagedServices" `
+ -ExtensionType "ApplicationHealthWindows" `
+ -TypeHandlerVersion "1.0" `
+ -Location "<location>" `
+ -Settings $publicConfig
+
+```
+# [Azure CLI 2.0](#tab/azure-cli)
+
+Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to add the Application Health extension to a virtual machine.
+
+The following example adds the Application Health extension to a Linux-based virtual machine.
+
+You can also use this example to change an existing extension from Rich Health States to Binary Health.
+
+```azurecli-interactive
+az vm extension set \
+ --name ApplicationHealthLinux \
+ --publisher Microsoft.ManagedServices \
+ --version 1.0 \
+ --resource-group <myResourceGroup> \
+ --vm-name <myVM> \
+ --settings ./extension.json
+```
+The extension.json file content.
+
+```json
+{
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>"
+}
+```
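+
+For instance, a filled-in extension.json for an HTTP probe (using the same placeholder values as the PowerShell example above) might look like this:
+
+```json
+{
+    "protocol": "http",
+    "port": 80,
+    "requestPath": "/healthEndpoint"
+}
+```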
+
+# [Azure portal](#tab/azure-portal)
+
+The following example adds the Application Health extension to an existing virtual machine on [Azure portal](https://portal.azure.com).
+
+1. Navigate to your existing Virtual Machine
+2. On the left sidebar, go to the **Health monitoring** blade
+3. Click on **Enable application health monitoring**, select **Binary** for Health States. Configure your protocol, port, and more to set up the health probes.
+4. Click **Save** to save your settings
+
+---
+
+### Rich Health States
+
+# [REST API](#tab/rest-api)
+
+The following example adds the **Application Health - Rich States** extension (with name myHealthExtension) to a Windows-based virtual machine.
+
+You can also use this example to upgrade an existing extension from Binary to Rich Health States by making a PATCH call instead of a PUT.
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/extensions/myHealthExtension?api-version=2018-10-01`
+```
+
+```json
+{
+ "name": "myHealthExtension",
+ "location": "<location>",
+ "properties": {
+ "publisher": "Microsoft.ManagedServices",
+ "type": "ApplicationHealthWindows",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "2.0",
+ "settings": {
+ "requestPath": "</requestPath>",
+ "intervalInSeconds": <intervalInSeconds>,
+ "numberOfProbes": <numberOfProbes>,
+ "gracePeriod": <gracePeriod>
+ }
+ }
+}
+```
+Use `PATCH` to edit an already deployed extension.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Use the [Set-AzVmExtension](/powershell/module/az.compute/set-azvmextension) cmdlet to add or update the Application Health extension on your virtual machine.
+
+The following example adds the **Application Health - Rich States** extension to a Windows-based virtual machine.
+
+You can also use this example to upgrade an existing extension from Binary to Rich Health States.
+
+```azurepowershell-interactive
+# Define the Application Health extension properties
+$publicConfig = @{"protocol" = "http"; "port" = 80; "requestPath" = "/healthEndpoint"; "gracePeriod" = 600};
+
+# Add the Application Health extension to the virtual machine
+Set-AzVMExtension -Name "myHealthExtension" `
+ -ResourceGroupName "<myResourceGroup>" `
+ -VMName "<myVM>" `
+ -Publisher "Microsoft.ManagedServices" `
+ -ExtensionType "ApplicationHealthWindows" `
+ -TypeHandlerVersion "2.0" `
+ -Location "<location>" `
+ -Settings $publicConfig
+
+```
+# [Azure CLI 2.0](#tab/azure-cli)
+
+Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to add the Application Health extension to a virtual machine.
+
+The following example adds the **Application Health - Rich States** extension to a Linux-based virtual machine.
+
+You can also use this example to upgrade an existing extension from Binary to Rich Health States.
+
+```azurecli-interactive
+az vm extension set \
+ --name ApplicationHealthLinux \
+ --publisher Microsoft.ManagedServices \
+ --version 2.0 \
+ --resource-group <myResourceGroup> \
+ --vm-name <myVM> \
+ --settings ./extension.json
+```
+The extension.json file content.
+
+```json
+{
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>",
+ "gracePeriod": <healthExtensionGracePeriod>
+}
+```
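+
+As a concrete sketch, an extension.json for the Rich Health States extension (reusing the values from the PowerShell example above) could look like the following:
+
+```json
+{
+    "protocol": "http",
+    "port": 80,
+    "requestPath": "/healthEndpoint",
+    "gracePeriod": 600
+}
+```
+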
+# [Azure portal](#tab/azure-portal)
+
+The following example adds the Application Health extension to an existing virtual machine on [Azure portal](https://portal.azure.com).
+
+1. Navigate to your existing Virtual Machine
+2. On the left sidebar, go to the **Health monitoring** blade
+3. Click on **Enable application health monitoring**, select **Rich (advanced)** for Health States. Configure your protocol, port, and more to set up the health probes.
+4. Click **Save** to save your settings
+
+---
+
+## Troubleshoot
+### View VMHealth
+
+# [REST API](#tab/rest-api)
+```
+GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM/instanceView?api-version=2023-07-01
+```
+Sample Response (see "vmHealth" object for the latest VM health status)
+```
+"vmHealth": {
+ "status": {
+ "code": "HealthState/unknown",
+ "level": "Warning",
+ "displayStatus": "The VM health is unknown",
+ "time": "2023-12-04T22:25:39+00:00"
+ }
+}
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+```azurepowershell-interactive
+Get-AzVM `
+ -ResourceGroupName "<rgName>" `
+ -Name "<vmName>" `
+ -Status
+```
+
+# [Azure CLI 2.0](#tab/azure-cli)
+```azurecli-interactive
+az vm get-instance-view --name <vmName> --resource-group <rgName>
+```
+
+# [Azure portal](#tab/azure-portal)
+
+1. Navigate to your existing Virtual Machine
+2. On the left sidebar, go to the **Overview** blade
+3. Your application health can be observed under the **Health State** field
+
+---
+
+### Extension execution output log
+Extension execution output is logged to files found in the following directories:
+
+```Windows
+C:\WindowsAzure\Logs\Plugins\Microsoft.ManagedServices.ApplicationHealthWindows\<version>\
+```
+
+```Linux
+/var/lib/waagent/Microsoft.ManagedServices.ApplicationHealthLinux-<extension_version>/status
+/var/log/azure/applicationhealth-extension
+```
+
+The logs also periodically capture the application health status.
+
virtual-machines Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md
This article describes the available sizes and options for the Azure virtual mac
| Type | Sizes | Description | ||-|-|
-| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5, DCasv5, DCadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
+| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5, DCasv5, DCadsv5, DCesv5, DCedsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. |
| [Compute optimized](sizes-compute.md) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. |
-| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2, ECasv5, ECadsv5 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
+| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2, ECasv5, ECadsv5, ECesv5, ECedsv5 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. |
| [Storage optimized](sizes-storage.md) | Lsv2, Lsv3, Lasv3 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. | | [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, NC A100 v4, ND, NDv2, NGads V620, NV, NVv3, NVv4, NDasrA100_v4, NDm_A100_v4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. | | [High performance compute](sizes-hpc.md) | HB, HBv2, HBv3, HBv4, HC, HX | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). |
virtual-network Subnet Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/subnet-extension.md
Workload migration to the public cloud requires careful planning and coordination. One of the key considerations can be the ability to retain your IP addresses, which can be important especially if your applications have an IP address dependency or you have compliance requirements to use specific IP addresses. Azure Virtual Network solves this problem for you by allowing you to create virtual networks and subnets using an IP address range of your choice.
-Migrations can get a bit challenging when the above requirement is coupled with an extra requirement to keep some applications on-premises. In such as a situation, you have to split the applications between Azure and on-premises, without renumbering the IP addresses on either side. Additionally, you have to allow the applications to communicate as if they are in the same network.
+Migrations can get a bit challenging when the above requirement is coupled with an extra requirement to keep some applications on-premises. In such a situation, you have to split the applications between Azure and on-premises, without renumbering the IP addresses on either side. Additionally, you have to allow the applications to communicate as if they are in the same network.
One solution to the above problem is subnet extension. Extending a network allows applications to talk over the same broadcast domain when they exist at different physical locations, removing the need to rearchitect your network topology.
In the above example, the Azure NVA and the on-premises NVA communicate and lear
In the next section, you'll find details on subnet extension solutions we've tested on Azure. ## Next steps
-[Extend your on-premises subnets into Azure using Azure Extended Network](/windows-server/manage/windows-admin-center/azure/azure-extended-network).
+[Extend your on-premises subnets into Azure using Azure Extended Network](/windows-server/manage/windows-admin-center/azure/azure-extended-network).
virtual-wan Monitoring Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md
Previously updated : 11/22/2023 Last updated : 01/12/2024 # Monitoring Azure Virtual WAN - Best practices
This section of the article focuses on metric-based alerts. There are no diagnos
|Create alert rule for Bits Received Per Second.|**Bits Received per Second** monitors the total amount of traffic received by the gateway from the MSEEs.<br><br>You might want to be alerted if the amount of traffic received by the gateway is at risk of hitting its maximum throughput, as this can lead to performance and connectivity issues. This allows you to act proactively by investigating the root cause of the increased gateway utilization or increasing the gateway's maximum allowed throughput.<br><br>Choose the **Average** aggregation type and a **Threshold** value close to the maximum throughput provisioned for the gateway when configuring the alert rule.<br><br>Additionally, we recommend that you set an alert when the number of **Bits Received per Second** is near zero, as it might indicate an issue with the gateway or the MSEEs.<br><br>The maximum throughput of an ExpressRoute gateway is determined by number of scale units provisioned. To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| |Create alert rule for CPU overutilization.|When using ExpressRoute gateways, it's important to monitor the CPU utilization. Prolonged high utilization can affect performance and connectivity.<br><br>Use the **CPU utilization** metric to monitor this and create an alert for whenever the CPU utilization is **greater than** 80%, so you can investigate the root cause and ultimately increase the number of scale units, if needed. Choose the **Average** aggregation type when configuring the alert rule.<br><br>To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| |Create alert rule for packets received per second.|**Packets per second** monitors the number of inbound packets traversing the Virtual WAN ExpressRoute gateway.<br><br>You might want to be alerted if the number of **packets per second** is nearing the limit allowed for the number of scale units configured on the gateway.<br><br>Choose the Average aggregation type when configuring the alert rule. Choose a **Threshold** value close to the maximum number of **packets per second** allowed based on the number of scale units of the gateway. To learn more about ExpressRoute performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).<br><br>Additionally, we recommend that you set an alert when the number of **Packets per second** is near zero, as it might indicate an issue with the gateway or MSEEs.|
-|Create alert rule for number of routes advertised to peer. |**Count of Routes Advertised to Peers** monitors the number of routes advertised from the ExpressRoute gateway to the virtual hub router and to the Microsoft Enterprise Edge Devices.<br><br>We recommend that you configure an alert only on the two BGP peers displayed as **ExpressRoute Device** to identify when the count of advertised routes approaches the documented limit of **1000**. For example, configure the alert to be triggered when the number of routes advertised is **greater than 950**.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero** in order to proactively detect any connectivity issues.<br><br>To add these alerts, select the **Count of Routes Advertised to Peers** metric, and then select the **Add filter** option and the **ExpressRoute** devices.|
-|Create alert rule for number of routes learned from peer.|**Count of Routes Learned from Peers** monitors the number of routes the ExpressRoute gateway learns from the virtual hub router and from the Microsoft Enterprise Edge Device.<br><br>We recommend that you configure an alert **only** on the two BGP peers displayed as **ExpressRoute Device** to identify when the count of learned routes approaches the [documented limit](../expressroute/expressroute-faqs.md#are-there-limits-on-the-number-of-routes-i-can-advertise) of 4000 for Standard SKU and 10,000 for Premium SKU circuits.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero**. This can help in detecting when your on-premises has stopped advertising routes.
+|Create alert rule for number of routes advertised to peer. |**Count of Routes Advertised to Peers** monitors the number of routes advertised from the ExpressRoute gateway to the virtual hub router and to the Microsoft Enterprise Edge Devices.<br><br>We recommend that you **add a filter** to **only** select the two BGP peers displayed as **ExpressRoute Device** and create an alert to identify when the count of advertised routes approaches the documented limit of **1000**. For example, configure the alert to be triggered when the number of routes advertised is **greater than 950**.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero** in order to proactively detect any connectivity issues.<br><br>To add these alerts, select the **Count of Routes Advertised to Peers** metric, and then select the **Add filter** option and the **ExpressRoute** devices.|
+|Create alert rule for number of routes learned from peer.|**Count of Routes Learned from Peers** monitors the number of routes the ExpressRoute gateway learns from the virtual hub router and from the Microsoft Enterprise Edge Device.<br><br>We recommend that you add a filter to **only** select the two BGP peers displayed as **ExpressRoute Device** and create an alert to identify when the count of learned routes approaches the [documented limit](../expressroute/expressroute-faqs.md#are-there-limits-on-the-number-of-routes-i-can-advertise) of 4000 for Standard SKU and 10,000 for Premium SKU circuits.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero**. This can help in detecting when your on-premises has stopped advertising routes.
|Create alert rule for high frequency in route changes.|**Frequency of Routes changes** shows the change frequency of routes being learned and advertised from and to peers, including other types of branches such as site-to-site and point-to-site VPN. This metric provides visibility when a new branch or more circuits are being connected/disconnected.<br><br>This metric is a useful tool when identifying issues with BGP advertisements, such as route flapping. We recommend that you set an alert **if** the environment is **static** and BGP changes aren't expected. Select a **threshold value** that is **greater than 1** and an **Aggregation Granularity** of 15 minutes to monitor BGP behavior consistently.<br><br>If the environment is dynamic and BGP changes are frequently expected, you might choose not to set an alert, in order to avoid false positives. However, you can still consider this metric for observability of your network.| ## Virtual hub
This section of the article focuses on metric-based alerts. Azure Firewall offer
* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources. * See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for more details about **Azure Monitor Metrics**. * See [All resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md) for a list of all supported metrics.
-* See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting when creating diagnostic settings via the Azure portal, CLI, PowerShell, etc.
+* See [Create diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md) for more information and troubleshooting when creating diagnostic settings via the Azure portal, CLI, PowerShell, etc.