Updates from: 07/11/2024 01:13:28
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
The following Embeddings models are available with [Azure Government](/azure/azu
### Assistants (Preview)
-For Assistants you need a combination of a supported model, and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md).
+For Assistants, you need a combination of a supported model and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio, and Azure OpenAI Studio. The following table is for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see [provisioned throughput](./provisioned-throughput.md). The listed models and regions can be used with both Assistants v1 and v2.
| Region | `gpt-35-turbo (0613)` | `gpt-35-turbo (1106)` | `fine tuned gpt-3.5-turbo-0125` | `gpt-4 (0613)` | `gpt-4 (1106)` | `gpt-4 (0125)` | `gpt-4o (2024-05-13)` |
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
So far, you have set up each resource to work independently. Next you need to
| Role | Assignee | Resource | Description |
|--|--|--|--|
| `Search Index Data Reader` | Azure OpenAI | Azure AI Search | Inference service queries the data from the index. |
| `Search Service Contributor` | Azure OpenAI | Azure AI Search | Inference service queries the index schema for auto fields mapping. Data ingestion service creates index, data sources, skill set, indexer, and queries the indexer status. |
-| `Storage Blob Data Contributor` | Azure OpenAI | Storage Account | Reads from the input container, and writes the preprocess result to the output container. |
-| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Custom skill |
-| `Storage Blob Data Contributor` | Azure AI Search | Storage Account | Reads blob and writes knowledge store. |
+| `Storage Blob Data Contributor` | Azure OpenAI | Storage Account | Reads from the input container, and writes the preprocessed result to the output container. |
+| `Cognitive Services OpenAI Contributor` | Azure AI Search | Azure OpenAI | Custom skill. |
+| `Storage Blob Data Reader` | Azure AI Search | Storage Account | Reads document blobs and chunk blobs. |
+| `Cognitive Services OpenAI User` | Web app | Azure OpenAI | Inference. |
In the above table, `Assignee` means the system-assigned managed identity of that resource.
ai-services How To Configure Azure Ad Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-azure-ad-auth.md
The token context must be set to "https://cognitiveservices.azure.com/.default".
::: zone-end

::: zone pivot="programming-language-python"
-To get a Microsoft Entra access token in Java, use the [Azure Identity Client Library](/python/api/overview/azure/identity-readme).
+To get a Microsoft Entra access token in Python, use the [Azure Identity Client Library](/python/api/overview/azure/identity-readme).
Here's an example of using Azure Identity to get a Microsoft Entra access token from an interactive browser:

```Python
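# A minimal sketch using the azure-identity package (install with `pip install azure-identity`);
# the scope below is the token context called out earlier in this article.
from azure.identity import InteractiveBrowserCredential

credential = InteractiveBrowserCredential()
aad_token = credential.get_token("https://cognitiveservices.azure.com/.default")
print(aad_token.token)
```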
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
In the `SpeechRecognizer`, you can specify the language to learn or practice imp
::: zone pivot="programming-language-csharp" ```csharp
-var recognizer = new SpeechRecognizer(config, "en-US", audioInput);
+var recognizer = new SpeechRecognizer(speechConfig, "en-US", audioConfig);
```

::: zone-end
var recognizer = new SpeechRecognizer(config, "en-US", audioInput);
::: zone pivot="programming-language-cpp" ```cpp
-auto recognizer = SpeechRecognizer::FromConfig(config, "en-US", audioConfig);
+auto recognizer = SpeechRecognizer::FromConfig(speechConfig, "en-US", audioConfig);
```

::: zone-end
auto recognizer = SpeechRecognizer::FromConfig(config, "en-US", audioConfig);
::: zone pivot="programming-language-java" ```Java
-SpeechRecognizer recognizer = new SpeechRecognizer(config, "en-US", audioInput);
+SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, "en-US", audioConfig);
```

::: zone-end
speechConfig.speechRecognitionLanguage = "en-US";
::: zone pivot="programming-language-objectivec" ```ObjectiveC
-SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"en-US" audioConfiguration:pronAudioSource];
+SPXSpeechRecognizer* recognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"en-US" audioConfiguration:audioConfig];
```

::: zone-end
SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
::: zone pivot="programming-language-swift" ```swift
-let reco = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, language: "en-US", audioConfiguration: audioInput)
+let recognizer = try! SPXSpeechRecognizer(speechConfiguration: speechConfig, language: "en-US", audioConfiguration: audioConfig)
```

::: zone-end
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
The following table includes parameters you can use to define a custom storage c
|skuName | Specify an Azure storage account type (alias: `storageAccountType`). | `Standard_LRS`, `Premium_LRS`, `Standard_GRS`, `Standard_RAGRS` | No | `Standard_LRS`|
|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
-|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
+|storageAccount | Specify an Azure storage account name.| storageAccountName | - No | When a specific storage account name is not provided, the driver will look for a suitable storage account that matches the account settings within the same resource group. If it fails to find a matching storage account, it will create a new one. However, if a storage account name is specified, the storage account must already exist. |
|networkEndpointType| Specify network endpoint type for the storage account created by driver. If privateEndpoint is specified, a [private endpoint][storage-account-private-endpoint] is created for the storage account. For other cases, a service endpoint will be created for NFS protocol.<sup>1</sup> | `privateEndpoint` | No | For an AKS cluster, add the AKS cluster name to the Contributor role in the resource group hosting the VNET.|
|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`|
|containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. |
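As an illustration of how these parameters fit together, a custom storage class might look like the following sketch (the class name and parameter values are hypothetical choices, not defaults):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blob-nfs-premium         # hypothetical name
provisioner: blob.csi.azure.com
parameters:
  skuName: Premium_LRS           # storage account type
  protocol: nfs                  # NFSv3 mount instead of the default blobfuse
reclaimPolicy: Delete
volumeBindingMode: Immediate
```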
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azur
Previously updated : 06/28/2024 Last updated : 07/09/2024
The following table includes parameters you can use to define a custom storage c
|shareName | Specify Azure file share name. | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No | |
|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
+|storageAccount | Specify an Azure storage account name.| storageAccountName | - No | When a specific storage account name is not provided, the driver will look for a suitable storage account that matches the account settings within the same resource group. If it fails to find a matching storage account, it will create a new one. However, if a storage account name is specified, the storage account must already exist. |
|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
|tags | [Tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
| | **Following parameters are only for SMB protocol** | | |
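For illustration, a custom Azure Files storage class using a few of these parameters might look like this sketch (the name, tags, and mount options are hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-custom         # hypothetical name
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS           # Premium implies a 100 GB minimum share size
  shareNamePrefix: app
  tags: costcenter=1234          # tag format: 'foo=aaa,bar=bbb'
mountOptions:
  - mfsymlinks                   # SMB mount options pass through to the node
reclaimPolicy: Delete
```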
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
- Title: Azure Kubernetes Services (AKS) core concepts
-description: Learn about the core components that make up workloads and clusters in Azure Kubernetes Service (AKS).
-- Previously updated : 04/16/2024----
-# Core Kubernetes concepts for Azure Kubernetes Service (AKS)
-
-This article describes core concepts of Azure Kubernetes Service (AKS), a managed Kubernetes service that you can use to deploy and operate containerized applications at scale on Azure. It helps you learn about the infrastructure components of Kubernetes and obtain a deeper understanding of how Kubernetes works in AKS.
-
-## What is Kubernetes?
-
-Kubernetes is a rapidly evolving platform that manages container-based applications and their associated networking and storage components. Kubernetes focuses on the application workloads and not the underlying infrastructure components. Kubernetes provides a declarative approach to deployments, backed by a robust set of APIs for management operations.
-
-You can build and run modern, portable, microservices-based applications using Kubernetes to orchestrate and manage the availability of the application components. Kubernetes supports both stateless and stateful applications.
-
-As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus. Existing continuous integration and continuous delivery (CI/CD) tools can integrate with Kubernetes to schedule and deploy releases.
-
-AKS provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks. The Azure platform manages the AKS control plane, and you only pay for the AKS nodes that run your applications.
-
-## Kubernetes cluster architecture
-
-A Kubernetes cluster is divided into two components:
-
-* The ***control plane***, which provides the core Kubernetes services and orchestration of application workloads, and
-* ***Nodes***, which run your application workloads.
-
-![Kubernetes control plane and node components](media/concepts-clusters-workloads/control-plane-and-nodes.png)
-
-## Control plane
-
-When you create an AKS cluster, the Azure platform automatically creates and configures its associated control plane. This single-tenant control plane is provided at no cost as a managed Azure resource abstracted from the user. You only pay for the nodes attached to the AKS cluster. The control plane and its resources reside only in the region where you created the cluster.
-
-The control plane includes the following core Kubernetes components:
-
-| Component | Description |
-| -- | - |
-| *kube-apiserver* | The API server exposes the underlying Kubernetes APIs and provides the interaction for management tools, such as `kubectl` or the Kubernetes dashboard. |
-| *etcd* | etcd is a highly available key-value store within Kubernetes that helps maintain the state of your Kubernetes cluster and configuration. |
-| *kube-scheduler* | When you create or scale applications, the scheduler determines what nodes can run the workload and starts the workload on the identified nodes. |
-| *kube-controller-manager* | The controller manager oversees a number of smaller controllers that perform actions such as replicating pods and handling node operations. |
-
-Keep in mind that you can't directly access the control plane. Kubernetes control plane and node upgrades are orchestrated through the Azure CLI or Azure portal. To troubleshoot possible issues, you can review the control plane logs using Azure Monitor.
-
-> [!NOTE]
-> If you want to configure or directly access a control plane, you can deploy a self-managed Kubernetes cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
-
-## Nodes
-
-To run your applications and supporting services, you need a Kubernetes *node*. Each AKS cluster has at least one node, an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime.
-
-Nodes include the following core Kubernetes components:
-
-| Component | Description |
-| -- | - |
-| `kubelet` | The Kubernetes agent that processes the orchestration requests from the control plane along with scheduling and running the requested containers. |
-| *kube-proxy* | The proxy handles virtual networking on each node, routing network traffic and managing IP addressing for services and pods. |
-| *container runtime* | The container runtime allows containerized applications to run and interact with other resources, such as the virtual network or storage. For more information, see [Container runtime configuration](#container-runtime-configuration). |
-
-![Azure virtual machine and supporting resources for a Kubernetes node](media/concepts-clusters-workloads/aks-node-resource-interactions.png)
-
-The Azure VM size for your nodes defines CPUs, memory, size, and the storage type available, such as high-performance SSD or regular HDD. Plan the node size around whether your applications might require large amounts of CPU and memory or high-performance storage. Scale out the number of nodes in your AKS cluster to meet demand. For more information on scaling, see [Scaling options for applications in AKS](concepts-scale.md).
-
-In AKS, the VM image for your cluster's nodes is based on Ubuntu Linux, [Azure Linux](use-azure-linux.md), or Windows Server 2022. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts, including [Azure reservations][reservation-discounts], are automatically applied.
-
-For managed disks, default disk size and performance are assigned according to the selected VM SKU and vCPU count. For more information, see [Default OS disk sizing](cluster-configuration.md#default-os-disk-sizing).
-
-> [!NOTE]
-> If you need advanced configuration and control on your Kubernetes node container runtime and OS, you can deploy a self-managed cluster using [Cluster API Provider Azure][cluster-api-provider-azure].
-
-### OS configuration
-
-AKS supports Ubuntu 22.04 and Azure Linux 2.0 as the node operating system (OS) for clusters with Kubernetes 1.25 and higher. Ubuntu 18.04 can also be specified at node pool creation for Kubernetes versions 1.24 and below.
-
-AKS supports Windows Server 2022 as the default OS for Windows node pools in clusters with Kubernetes 1.25 and higher. Windows Server 2019 can also be specified at node pool creation for Kubernetes versions 1.32 and below. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life and isn't supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
-
-### Container runtime configuration
-
-A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or OS-specific functionality to run containers on Linux or Windows. For Linux node pools, `containerd` is used on Kubernetes version 1.19 and higher. For Windows Server 2019 and 2022 node pools, `containerd` is generally available and is the only runtime option on Kubernetes version 1.23 and higher. As of May 2023, Docker is retired and no longer supported. For more information about this retirement, see the [AKS release notes][aks-release-notes].
-
-[`Containerd`](https://containerd.io/) is an [OCI](https://opencontainers.org/) (Open Container Initiative) compliant core container runtime that provides the minimum set of required functionality to execute containers and manage images on a node. With `containerd`-based nodes and node pools, the kubelet talks directly to `containerd` using the CRI (container runtime interface) plugin, removing extra hops in the data flow when compared to the Docker CRI implementation. As such, you see better pod startup latency and less resource (CPU and memory) usage.
-
-`Containerd` works on every GA version of Kubernetes in AKS starting from v1.19, and supports all Kubernetes and AKS features.
-
-> [!IMPORTANT]
-> Clusters with Linux node pools created on Kubernetes v1.19 or higher default to the `containerd` container runtime. Clusters with node pools on earlier supported Kubernetes versions receive Docker for their container runtime. Linux node pools will be updated to `containerd` once the node pool Kubernetes version is updated to a version that supports `containerd`.
->
-> `containerd` is generally available for clusters with Windows Server 2019 and 2022 node pools and is the only container runtime option for Kubernetes v1.23 and higher. You can continue using Docker node pools and clusters on versions earlier than 1.23, but Docker is no longer supported as of May 2023. For more information, see [Add a Windows Server node pool with `containerd`](./create-node-pools.md#windows-server-node-pools-with-containerd).
->
-> We highly recommend testing your workloads on AKS node pools with `containerd` before using clusters with a Kubernetes version that supports `containerd` for your node pools.
-
-#### `containerd` limitations/differences
-
-* For `containerd`, we recommend using [`crictl`](https://kubernetes.io/docs/tasks/debug-application-cluster/crictl) as a replacement for the Docker CLI for *troubleshooting pods, containers, and container images on Kubernetes nodes*. For more information on `crictl`, see [general usage][general-usage] and [client configuration options][client-config-options].
- * `Containerd` doesn't provide the complete functionality of the Docker CLI. It's available for troubleshooting only.
- * `crictl` offers a more Kubernetes-friendly view of containers, with concepts like pods, etc. being present.
-
-* `Containerd` sets up logging using the standardized `cri` logging format. Your logging solution needs to support the `cri` logging format, like [Azure Monitor for Containers](../azure-monitor/containers/container-insights-enable-new-cluster.md).
-* You can no longer access the Docker engine, `/var/run/docker.sock`, or use Docker-in-Docker (DinD).
- * If you currently extract application logs or monitoring data from Docker engine, use [Container Insights](../azure-monitor/containers/container-insights-enable-new-cluster.md) instead. AKS doesn't support running any out of band commands on the agent nodes that could cause instability.
- * We don't recommend building images or directly using the Docker engine. Kubernetes isn't fully aware of those consumed resources, and those methods present numerous issues as described [here](https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/) and [here](https://securityboulevard.com/2018/05/escaping-the-whale-things-you-probably-shouldnt-do-with-docker-part-1/).
-
-* When building images, you can continue to use your current Docker build workflow as normal, unless you're building images inside your AKS cluster. In this case, consider switching to the recommended approach for building images using [ACR Tasks](../container-registry/container-registry-quickstart-task-cli.md), or a more secure in-cluster option like [Docker Buildx](https://github.com/docker/buildx).
-
-### Resource reservations
-
-AKS uses node resources to help the node function as part of your cluster. This usage can create a discrepancy between your node's total resources and the allocatable resources in AKS. Remember this information when setting requests and limits for user deployed pods.
-
-To find a node's allocatable resource, you can use the `kubectl describe node` command:
-
-```kubectl
-kubectl describe node [NODE_NAME]
-```
-
-To maintain node performance and functionality, AKS reserves two types of resources, CPU and memory, on each node. As a node grows larger in resources, the resource reservation grows due to a higher need for management of user-deployed pods. Keep in mind that the resource reservations can't be changed.
-
-> [!NOTE]
-> Using AKS add-ons, such as Container Insights (OMS), consumes extra node resources.
-
-#### CPU
-
-Reserved CPU is dependent on node type and cluster configuration, which may cause less allocatable CPU due to running extra features. The following table shows CPU reservation in millicores:
-
-| CPU cores on host | 1 | 2 | 4 | 8 | 16 | 32 | 64 |
-|-|-|--|--|--|--|--|--|
-| Kube-reserved (millicores) | 60 | 100 | 140 | 180 | 260 | 420 | 740 |
-
-#### Memory
-
-Reserved memory in AKS includes the sum of two values:
-
-> [!IMPORTANT]
-> AKS 1.29 previews in January 2024 and includes certain changes to memory reservations. These changes are detailed in the following section.
-
-**AKS 1.29 and later**
-
-1. **`kubelet` daemon** has the *memory.available<100Mi* eviction rule by default. This rule ensures that a node has at least 100Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
-2. **A rate of memory reservations** set according to the lesser value of: *20MB * Max Pods supported on the Node + 50MB* or *25% of the total system memory resources*.
-
- **Examples**:
- * If the VM provides 8GB of memory and the node supports up to 30 pods, AKS reserves *20MB * 30 Max Pods + 50MB = 650MB* for kube-reserved. `Allocatable space = 8GB - 0.65GB (kube-reserved) - 0.1GB (eviction threshold) = 7.25GB or 90.625% allocatable.`
- * If the VM provides 4GB of memory and the node supports up to 70 pods, AKS reserves *25% * 4GB = 1000MB* for kube-reserved, as this is less than *20MB * 70 Max Pods + 50MB = 1450MB*.
-
- For more information, see [Configure maximum pods per node in an AKS cluster][maximum-pods].
-
-**AKS versions prior to 1.29**
-
-1. **`kubelet` daemon** has the *memory.available<750Mi* eviction rule by default. This rule ensures that a node has at least 750Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
-2. **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
- * 25% of the first 4GB of memory
- * 20% of the next 4GB of memory (up to 8GB)
- * 10% of the next 8GB of memory (up to 16GB)
- * 6% of the next 112GB of memory (up to 128GB)
- * 2% of any memory more than 128GB
-
-> [!NOTE]
-> AKS reserves an extra 2GB for system processes in Windows nodes that isn't part of the calculated memory.
-
-Memory and CPU allocation rules are designed to:
-
-* Keep agent nodes healthy, including some hosting system pods critical to cluster health.
-* Cause the node to report less allocatable memory and CPU than it would report if it weren't part of a Kubernetes cluster.
-
-For example, if a node offers 7 GB, it will report 34% of memory as not allocatable, including the 750Mi hard eviction threshold.
-
-`0.75GB + (0.25*4GB) + (0.20*3GB) = 2.35GB reserved; 2.35GB / 7GB = 33.57% reserved`
-
-In addition to reservations for Kubernetes itself, the underlying node OS also reserves an amount of CPU and memory resources to maintain OS functions.
-
-For associated best practices, see [Best practices for basic scheduler features in AKS][operator-best-practices-scheduler].
-
-## Node pools
-
-> [!NOTE]
-> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
-
-Nodes of the same configuration are grouped together into *node pools*. Each Kubernetes cluster contains at least one node pool. You define the initial number of nodes and sizes when you create an AKS cluster, which creates a *default node pool*. This default node pool in AKS contains the underlying VMs that run your agent nodes.
-
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes in the default node pool.
-
-You scale or upgrade an AKS cluster against the default node pool. You can choose to scale or upgrade a specific node pool. For upgrade operations, running containers are scheduled on other nodes in the node pool until all the nodes are successfully upgraded.
-
-For more information, see [Create node pools](./create-node-pools.md) and [Manage node pools](./manage-node-pools.md).
-
-### Default OS disk sizing
-
-When you create a new cluster or add a new node pool to an existing cluster, the number of vCPUs by default determines the OS disk size. The number of vCPUs is based on the VM SKU. The following table lists the default OS disk size for each VM SKU:
-
-|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mbps) |
-|--|--|--|--|
-| 1 - 7 | P10/128G | 500 | 100 |
-| 8 - 15 | P15/256G | 1100 | 125 |
-| 16 - 63 | P20/512G | 2300 | 150 |
-| 64+ | P30/1024G | 5000 | 200 |
-
-> [!IMPORTANT]
-> Default OS disk sizing is only used on new clusters or node pools when Ephemeral OS disks aren't supported and a default OS disk size isn't specified. The default OS disk size might impact the performance or cost of your cluster. You can't change the OS disk size after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
-
-### Node selectors
-
-In an AKS cluster with multiple node pools, you might need to tell the Kubernetes Scheduler which node pool to use for a given resource. For example, ingress controllers shouldn't run on Windows Server nodes. You use node selectors to define various parameters, like node OS, to control where a pod should be scheduled.
-
-The following basic example schedules an NGINX instance on a Linux node using the node selector *"kubernetes.io/os": linux*:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: nginx
-spec:
- containers:
- - name: myfrontend
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.12-alpine
- nodeSelector:
- "kubernetes.io/os": linux
-```
-
-For more information, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].
-
-### Node resource group
-
-When you create an AKS cluster, you specify an Azure resource group to create the cluster resources in. In addition to this resource group, the AKS resource provider creates and manages a separate resource group called the *node resource group*. The *node resource group* contains the following infrastructure resources:
-
-* The virtual machine scale sets and VMs for every node in the node pools
-* The virtual network for the cluster
-* The storage for the cluster
-
-The node resource group is assigned a name by default with the following format: *MC_resourceGroupName_clusterName_location*. During cluster creation, you can specify the name assigned to your node resource group. When using an Azure Resource Manager template, you can define the name using the `nodeResourceGroup` property. When using Azure CLI, you use the `--node-resource-group` parameter with the `az aks create` command, as shown in the following example:
-
-```azurecli-interactive
-az aks create \
- --name myAKSCluster \
- --resource-group myResourceGroup \
- --node-resource-group myNodeResourceGroup \
- --generate-ssh-keys
-```
-
-When you delete your AKS cluster, the AKS resource provider automatically deletes the node resource group.
-
-The node resource group has the following limitations:
-
-* You can't specify an existing resource group for the node resource group.
-* You can't specify a different subscription for the node resource group.
-* You can't change the node resource group name after the cluster has been created.
-* You can't specify names for the managed resources within the node resource group.
-* You can't modify or delete Azure-created tags of managed resources within the node resource group.
-
-Modifying any **Azure-created tags** on resources under the node resource group in the AKS cluster is an unsupported action, which breaks the service-level objective (SLO). If you modify or delete Azure-created tags or other resource properties in the node resource group, you might get unexpected results, such as scaling and upgrading errors. AKS manages the infrastructure lifecycle in the node resource group, so making any changes moves your cluster into an [unsupported state][aks-support]. For more information, see [Does AKS offer a service-level agreement?][aks-service-level-agreement]
-
-AKS allows you to create and modify tags that are propagated to resources in the node resource group, and you can add those tags when [creating or updating][aks-tags] the cluster. You might want to create or modify custom tags to assign a business unit or cost center, for example. You can also create Azure Policies with a scope on the managed resource group.
-
-To reduce the chance of changes in the node resource group affecting your clusters, you can enable *node resource group lockdown* to apply a deny assignment to your AKS resources. For more information, see [Fully managed resource group (preview)][fully-managed-resource-group].
-
-> [!WARNING]
-> If you don't have node resource group lockdown enabled, you can directly modify any resource in the node resource group. Directly modifying resources in the node resource group can cause your cluster to become unstable or unresponsive.
-
-## Pods
-
-Kubernetes uses *pods* to run instances of your application. A single pod represents a single instance of your application.
-
-Pods typically have a 1:1 mapping with a container. In advanced scenarios, a pod might contain multiple containers. Multi-container pods are scheduled together on the same node and allow containers to share related resources.
-
-When you create a pod, you can define *resource requests* for a certain amount of CPU or memory. The Kubernetes Scheduler tries to meet the request by scheduling the pods to run on a node with available resources. You can also specify maximum resource limits to prevent a pod from consuming too much compute resource from the underlying node. Our recommended best practice is to include resource limits for all pods to help the Kubernetes Scheduler identify necessary, permitted resources.
-
-For more information, see [Kubernetes pods][kubernetes-pods] and [Kubernetes pod lifecycle][kubernetes-pod-lifecycle].
-
-A pod is a logical resource, but application workloads run on the containers. Pods are typically ephemeral, disposable resources. Individually scheduled pods miss some of the high availability and redundancy Kubernetes features. Instead, Kubernetes *Controllers*, such as the Deployment Controller, deploy and manage pods.
-
-## Deployments and YAML manifests
-
-A *deployment* represents identical pods managed by the Kubernetes Deployment Controller. A deployment defines the number of pod *replicas* to create. The Kubernetes Scheduler ensures that extra pods are scheduled on healthy nodes if pods or nodes encounter problems. You can update deployments to change the configuration of pods, the container image, or the attached storage.
-
-The Deployment Controller manages the deployment lifecycle and performs the following actions:
-
-* Drains and terminates a given number of replicas.
-* Creates replicas from the new deployment definition.
-* Continues the process until all replicas in the deployment are updated.
-
-Most stateless applications in AKS should use the deployment model rather than scheduling individual pods. Kubernetes can monitor deployment health and status to ensure that the required number of replicas run within the cluster. When scheduled individually, pods aren't restarted if they encounter a problem, and they aren't rescheduled on healthy nodes if their current node encounters a problem.
-
-You don't want to disrupt management decisions with an update process if your application requires a minimum number of available instances. *Pod Disruption Budgets* define how many replicas in a deployment can be taken down during an update or node upgrade. For example, if you have *five* replicas in your deployment, you can define a pod disruption budget of *four* to only allow one replica to be deleted or rescheduled at a time. As with pod resource limits, our recommended best practice is to define pod disruption budgets on applications that require a minimum number of replicas to always be present.
-
-Deployments are typically created and managed with `kubectl create` or `kubectl apply`. You can create a deployment by defining a manifest file in the YAML format. The following example shows a basic deployment manifest file for an NGINX web server:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: nginx
-spec:
- replicas: 3
- selector:
- matchLabels:
- app: nginx
- template:
- metadata:
- labels:
- app: nginx
- spec:
- containers:
- - name: nginx
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
- ports:
- - containerPort: 80
- resources:
- requests:
- cpu: 250m
- memory: 64Mi
- limits:
- cpu: 500m
- memory: 256Mi
-```
-
-A breakdown of the deployment specifications in the YAML manifest file is as follows:
-
-| Specification | Description |
-| -- | - |
-| `.apiVersion` | Specifies the API group and API resource you want to use when creating the resource. |
-| `.kind` | Specifies the type of resource you want to create. |
-| `.metadata.name` | Specifies the name of the deployment. This example YAML file runs the *nginx* image from Microsoft Container Registry (`mcr.microsoft.com`). |
-| `.spec.replicas` | Specifies how many pods to create. This example YAML file creates three duplicate pods. |
-| `.spec.selector` | Specifies which pods will be affected by this deployment. |
-| `.spec.selector.matchLabels` | Contains a map of *{key, value}* pairs that allow the deployment to find and manage the created pods. |
-| `.spec.selector.matchLabels.app` | Has to match `.spec.template.metadata.labels`. |
-| `.spec.template.metadata.labels` | Specifies the *{key, value}* pairs attached to the object. |
-| `.spec.template.metadata.labels.app` | Has to match `.spec.selector.matchLabels`. |
-| `.spec.template.spec.containers` | Specifies the list of containers belonging to the pod. |
-| `.spec.template.spec.containers.name` | Specifies the name of the container specified as a DNS label. |
-| `.spec.template.spec.containers.image` | Specifies the container image name. |
-| `.spec.template.spec.containers.ports` | Specifies the list of ports to expose from the container. |
-| `.spec.template.spec.containers.ports.containerPort` | Specifies the number of the port to expose on the pod's IP address. |
-| `.spec.template.spec.containers.resources` | Specifies the compute resources required by the container. |
-| `.spec.template.spec.containers.resources.requests` | Specifies the minimum amount of compute resources required. |
-| `.spec.template.spec.containers.resources.requests.cpu` | Specifies the minimum amount of CPU required. |
-| `.spec.template.spec.containers.resources.requests.memory` | Specifies the minimum amount of memory required. |
-| `.spec.template.spec.containers.resources.limits` | Specifies the maximum amount of compute resources allowed. The kubelet enforces this limit. |
-| `.spec.template.spec.containers.resources.limits.cpu` | Specifies the maximum amount of CPU allowed. The kubelet enforces this limit. |
-| `.spec.template.spec.containers.resources.limits.memory` | Specifies the maximum amount of memory allowed. The kubelet enforces this limit. |
-
-More complex applications can be created by including services, such as load balancers, within the YAML manifest.
-
-For more information, see [Kubernetes deployments][kubernetes-deployments].
-
-### Package management with Helm
-
-[Helm][helm] is commonly used to manage applications in Kubernetes. You can deploy resources by building and using existing public *Helm charts* that contain a packaged version of application code and Kubernetes YAML manifests. You can store Helm charts either locally or in a remote repository, such as an [Azure Container Registry Helm chart repo][acr-helm].
-
-To use Helm, install the Helm client on your computer, or use the Helm client in the [Azure Cloud Shell][azure-cloud-shell]. Search for or create Helm charts, and then install them to your Kubernetes cluster. For more information, see [Install existing applications with Helm in AKS][aks-helm].
-
-## StatefulSets and DaemonSets
-
-The Deployment Controller uses the Kubernetes Scheduler and runs replicas on any available node with available resources. While this approach might be sufficient for stateless applications, the Deployment Controller isn't ideal for applications that require the following specifications:
-
-* A persistent naming convention or storage.
-* A replica to exist on each select node within a cluster.
-
-Two Kubernetes resources, however, let you manage these types of applications: *StatefulSets* and *DaemonSets*.
-
-*StatefulSets* maintain the state of applications beyond an individual pod lifecycle. *DaemonSets* ensure a running instance on each node early in the Kubernetes bootstrap process.
-
-### StatefulSets
-
-Modern application development often aims for stateless applications. For stateful applications, like those that include database components, you can use *StatefulSets*. Like deployments, a StatefulSet creates and manages at least one identical pod. Replicas in a StatefulSet follow a graceful, sequential approach to deployment, scale, upgrade, and termination operations. The naming convention, network names, and storage persist as replicas are rescheduled with a StatefulSet.
-
-You can define the application in YAML format using `kind: StatefulSet`. From there, the StatefulSet Controller handles the deployment and management of the required replicas. Data is written to persistent storage, provided by Azure Managed Disks or Azure Files. The underlying persistent storage remains even when the StatefulSet is deleted, unless the `spec.persistentVolumeClaimRetentionPolicy` is set to `Delete`. For more information, see [Kubernetes StatefulSets][kubernetes-statefulsets].
-
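As a hedged illustration of such a definition, the following minimal sketch (names, image, and sizes are hypothetical; `managed-csi` is the AKS built-in disk storage class) requests a persistent volume per replica through `volumeClaimTemplates`:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web               # headless service providing stable network identity
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: mcr.microsoft.com/oss/nginx/nginx:1.15.2-alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:          # one PersistentVolumeClaim per replica, kept across rescheduling
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: managed-csi
      resources:
        requests:
          storage: 1Gi
```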
-> [!IMPORTANT]
-> Replicas in a StatefulSet are scheduled and run across any available node in an AKS cluster. To ensure at least one pod in your set runs on a node, you should use a DaemonSet instead.
-
-### DaemonSets
-
-For specific log collection or monitoring, you might need to run a pod on all nodes or a select set of nodes. You can use *DaemonSets* to deploy to one or more identical pods. The DaemonSet Controller ensures that each node specified runs an instance of the pod.
-
-The DaemonSet Controller can schedule pods on nodes early in the cluster boot process before the default Kubernetes scheduler starts. This ability ensures that the pods in a DaemonSet are started before traditional pods in a Deployment or StatefulSet are scheduled.
-
-Like StatefulSets, you can define a DaemonSet as part of a YAML definition using `kind: DaemonSet`.
-
-For more information, see [Kubernetes DaemonSets][kubernetes-daemonset].
-
-> [!NOTE]
-> If you're using the [virtual Nodes add-on](virtual-nodes-cli.md#enable-the-virtual-nodes-addon), DaemonSets don't create pods on the virtual node.
-
-## Namespaces
-
-Kubernetes resources, such as pods and deployments, are logically grouped into *namespaces* to divide an AKS cluster and create, view, or manage access to resources. For example, you can create namespaces to separate business groups. Users can only interact with resources within their assigned namespaces.
-
-![Kubernetes namespaces to logically divide resources and applications](media/concepts-clusters-workloads/namespaces.png)
-
-The following namespaces are available when you create an AKS cluster:
-
-| Namespace | Description |
-| -- | - |
-| *default* | Where pods and deployments are created by default when none is provided. In smaller environments, you can deploy applications directly into the default namespace without creating additional logical separations. When you interact with the Kubernetes API, such as with `kubectl get pods`, the default namespace is used when none is specified. |
-| *kube-system* | Where core resources exist, such as network features like DNS and proxy, or the Kubernetes dashboard. You typically don't deploy your own applications into this namespace. |
-| *kube-public* | Typically not used. You can use it for resources that should be visible across the whole cluster and viewable by any user. |
-
-For more information, see [Kubernetes namespaces][kubernetes-namespaces].
-
-## Next steps
-
-For more information on core Kubernetes and AKS concepts, see the following articles:
-
-* [AKS access and identity][aks-concepts-identity]
-* [AKS security][aks-concepts-security]
-* [AKS virtual networks][aks-concepts-network]
-* [AKS storage][aks-concepts-storage]
-* [AKS scale][aks-concepts-scale]
-
-<!-- EXTERNAL LINKS -->
-[cluster-api-provider-azure]: https://github.com/kubernetes-sigs/cluster-api-provider-azure
-[kubernetes-pods]: https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/
-[kubernetes-pod-lifecycle]: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
-[kubernetes-deployments]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
-[kubernetes-statefulsets]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
-[kubernetes-daemonset]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
-[kubernetes-namespaces]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
-[helm]: https://helm.sh/
-[azure-cloud-shell]: https://shell.azure.com
-[aks-release-notes]: https://github.com/Azure/AKS/releases
-[general-usage]: https://kubernetes.io/docs/tasks/debug/debug-cluster/crictl/#general-usage
-[client-config-options]: https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md#client-configuration-options
-
-<!-- INTERNAL LINKS -->
-[aks-concepts-identity]: concepts-identity.md
-[aks-concepts-security]: concepts-security.md
-[aks-concepts-scale]: concepts-scale.md
-[aks-concepts-storage]: concepts-storage.md
-[aks-concepts-network]: concepts-network.md
-[acr-helm]: ../container-registry/container-registry-helm-repos.md
-[aks-helm]: kubernetes-helm.md
-[operator-best-practices-scheduler]: operator-best-practices-scheduler.md
-[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md
-[reservation-discounts]:../cost-management-billing/reservations/save-compute-costs-reservations.md
-[aks-service-level-agreement]: faq.md#does-aks-offer-a-service-level-agreement
-[aks-tags]: use-tags.md
-[aks-support]: support-policies.md#user-customization-of-agent-nodes
-[intro-azure-linux]: ../azure-linux/intro-azure-linux.md
-[fully-managed-resource-group]: ./node-resource-group-lockdown.md
-[maximum-pods]: concepts-network-ip-address-planning.md#maximum-pods-per-node
-
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
This article introduces the core concepts that provide storage to your applicati
![Diagram of storage options for applications in an Azure Kubernetes Services (AKS) cluster.](media/concepts-storage/aks-storage-concept.png)
+## Default OS disk sizing
+
+When you create a new cluster or add a new node pool to an existing cluster, the number of vCPUs by default determines the OS disk size. The number of vCPUs is based on the VM SKU. The following table lists the default OS disk size for each VM SKU:
+
+|VM SKU Cores (vCPUs)| Default OS Disk Tier | Provisioned IOPS | Provisioned Throughput (Mbps) |
+|--|--|--|--|
+| 1 - 7 | P10/128G | 500 | 100 |
+| 8 - 15 | P15/256G | 1100 | 125 |
+| 16 - 63 | P20/512G | 2300 | 150 |
+| 64+ | P30/1024G | 5000 | 200 |
+
+> [!IMPORTANT]
+> Default OS disk sizing is only used on new clusters or node pools when Ephemeral OS disks aren't supported and a default OS disk size isn't specified. The default OS disk size might impact the performance or cost of your cluster. You can't change the OS disk size after cluster or node pool creation. This default disk sizing affects clusters or node pools created in July 2022 or later.
+
## Ephemeral OS disk

By default, Azure automatically replicates the operating system disk for a virtual machine to Azure Storage to avoid data loss when the VM is relocated to another host. However, since containers aren't designed to have local state persisted, this behavior offers limited value while providing some drawbacks. These drawbacks include, but aren't limited to, slower node provisioning and higher read/write latency.
aks Core Aks Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/core-aks-concepts.md
+
+ Title: Azure Kubernetes Services (AKS) core concepts
+description: Learn about the core concepts of Azure Kubernetes Service (AKS).
+ Last updated : 07/10/2024++++
+# Core concepts for Azure Kubernetes Service (AKS)
+
+This article describes core concepts of Azure Kubernetes Service (AKS), a managed Kubernetes service that you can use to deploy and operate containerized applications at scale on Azure.
+
+## What is Kubernetes?
+
+Kubernetes is an open-source container orchestration platform for automating the deployment, scaling, and management of containerized applications. For more information, see the official [Kubernetes documentation][kubernetes-docs].
+
+## What is AKS?
+
+AKS is a managed Kubernetes service that simplifies deploying, managing, and scaling containerized applications using Kubernetes. For more information, see [What is Azure Kubernetes Service (AKS)?][aks-overview]
+
+## Cluster components
+
+An AKS cluster is divided into two main components:
+
+* **Control plane**: The control plane provides the core Kubernetes services and orchestration of application workloads.
+* **Nodes**: Nodes are the underlying virtual machines (VMs) that run your applications.
+
+![Screenshot of Kubernetes control plane and node components](media/concepts-clusters-workloads/control-plane-and-nodes.png)
+
+### Control plane
+
+The Azure-managed control plane is composed of several components that help manage the cluster:
+
+| Component | Description |
+| | -- |
+| *kube-apiserver* | The API server ([kube-apiserver][kube-apiserver]) exposes the Kubernetes API to enable requests to the cluster from inside and outside of the cluster. |
+| *etcd* | [etcd][etcd] is a highly available key-value store that helps maintain the state of your Kubernetes cluster and configuration. |
+| *kube-scheduler* | The scheduler ([kube-scheduler][kube-scheduler]) helps make scheduling decisions, watching for new pods with no assigned node and selecting a node for them to run on. |
+| *kube-controller-manager* | The controller manager ([kube-controller-manager][kube-controller-manager]) runs controller processes, such as noticing and responding when nodes go down. |
+| *cloud-controller-manager* | The cloud controller manager ([cloud-controller-manager][cloud-controller-manager]) embeds cloud-specific control logic to run controllers specific to the cloud provider. |
+
+### Nodes
+
+Each AKS cluster has at least one node, which is an Azure virtual machine (VM) that runs Kubernetes node components. The following components run on each node:
+
+| Component | Description |
+| | -- |
+| *kubelet* | The [kubelet][kubelet] ensures that containers are running in a pod. |
+| *kube-proxy* | The [kube-proxy][kube-proxy] is a network proxy that maintains network rules on nodes. |
+| *container runtime* | The [container runtime][container-runtime] manages the execution and lifecycle of containers. |
+
+![Screenshot of Azure virtual machine and supporting resources for a Kubernetes node](media/concepts-clusters-workloads/aks-node-resource-interactions.png)
+
+## Node configuration
+
+### VM size and image
+
+The **Azure VM size** for your nodes defines CPUs, memory, size, and the storage type available, such as high-performance SSD or regular HDD. The VM size you choose depends on the workload requirements and the number of pods you plan to run on each node. For more information, see [Supported VM sizes in Azure Kubernetes Service (AKS)][aks-vm-sizes].
+
+In AKS, the **VM image** for your cluster's nodes is based on Ubuntu Linux, [Azure Linux](use-azure-linux.md), or Windows Server 2022. When you create an AKS cluster or scale out the number of nodes, the Azure platform automatically creates and configures the requested number of VMs. Agent nodes are billed as standard VMs, so any VM size discounts, including [Azure reservations][reservation-discounts], are automatically applied.
+
+### OS disks
+
+Default OS disk sizing is only used on new clusters or node pools when Ephemeral OS disks aren't supported and a default OS disk size isn't specified. For more information, see [Default OS disk sizing][default-os-disk] and [Ephemeral OS disks][ephemeral-os-disks].
+
+### Resource reservations
+
+AKS uses node resources to help the nodes function as part of the cluster. This usage can cause a discrepancy between the node's total resources and the allocatable resources in AKS. To maintain node performance and functionality, AKS reserves two types of resources, **CPU** and **memory**, on each node. For more information, see [Resource reservations in AKS][resource-reservations].
+
+### OS
+
+AKS supports Ubuntu 22.04 and Azure Linux 2.0 as the node OS for Linux node pools. For Windows node pools, AKS supports Windows Server 2022 as the default OS. Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life and isn't supported in future releases. If you need to upgrade your Windows OS version, see [Upgrade from Windows Server 2019 to Windows Server 2022][upgrade-2019-2022]. For more information on using Windows Server on AKS, see [Windows container considerations in Azure Kubernetes Service (AKS)][windows-considerations].
+
+### Container runtime
+
+A container runtime is software that executes containers and manages container images on a node. The runtime helps abstract away sys-calls or OS-specific functionality to run containers on Linux or Windows. For Linux node pools, [`containerd`][containerd] is used on Kubernetes version 1.19 and higher. For Windows Server 2019 and 2022 node pools, [`containerd`][containerd] is generally available and is the only runtime option on Kubernetes version 1.23 and higher.
+
+## Pods
+
+A *pod* is a group of one or more containers that share the same network and storage resources and a specification for how to run the containers. Pods typically have a 1:1 mapping with a container, but you can run multiple containers in a pod.
+
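As a minimal sketch (the pod name is illustrative, reusing the NGINX image from Microsoft Container Registry), a single-container pod can be defined in a few lines of YAML:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx                    # hypothetical pod name
spec:
  containers:
  - name: nginx
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.12-alpine
    ports:
    - containerPort: 80          # port the container listens on
```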
+## Node pools
+
+In AKS, nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying virtual machine scale sets and virtual machines (VMs) that run your applications. When you create an AKS cluster, you define the initial number of nodes and their size (SKU), which creates a [*system node pool*][use-system-pool]. System node pools serve the primary purpose of hosting critical system pods, such as CoreDNS and `konnectivity`. To support applications that have different compute or storage demands, you can create *user node pools*. User node pools serve the primary purpose of hosting your application pods.
+
+For more information, see [Create node pools in AKS][create-node-pools] and [Manage node pools in AKS][manage-node-pools].
+
+## Node resource group
+
+When you create an AKS cluster in an Azure resource group, the AKS resource provider automatically creates a second resource group called the *node resource group*. This resource group contains all the infrastructure resources associated with the cluster, including virtual machines (VMs), virtual machine scale sets, and storage.
+
+For more information, see the following resources:
+
+* [Why are two resource groups created with AKS?][node-resource-group]
+* [Can I provide my own name for the AKS node resource group?][custom-nrg]
+* [Can I modify tags and other properties of the resources in the AKS node resource group?][modify-nrg-resources]
+
+## Namespaces
+
+Kubernetes resources, such as pods and deployments, are logically grouped into *namespaces* to divide an AKS cluster and create, view, or manage access to resources.
+
+The following namespaces are created by default in an AKS cluster:
+
+| Namespace | Description |
+| | -- |
+| *default* | The [default][kubernetes-namespaces] namespace allows you to start using cluster resources without creating a new namespace. |
+| *kube-node-lease* | The [kube-node-lease][kubernetes-namespaces] namespace enables nodes to communicate their availability to the control plane. |
+| *kube-public* | The [kube-public][kubernetes-namespaces] namespace isn't typically used, but can be used for resources to be visible across the whole cluster by any user. |
+| *kube-system* | The [kube-system][kubernetes-namespaces] namespace is used by Kubernetes to manage cluster resources, such as `coredns`, `konnectivity-agent`, and `metrics-server`. |
+
+![Screenshot of Kubernetes namespaces to logically divide resources and applications](media/concepts-clusters-workloads/namespaces.png)
+
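As a minimal sketch, creating an additional namespace (the name `team-a` is hypothetical) takes only a short manifest:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # hypothetical namespace for one business group
```

After applying it with `kubectl apply -f`, you can scope commands to it, for example `kubectl get pods --namespace team-a`.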
+## Cluster modes
+
+In AKS, you can create a cluster with the **Automatic (preview)** or **Standard** mode. AKS Automatic provides a more fully managed experience, managing cluster configuration, including nodes, scaling, security, and other preconfigured settings. AKS Standard provides more control over the cluster configuration, including the ability to manage node pools, scaling, and other settings.
+
+For more information, see [AKS Automatic and Standard feature comparison][automatic-standard].
+
+## Pricing tiers
+
+AKS offers three pricing tiers for cluster management: **Free**, **Standard**, and **Premium**. The pricing tier you choose determines the features available for managing your cluster.
+
+For more information, see [Pricing tiers for AKS cluster management][pricing-tiers].
+
+## Supported Kubernetes versions
+
+For more information, see [Supported Kubernetes versions in AKS][supported-kubernetes-versions].
+
+## Next steps
+
+For information on more core concepts for AKS, see the following resources:
+
+* [AKS access and identity][access-identity]
+* [AKS security][security]
+* [AKS networking][networking]
+* [AKS storage][storage]
+* [AKS scaling][scaling]
+* [AKS monitoring][monitoring]
+* [AKS backup and recovery][backup-recovery]
+
+<!-- LINKS -->
+[kube-apiserver]: https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver
+[etcd]: https://kubernetes.io/docs/concepts/overview/components/#etcd
+[kube-scheduler]: https://kubernetes.io/docs/concepts/overview/components/#kube-scheduler
+[kube-controller-manager]: https://kubernetes.io/docs/concepts/overview/components/#kube-controller-manager
+[cloud-controller-manager]: https://kubernetes.io/docs/concepts/overview/components/#cloud-controller-manager
+[kubelet]: https://kubernetes.io/docs/concepts/overview/components/#kubelet
+[kube-proxy]: https://kubernetes.io/docs/concepts/overview/components/#kube-proxy
+[container-runtime]: https://kubernetes.io/docs/concepts/overview/components/#container-runtime
+[create-node-pools]: ./create-node-pools.md
+[manage-node-pools]: ./manage-node-pools.md
+[node-resource-group]: ./faq.md#why-are-two-resource-groups-created-with-aks
+[custom-nrg]: ./faq.md#can-i-provide-my-own-name-for-the-aks-node-resource-group
+[modify-nrg-resources]: ./faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group
+[kubernetes-namespaces]: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#initial-namespaces
+[use-system-pool]: ./use-system-pools.md
+[automatic-standard]: ./intro-aks-automatic.md#aks-automatic-and-standard-feature-comparison
+[pricing-tiers]: ./free-standard-pricing-tiers.md
+[access-identity]: ./concepts-identity.md
+[security]: ./concepts-security.md
+[networking]: ./concepts-network.md
+[storage]: ./concepts-storage.md
+[scaling]: ./concepts-scale.md
+[monitoring]: ./monitor-aks.md
+[backup-recovery]: ../backup/azure-kubernetes-service-backup-overview.md
+[kubernetes-docs]: https://kubernetes.io/docs/home/
+[resource-reservations]: ./node-resource-reservations.md
+[reservation-discounts]: ../cost-management-billing/reservations/save-compute-costs-reservations.md
+[supported-kubernetes-versions]: ./supported-kubernetes-versions.md
+[default-os-disk]: ./concepts-storage.md#default-os-disk-sizing
+[ephemeral-os-disks]: ./concepts-storage.md#ephemeral-os-disk
+[aks-overview]: ./what-is-aks.md
+[containerd]: https://containerd.io/
+[aks-vm-sizes]: ./quotas-skus-regions.md#supported-vm-sizes
+[windows-considerations]: ./windows-vs-linux-containers.md
+[upgrade-2019-2022]: ./upgrade-windows-os.md
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
Previously updated : 04/01/2024 Last updated : 07/09/2024 # Configure the Dapr extension for your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes project
-Once you've [created the Dapr extension](./dapr.md), you can configure the [Dapr](https://dapr.io/) extension to work best for you and your project using various configuration options, like:
+After [creating the Dapr extension](./dapr.md), you can configure the [Dapr](https://dapr.io/) extension to work best for you and your project using various configuration options, like:
- Limiting which of your nodes use the Dapr extension
-- Setting automatic CRD updates
+- Setting automatic custom resource definition (CRD) updates
- Configuring the Dapr release namespace

The extension enables you to set Dapr configuration options by using the `--configuration-settings` parameter in the Azure CLI or `configurationSettings` property in a Bicep template.
properties: {
## Install Dapr in multiple availability zones while in HA mode
-By default, the placement service uses a storage class of type `standard_LRS`. It is recommended to create a **zone redundant storage class** while installing Dapr in HA mode across multiple availability zones. For example, to create a `zrs` type storage class, add the `storageaccounttype` parameter:
+By default, the placement service uses a storage class of type `standard_LRS`. It's recommended to create a **zone redundant storage class** while installing Dapr in HA mode across multiple availability zones. For example, to create a `zrs` type storage class, add the `storageaccounttype` parameter:
```yaml
kind: StorageClass
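# The remainder of this manifest is truncated in this digest; the lines below
# are a reconstructed sketch. The class name is illustrative, the provisioner
# assumes the Azure disk CSI driver, and Premium_ZRS is one zone-redundant SKU.
apiVersion: storage.k8s.io/v1
metadata:
  name: zone-redundant-storage
provisioner: disk.csi.azure.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  storageaccounttype: Premium_ZRS
```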
You can configure the release namespace.
# [Azure CLI](#tab/cli)
-The Dapr extension gets installed in the `dapr-system` namespace by default. To override it, use `--release-namespace`. Include the cluster `--scope` to redefine the namespace.
+The Dapr extension gets installed in the `dapr-system` namespace by default. To override it, use `--release-namespace`. To redefine the namespace, include the cluster `--scope`.
```azurecli
az k8s-extension create \
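  --cluster-type managedClusters \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name dapr \
  --extension-type Microsoft.Dapr \
  --scope cluster \
  --release-namespace dapr-custom
# The arguments above complete the truncated command as a sketch: cluster,
# resource group, and namespace names are placeholders, and the cluster-wide
# --scope is included because it's required when redefining the namespace.
```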
properties: {
-[Learn how to configure the Dapr release namespace if you already have Dapr installed](./dapr-migration.md).
+[Learn how to configure the Dapr release namespace when migrating from Dapr open source to the Dapr extension](./dapr-migration.md).
## Show current configuration settings
az k8s-extension show --cluster-type managedClusters \
> HA is enabled by default. Disabling it requires deletion and recreation of the extension.
-To update your Dapr configuration settings, recreate the extension with the desired state. For example, assume we've previously created and installed the extension using the following configuration:
+To update your Dapr configuration settings, recreate the extension with the desired state. For example, let's say you previously created and installed the extension using the following configuration:
```azurecli-interactive az k8s-extension create --cluster-type managedClusters \
properties: {
## Meet network requirements
-The Dapr extension for AKS and Arc for Kubernetes requires the following outbound URLs on `https://:443` to function:
+The Dapr extension requires the following outbound URLs on `https://:443` to function on AKS and Arc for Kubernetes:
1. `https://mcr.microsoft.com/daprio` URL for pulling Dapr artifacts.
-2. `https://linuxgeneva-microsoft.azurecr.io/` URL for pulling some Dapr dependencies.
-3. The [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/network-requirements.md).
+1. The [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/network-requirements.md).
## Next Steps
-Once you have successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application].
+After you successfully provision Dapr in your AKS cluster, try deploying a [sample application][sample-application].
<!-- LINKS INTERNAL -->
[deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
aks Istio About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md
Service-to-service communication is what makes a distributed application possibl
Istio is an open-source service mesh that layers transparently onto existing distributed applications. Istio's powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio enables load balancing, service-to-service authentication, and monitoring, with few or no service code changes. Its powerful control plane brings vital features, including:
-* Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication and authorization.
+* Secure service-to-service communication in a cluster with TLS (Transport Layer Security) encryption, strong identity-based authentication and authorization.
* Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
* Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
-* A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
+* A pluggable policy layer and configuration API supporting access controls, rate limits, and quotas.
* Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.

## How is the add-on different from open-source Istio?
This service mesh add-on uses and builds on top of open-source Istio. The add-on
## Limitations
-Istio-based service mesh add-on for AKS has the following limitations:
+The Istio-based service mesh add-on for AKS currently has the following limitations:
* The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about].
-* The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation.
+* The add-on doesn't work on AKS clusters with self-managed installations of Istio.
* The add-on doesn't support adding pods associated with virtual nodes under the mesh.
+* The add-on doesn't yet support egress gateways for outbound traffic control.
+* The add-on doesn't yet support the sidecar-less Ambient mode. Microsoft is currently contributing to the Ambient workstream under Istio open source. Product integration for Ambient mode is on the roadmap and is being continuously evaluated as the Ambient workstream evolves.
+* The add-on doesn't yet support multi-cluster deployments.
* Istio doesn't support Windows Server containers.
-* Customization of mesh based on the following custom resources is blocked for now - `EnvoyFilter, ProxyConfig, WorkloadEntry, WorkloadGroup, Telemetry, IstioOperator, WasmPlugin`
-* Gateway API for Istio ingress gateway or managing mesh traffic (GAMMA) are currently not yet supported with Istio addon.
+* Customization of mesh through the following custom resources is blocked for now - `ProxyConfig, WorkloadEntry, WorkloadGroup, Telemetry, IstioOperator, WasmPlugin, EnvoyFilter`.
+* For `EnvoyFilter`, the add-on only supports customization of Lua filters (`type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua`). This `EnvoyFilter` type is allowed, but any issue arising from the Lua script itself isn't supported (to learn more about the support policy and the distinction between "allowed" and "supported" configurations, see [the following section][istio-meshconfig-support]). Other `EnvoyFilter` types are currently blocked.
+* Gateway API for the Istio ingress gateway or for managing mesh traffic (GAMMA) isn't yet supported with the Istio add-on. Customizations such as ingress static IP address configuration are planned as part of the Gateway API implementation for the add-on in the future.
## Next steps

* [Deploy Istio-based service mesh add-on][istio-deploy-addon]
+* [Troubleshoot Istio-based service mesh add-on][istio-troubleshooting]
[istio-overview]: https://istio.io/latest/
[managed-prometheus-overview]: ../azure-monitor/essentials/prometheus-metrics-overview.md
[managed-grafan
[azure-cni-cilium]: azure-cni-powered-by-cilium.md
[open-service-mesh-about]: open-service-mesh-about.md
+[istio-meshconfig]: ./istio-meshconfig.md
+[istio-ingress]: ./istio-deploy-ingress.md
+[istio-troubleshooting]: /troubleshoot/azure/azure-kubernetes/extensions/istio-add-on-general-troubleshooting
+[istio-meshconfig-support]: ./istio-meshconfig.md#allowed-supported-and-blocked-values
[istio-deploy-addon]: istio-deploy-addon.md
aks Istio Deploy Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-addon.md
az group delete --name ${RESOURCE_GROUP} --yes --no-wait
* [Deploy external or internal ingresses for Istio service mesh add-on][istio-deploy-ingress]
* [Scale istiod and ingress gateway HPA][istio-scaling-guide]

<!-- External Links -->
[install-aks-cluster-istio-bicep]: https://github.com/Azure-Samples/aks-istio-addon-bicep
[uninstall-istio-oss]: https://istio.io/latest/docs/setup/install/istioctl/#uninstall-istio
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
This article shows you how to deploy external or internal ingresses for Istio se
## Prerequisites
-This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster, deploy a sample application and set environment variables.
+This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster, deploy a sample application, and set environment variables.
## Enable external ingress gateway
aks-istio-ingressgateway-external LoadBalancer 10.0.10.249 <EXTERNAL_IP>
```

> [!NOTE]
-> Customizations to IP address on internal and external gateways aren't supported yet. IP address customizations on the ingress are reverted back by the Istio add-on.
-It's planned to allow these customizations in Gateway API Istio implementation as part of the Istio add-on in future.
+> Customizations to the IP address on internal and external gateways aren't supported yet. IP address customizations on the ingress specifications are reverted by the Istio add-on. It's planned to allow these customizations in the Gateway API implementation for the Istio add-on in the future.
Applications aren't accessible from outside the cluster by default after enabling the ingress gateway. To make an application accessible, map the sample deployment's ingress to the Istio ingress gateway using the following manifest:
Use `az aks mesh enable-ingress-gateway` to enable an internal Istio ingress on
az aks mesh enable-ingress-gateway --resource-group $RESOURCE_GROUP --name $CLUSTER --ingress-gateway-type internal
```

Use `kubectl get svc` to check the service mapped to the ingress gateway:

```bash
NAME TYPE CLUSTER-IP EXTERNAL-IP
aks-istio-ingressgateway-internal LoadBalancer 10.0.182.240 <IP> 15021:30764/TCP,80:32186/TCP,443:31713/TCP 87s
```
-Applications aren't mapped to the Istio ingress gateway after enabling the ingress gateway. Use the following manifest to map the sample deployment's ingress to the Istio ingress gateway:
+After enabling the ingress gateway, applications need to be exposed through the gateway and routing rules need to be configured accordingly. Use the following manifest to map the sample deployment's ingress to the Istio ingress gateway:
```bash
kubectl apply -f - <<EOF
Confirm that the sample application's product page is accessible. The expected o
## Delete resources
+If you want to clean up the Istio external or internal ingress gateways, but leave the mesh enabled on the cluster, run the following command:
+
+```azurecli-interactive
+az aks mesh disable-ingress-gateway --ingress-gateway-type <external/internal> --resource-group ${RESOURCE_GROUP}
+```
+
If you want to clean up the Istio service mesh and the ingresses (leaving behind the cluster), run the following command:

```azurecli-interactive
az group delete --name ${RESOURCE_GROUP} --yes --no-wait
## Next steps
+> [!NOTE]
+> If you encounter issues deploying the Istio ingress gateway or configuring ingress traffic routing, see the [article on troubleshooting Istio add-on ingress gateways][istio-ingress-tsg].
+
* [Secure ingress gateway for Istio service mesh add-on][istio-secure-gateway]
* [Scale ingress gateway HPA][istio-scaling-guide]

[istio-deploy-addon]: istio-deploy-addon.md
[istio-secure-gateway]: istio-secure-gateway.md
[istio-scaling-guide]: istio-scale.md#scaling
+[istio-ingress-tsg]: /troubleshoot/azure/azure-kubernetes/extensions/istio-add-on-ingress-gateway
aks Long Term Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/long-term-support.md
To carry out an in-place upgrade to the latest LTS version, you need to specify
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.32.2
```

> [!NOTE]
-> The next Long Term Support Version after 1.27 is to be determined. However Customers will get a minimum 6 months of overlap between 1.27 LTS and the next LTS version to plan upgrades.
+
+> 1.30 is the next LTS version after 1.27. Customers will be able to upgrade from 1.27 LTS to 1.30 LTS starting in August 2024. 1.27 LTS goes end of life by July 2025.
> Kubernetes 1.32.2 is used as an example version in this article. Check the [AKS release tracker](release-tracker.md) for available Kubernetes releases.

## Frequently asked questions
aks Network Observability Byo Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md
- Title: "Setup of Network Observability for Azure Kubernetes Service (AKS) - BYO Prometheus and Grafana"
-description: Get started with AKS Network Observability for your AKS cluster using BYO Prometheus and Grafana.
----- Previously updated : 06/20/2023---
-# Setup of Network Observability for Azure Kubernetes Service (AKS) - BYO Prometheus and Grafana
-
-AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data plane are supported. In this article, learn how to enable the Network Observability add-on and use BYO Prometheus and Grafana to visualize the scraped metrics.
-
-> [!NOTE]
->Starting with Kubernetes version 1.29, the network observability feature no longer supports Bring Your Own (BYO) Prometheus and Grafana. However, you can still enable it using the Azure Managed Prometheus and Grafana offering
->
-
-> [!IMPORTANT]
-> AKS Network Observability is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).--- Installations of BYO Prometheus and Grafana.---- Minimum version of **Azure CLI** required for the steps in this article is **2.44.0**. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).-
-### Install the `aks-preview` Azure CLI extension
--
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `NetworkObservabilityPreview` feature flag
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
-```
-
-Use [az feature show](/cli/azure/feature#az-feature-show) to check the registration status of the feature flag:
-
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
-```
-
-Wait for the feature to say **Registered** before preceding with the article.
-
-```output
-{
- "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/providers/Microsoft.Features/providers/Microsoft.ContainerService/features/NetworkObservabilityPreview",
- "name": "Microsoft.ContainerService/NetworkObservabilityPreview",
- "properties": {
- "state": "Registering"
- },
- "type": "Microsoft.Features/providers/features"
-}
-```
-When the feature is registered, refresh the registration of the Microsoft.ContainerService resource provider with [az provider register](/cli/azure/provider#az-provider-register):
-
-```azurecli-interactive
-az provider register -n Microsoft.ContainerService
-```
-
-## Create a resource group
-
-A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **myResourceGroup** in the **eastus** location:
-
-```azurecli-interactive
-az group create \
- --name myResourceGroup \
- --location eastus
-```
-
-## Create AKS cluster
-
-Create an AKS cluster with [az aks create](/cli/azure/aks#az-aks-create) command. The following example creates an AKS cluster named **myAKSCluster** in the **myResourceGroup** resource group:
-
-# [**Non-Cilium**](#tab/non-cilium)
-
-Non-Cilium clusters support the enablement of Network Observability on an existing cluster or during the creation of a new cluster.
-
-## New cluster
-
-Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create an AKS cluster with Network Observability and non-Cilium.
-
-```azurecli-interactive
-az aks create \
- --name myAKSCluster \
- --resource-group myResourceGroup \
- --location eastus \
- --generate-ssh-keys \
- --network-plugin azure \
- --network-plugin-mode overlay \
- --pod-cidr 192.168.0.0/16 \
- --enable-network-observability
-```
-
-## Existing cluster
-
-Use [az aks update](/cli/azure/aks#az-aks-update) to enable Network Observability on an existing cluster.
-
-```azurecli-interactive
-az aks update \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --enable-network-observability
-```
-
-# [**Cilium**](#tab/cilium)
-
-Use the following example to create an AKS cluster with Network Observability and Cilium.
-
-```azurecli-interactive
-az aks create \
- --name myAKSCluster \
- --resource-group myResourceGroup \
- --generate-ssh-keys \
- --location eastus \
- --max-pods 250 \
- --network-plugin azure \
- --network-plugin-mode overlay \
- --network-dataplane cilium \
- --node-count 2 \
- --pod-cidr 192.168.0.0/16
-```
---
-## Get cluster credentials
-
-```azurecli-interactive
-az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
-```
-
-## Enable Visualization on Grafana
-
-Use the following example to configure scrape jobs on Prometheus and enable visualization on Grafana for your AKS cluster.
--
-# [**Non-Cilium**](#tab/non-cilium)
-
-> [!NOTE]
-> The following section requires installations of Prometheus and Grafana.
-
-1. Add the following scrape job to your existing Prometheus configuration and restart your Prometheus server:
-
- ```yml
- scrape_configs:
- - job_name: "network-obs-pods"
- kubernetes_sd_configs:
- - role: pod
- relabel_configs:
- - source_labels: [__meta_kubernetes_pod_container_name]
- action: keep
- regex: kappie(.*)
- - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
- separator: ":"
- regex: ([^:]+)(?::\d+)?
- target_label: __address__
- replacement: ${1}:${2}
- action: replace
- - source_labels: [__meta_kubernetes_pod_node_name]
- action: replace
- target_label: instance
- metric_relabel_configs:
- - source_labels: [__name__]
- action: keep
- regex: (.*)
- ```
-
-1. In **Targets** of Prometheus, verify the **network-obs-pods** are present.
-
-1. Sign in to Grafana and import Network Observability dashboard with ID [18814](https://grafana.com/grafana/dashboards/18814/).
-
-# [**Cilium**](#tab/cilium)
-
-> [!NOTE]
-> The following section requires installations of Prometheus and Grafana.
-
-1. Add the following scrape job to your existing Prometheus configuration and restart your prometheus server.
-
- ```yml
- scrape_configs:
- - job_name: 'kubernetes-pods'
- kubernetes_sd_configs:
- - role: pod
- relabel_configs:
- - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
- action: keep
- regex: true
- - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
- action: replace
- regex: (.+):(?:\d+);(\d+)
- replacement: ${1}:${2}
- target_label: __address__
- ```
-
-1. In **Targets** of prometheus, verify the **kubernetes-pods** are present.
-
-1. Sign in to Grafana and import dashboards with the following ID [16611-cilium-metrics](https://grafana.com/grafana/dashboards/16611-cilium-metrics/)
---
-## Clean up resources
-
-If you're not going to continue to use this application, delete the AKS cluster and the other resources created in this article with the following example:
-
-```azurecli-interactive
- az group delete \
- --name myResourceGroup
-```
-
-## Next steps
-
-In this how-to article, you learned how to install and enable AKS Network Observability for your AKS cluster.
--- For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).--- To create an AKS cluster with Network Observability and managed Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) Azure managed Prometheus and Grafana](network-observability-managed-cli.md).-
aks Network Observability Managed Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md
Title: "Setup of Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana"
+ Title: "Set up Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana"
description: Get started with AKS Network Observability for your AKS cluster using Azure managed Prometheus and Grafana.
Last updated 06/20/2023
-# Setup of Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana
+# Set up Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana
AKS Network Observability is used to collect the network traffic data of your AKS cluster. Network Observability enables a centralized platform for monitoring application and network health. Prometheus collects AKS Network Observability metrics, and Grafana visualizes them. Both Cilium and non-Cilium data planes are supported. In this article, learn how to enable the Network Observability add-on and use Azure managed Prometheus and Grafana to visualize the scraped metrics.
-> [!IMPORTANT]
-> AKS Network Observability is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).

## Prerequisites
For more information about AKS Network Observability, see [What is Azure Kuberne
[!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]

- Minimum version of **Azure CLI** required for the steps in this article is **2.44.0**. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-### Install the `aks-preview` Azure CLI extension
+## Create cluster
+
+> [!NOTE]
+>For Kubernetes version >= 1.29, Network Observability is included in clusters with Azure Managed Prometheus. Metric scraping is defined via the [AMA metrics profile](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration).
+>
+>For lower Kubernetes versions, extra steps are required to enable Network Observability.
+
+### [**Kubernetes version >= 1.29**](#tab/newer-k8s-versions)
+
+#### Create a resource group
+
+A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **myResourceGroup** in the **eastus** location:
+
+```azurecli-interactive
+az group create \
+ --name myResourceGroup \
+ --location eastus
+```
+
+#### Create AKS cluster
+
+Create an AKS cluster with [az aks create](/cli/azure/aks#az-aks-create).
+The following examples each create an AKS cluster named **myAKSCluster** in the **myResourceGroup** resource group.
+
+##### Example 1: **Non-Cilium**
+
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create a non-Cilium AKS cluster.
+
+```azurecli-interactive
+az aks create \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --location eastus \
+ --generate-ssh-keys \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16 \
+ --kubernetes-version 1.29
+```
+
+##### Example 2: **Cilium**
+
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create a Cilium AKS cluster.
+
+```azurecli-interactive
+az aks create \
+ --name myAKSCluster \
+ --resource-group myResourceGroup \
+ --generate-ssh-keys \
+ --location eastus \
+ --max-pods 250 \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --network-dataplane cilium \
+ --node-count 2 \
+ --pod-cidr 192.168.0.0/16
+```
+
+### [**Kubernetes version < 1.29**](#tab/older-k8s-versions)
+
+#### Install the `aks-preview` Azure CLI extension
```azurecli-interactive
# Install the aks-preview extension
az extension add --name aks-preview
az extension update --name aks-preview
```
-### Register the `NetworkObservabilityPreview` feature flag
+#### Register the `NetworkObservabilityPreview` feature flag
```azurecli-interactive
az feature register --namespace "Microsoft.ContainerService" --name "NetworkObservabilityPreview"
When the feature is registered, refresh the registration of the Microsoft.Contai
az provider register -n Microsoft.ContainerService
```
-## Create a resource group
+#### Create a resource group
A resource group is a logical container into which Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group named **myResourceGroup** in the **eastus** location:
az group create \
    --name myResourceGroup \
    --location eastus
```
-> [!NOTE]
->For Kubernetes version 1.29 or higher, network observability is enabled with the [AMA metrics profile](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration) and the AFEC flag (NetworkObservabilityPreview) until it reaches general availability.
->
->Starting with Kubernetes version 1.29, the --enable-network-observability tag is no longer required when creating or updating an Azure Kubernetes Service (AKS) cluster.
->
->For AKS clusters running Kubernetes version 1.28 or earlier, enabling network observability requires the --enable-network-observability tag during cluster creation or update.
->
-
-## Create AKS cluster
-Create an AKS cluster with [az aks create](/cli/azure/aks#az-aks-create). The following example creates an AKS cluster named **myAKSCluster** in the **myResourceGroup** resource group:
+#### Create or update AKS cluster
-# [**Non-Cilium**](#tab/non-cilium)
+The following examples each create or update an AKS cluster named **myAKSCluster** in the **myResourceGroup** resource group.
-Non-Cilium clusters support the enablement of Network Observability on an existing cluster or during the creation of a new cluster.
+##### Example 1: **Non-Cilium**
-Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create an AKS cluster with Network Observability and non-Cilium.
+###### Create cluster
-## New cluster
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create a non-Cilium AKS cluster with Network Observability.
```azurecli-interactive
az aks create \
az aks create \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
- --enable-network-observability
+ --enable-advanced-network-observability
```
-## Existing cluster
+###### Update existing cluster
-Use [az aks update](/cli/azure/aks#az-aks-update) to enable Network Observability for an existing cluster.
+Use [az aks update](/cli/azure/aks#az-aks-update) to enable Network Observability for an existing non-Cilium cluster.
```azurecli-interactive
az aks update \
    --resource-group myResourceGroup \
    --name myAKSCluster \
- --enable-network-observability
+ --enable-advanced-network-observability
```
-# [**Cilium**](#tab/cilium)
+##### Example 2: **Cilium**
-Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create an AKS cluster with Network Observability and Cilium.
+Use [az aks create](/cli/azure/aks#az-aks-create) in the following example to create a Cilium AKS cluster.
```azurecli-interactive
az aks create \
az aks update \
    --grafana-resource-id $grafanaId
```

---
-## Get cluster credentials
+## Get cluster credentials
```azurecli-interactive
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
```

## Visualize using Grafana

> [!NOTE]
az aks get-credentials --name myAKSCluster --resource-group myResourceGroup
ama-metrics-win-node-tkrm8 2/2 Running 0 (26h ago) 26h
```
-1. Select **Dashboards** from the left navigation menu, open **Kubernetes / Networking** dashboard under **Managed Prometheus** folder.
+1. Navigate to your Grafana instance in a web browser.
+
+1. We have created a sample dashboard. It can be found under **Dashboards > Azure Managed Prometheus > Kubernetes / Networking / Clusters**.
-1. Check if the Metrics in **Kubernetes / Networking** Grafana dashboard are visible. If metrics aren't shown, change time range to last 15 minutes in top right dropdown box.
+1. Check if the metrics in the **Kubernetes / Networking / Clusters** Grafana dashboard are visible. If metrics aren't shown, change the time range to the last 15 minutes using the dropdown box in the top right.
the AKS cluster and the other resources created in this article with the followi
## Next steps
-In this how-to article, you learned how to install and enable AKS Network Observability for your AKS cluster.
+In this how-to article, you learned how to set up AKS Network Observability for your AKS cluster.
- For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md).
-- To create an AKS cluster with Network Observability and BYO Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) BYO Prometheus and Grafana](network-observability-byo-cli.md).
+- If you're interested in more granular Network Observability and other advanced features, see [What is Advanced Container Networking Services for Azure Kubernetes Service (AKS)?](advanced-container-networking-services-overview.md).
aks Network Observability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-overview.md
Title: What is Azure Kubernetes Service (AKS) Network Observability? (Preview)
+ Title: What is Azure Kubernetes Service (AKS) Network Observability?
description: An overview of network observability for Azure Kubernetes Service (AKS).
Last updated 06/20/2023
-# What is Azure Kubernetes Service (AKS) Network Observability? (Preview)
+# What is Azure Kubernetes Service (AKS) Network Observability?
Kubernetes is a powerful tool for managing containerized applications. As containerized environments grow in complexity, it can be difficult to identify and troubleshoot networking issues in a Kubernetes cluster.
Network observability is an important part of maintaining a healthy and performa
## Overview of Network Observability add-on in AKS
-> [!IMPORTANT]
-> AKS Network Observability is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The Network Observability add-on operates seamlessly on non-Cilium and Cilium data planes. It empowers customers with enterprise-grade capabilities for DevOps and SecOps. This solution offers a centralized way to monitor network issues in your cluster for cluster network administrators, cluster security administrators, and DevOps engineers.
-When the Network Observability add-on is enabled, it allows for the collection and conversion of useful metrics into Prometheus format, which can then be visualized in Grafana. There are two options available for using Prometheus and Grafana in this context: Azure managed [Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview) and [Grafana](/azure/azure-monitor/visualize/grafana-plugin) or BYO Prometheus and Grafana.
-
-* **Azure managed Prometheus and Grafana:** This option involves using a managed service provided by Azure. The managed service takes care of the infrastructure and maintenance of Prometheus and Grafana, allowing you to focus on configuring and visualizing your metrics. This option is convenient if you prefer not to manage the underlying infrastructure.
+When the Network Observability add-on is enabled, it allows for the collection and conversion of useful metrics into Prometheus format, which can then be visualized in Grafana.
+Azure has offerings for managed [Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview) and [Grafana](/azure/azure-monitor/visualize/grafana-plugin).
-* **BYO Prometheus and Grafana:** Alternatively, you can choose to set up your own Prometheus and Grafana instances. In this case, you're responsible for provisioning and managing the infrastructure required to run Prometheus and Grafana. Install and configure Prometheus to scrape the metrics generated by the Network Observability add-on and store them. Similarly, Grafana needs to be set up to connect to Prometheus and visualize the collected data.
+* **Azure managed Prometheus and Grafana:** A managed service provided by Azure, taking care of the infrastructure and maintenance of Prometheus and Grafana, allowing you to focus on configuring and visualizing your metrics.
* **Multi CNI Support:** Network Observability add-on supports both Azure CNI and Kubenet network plugins.
Certain scale limitations apply when you use Azure managed Prometheus and Grafan
- For more information about Azure Kubernetes Service (AKS), see [What is Azure Kubernetes Service (AKS)?](/azure/aks/intro-kubernetes).
+- To create an AKS cluster with Network Observability and Azure managed Prometheus and Grafana, see [Set up Network Observability for Azure Kubernetes Service (AKS) - Azure managed Prometheus and Grafana](advanced-network-observability-cli.md).
aks Node Resource Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-resource-reservations.md
+
+ Title: Node resource reservations in Azure Kubernetes Service (AKS)
+description: Learn about node resource reservations in Azure Kubernetes Service (AKS).
++ Last updated : 04/16/2024++++
+# Node resource reservations in Azure Kubernetes Service (AKS)
+
+In this article, you learn about node resource reservations in Azure Kubernetes Service (AKS).
+
+## Resource reservations
+
+AKS uses node resources to help the nodes function as part of the cluster. This usage can cause a discrepancy between the node's total resources and the allocatable resources in AKS.
+
+AKS reserves two types of resources, **CPU** and **memory**, on each node to maintain node performance and functionality. As a node grows larger in resources, the resource reservations also grow due to a higher need for management of user-deployed pods. Keep in mind that you can't change resource reservations on a node.
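+
+To see the effect of these reservations, you can compare a node's total capacity with its allocatable resources (the node name below is a placeholder):
+
+```bash
+# Capacity is the node's total; Allocatable is what remains for your pods
+# after AKS reservations and the eviction threshold.
+kubectl describe node aks-nodepool1-12345678-vmss000000 | grep -A 6 -E "^(Capacity|Allocatable):"
+```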
+
+### CPU reservations
+
+Reserved CPU is dependent on node type and cluster configuration, which might result in less allocatable CPU due to running extra features. The following table shows CPU reservations in millicores:
+
+| CPU cores on host | 1 core | 2 cores | 4 cores | 8 cores | 16 cores | 32 cores | 64 cores |
+| -- | | - | - | - | -- | -- | -- |
+| Kube-reserved CPU (millicores) | 60 | 100 | 140 | 180 | 260 | 420 | 740 |
+
+### Memory reservations
+
+In AKS, reserved memory consists of the sum of two values:
+
+**AKS 1.29 and later**
+
+* **`kubelet` daemon** has the *memory.available < 100 Mi* eviction rule by default. This rule ensures that a node has at least 100 Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
+* **A rate of memory reservations** set according to the lesser value of: *20 MB * Max Pods supported on the Node + 50 MB* or *25% of the total system memory resources*.
+
+ **Examples**:
+ * If the virtual machine (VM) provides 8 GB of memory and the node supports up to 30 pods, AKS reserves *20 MB * 30 Max Pods + 50 MB = 650 MB* for kube-reserved. `Allocatable space = 8 GB - 0.65 GB (kube-reserved) - 0.1 GB (eviction threshold) = 7.25 GB or 90.625% allocatable.`
+ * If the VM provides 4 GB of memory and the node supports up to 70 pods, AKS reserves *25% * 4 GB = 1000 MB* for kube-reserved, as this is less than *20 MB * 70 Max Pods + 50 MB = 1450 MB*.
+
+ For more information, see [Configure maximum pods per node in an AKS cluster][maximum-pods].
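+
+The same rule as a shell sketch (the VM memory and max pods mirror the first example above; values are illustrative):
+
+```bash
+# kube-reserved memory on AKS 1.29+ is the lesser of
+# (20 MB * max pods + 50 MB) and 25% of total memory
+vm_memory_mb=8192   # 8 GB VM
+max_pods=30
+
+per_pod_rate_mb=$(( 20 * max_pods + 50 ))   # 650
+quarter_total_mb=$(( vm_memory_mb / 4 ))    # 2048
+kube_reserved_mb=$(( per_pod_rate_mb < quarter_total_mb ? per_pod_rate_mb : quarter_total_mb ))
+
+echo "kube-reserved: ${kube_reserved_mb} MB"   # 650, matching the first example
+```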
+
+**AKS versions prior to 1.29**
+
+* **`kubelet` daemon** has the *memory.available < 750 Mi* eviction rule by default. This rule ensures that a node has at least 750 Mi allocatable at all times. When a host is below that available memory threshold, the `kubelet` triggers the termination of one of the running pods and frees up memory on the host machine.
+* **A regressive rate of memory reservations** for the kubelet daemon to properly function (*kube-reserved*).
+ * 25% of the first 4 GB of memory
+ * 20% of the next 4 GB of memory (up to 8 GB)
+ * 10% of the next 8 GB of memory (up to 16 GB)
+ * 6% of the next 112 GB of memory (up to 128 GB)
+ * 2% of any memory more than 128 GB
+
+> [!NOTE]
+> AKS reserves an extra 2 GB for system processes in Windows nodes that isn't part of the calculated memory.
+
+Memory and CPU allocation rules are designed to:
+
+* Keep agent nodes healthy, including some hosting system pods critical to cluster health.
+* Cause the node to report less allocatable memory and CPU than it would report if it weren't part of a Kubernetes cluster.
+
+For example, if a node offers 7 GB, it reports about 34% of memory as not allocatable, including the 750 Mi hard eviction threshold.
+
+`0.75 GB + (0.25 × 4 GB) + (0.20 × 3 GB) = 2.35 GB reserved; 2.35 GB / 7 GB = 33.57%`
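+
+The same arithmetic as a shell sketch (binary MB values land slightly under the decimal-GB figure above):
+
+```bash
+# Regressive kube-reserved memory for a 7 GB node on AKS versions before 1.29:
+# 25% of the first 4 GB plus 20% of the remaining 3 GB, plus the eviction threshold.
+first_tier_mb=$(( 4096 * 25 / 100 ))    # 1024
+second_tier_mb=$(( 3072 * 20 / 100 ))   # 614
+eviction_mb=750
+
+reserved_mb=$(( eviction_mb + first_tier_mb + second_tier_mb ))   # 2388
+echo "reserved: ${reserved_mb} MB of 7168 MB"   # roughly 33% not allocatable
+```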
+
+In addition to reservations for Kubernetes itself, the underlying node OS also reserves an amount of CPU and memory resources to maintain OS functions.
+
+For associated best practices, see [Best practices for basic scheduler features in AKS][operator-best-practices-scheduler].
+
+## Next steps
+
+<!-- LINKS -->
+[operator-best-practices-scheduler]: operator-best-practices-scheduler.md
+[maximum-pods]: concepts-network-ip-address-planning.md#maximum-pods-per-node
aks What Is Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/what-is-aks.md
The following table lists some of the key features of AKS:
| Feature | Description |
| -- | -- |
| **Identity and security management** | • Enforce [regulatory compliance controls using Azure Policy](./security-controls-policy.md) with built-in guardrails and internet security benchmarks. <br/> • Integrate with [Kubernetes RBAC](./azure-ad-rbac.md) to limit access to cluster resources. <br/> • Use [Microsoft Entra ID](./enable-authentication-microsoft-entra-id.md) to set up Kubernetes access based on existing identity and group membership. |
-| **Logging and monitoring** | • Integrate with [Container Insights](../azure-monitor/containers/kubernetes-monitoring-enable.md), a feature in Azure Monitor, to monitor the health and performance of your clusters and containerized applications. <br/> • Set up [Network Observability](./network-observability-overview.md) and [use BYO Prometheus and Grafana](./network-observability-byo-cli.md) to collect and visualize network traffic data from your clusters. |
+| **Logging and monitoring** | • Integrate with [Container Insights](../azure-monitor/containers/kubernetes-monitoring-enable.md), a feature in Azure Monitor, to monitor the health and performance of your clusters and containerized applications. <br/> • Set up [Network Observability](./network-observability-overview.md) to collect and visualize network traffic data from your clusters. |
| **Streamlined deployments** | • Use prebuilt cluster configurations for Kubernetes with [smart defaults](./quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal). <br/> • Autoscale your applications using the [Kubernetes Event Driven Autoscaler (KEDA)](./keda-about.md). <br/> • Use [Draft for AKS](./draft.md) to ready source code and prepare your applications for production. |
| **Clusters and nodes** | • Connect storage to nodes and pods, upgrade cluster components, and use GPUs. <br/> • Create clusters that run multiple node pools to support mixed operating systems and Windows Server containers. <br/> • Configure automatic scaling using the [cluster autoscaler](./cluster-autoscaler.md) and [horizontal pod autoscaler](./tutorial-kubernetes-scale.md#autoscale-pods). <br/> • Deploy clusters with [confidential computing nodes](../confidential-computing/confidential-nodes-aks-overview.md) to allow containers to run in a hardware-based trusted execution environment. |
| **Storage volume support** | • Mount static or dynamic storage volumes for persistent data. <br/> • Use [Azure Disks](./azure-disk-csi.md) for single pod access and [Azure Files](./azure-files-csi.md) for multiple, concurrent pod access. <br/> • Use [Azure NetApp Files](./azure-netapp-files.md) for high-performance, high-throughput, and low-latency file shares. |
app-service Ase Multi Tenant Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/ase-multi-tenant-comparison.md
An App Service Environment is an Azure App Service feature that provides a fully
|Dedicated host group|[Available](overview.md#dedicated-environment) |No | |Remote file storage|Fully dedicated to the App Service Environment |Remote file storage for the application is dedicated, but the storage is hosted on a shared file server | |Private inbound configuration|Yes, using ILB App Service Environment variation |Yes, via private endpoint |
-|Planned maintenance|[Manual upgrade preference is available](how-to-upgrade-preference.md) |The platform handles maintenance. [Service health notifications are available](../../app-service/routine-maintenance.md). |
+|Planned maintenance|[Manual upgrade preference is available](how-to-upgrade-preference.md) |[The platform handles maintenance](../../app-service/routine-maintenance.md) |
|Aggregate remote file share storage limit|1 TB for all apps in an App Service Environment v3|250 GB for all apps in a single App Service plan. 500 GB for all apps across all App Service plans in a single resource group.|

### Scaling
app-service Get Resource Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/get-resource-events.md
- Title: Get resource events in Azure App Service
-description: Learn how to get resource events through Activity Logs and Event Grid on your App Service app.
- Previously updated : 04/24/2020---
-# Get resource events in Azure App Service
-
-Azure App Service provides built-in tools to monitor the status and health of your resources. Resource events help you understand any changes that were made to your underlying web app resources and take action as necessary. Event examples include: scaling of instances, updates to application settings, restarting of the web app, and many more. In this article, you'll learn how to view [Azure Activity Logs](../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log) and enable [Event Grid](../event-grid/index.yml) to monitor App Service resource events.
-
-## View Azure Activity Logs
-Azure Activity Logs contain resource events emitted by operations taken on the resources in your subscription. Both the user actions in the Azure portal and Azure Resource Manager templates contribute to the events captured by the Activity log.
-
-Azure Activity Logs for App Service details such as:
-- What operations were taken on the resources (ex: App Service Plans)-- Who started the operation-- When the operation occurred-- Status of the operation-- Property values to help you research the operation-
-### What can you do with Azure Activity Logs?
-
-Azure Activity Logs can be queried using the Azure portal, PowerShell, REST API, or CLI. You can send the logs to a storage account, Event Hub, and Log Analytics. You can also analyze them in Power BI or create alerts to stay updated on resource events.
-
-[View and retrieve Azure Activity log events.](../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log)
-
-## Ship Activity Logs to Event Grid
-
-While Activity logs are user-based, there's a new [Event Grid](../event-grid/index.yml) integration with App Service (preview) that logs both user actions and automated events. With Event Grid, you can configure a handler to react to the said events. For example, use Event Grid to instantly trigger a serverless function to run image analysis each time a new photo is added to a blob storage container.
-
-Alternatively, you can use Event Grid with Logic Apps to process data anywhere, without writing code. Event Grid connects data sources and event handlers.
-
-[View the properties and schema for Azure App Service Events.](../event-grid/event-schema-app-service.md)
-
-## <a name="nextsteps"></a> Next steps
-* [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md)
-* [How to Monitor Azure App Service](web-sites-monitor.md)
-* [Troubleshooting Azure App Service in Visual Studio](troubleshoot-dotnet-visual-studio.md)
app-service Monitor App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-app-service.md
## App Service monitoring
+Azure App Service provides several options for monitoring resources for availability, performance, and operation, including diagnostic settings, Application Insights, log stream, metrics, quotas and alerts, and activity logs.
+
On the Azure portal page for your web app, you can select **Diagnose and solve problems** from the left navigation to access complete App Service diagnostics for your app. For more information about the App Service diagnostics tool, see [Azure App Service diagnostics overview](overview-diagnostics.md).

App Service provides built-in diagnostics logging to assist with debugging apps. For more information about the built-in logs, see [Stream diagnostics logs](troubleshoot-diagnostic-logs.md#stream-logs).

You can also use Azure Health check to monitor App Service instances. For more information, see [Monitor App Service instances using Health check](monitor-instances-health-check.md).
-For a complete overview and summary of App Service monitoring options, see [Azure App Service monitoring overview](overview-monitoring.md).
+If you're using ASP.NET Core, ASP.NET, Java, Node.js, or Python, we recommend [enabling observability with Application Insights](/azure/azure-monitor/app/opentelemetry-enable). To learn more about observability experiences offered by Application Insights, see [Application Insights overview](/azure/azure-monitor/app/app-insights-overview).
+
+### Monitoring scenarios
+
+The following table lists monitoring methods to use for different scenarios.
+
+|Scenario|Monitoring method |
+|-|--|
+|I want to monitor platform metrics and logs | [Azure Monitor platform metrics](#platform-metrics)|
+|I want to monitor application performance and usage | (Azure Monitor) [Application Insights](#application-insights)|
+|I want to monitor built-in logs for testing and development|[Log stream](troubleshoot-diagnostic-logs.md#stream-logs)|
+|I want to monitor resource limits and configure alerts|[Quotas and alerts](web-sites-monitor.md)|
+|I want to monitor web app resource events|[Activity logs](#activity-log)|
+|I want to monitor metrics visually|[Metrics](web-sites-monitor.md#metrics-granularity-and-retention-policy)|
[!INCLUDE [horz-monitor-insights](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-insights.md)]
For more information about the resource types for App Service, see [App Service
[!INCLUDE [horz-monitor-data-storage](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-data-storage.md)]
+<a name="platform-metrics"></a>
[!INCLUDE [horz-monitor-platform-metrics](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-platform-metrics.md)] For a list of available metrics for App Service, see [App Service monitoring data reference](monitor-app-service-reference.md#metrics).
+For help understanding metrics in App Service, see [Understand metrics](web-sites-monitor.md#understand-metrics). Metrics can be viewed by aggregates on data (for example, average, max, and min), instances, time range, and other filters. Metrics can monitor performance, memory, CPU, and other attributes.
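+
+As a sketch, you can query a platform metric such as `CpuTime` from the CLI (the resource ID is a placeholder):
+
+```azurecli
+az monitor metrics list \
+    --resource /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/myWebApp \
+    --metric "CpuTime" \
+    --interval PT1H \
+    --aggregation Total
+```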
+ [!INCLUDE [horz-monitor-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-resource-logs.md)] For the available resource log categories, their associated Log Analytics tables, and the logs schemas for App Service, see [App Service monitoring data reference](monitor-app-service-reference.md#resource-logs). [!INCLUDE [audit log categories tip](./includes/azure-monitor-log-category-groups-tip.md)]
+<a name="activity-log"></a>
[!INCLUDE [horz-monitor-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-activity-log.md)]
+### Azure activity logs for App Service
+
+Azure activity logs for App Service include details such as:
+
+- What operations were taken on the resources (for example, App Service plans)
+- Who started the operation
+- When the operation occurred
+- Status of the operation
+- Property values to help you research the operation
+
+Azure activity logs can be queried using the Azure portal, PowerShell, REST API, or CLI.
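+
+For example, a minimal CLI sketch (the resource group name is a placeholder):
+
+```azurecli
+# List the last day of activity log events for a resource group
+az monitor activity-log list \
+    --resource-group myResourceGroup \
+    --offset 1d \
+    --output table
+```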
+
+### Ship activity logs to Event Grid
+
+While activity logs are user-based, there's a new [Azure Event Grid](../event-grid/index.yml) integration with App Service (preview) that logs both user actions and automated events. With Event Grid, you can configure a handler to react to those events. For example, use Event Grid to instantly trigger a serverless function to run image analysis each time a new photo is added to a blob storage container.
+
+Alternatively, you can use Event Grid with Logic Apps to process data anywhere, without writing code. Event Grid connects data sources and event handlers.
+
+To view the properties and schema for App Service events, see [Azure App Service as an Event Grid source](../event-grid/event-schema-app-service.md).
+
+## Log stream (via App Service Logs)
+
+Azure provides built-in diagnostics to assist during testing and development to debug an App Service app. [Log stream](troubleshoot-diagnostic-logs.md#stream-logs) gives you quick access to your application's standard output and error logs, in addition to logs from the web server.
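+
+For example, a minimal sketch using the Azure CLI (app and resource group names are placeholders):
+
+```azurecli
+# Stream live application and web server logs
+az webapp log tail --name myWebApp --resource-group myResourceGroup
+```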
+ [!INCLUDE [horz-monitor-analyze-data](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-analyze-data.md)] [!INCLUDE [horz-monitor-external-tools](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-external-tools.md)]
See [Azure Monitor queries for App Service](https://github.com/microsoft/AzureMo
[!INCLUDE [horz-monitor-insights-alerts](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-insights-alerts.md)]
+### Quotas and alerts
+
+Apps that are hosted in App Service are subject to certain limits on the resources they can use. [The limits](web-sites-monitor.md#understand-quotas) are defined by the App Service plan that's associated with the app. Metrics for an app or an App Service plan can be connected to alerts.
+ ### App Service alert rules The following table lists common and recommended alert rules for App Service.
app-service Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md
- Title: Monitoring overview
-description: Learn about the various monitoring options on App Service.
-keywords: app service, azure app service, monitoring, diagnostic settings, support, web app, troubleshooting,
- Previously updated : 06/29/2023----
-# Azure App Service monitoring overview
-
-Azure App Service provides several monitoring options for monitoring resources for availability, performance, and operation. Options such as Diagnostic Settings, Application Insights, Log stream, Metrics, Quotas and alerts, and Activity logs. This article aims to bring clarity on monitoring options on App Service and [provide scenarios](#monitoring-scenarios) when each should be used.
-
-## Diagnostic Settings (via Azure Monitor)
-
-Azure Monitor is a monitoring service that provides a complete set of features to monitor your Azure resources in addition to resources in other clouds and on-premises. The Azure Monitor data platform collects data into logs and metrics where they can be analyzed. App Service monitoring data can be shipped to Azure Monitor through Diagnostic Settings.
-
-Diagnostic settings lets you export logs to other services, such as Log Analytics, Storage account, and Event Hubs. Large amounts of data using SQL-like Kusto can be queried with Log Analytics. You can capture platform logs in Azure Monitor Logs as configured via Diagnostic Settings, and instrument your app further with the dedicated application performance management feature (Application Insights) for additional telemetry and logs.
-
-For an end-to-end tutorial on Diagnostic Settings, see the article [Troubleshoot an App Service app with Azure Monitor](tutorial-troubleshoot-monitor.md).
-
-## Quotas and alerts
-
-Apps that are hosted in App Service are subject to certain limits on the resources they can use. [The limits](web-sites-monitor.md#understand-quotas) are defined by the App Service plan that's associated with the app. Metrics for an app or an App Service plan can be hooked up to alerts.
-
-## Metrics
-
-Build visualizations of [metrics](web-sites-monitor.md#understand-metrics) on Azure resources (web apps and App Service Plans). Metrics can be viewed by aggregates on data (ie. average, max, min, etc), instances, time range, and other filters. Metrics can monitor performance, memory, CPU, and other attributes.
-
-## Activity logs
-View a historical log of [events changing your resource](get-resource-events.md#view-azure-activity-logs). Resource events help you understand any changes that were made to your underlying web app resources and take action as necessary. Event examples include scaling of instances, updates to application settings, restarting of the web app, and many more.
-
-## Application Insights (via Azure Monitor)
-
-[Application Insights](monitor-app-service.md#application-insights), a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. Use it to monitor your live applications. It will automatically detect performance anomalies, and includes powerful analytics tools to help you diagnose issues and to understand what users actually do with your app. The logs in Application Insights are generated by application code.
-
-## Log stream (via App Service Logs)
-Azure provides built-in diagnostics to assist during testing and development to debug an App Service app. [Log stream](troubleshoot-diagnostic-logs.md#stream-logs) can be used to get quick access to output and errors written by your application, and logs from the web server. These are standard output/error logs in addition to web server logs.
-
-## Monitoring scenarios
-
-The table below lists monitoring methods to use for different scenarios.
-
-|Scenario|Monitoring method |
-|-|--|
-|I want to monitor platform metrics and logs | (Azure Monitor) [Diagnostic Settings](troubleshoot-diagnostic-logs.md)|
-|I want to monitor application performance and usage | (Azure Monitor) [Application Insights](monitor-app-service.md#application-insights)|
-|I want to monitor built-in logs for testing and development|[Log stream](troubleshoot-diagnostic-logs.md#stream-logs)|
-|I want to monitor resource limits and configure alerts|[Quotas and alerts](web-sites-monitor.md)|
-|I want to monitor web app resource events|[Activity Logs](get-resource-events.md#view-azure-activity-logs)|
-|I want to monitor metrics visually|[Metrics](web-sites-monitor.md#metrics-granularity-and-retention-policy)|
-
-## Next steps
-* [Query logs with Azure Monitor](../azure-monitor/logs/log-query-overview.md)
-* [How to Monitor Azure App Service](web-sites-monitor.md)
-* [Troubleshooting Azure App Service with Azure Monitor](tutorial-troubleshoot-monitor.md)
-* [Monitor App Service with Azure Monitor](monitor-app-service.md)
app-service Troubleshoot Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md
With the new [Azure Monitor integration](https://aka.ms/appsvcblog-azmon), you c
### Supported log types
-The following table shows the supported log types and descriptions:
-
-| Log Name| Log type | Windows | Windows Container | Linux | Linux Container | Description |
-|-|-|-|-|-|-|-|
-| App Service Console Logs | AppServiceConsoleLogs | Java SE & Tomcat | Yes | Yes | Yes | Standard output and standard error <sup>3</sup> |
-| HTTP logs | AppServiceHTTPLogs | Yes | Yes | Yes | Yes | Web server logs |
-| App Service Environment Platform Logs | AppServiceEnvironmentPlatformLogs | Yes | N/A | Yes | Yes | App Service Environment: scaling, configuration changes, and status logs|
-| Access Audit Logs | AppServiceAuditLogs | Yes | Yes | Yes | Yes | Login activity via FTP and Kudu |
-| Site Content Change Audit Logs | AppServiceFileAuditLogs | Yes | Yes | TBA | TBA | File changes made to the site content; **only available for Premium tier and above** |
-| App Service Application Logs | AppServiceAppLogs | ASP.NET, .NET Core, & Tomcat <sup>1</sup> | ASP.NET & Tomcat <sup>1</sup> | .NET Core, Java, SE & Tomcat Blessed Images <sup>2</sup> | Java SE & Tomcat Blessed Images <sup>2</sup> | Application logs <sup>3</sup> |
-| IPSecurity Audit logs | AppServiceIPSecAuditLogs | Yes | Yes | Yes | Yes | Requests from IP Rules |
-| App Service Platform logs | AppServicePlatformLogs | TBA | Yes | Yes | Yes | Container operation logs |
-| Report Antivirus Audit Logs | AppServiceAntivirusScanAuditLogs <sup>4</sup> | Yes | Yes | Yes | Yes | [Anti-virus scan logs](https://azure.github.io/AppService/2020/12/09/AzMon-AppServiceAntivirusScanAuditLogs.html) using Microsoft Defender for Cloud; **only available for Premium tier** |
-
-<sup>1</sup> For Tomcat apps, add `TOMCAT_USE_STARTUP_BAT` to the app settings and set it to `false` or `0`. Need to be on the *latest* Tomcat version and use *java.util.logging*.
-
-<sup>2</sup> For Java SE apps, add `WEBSITE_AZMON_PREVIEW_ENABLED` to the app settings and set it to `true` or to `1`.
-
-<sup>3</sup> Current logging limit is set to 100 logs per minute.
-
-<sup>4</sup> AppServiceAntivirusScanAuditLogs log type is still currently in Preview
+For a list of supported log types and their descriptions, see [Supported resource logs for Microsoft.Web](monitor-app-service-reference.md#supported-resource-logs-for-microsoftweb).
## Networking considerations
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
az monitor log-analytics workspace create --resource-group myResourceGroup --wor
### Create a diagnostic setting
-Diagnostic settings can be used to collect metrics for certain Azure services into Azure Monitor Logs for analysis with other monitoring data using log queries. For this tutorial, you enable the web server and standard output/error logs. See [supported log types](./troubleshoot-diagnostic-logs.md#supported-log-types) for a complete list of log types and descriptions.
+Diagnostic settings can be used to collect metrics for certain Azure services into Azure Monitor Logs for analysis with other monitoring data using log queries. For this tutorial, you enable the web server and standard output/error logs. See [supported log types](monitor-app-service-reference.md#resource-logs) for a complete list of log types and descriptions.
You run the following commands to create diagnostic settings for AppServiceConsoleLogs (standard output/error) and AppServiceHTTPLogs (web server logs). Replace _\<app-name>_ and _\<workspace-name>_ with your values.
app-service Web Sites Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-monitor.md
Title: Monitor apps
+ Title: Quotas and alerts
description: Learn how to monitor apps in Azure App Service by using the Azure portal. Understand the quotas and metrics that are reported. ms.assetid: d273da4e-07de-48e0-b99d-4020d84a425e
Last updated 06/29/2023
-# Monitor apps in Azure App Service
+# Azure App Service quotas and alerts
[Azure App Service](./overview.md) provides built-in monitoring functionality for web apps, mobile, and API apps in the [Azure portal](https://portal.azure.com).
You can increase or remove quotas from your app by upgrading your App Service pl
Metrics provide information about the app or the App Service plan's behavior.
-For an app, the available metrics are:
-
-| Metric | Description |
-| | |
-| **Response Time** | The time taken for the app to serve requests, in seconds. |
-| **Average Response Time (deprecated)** | The average time taken for the app to serve requests, in seconds. |
-| **Average memory working set** | The average amount of memory used by the app, in megabytes (MiB). |
-| **Connections** | The number of bound sockets existing in the sandbox (w3wp.exe and its child processes). A bound socket is created by calling bind()/connect() APIs and remains until said socket is closed with CloseHandle()/closesocket(). |
-| **CPU Time** | The amount of CPU consumed by the app, in seconds. For more information about this metric, see [CPU time vs CPU percentage](#cpu-time-vs-cpu-percentage). |
-| **Current Assemblies** | The current number of Assemblies loaded across all AppDomains in this application. |
-| **Data In** | The amount of incoming bandwidth consumed by the app, in MiB. |
-| **Data Out** | The amount of outgoing bandwidth consumed by the app, in MiB. |
-| **File System Usage** | The amount of usage in bytes by storage share. |
-| **Gen 0 Garbage Collections** | The number of times the generation 0 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|
-| **Gen 1 Garbage Collections** | The number of times the generation 1 objects are garbage collected since the start of the app process. Higher generation GCs include all lower generation GCs.|
-| **Gen 2 Garbage Collections** | The number of times the generation 2 objects are garbage collected since the start of the app process.|
-| **Handle Count** | The total number of handles currently open by the app process.|
-| **Health Check Status** | The average health status across the application's instances in the App Service Plan.|
-| **Http 2xx** | The count of requests resulting in an HTTP status code ≥ 200 but < 300. |
-| **Http 3xx** | The count of requests resulting in an HTTP status code ≥ 300 but < 400. |
-| **Http 401** | The count of requests resulting in HTTP 401 status code. |
-| **Http 403** | The count of requests resulting in HTTP 403 status code. |
-| **Http 404** | The count of requests resulting in HTTP 404 status code. |
-| **Http 406** | The count of requests resulting in HTTP 406 status code. |
-| **Http 4xx** | The count of requests resulting in an HTTP status code ≥ 400 but < 500. |
-| **Http Server Errors** | The count of requests resulting in an HTTP status code ≥ 500 but < 600. |
-| **IO Other Bytes Per Second** | The rate at which the app process is issuing bytes to I/O operations that don't involve data, such as control operations.|
-| **IO Other Operations Per Second** | The rate at which the app process is issuing I/O operations that aren't read or write operations.|
-| **IO Read Bytes Per Second** | The rate at which the app process is reading bytes from I/O operations.|
-| **IO Read Operations Per Second** | The rate at which the app process is issuing read I/O operations.|
-| **IO Write Bytes Per Second** | The rate at which the app process is writing bytes to I/O operations.|
-| **IO Write Operations Per Second** | The rate at which the app process is issuing write I/O operations.|
-| **Memory working set** | The current amount of memory used by the app, in MiB. |
-| **Private Bytes** | Private Bytes is the current size, in bytes, of memory that the app process has allocated that can't be shared with other processes.|
-| **Requests** | The total number of requests regardless of their resulting HTTP status code. |
-| **Requests In Application Queue** | The number of requests in the application request queue.|
-| **Thread Count** | The number of threads currently active in the app process.|
-| **Total App Domains** | The current number of AppDomains loaded in this application.|
-| **Total App Domains Unloaded** | The total number of AppDomains unloaded since the start of the application.|
--
-For an App Service plan, the available metrics are:
+For a list of available metrics for apps or for App Service plans, see [Supported metrics for Microsoft.Web](monitor-app-service-reference.md#supported-metrics-for-microsoftweb).
> [!NOTE] > App Service plan metrics are available only for plans in *Basic*, *Standard*, *Premium*, and *Isolated* tiers.
->
-
-| Metric | Description |
-| | |
-| **CPU Percentage** | The average CPU used across all instances of the plan. |
-| **Memory Percentage** | The average memory used across all instances of the plan. |
-| **Data In** | The average incoming bandwidth used across all instances of the plan. |
-| **Data Out** | The average outgoing bandwidth used across all instances of the plan. |
-| **Disk Queue Length** | The average number of both read and write requests that were queued on storage. A high disk queue length is an indication of an app that might be slowing down because of excessive disk I/O. |
-| **Http Queue Length** | The average number of HTTP requests that had to sit on the queue before being fulfilled. A high or increasing HTTP Queue length is a symptom of a plan under heavy load. |
### CPU time vs CPU percentage <!-- To do: Fix Anchor (#CPU-time-vs.-CPU-percentage) -->
Clicking on any of those charts will take you to the metrics view where you can
To learn more about metrics, see [Monitor service metrics](../azure-monitor/data-platform.md).

## Alerts and autoscale
-Metrics for an app or an App Service plan can be hooked up to alerts. For more information, see [Receive alert notifications](../azure-monitor/alerts/alerts-classic-portal.md).
+
+Metrics for an app or an App Service plan can be hooked up to alerts. For more information, see [Alerts](monitor-app-service.md#alerts).
App Service apps hosted in Basic or higher App Service plans support autoscale. With autoscale, you can configure rules that monitor the App Service plan metrics. Rules can increase or decrease the instance count, which can provide additional resources as needed. Rules can also help you save money when the app is over-provisioned.
application-gateway Custom Health Probe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/custom-health-probe.md
Previously updated : 5/9/2024 Last updated : 7/10/2024
When the default health probe is used, the following values for each health prob
| healthyThreshold | 1 probe |
| unhealthyThreshold | 3 probes |
| port | The port number used is defined by the backend port number in the Ingress resource or HttpRoute backend port in the HttpRoute resource. |
-| protocol | HTTP for HTTP and HTTPS when TLS is specified |
+| protocol | HTTP or HTTPS<sup>1</sup> |
| (http) host | localhost |
| (http) path | / |
+<sup>1</sup> HTTPS is used when a backendTLSPolicy references a target backend service (for the Gateway API implementation) or when an IngressExtension with a backendSetting protocol of HTTPS is specified (for the Ingress API implementation).
+ >[!Note] >Health probes are initiated with the `User-Agent` value of `Microsoft-Azure-Application-LB/AGC`.
spec:
timeout: 3s healthyThreshold: 1 unhealthyThreshold: 1
- protocol: HTTP
http: host: contoso.com path: /
azure-app-configuration Concept Feature Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-feature-management.md
To use feature flags effectively, you need to externalize all the feature flags
Azure App Configuration provides a centralized repository for feature flags. You can use it to define different kinds of feature flags and manipulate their states quickly and confidently. You can then use the App Configuration libraries for various programming language frameworks to easily access these feature flags from your application.
-[The feature flags in an ASP.NET Core app](./use-feature-flags-dotnet-core.md) shows how the App Configuration .NET provider and Feature Management libraries are used together to implement feature flags for your ASP.NET web application. For more information on feature flags in Azure App Configuration, see the following articles:
-
-* [Manage feature flags](./manage-feature-flags.md)
-* [Use conditional feature flags](./howto-feature-filters-aspnet-core.md)
-* [Enable a feature for specified users/groups](./howto-targetingfilter-aspnet-core.md)
-* [Add feature flags to an ASP.NET Core app](./quickstart-feature-flag-aspnet-core.md)
-* [Add feature flags to a .NET Framework app](./quickstart-feature-flag-dotnet.md)
-* [Add feature flags to an Azure Functions app](./quickstart-feature-flag-azure-functions-csharp.md)
-* [Add feature flags to a Spring Boot app](./quickstart-feature-flag-spring-boot.md)
-* [Use feature flags in an ASP.NET Core](./use-feature-flags-dotnet-core.md)
-* [Use feature flags in a Spring Boot app](./use-feature-flags-spring-boot.md)
- ## Next steps
+To start using feature flags with Azure App Configuration, continue to the following quickstarts specific to your application's language or platform.
+
+> [!div class="nextstepaction"]
+> [ASP.NET Core](./quickstart-feature-flag-aspnet-core.md)
+ > [!div class="nextstepaction"]
-> [Add feature flags to an ASP.NET Core web app](./quickstart-feature-flag-aspnet-core.md)
+> [.NET/.NET Framework](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [.NET background service](./quickstart-feature-flag-dotnet-background-service.md)
+
+> [!div class="nextstepaction"]
+> [Java Spring](./quickstart-feature-flag-spring-boot.md)
+
+> [!div class="nextstepaction"]
+> [Python](./quickstart-feature-flag-python.md)
+
+> [!div class="nextstepaction"]
+> [Azure Kubernetes Service](./quickstart-feature-flag-azure-kubernetes-service.md)
+
+> [!div class="nextstepaction"]
+> [Azure Functions](./quickstart-feature-flag-azure-functions-csharp.md)
+
+To learn more about managing feature flags in Azure App Configuration, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Manage feature flags in Azure App Configuration](./manage-feature-flags.md)
+
+Feature filters allow you to enable a feature flag conditionally. Azure App Configuration offers built-in feature filters that enable you to activate a feature flag only during a specific period or for a particular targeted audience of your app. For more information, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Enable conditional features with feature filters](./howto-feature-filters.md)
+
+> [!div class="nextstepaction"]
+> [Enable features on a schedule](./howto-timewindow-filter.md)
+
+> [!div class="nextstepaction"]
+> [Roll out features to targeted audiences](./howto-targetingfilter.md)
+
azure-app-configuration Feature Management Dotnet Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/feature-management-dotnet-reference.md
Title: .NET feature management - Azure App Configuration
-description: Overview of .NET Feature Management library
+ Title: .NET feature flag management
+
+description: In this tutorial, you learn how to use feature flags in .NET apps. The feature management library provides various out-of-the-box solutions for application development, ranging from simple feature toggles to complex feature experimentation.
+ms.devlang: csharp
Last updated 05/22/2024 zone_pivot_groups: feature-management
+#Customer intent: I want to control feature availability in my app by using the Feature Management library.
# .NET Feature Management
By default, the feature manager retrieves feature flag configuration from the "F
> [!NOTE] > You can also specify that feature flag configuration should be retrieved from a different configuration section by passing the section to `AddFeatureManagement`. The following example tells the feature manager to read from a different section called "MyFeatureFlags" instead:-
-``` C#
-services.AddFeatureManagement(configuration.GetSection("MyFeatureFlags"));
-```
+>
+> ``` C#
+> services.AddFeatureManagement(configuration.GetSection("MyFeatureFlags"));
+> ```
### Dependency Injection
To use an implementation of `IFeatureDefinitionProvider`, it must be added into
services.AddSingleton<IFeatureDefinitionProvider, InMemoryFeatureDefinitionProvider>() .AddFeatureManagement() ```+
+## Next steps
+
+To learn how to use feature flags in your applications, continue to the following quickstarts.
+
+> [!div class="nextstepaction"]
+> [ASP.NET Core](./quickstart-feature-flag-aspnet-core.md)
+
+> [!div class="nextstepaction"]
+> [.NET/.NET Framework console app](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [.NET background service](./quickstart-feature-flag-dotnet-background-service.md)
+
+To learn how to use feature filters, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Enable conditional features with feature filters](./howto-feature-filters.md)
+
+> [!div class="nextstepaction"]
+> [Enable features on a schedule](./howto-timewindow-filter.md)
+
+> [!div class="nextstepaction"]
+> [Roll out features to targeted audiences](./howto-targetingfilter.md)
+
+To learn how to run experiments with variant feature flags, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Run experiments with variant feature flags](./run-experiments-aspnet-core.md)
azure-app-configuration Howto Feature Filters Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-feature-filters-aspnet-core.md
To learn more about the built-in feature filters, continue to the following tuto
> [!div class="nextstepaction"] > [Roll out features to targeted audience](./howto-targetingfilter.md)+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Howto Targetingfilter Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-targetingfilter-aspnet-core.md
To learn more about the feature filters, continue to the following tutorials.
> [!div class="nextstepaction"] > [Enable features on a schedule](./howto-timewindow-filter-aspnet-core.md)+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Howto Timewindow Filter Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-timewindow-filter-aspnet-core.md
To learn more about the feature filters, continue to the following tutorials.
> [!div class="nextstepaction"] > [Roll out features to targeted audience](./howto-targetingfilter.md)+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Manage Feature Flags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/manage-feature-flags.md
Title: "Use Azure App Configuration to manage feature flags"
+ Title: Use Azure App Configuration to manage feature flags
description: In this quickstart, you learn how to manage feature flags separately from your application by using Azure App Configuration.
Feature flags created with the Feature manager are stored as regular key-values.
## Next steps
+To start using feature flags with Azure App Configuration, continue to the following quickstarts specific to your application's language or platform.
+
+> [!div class="nextstepaction"]
+> [ASP.NET Core](./quickstart-feature-flag-aspnet-core.md)
+
+> [!div class="nextstepaction"]
+> [.NET/.NET Framework](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [.NET background service](./quickstart-feature-flag-dotnet-background-service.md)
+
+> [!div class="nextstepaction"]
+> [Java Spring](./quickstart-feature-flag-spring-boot.md)
+
+> [!div class="nextstepaction"]
+> [Python](./quickstart-feature-flag-python.md)
+
+> [!div class="nextstepaction"]
+> [Azure Kubernetes Service](./quickstart-feature-flag-azure-kubernetes-service.md)
+ > [!div class="nextstepaction"]
-> [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md)
+> [Azure Functions](./quickstart-feature-flag-azure-functions-csharp.md)
azure-app-configuration Quickstart Feature Flag Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-aspnet-core.md
Title: Quickstart for adding feature flags to ASP.NET Core
-description: Add feature flags to ASP.NET Core apps and manage them using Azure App Configuration
+ Title: Quickstart for adding feature flags to ASP.NET Core apps
+
+description: This quickstart shows you how to integrate feature flags from Azure App Configuration into your ASP.NET Core apps.
+ ms.devlang: csharp
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
## Next steps
-In this quickstart, you added feature management capability to an ASP.NET Core app on top of dynamic configuration. The [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) library offers rich integration for ASP.NET Core apps, including feature management in MVC controller actions, razor pages, views, routes, and middleware. For more information, continue to the following tutorial.
+In this quickstart, you added feature management capability to an ASP.NET Core app on top of dynamic configuration. The [Microsoft.FeatureManagement.AspNetCore](https://www.nuget.org/packages/Microsoft.FeatureManagement.AspNetCore) library offers rich integration for ASP.NET Core apps, including feature management in MVC controller actions, razor pages, views, routes, and middleware. For the full feature rundown of the .NET feature management library, continue to the following document.
> [!div class="nextstepaction"]
-> [Use feature flags in ASP.NET Core apps](./use-feature-flags-dotnet-core.md)
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
While a feature flag allows you to activate or deactivate functionality in your app, you may want to customize a feature flag based on your app's logic. Feature filters allow you to enable a feature flag conditionally. For more information, continue to the following tutorial. > [!div class="nextstepaction"]
-> [Use feature filters for conditional feature flags](./howto-feature-filters-aspnet-core.md)
+> [Enable conditional features with feature filters](./howto-feature-filters.md)
Azure App Configuration offers built-in feature filters that enable you to activate a feature flag only during a specific period or to a particular targeted audience of your app. For more information, continue to the following tutorial. > [!div class="nextstepaction"]
-> [Enable features for targeted audiences](./howto-targetingfilter-aspnet-core.md)
+> [Enable features on a schedule](./howto-timewindow-filter.md)
+
+> [!div class="nextstepaction"]
+> [Roll out features to targeted audiences](./howto-targetingfilter.md)
To enable feature management capability for other types of apps, continue to the following tutorials. > [!div class="nextstepaction"]
-> [Use feature flags in .NET apps](./quickstart-feature-flag-dotnet.md)
+> [Use feature flags in .NET/.NET Framework console apps](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [Use feature flags in .NET background services](./quickstart-feature-flag-dotnet-background-service.md)
> [!div class="nextstepaction"] > [Use feature flags in Azure Functions](./quickstart-feature-flag-azure-functions-csharp.md)
azure-app-configuration Quickstart Feature Flag Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-azure-functions-csharp.md
This project will use [dependency injection in .NET Azure Functions](../azure-fu
## Next steps
-In this quickstart, you created a feature flag and used it with an Azure Functions app via the [Microsoft.FeatureManagement](https://www.nuget.org/packages/Microsoft.FeatureManagement/) library.
+In this quickstart, you created a feature flag and used it with an Azure Functions app.
-- Learn more about [feature management](./concept-feature-management.md)-- [Manage feature flags](./manage-feature-flags.md)-- [Use conditional feature flags](./howto-feature-filters-aspnet-core.md)-- [Enable staged rollout of features for targeted audiences](./howto-targetingfilter-aspnet-core.md)-- [Use dynamic configuration in an Azure Functions app](./enable-dynamic-configuration-azure-functions-csharp.md)
+To enable feature management capability for other types of apps, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Use feature flags in ASP.NET Core apps](./quickstart-feature-flag-aspnet-core.md)
+
+> [!div class="nextstepaction"]
+> [Use feature flags in .NET/.NET Framework console apps](./quickstart-feature-flag-dotnet.md)
+
+> [!div class="nextstepaction"]
+> [Use feature flags in .NET background services](./quickstart-feature-flag-dotnet-background-service.md)
+
+To learn more about managing feature flags in Azure App Configuration, continue to the following tutorial.
+
+> [!div class="nextstepaction"]
+> [Manage feature flags in Azure App Configuration](./manage-feature-flags.md)
+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Quickstart Feature Flag Dotnet Background Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet-background-service.md
Add a feature flag called *Beta* to the App Configuration store and leave **Labe
## Next steps
+In this quickstart, you created a feature flag and used it with a background service.
+ To enable feature management capability for other types of apps, continue to the following tutorials. > [!div class="nextstepaction"]
-> [Use feature flags in .NET console apps](./quickstart-feature-flag-dotnet.md)
+> [Use feature flags in ASP.NET Core apps](./quickstart-feature-flag-aspnet-core.md)
> [!div class="nextstepaction"]
-> [Use feature flags in ASP.NET Core apps](./quickstart-feature-flag-aspnet-core.md)
+> [Use feature flags in .NET/.NET Framework console apps](./quickstart-feature-flag-dotnet.md)
To learn more about managing feature flags in Azure App Configuration, continue to the following tutorial. > [!div class="nextstepaction"] > [Manage feature flags in Azure App Configuration](./manage-feature-flags.md)+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Quickstart Feature Flag Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-dotnet.md
You can use Visual Studio to create a new console app project.
In this quickstart, you created a feature flag in App Configuration and used it with a console app. To learn how to dynamically update feature flags and other configuration values without restarting the application, continue to the next tutorial. - > [!div class="nextstepaction"] > [Enable dynamic configuration in a .NET app](./enable-dynamic-configuration-dotnet-core.md) > [!div class="nextstepaction"] > [Enable dynamic configuration in a .NET Framework app](./enable-dynamic-configuration-dotnet.md)
+To enable feature management capability for other types of apps, continue to the following tutorials.
+
+> [!div class="nextstepaction"]
+> [Use feature flags in ASP.NET Core apps](./quickstart-feature-flag-aspnet-core.md)
+
+> [!div class="nextstepaction"]
+> [Use feature flags in .NET background services](./quickstart-feature-flag-dotnet-background-service.md)
+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Run Experiments Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/run-experiments-aspnet-core.md
description: In this tutorial, you learn how to set up experiments in an App Con
+ms.devlang: csharp
- build-2024
Any edit to a variant feature flag generates a new version of the experimentatio
## Next step
+To learn more about the experimentation concepts, refer to the following document.
+
+> [!div class="nextstepaction"]
> [Experimentation](./concept-experimentation.md)+
+For the full feature rundown of the .NET feature management library, continue to the following document.
+
+> [!div class="nextstepaction"]
+> [.NET Feature Management](./feature-management-dotnet-reference.md)
azure-app-configuration Use Feature Flags Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-dotnet-core.md
Title: Tutorial for using feature flags in a .NET app | Microsoft Docs description: In this tutorial, you learn how to implement feature flags in .NET apps. -+ ms.devlang: csharp Previously updated : 02/20/2024- Last updated : 07/02/2024+ #Customer intent: I want to control feature availability in my app by using the .NET Feature Manager library. # Tutorial: Use feature flags in an ASP.NET Core app
-The .NET Feature Management libraries provide idiomatic support for implementing feature flags in a .NET or ASP.NET Core application. These libraries allow you to declaratively add feature flags to your code so that you don't have to manually write code to enable or disable features with `if` statements.
+> [!IMPORTANT]
+> This document has been superseded by the [.NET Feature Management](./feature-management-dotnet-reference.md) reference document, which provides the most current and detailed rundown of the features available in the .NET feature management libraries.
+>
+> To get started with feature flags in your apps, follow the quickstarts for [.NET console apps](./quickstart-feature-flag-dotnet.md) or [ASP.NET Core apps](./quickstart-feature-flag-aspnet-core.md).
-The Feature Management libraries also manage feature flag lifecycles behind the scenes. For example, the libraries refresh and cache flag states, or guarantee a flag state to be immutable during a request call. In addition, the ASP.NET Core library offers out-of-the-box integrations, including MVC controller actions, views, routes, and middleware.
+The .NET feature management libraries provide idiomatic support for implementing feature flags in a .NET or ASP.NET Core application. These libraries allow you to declaratively add feature flags to your code so that you don't have to manually write code to enable or disable features with `if` statements.
-For the ASP.NET Core feature management API reference documentation, see [Microsoft.FeatureManagement Namespace](/dotnet/api/microsoft.featuremanagement).
+The feature management libraries also manage feature flag lifecycles behind the scenes. For example, the libraries refresh and cache flag states, or guarantee a flag state to be immutable during a request call. In addition, the ASP.NET Core library offers out-of-the-box integrations, including MVC controller actions, views, routes, and middleware.
In this tutorial, you will learn how to:
When a feature flag has multiple filters, the filter list is traversed in order
The feature manager supports *appsettings.json* as a configuration source for feature flags. The following example shows how to set up feature flags in a JSON file: ```JSON
-{"FeatureManagement": {
+{
+ "FeatureManagement": {
"FeatureA": true, // Feature flag set to on "FeatureB": false, // Feature flag set to off "FeatureC": {
app.UseForFeature(featureName, appBuilder => {
In this tutorial, you learned how to implement feature flags in your ASP.NET Core application by using the `Microsoft.FeatureManagement` libraries. For more information about feature management support in ASP.NET Core and App Configuration, see the following resources: * [ASP.NET Core feature flag sample code](./quickstart-feature-flag-aspnet-core.md)
-* [Microsoft.FeatureManagement documentation](/dotnet/api/microsoft.featuremanagement)
+* [Microsoft.FeatureManagement Feature Reference](./feature-management-dotnet-reference.md)
+* [Microsoft.FeatureManagement API Reference](/dotnet/api/microsoft.featuremanagement)
* [Manage feature flags](./manage-feature-flags.md)
azure-arc Create Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/create-pv.md
This section describes the prerequisites for creating a persistent volume (PV).
1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal). > [!NOTE]
- > When you create your storage account, create it under the same resource group and region/location as your Kubernetes cluster.
+ > When you create your storage account, create it under the same resource group as your Kubernetes cluster. It is recommended that you also create it under the same region/location as your Kubernetes cluster.
1. Create a container in the storage account that you created in the previous step, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
Note the `metadata: name:` as you must specify it in the `spec: volumeName` of t
metadata: ### Create a name here ### name: CREATE_A_NAME_HERE
- ### Use a namespace that matches your intended consuming pod, or "default" ###
- namespace: INTENDED_CONSUMING_POD_OR_DEFAULT_HERE
spec: capacity: ### This storage capacity value is not enforced at this layer. ###
azure-arc Create Pvc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/create-pvc.md
This size does not affect the ceiling of blob storage used in the cloud to suppo
volumeMode: Filesystem ### This name references your PV name in your PV config ### volumeName: INSERT_YOUR_PV_NAME
- status:
- accessModes:
- - ReadWriteMany
- capacity:
- storage: 5Gi
``` > [!NOTE]
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/release-notes.md
This article provides information about new features and known issues in Edge St
- Kernel versions: the minimum supported Linux kernel version is 5.1. Currently there are known issues with 6.4 and 6.2.
+## Version 1.2.0-preview
+
+- Extension identity and OneLake support: ESA now allows use of a system-assigned extension identity for access to blob storage or OneLake lakehouses.
+- Security fixes: security maintenance (package/module version updates).
+ ## Next steps [Edge Storage Accelerator overview](overview.md)
azure-arc Support Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/support-feedback.md
description: Learn how to get support and provide feedback Edge Storage Accelera
Previously updated : 04/09/2024 Last updated : 07/09/2024 # Support and feedback for Edge Storage Accelerator (preview)
-If you experience an issue or need support during the preview, you can submit an [Edge Storage Accelerator support request form here](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR19S7i8RsvNAg8hqZuHbEyxUOVlRSjJNOFgxNkRPN1IzQUZENFE4SjlSNy4u).
+If you experience an issue or need support during the preview, see the following video and steps to request support for Edge Storage Accelerator in the Azure portal:
+
+> [!VIDEO f477de99-2036-41a3-979a-586a39b1854f]
+
+1. Navigate to the desired Arc-connected Kubernetes cluster with the Edge Storage Accelerator extension that you are experiencing issues with.
+1. To expand the menu, select **Settings** on the left blade.
+1. Select **Extensions**.
+1. Select the extension with the **Type** `microsoft.edgestorageaccelerator`. In this example, the extension name is `hydraext`.
+1. Select **Help** on the left blade to expand the menu.
+1. Select **Support + Troubleshooting**.
+1. In the search text box, describe the issue you are facing in a few words.
+1. Select "Go" to the right of the search text box.
+1. For **Which service you are having an issue with**, make sure that **Edge Storage Accelerator - Preview** is selected. If not, you might need to search for **Edge Storage Accelerator - Preview** in the drop-down.
+1. Select **Next** after you select **Edge Storage Accelerator - Preview**.
+1. **Subscription** should already be populated with the subscription that you used to set up your Kubernetes cluster. If not, select the subscription to which your Arc-connected Kubernetes cluster is linked.
+1. For **Resource**, select **General question** from the drop-down menu.
+1. Select **Next**.
+1. For **Problem type**, from the drop-down menu, select the problem type that best describes your issue.
+1. For **Problem subtype**, from the drop-down menu, select the subtype that best describes your issue. The subtype options vary based on your selected **Problem type**.
+1. Select **Next**.
+1. Based on the issue, there might be documentation available to help you triage your issue. If these articles are not relevant or don't solve the issue, select **Create a support request** at the top.
+1. After you select **Create a support request** at the top, the fields in the **Problem description** section should already be populated with the details that you provided earlier. If you want to change anything, you can do so in this window.
+1. Select **Next** once you verify that the information in the **Problem description** section is accurate.
+1. In the **Recommended solution** section, recommended solutions appear based on the information you entered. If the recommended solutions are not helpful, select **Next** to continue filing a support request.
+1. In the **Additional details** section, populate the **Problem details** with your information.
+1. Once all required fields are complete, select **Next**.
+1. Review your information from the previous sections, then select **Create**.
## Release notes
-See the [release notes for Edge Storage Accelerator](release-notes.md) to learn about new features and known issues.
+See the [release notes for Edge Storage Accelerator](release-notes.md) for information about new features and known issues.
## Next steps
-[What is Edge Storage Accelerator?](overview.md)
+[What is Edge Storage Accelerator?](overview.md)
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
Title: Private connectivity for Azure Arc-enabled Kubernetes clusters using private link (preview)
+ Title: Use private connectivity for Azure Arc-enabled Kubernetes clusters with private link (preview)
Last updated 09/21/2022-+ description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint.
-# Private connectivity for Arc-enabled Kubernetes clusters using private link (preview)
+# Use private connectivity for Arc-enabled Kubernetes clusters with private link (preview)
[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure services to your virtual network using private endpoints. This means you can connect your on-premises Kubernetes clusters with Azure Arc and send all traffic over an Azure ExpressRoute or site-to-site VPN connection instead of using public networks. In Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to communicate with their Azure Arc resources using a single private endpoint.
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge issues description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge when trying to deploy or connect to the service. Last updated 11/03/2023-+ # Troubleshoot Azure Arc resource bridge issues
azure-arc Support Matrix For System Center Virtual Machine Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/support-matrix-for-system-center-virtual-machine-manager.md
+
+ Title: Support matrix for Azure Arc-enabled System Center Virtual Machine Manager
+description: Learn about the support matrix for Arc-enabled System Center Virtual Machine Manager.
++++++ Last updated : 07/10/2024
+keywords: "VMM, Arc, Azure"
+
+# Customer intent: As a VI admin, I want to understand the support matrix for System Center Virtual Machine Manager.
++
+# Support matrix for Azure Arc-enabled System Center Virtual Machine Manager
+
+This article documents the prerequisites and support requirements for using [Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)](overview.md) to manage your SCVMM managed on-premises VMs through Azure Arc.
+
+To use Arc-enabled SCVMM, you must deploy an Azure Arc Resource Bridge in your SCVMM managed environment. The Resource Bridge provides an ongoing connection between your SCVMM management server and Azure. Once you've connected your SCVMM management server to Azure, components on the Resource Bridge discover your SCVMM management server inventory. You can [enable them in Azure](enable-scvmm-inventory-resources.md) and start performing virtual hardware and guest OS operations on them using Azure Arc.
+
+## System Center Virtual Machine Manager requirements
+
+The following requirements must be met in order to use Arc-enabled SCVMM.
+
+### Supported SCVMM versions
+
+Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM management servers with a maximum of 15,000 VMs.
+
+> [!NOTE]
+> If the VMM server is running on a Windows Server 2016 machine, ensure that the [OpenSSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed.
+> If you deploy an older version of the appliance (a version earlier than 0.2.25), the Arc operation fails with the error *Appliance cluster is not deployed with AAD authentication*. To fix this issue, download the latest version of the onboarding script and deploy the Resource Bridge again.
+> Azure Arc Resource Bridge deployment using private link is currently not supported.
+
+| **Requirement** | **Details** |
+| | |
+| **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
+| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud or a host group with a minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. If your SCVMM server is behind a firewall, all IPs in this IP Pool and the Control Plane IP should be allowed to communicate through WinRM ports. The default WinRM ports are 5985 and 5986. <br/><br/> Dynamic IP allocation using DHCP isn't supported. <br/><br/> A library share with write permission for the SCVMM admin account through which Resource Bridge deployment is going to be performed. |
+| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be a part of the local administrator account on the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts on all the SCVMM cluster nodes. <br/><br/> This account is used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource Bridge VM. |
+| **Workstation** | The workstation will be used to run the helper script. Ensure you have [64-bit Azure CLI installed](/cli/azure/install-azure-cli) on the workstation.<br/><br/> When you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
+
+### Resource Bridge networking requirements
+
+The following firewall URL exceptions are required for the Azure Arc Resource Bridge VM:
++
+>[!Note]
+> To configure SSL proxy and to view the exclusion list for no proxy, see [Additional network requirements](../resource-bridge/network-requirements.md#azure-arc-resource-bridge-network-requirements).
+
+In addition, SCVMM requires the following exception:
+
+| **Service** | **Port** | **URL** | **Direction** | **Notes**|
+| | | | | |
+| SCVMM Management Server | 443 | URL of the SCVMM management server. | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. |
+| WinRM | WinRM Port numbers (Default: 5985 and 5986). | URL of the WinRM service. | IPs in the IP Pool used by the Appliance VM and control plane need connection with the VMM server. | Used by the SCVMM server to communicate with the Appliance VM. |
+
+Generally, connectivity requirements include these principles:
+
+- All connections are TCP unless otherwise specified.
+- All HTTP connections use HTTPS and SSL/TLS with officially signed and verifiable certificates.
+- All connections are outbound unless otherwise specified.
+
+To use a proxy, verify that the agents and the machine performing the onboarding process meet the network requirements in this article. For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
+
+### Azure role/permission requirements
+
+The minimum Azure roles required for operations related to Arc-enabled SCVMM are as follows:
+
+| **Operation** | **Minimum role required** | **Scope** |
+| | | |
+| Onboarding your SCVMM Management Server to Arc | Azure Arc SCVMM Private Clouds Onboarding | On the subscription or resource group into which you want to onboard |
+| Administering Arc-enabled SCVMM | Azure Arc SCVMM Administrator | On the subscription or resource group where SCVMM management server resource is created |
+| VM Provisioning | Azure Arc SCVMM Private Cloud User | On the subscription or resource group that contains the SCVMM cloud, datastore, and virtual network resources, or on the resources themselves |
+| VM Provisioning | Azure Arc SCVMM VM Contributor | On the subscription or resource group where you want to provision VMs |
+| VM Operations | Azure Arc SCVMM VM Contributor | On the subscription or resource group that contains the VM, or on the VM itself |
+
+Any roles with higher permissions on the same scope, such as Owner or Contributor, will also allow you to perform the operations listed above.
+
+### Azure connected machine agent (Guest Management) requirements
+
+Ensure the following before you install Arc agents at scale for SCVMM VMs:
+
+- The Resource Bridge must be in a running state.
+- The SCVMM management server must be in a connected state.
+- The user account must have permissions listed in Azure Arc-enabled SCVMM Administrator role.
+- All the target machines are:
+ - Powered on and the resource bridge has network connectivity to the host running the VM.
+ - Running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems).
+ - Able to connect through the firewall to communicate over the Internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked.
+
+### Supported SCVMM versions
+
+Azure Arc-enabled SCVMM supports direct installation of Arc agents in VMs managed by:
+
+- SCVMM 2022 UR1 or later versions of SCVMM server or console
+- SCVMM 2019 UR5 or later versions of SCVMM server or console
+
+For VMs managed by other SCVMM versions, [install Arc agents through the script](install-arc-agents-using-script.md).
+
+>[!Important]
+>We recommend maintaining the SCVMM management server and the SCVMM console in the same Long-Term Servicing Channel (LTSC) and Update Rollup (UR) version.
+
+### Supported operating systems
+
+Azure Arc-enabled SCVMM supports direct installation of Arc agents in VMs running Windows Server 2022, 2019, 2016, 2012R2, Windows 10, and Windows 11 operating systems. For other Windows and Linux operating systems, [install Arc agents through the script](install-arc-agents-using-script.md).
+
+### Software requirements
+
+Windows operating systems:
+
+* Microsoft recommends running the latest version, [Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
+
+Linux operating systems:
+
+* systemd
+* wget (to download the installation script)
+* openssl
+* gnupg (Debian-based systems, only)
+
+### Networking requirements
+
+The following firewall URL exceptions are required for the Azure Arc agents:
+
+| **URL** | **Description** |
+| | |
+| `aka.ms` | Used to resolve the download script during installation |
+| `packages.microsoft.com` | Used to download the Linux installation package |
+| `download.microsoft.com` | Used to download the Windows installation package |
+| `login.windows.net` | Microsoft Entra ID |
+| `login.microsoftonline.com` | Microsoft Entra ID |
+| `pas.windows.net` | Microsoft Entra ID |
+| `management.azure.com` | Azure Resource Manager - to create or delete the Arc server resource |
+| `*.his.arc.azure.com` | Metadata and hybrid identity services |
+| `*.guestconfiguration.azure.com` | Extension management and guest configuration services |
+| `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com` | Notification service for extension and connectivity scenarios |
+| `azgn*.servicebus.windows.net` | Notification service for extension and connectivity scenarios |
+| `*.servicebus.windows.net` | For Windows Admin Center and SSH scenarios |
+| `*.blob.core.windows.net` | Download source for Azure Arc-enabled servers extensions |
+| `dc.services.visualstudio.com` | Agent telemetry |
+
+## Next steps
+
+[Connect your System Center Virtual Machine Manager management server to Azure Arc](quickstart-connect-system-center-virtual-machine-manager-to-arc.md).
azure-cache-for-redis Cache Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-python-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Python'
-description: In this quickstart, you learn how to create a Python App that uses Azure Cache for Redis.
+description: In this quickstart, you learn how to create a Python script that uses Azure Cache for Redis.
- Previously updated : 02/15/2023 Last updated : 07/09/2024 ms.devlang: python +
+#customer intent: As a cloud developer, I want to quickly create a cache so that I understand how to use Python with Azure Cache for Redis.
+ # Quickstart: Use Azure Cache for Redis in Python
-In this article, you incorporate Azure Cache for Redis into a Python app to have access to a secure, dedicated cache that is accessible from any application within Azure.
+In this quickstart, you incorporate Azure Cache for Redis into a Python script to have access to a secure, dedicated cache that is accessible from any application within Azure.
## Skip to the code on GitHub
If you want to skip straight to the code, see the [Python quickstart](https://gi
- Azure subscription - [create one for free](https://azure.microsoft.com/free/) - Python 3
- - For macOS or Linux, download from [python.org](https://www.python.org/downloads/).
- - For Windows 11, use the [Windows Store](https://www.microsoft.com/en-us/p/python-3/9nblggh083nz?activetab=pivot:overviewtab).
+ - For macOS or Linux, download from [python.org](https://www.python.org/downloads/).
+ - For Windows 11, use the [Windows Store](https://apps.microsoft.com/search/publisher?name=Python+Software+Foundation&hl=en-us&gl=US).
## Create an Azure Cache for Redis instance [!INCLUDE [redis-cache-create](~/reusable-content/ce-skilling/azure/includes/azure-cache-for-redis/includes/redis-cache-create.md)]
+## Install redis-py library
-## Install redis-py
-
-[Redis-py](https://pypi.org/project/redis/) is a Python interface to Azure Cache for Redis. Use the Python packages tool, `pip`, to install the `redis-py` package from a command prompt.
+[Redis-py](https://pypi.org/project/redis/) is a Python interface to Azure Cache for Redis. Use the Python packages tool, `pip`, to install the `redis-py` package from a command prompt.
The following example uses `pip3` for Python 3 to install `redis-py` on Windows 11 from an Administrator command prompt.

:::image type="content" source="media/cache-python-get-started/cache-python-install-redis-py.png" alt-text="Screenshot of a terminal showing an install of redis-py interface to Azure Cache for Redis.":::
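For reference, the `redis-py` package is published on PyPI under the name `redis`, so the command shown in the screenshot is:

```python
pip3 install redis
```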
-## Read and write to the cache
-
-Run Python from the command line and test your cache by using the following code. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
+## Create a Python script to access your cache
+
+Create a Python script that uses either Microsoft Entra ID or access keys to connect to your Azure Cache for Redis instance. We recommend that you use Microsoft Entra ID.
+
+## [Microsoft Entra ID Authentication (recommended)](#tab/entraid)
++
+### Install the Microsoft Authentication Library
+
+1. Install the [Microsoft Authentication Library (MSAL)](/entra/identity-platform/msal-overview). This library allows you to acquire security tokens from the Microsoft identity platform to authenticate users.
+
+1. You can use the [Python Azure identity client library](/python/api/overview/azure/identity-readme), which uses MSAL to provide token authentication support. Install this library using `pip`:
+
+ ```python
+ pip install azure-identity
+ ```
+
+### Create a Python script using Microsoft Entra ID
+
+1. Create a new text file, add the following script, and save the file as `PythonApplication1.py`.
+
+1. Replace `<Your Host Name>` with the value from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
+
+1. Replace `<Your Username>` with the username of your Microsoft Entra ID user.
+
+ ```python
+ import redis
+ from azure.identity import DefaultAzureCredential
+
+ scope = "https://redis.azure.com/.default"
+ host = "<Your Host Name>"
+ port = 6380
+ user_name = "<Your Username>"
+
+
+ def hello_world():
+ cred = DefaultAzureCredential()
+ token = cred.get_token(scope)
+ r = redis.Redis(host=host,
+ port=port,
+ ssl=True, # ssl connection is required.
+ username=user_name,
+ password=token.token,
+ decode_responses=True)
+ result = r.ping()
+ print("Ping returned : " + str(result))
+
+ result = r.set("Message", "Hello!, The cache is working with Python!")
+ print("SET Message returned : " + str(result))
+
+ result = r.get("Message")
+ print("GET Message returned : " + result)
+
+ result = r.client_list()
+ print("CLIENT LIST returned : ")
+ for c in result:
+ print(f"id : {c['id']}, addr : {c['addr']}")
+
+ if __name__ == '__main__':
+ hello_world()
+ ```
+
+1. Before you run your Python code from a terminal, make sure you authorize the terminal to use Microsoft Entra ID:
+
+ `azd auth login`
+
+1. Run `PythonApplication1.py` with Python. You should see results like the following example:
+
+ :::image type="content" source="media/cache-python-get-started/cache-python-completed.png" alt-text="Screenshot of a terminal showing a Python script to test cache access.":::
+
+### Create a Python script using reauthentication
+
+Microsoft Entra ID access tokens have limited lifespans, [averaging 75 minutes](/entra/identity-platform/configurable-token-lifetimes#token-lifetime-policies-for-access-saml-and-id-tokens). To maintain a connection to your cache, you need to refresh the token. This example demonstrates how to do so using Python.
+
+1. Create a new text file, add the following script, and save the file as `PythonApplication2.py`.
+
+1. Replace `<Your Host Name>` with the value from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
+
+1. Replace `<Your Username>` with the username of your Microsoft Entra ID user.
+
+ ```python
+ import time
+ import logging
+ import redis
+ from azure.identity import DefaultAzureCredential
+
+ scope = "https://redis.azure.com/.default"
+ host = "<Your Host Name>"
+ port = 6380
+ user_name = "<Your Username>"
+
+ def re_authentication():
+ _LOGGER = logging.getLogger(__name__)
+ cred = DefaultAzureCredential()
+ token = cred.get_token(scope)
+ r = redis.Redis(host=host,
+ port=port,
+ ssl=True, # ssl connection is required.
+ username=user_name,
+ password=token.token,
+ decode_responses=True)
+ max_retry = 3
+ for index in range(max_retry):
+ try:
+ if _need_refreshing(token):
+ _LOGGER.info("Refreshing token...")
+ tmp_token = cred.get_token(scope)
+ if tmp_token:
+ token = tmp_token
+ r.execute_command("AUTH", user_name, token.token)
+ result = r.ping()
+ print("Ping returned : " + str(result))
+
+ result = r.set("Message", "Hello!, The cache is working with Python!")
+ print("SET Message returned : " + str(result))
+
+ result = r.get("Message")
+ print("GET Message returned : " + result)
+
+ result = r.client_list()
+ print("CLIENT LIST returned : ")
+ for c in result:
+ print(f"id : {c['id']}, addr : {c['addr']}")
+ break
+ except redis.ConnectionError:
+ _LOGGER.info("Connection lost. Reconnecting.")
+ token = cred.get_token(scope)
+ r = redis.Redis(host=host,
+ port=port,
+ ssl=True, # ssl connection is required.
+ username=user_name,
+ password=token.token,
+ decode_responses=True)
+ except Exception:
+ _LOGGER.info("Unknown failures.")
+ break
+
+
+ def _need_refreshing(token, refresh_offset=300):
+ return not token or token.expires_on - time.time() < refresh_offset
+
+ if __name__ == '__main__':
+ re_authentication()
+ ```
+
+1. Run `PythonApplication2.py` with Python. You should see results like the following example:
+
+ :::image type="content" source="media/cache-python-get-started/cache-python-completed.png" alt-text="Screenshot of a terminal showing a Python script to test cache access.":::
+
+   Unlike the first example, this example automatically refreshes the token if it expires.
+
+## [Access Key Authentication](#tab/accesskey)
++
+### Read and write to the cache from the command line
+
+Run [Python from the command line](https://docs.python.org/3/faq/windows.html#id2) to test your cache. First, initiate the Python interpreter in your command line by typing `py`, and then use the following code. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
```python
>>> import redis
->>> r = redis.StrictRedis(host='<Your Host Name>',
+>>> r = redis.Redis(host='<Your Host Name>',
        port=6380, db=0, password='<Your Access Key>', ssl=True)
>>> r.set('foo', 'bar')
True
>>> r.get('foo')
b'bar'
```
-> [!IMPORTANT]
-> For Azure Cache for Redis version 3.0 or higher, TLS/SSL certificate check is enforced. `ssl_ca_certs` must be explicitly set when connecting to Azure Cache for Redis. For RedHat Linux, `ssl_ca_certs` are in the `/etc/pki/tls/certs/ca-bundle.crt` certificate module.
-
-## Create a Python sample app
+### Create a Python script using access keys
Create a new text file, add the following script, and save the file as `PythonApplication1.py`. Replace `<Your Host Name>` and `<Your Access Key>` with the values from your Azure Cache for Redis instance. Your host name is of the form `<DNS name>.redis.cache.windows.net`.
import redis

myHostname = "<Your Host Name>"
myPassword = "<Your Access Key>"

-r = redis.StrictRedis(host=myHostname, port=6380,
-                password=myPassword, ssl=True)
+r = redis.Redis(host=myHostname, port=6380,
+                password=myPassword, ssl=True,
+                decode_responses=True)  # so GET returns str rather than bytes

result = r.ping()
print("Ping returned : " + str(result))

result = r.set("Message", "Hello!, The cache is working with Python!")
print("SET Message returned : " + str(result))

result = r.get("Message")
-print("GET Message returned : " + result.decode("utf-8"))
+print("GET Message returned : " + result)

result = r.client_list()
print("CLIENT LIST returned : ")
Run `PythonApplication1.py` with Python. You should see results like the followi
:::image type="content" source="media/cache-python-get-started/cache-python-completed.png" alt-text="Screenshot of a terminal showing a Python script to test cache access.":::
-## Clean up resources
-
-If you're finished with the Azure resource group and resources you created in this quickstart, you can delete them to avoid charges.
-
-> [!IMPORTANT]
-> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. If you created your Azure Cache for Redis instance in an existing resource group that you want to keep, you can delete just the cache by selecting **Delete** from the cache **Overview** page.
-
-To delete the resource group and its Redis Cache for Azure instance:
-
-1. From the [Azure portal](https://portal.azure.com), search for and select **Resource groups**.
-
-1. In the **Filter by name** text box, enter the name of the resource group that contains your cache instance, and then select it from the search results.
+
-1. On your resource group page, select **Delete resource group**.
+<!-- Clean up resources -->
-1. Type the resource group name, and then select **Delete**.
-
- :::image type="content" source="./media/cache-python-get-started/delete-your-resource-group-for-azure-cache-for-redis.png" alt-text="Screenshot of the Azure portal showing how to delete the resource group for Azure Cache for Redis.":::
-## Next steps
+## Related content
-- [Create a simple ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
+- [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Azure Functions supports C# and C# script programming languages. If you're looki
### Updating to target .NET 8

> [!NOTE]
-> Targeting .NET 8 with the in-process model is not yet enabled for Linux, for apps hosted in App Service Environments, or for apps in sovereign clouds. Updates will be communicated on [this tracking thread on GitHub](https://github.com/Azure/azure-functions-host/issues/9951).
+> Targeting .NET 8 with the in-process model is not yet enabled for Linux or for apps in sovereign clouds. Updates will be communicated on [this tracking thread on GitHub](https://github.com/Azure/azure-functions-host/issues/9951).
Apps using the in-process model can target .NET 8 by following the steps outlined in this section. However, if you choose to exercise this option, you should still begin planning your [migration to the isolated worker model](./migrate-dotnet-to-isolated-model.md) in advance of [support ending for the in-process model on November 10, 2026](https://aka.ms/azure-functions-retirements/in-process-model).
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
description: This article shows you how to upgrade your existing function apps u
Previously updated : 05/07/2024 Last updated : 07/10/2024 zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace CosmosDBSamples
```

> [!NOTE]
-> If your scenario relied on the dynamic nature of the `Document` type to identify different schemas and types of events, you can use a base abstract type with the common properties across your types or dynamic types like `JObject` that allow to access properties like `Document` did.
+> If your scenario relied on the dynamic nature of the `Document` type to identify different schemas and types of events, you can use a base abstract type with the common properties across your types, or dynamic types like `JObject` (when using `Microsoft.Azure.WebJobs.Extensions.CosmosDB`) or `JsonNode` (when using `Microsoft.Azure.Functions.Worker.Extensions.CosmosDB`) that allow you to access properties like `Document` did.
Additionally, if you are using the Output Binding, please review the [change in item ID generation](#changes-to-item-id-generation) to verify if you need additional code changes.
azure-maps About Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-creator.md
This section provides a high-level overview of the indoor map creation workflow.
### ![How-to articles](./media/creator-indoor-maps/about-creator/how-to-guides.png) How-to guides

- [Manage Creator]
-- [Implement Dynamic styling for indoor maps]
- [Query datasets with WFS API]
- [Custom styling for indoor maps]
- [Indoor maps wayfinding service]
- [Edit indoor maps using the QGIS plugin]
- [Create dataset using GeoJson package]
-- [Create a feature stateset]

### ![Reference articles](./media/creator-indoor-maps/about-creator/reference.png) Reference

- [Drawing package requirements]
- [Facility Ontology]
-- [Dynamic maps StylesObject]
- [Drawing error visualizer]
- [Azure Maps Creator REST API]

[Azure Maps Creator onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool
[Azure Maps Creator REST API]: /rest/api/maps-creator
[Conversion]: /rest/api/maps-creator/conversion
-[Create a feature stateset]: how-to-creator-feature-stateset.md
[Create custom styles for indoor maps]: how-to-create-custom-styles.md [Create dataset using GeoJson package]: how-to-dataset-geojson.md [Custom styling for indoor maps]: how-to-create-custom-styles.md
This section provides a high-level overview of the indoor map creation workflow.
[Drawing error visualizer]: drawing-error-visualizer.md [Drawing package guide]: drawing-package-guide.md?pivots=drawing-package-v2 [Drawing package requirements]: drawing-requirements.md
-[Dynamic maps StylesObject]: schema-stateset-stylesobject.md
[Edit indoor maps using the QGIS plugin]: creator-qgis-plugin.md [Facility Ontology]: creator-facility-ontology.md [Features API]: /rest/api/maps-creator/features?view=rest-maps-creator-2023-03-01-preview&preserve-view=true [features]: glossary.md#feature [How to create data registry]: how-to-create-data-registries.md
-[Implement Dynamic styling for indoor maps]: indoor-map-dynamic-styling.md
[Indoor map concepts]: creator-indoor-maps.md [Indoor maps wayfinding service]: how-to-creator-wayfinding.md [Manage Creator]: how-to-manage-creator.md
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Creator services create, store, and use various data types that are defined and
- Tileset
- style
- Map configuration
-- Feature stateset
- Routeset

## Upload a drawing package
Azure Maps Creator provides the following services that support map creation:
- [Tileset service]. Use the Tileset service to create a vector-based representation of a dataset. Applications can use a tileset to present a visual tile-based view of the dataset.
- [Custom styling service]. Use the [style] service or [visual style editor] to customize the visual elements of an indoor map.
-- [Feature State service]. Use the Feature State service to support dynamic map styling. Applications can use dynamic map styling to reflect real-time events on spaces provided by the IoT system.
- [Wayfinding service]. Use the [wayfinding] API to generate a path between two points within a facility. Use the [routeset] API to create the data that the wayfinding service needs to generate paths.

### Datasets
-A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets] or [feature statesets].
+A dataset is a collection of indoor map features. The indoor map features represent facilities that are defined in a converted drawing package. After you create a dataset with the [Dataset service], you can create any number of [tilesets].
At any time, developers can use the [Dataset service] to add or remove facilities to an existing dataset. For more information about how to update an existing dataset using the API, see the append options in [Dataset service]. For an example of how to update a dataset, see [Data maintenance].
The following JSON is an example of a default map configuration. See the followi
- For more information on style Rest API, see [style] in the Maps Creator Rest API reference.
- For more information on the map configuration Rest API, see [Creator - map configuration Rest API].
-### Feature statesets
-
-Feature statesets are collections of dynamic properties (*states*) that are assigned to dataset features, such as rooms or equipment. An example of a *state* can be temperature or occupancy. Each *state* is a key/value pair that contains the name of the property, the value, and the timestamp of the last update.
-
-You can use the [Feature State service] to create and manage a feature stateset for a dataset. The stateset is defined by one or more *states*. Each feature, such as a room, can have one *state* attached to it.
-
-The value of each *state* in a stateset is updated or retrieved by IoT devices or other applications. For example, using the [Feature State Update API], devices measuring space occupancy can systematically post the state change of a room.
-
-An application can use a feature stateset to dynamically render features in a facility according to their current state and respective map style. For more information about using feature statesets to style features in a rendering map, see [Indoor Maps module].
-
->[!NOTE]
->Like tilesets, changing a dataset doesn't affect the existing feature stateset, and deleting a feature stateset doesn't affect the dataset to which it's attached.
-
### Wayfinding (preview)

The [Wayfinding service] enables you to provide your customers with the shortest path between two points within a facility. Once you've imported your indoor map data and created your dataset, you can use that to create a [routeset]. The routeset provides the data required to generate paths between two points. The wayfinding service takes into account things such as the minimum width of openings and can optionally exclude elevators or stairs when navigating between levels.
You can use the [Web Feature service] (WFS) to query datasets. WFS follows the O
### Alias API
-Creator services such as Conversion, Dataset, Tileset and Feature State return an identifier for each resource that's created from the APIs. The [Alias API] allows you to assign an alias to reference a resource identifier.
+Creator services such as Conversion, Dataset, and Tileset return an identifier for each resource that's created from the APIs. The [Alias API] allows you to assign an alias to reference a resource identifier.
### Indoor Maps module
The [Azure Maps Web SDK] includes the Indoor Maps module. This module offers ext
You can use the Indoor Maps module to create web applications that integrate indoor map data with other [Azure Maps services]. The most common application setups include adding knowledge from other maps - such as road, imagery, weather, and transit - to indoor maps.
-The Indoor Maps module also supports dynamic map styling. For a step-by-step walkthrough to implement feature stateset dynamic styling in an application, see [Use the Indoor Map module].
+The Indoor Maps module also supports dynamic map styling. For more information, see [Enhance your indoor maps with real-time map feature styling].
### Azure Maps integration
As you begin to develop solutions for indoor maps, you can discover ways to inte
### Data maintenance
- You can use the Azure Maps Creator List, Update, and Delete API to list, update, and delete your datasets, tilesets, and feature statesets.
 You can use the Azure Maps Creator List, Update, and Delete APIs to list, update, and delete your datasets and tilesets.
>[!NOTE]
>When you review a list of items to determine whether to delete them, consider the impact of that deletion on all dependent APIs or applications. For example, if you delete a tileset that's being used by an application by means of the [Render - Get Map Tile] API, the application fails to render that tileset.
The following example shows how to update a dataset, create a new tileset, and d
[Convert a drawing package]: #convert-a-drawing-package [Custom styling service]: #custom-styling-preview [Data maintenance]: #data-maintenance
-[feature statesets]: #feature-statesets
[Indoor Maps module]: #indoor-maps-module [Render service]: #renderget-map-tile-api [tilesets]: #tilesets
The following example shows how to update a dataset, create a new tileset, and d
[Conversion service]: /rest/api/maps-creator/conversion [Dataset Create]: /rest/api/maps-creator/dataset/create [Dataset service]: /rest/api/maps-creator/dataset
-[Feature State service]: /rest/api/maps-creator/feature-state
-[Feature State Update API]: /rest/api/maps-creator/feature-state/update-states
[Geofence service]: /rest/api/maps/spatial/postgeofence [Tileset Create]: /rest/api/maps-creator/tileset/create [Tileset List]: /rest/api/maps-creator/tileset/list [Tileset service]: /rest/api/maps-creator/tileset [Web Feature service]: /rest/api/maps-creator/wfs - <! learn.microsoft.com Links > [Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control [Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md
The following example shows how to update a dataset, create a new tileset, and d
[style picker control]: choose-map-style.md#add-the-style-picker-control [Tutorial: Creating a Creator indoor map]: tutorial-creator-indoor-maps.md [Tutorial: Implement IoT spatial analytics by using Azure Maps]: tutorial-iot-hub-maps.md
-[Use the Indoor Map module]: how-to-use-indoor-module.md
[verticalPenetration]: creator-facility-ontology.md?pivots=facility-ontology-v2#verticalpenetration <! HTTP Links >
The following example shows how to update a dataset, create a new tileset, and d
[sprites]: https://docs.mapbox.com/help/glossary/sprite/ [style layers]: https://docs.mapbox.com/mapbox-gl-js/style-spec/layers/#layout [visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[Enhance your indoor maps with real-time map feature styling]: https://techcommunity.microsoft.com/t5/azure-maps-blog/enhance-your-indoor-maps-with-real-time-map-feature-styling/ba-p/4048929
azure-maps How To Creator Feature Stateset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-feature-stateset.md
- Title: Create a feature stateset-
-description: How to create a feature stateset using the Creator REST API.
-- Previously updated : 03/03/2023-----
-# Create a feature stateset
-
-[Feature statesets] define dynamic properties and values on specific features that support them. This article explains how to create a stateset that defines values and corresponding styles for a property and changing a property's state.
-
-## Prerequisites
-
-* Successful completion of [Query datasets with WFS API].
-* The `datasetId` obtained in the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial.
-
->[!IMPORTANT]
->
-> * This article uses the `us.atlas.microsoft.com` geographical URL. If your Creator service wasn't created in the United States, you must use a different geographical URL. For more information, see [Access to Creator Services].
-> * In the URL examples in this article you will need to replace:
-> * `{Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-> * `{datasetId}` with the `datasetId` obtained in the [Check the dataset creation status] section of the *Use Creator to create indoor maps* tutorial
-
-## Create the feature stateset
-
-To create a stateset:
-
-Create a new **HTTP POST Request** that uses the [Stateset API]. The request should look like the following URL:
-
-```http
-https://us.atlas.microsoft.com/featurestatesets?api-version=2.0&datasetId={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key}
-```
-
-Next, set the `Content-Type` to `application/json` in the **Header** of the request.
-
-If using a tool like [Postman], it should look like this:
--
-Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, which applies different colors to the `occupied` property depending on its value:
-
-```json
-{
- "styles":[
- {
- "keyname":"occupied",
- "type":"boolean",
- "rules":[
- {
- "true":"#FF0000",
- "false":"#00FF00"
- }
- ]
- }
- ]
-}
-```
-
-After the response returns successfully, copy the `statesetId` from the response body. In the next section, you'll use the `statesetId` to change the `occupancy` property state of the unit with feature `id` "UNIT26". If using Postman, it appears as follows:
--
-## Update a feature state
-
-This section demonstrates how to update the `occupied` state of the unit with feature `id` "UNIT26". To update the `occupied` state, create a new **HTTP PUT Request** calling the [Feature Statesets API]. The request should look like the following URL (replace `{statesetId}` with the `statesetId` obtained in [Create a feature stateset]):
-
-```http
-https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
-```
-
-Next, set the `Content-Type` to `application/json` in the **Header** of the request.
-
-If using a tool like [Postman], it should look like this:
--
-Finally, in the **Body** of the HTTP request, include the style information in raw JSON format, which applies different colors to the `occupied` property depending on its value:
-
-```json
-{
- "states": [
- {
- "keyName": "occupied",
- "value": true,
- "eventTimestamp": "2020-11-14T17:10:20"
- }
- ]
-}
-```
-
->[!NOTE]
-> The update will be saved only if the time posted stamp is after the time stamp of the previous request.
-
-Once the HTTP request is sent and the update completes, you receive a `200 OK` HTTP status code. If you implemented [dynamic styling] for an indoor map, the update displays at the specified time stamp in your rendered map.
-
-## Additional information
-
-* For information on how to retrieve the state of a feature using its feature ID, see [Feature State - List States].
-* For information on how to delete the stateset and its resources, see [Feature State - Delete Stateset].
-* For information on using the Azure Maps Creator [Feature State service] to apply styles that are based on the dynamic properties of indoor map data features, see how to article [Implement dynamic styling for Creator indoor maps].
-
-* For more information on the different Azure Maps Creator services discussed in this article, see [Creator Indoor Maps].
-
-## Next steps
-
-Learn how to implement dynamic styling for indoor maps.
-
-> [!div class="nextstepaction"]
-> [dynamic styling]
-
-<! Internal Links >
-[Create a feature stateset]: #create-a-feature-stateset
-
-<! learn.microsoft.com links >
-[Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services
-[Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status
-[Creator Indoor Maps]: creator-indoor-maps.md
-[dynamic styling]: indoor-map-dynamic-styling.md
-[Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md
-[Query datasets with WFS API]: how-to-creator-wfs.md
-
-<! External Links >
-[Postman]: https://www.postman.com/
-
-<! REST API Links >
-[Feature State - Delete Stateset]: /rest/api/maps-creator/feature-state/delete-stateset
-[Feature State - List States]: /rest/api/maps-creator/feature-state/list-states
-[Feature State service]: /rest/api/maps-creator/feature-state
-[Feature Statesets API]: /rest/api/maps-creator/feature-state/create-stateset
-[Feature statesets]: /rest/api/maps-creator/feature-state
-[Stateset API]: /rest/api/maps-creator/feature-state/create-stateset
azure-maps How To Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md
To query the unit collection in your dataset, create a new **HTTP GET Request**:
https://us.atlas.microsoft.com/wfs/datasets/{datasetId}/collections/unit/items?subscription-key={Your-Azure-Maps-Subscription-key}&api-version=2.0 ```
-After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26". Use "UNIT26" as your features `id` when you [Update a feature state].
+After the response returns, copy the feature `id` for one of the `unit` features. In the following example, the feature `id` is "UNIT26".
```json {
After the response returns, copy the feature `id` for one of the `unit` features
} ```
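If you prefer to run the query from a script, the following minimal Python sketch does the same thing. It assumes the third-party `requests` package is installed and that the response is a GeoJSON `FeatureCollection` whose features carry an `id`, as in the example above:

```python
import requests  # third-party HTTP client; any HTTP client works

dataset_id = "<datasetId>"
subscription_key = "<Your-Azure-Maps-Subscription-key>"

url = f"https://us.atlas.microsoft.com/wfs/datasets/{dataset_id}/collections/unit/items"
response = requests.get(url, params={
    "subscription-key": subscription_key,
    "api-version": "2.0",
})
response.raise_for_status()

# Print the feature id of each unit, for example "UNIT26".
for feature in response.json().get("features", []):
    print(feature["id"])
```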
-## Next steps
-
-> [!div class="nextstepaction"]
-> [How to create a feature stateset]
- [Check the dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status [datasets]: /rest/api/maps-creator/dataset [WFS API]: /rest/api/maps-creator/wfs
After the response returns, copy the feature `id` for one of the `unit` features
[Check dataset creation status]: tutorial-creator-indoor-maps.md#check-the-dataset-creation-status [Access to Creator Services]: how-to-manage-creator.md#access-to-creator-services [WFS Describe Collections API]: /rest/api/maps-creator/wfs/get-collection-definition
-[Update a feature state]: how-to-creator-feature-stateset.md#update-a-feature-state
-[How to create a feature stateset]: how-to-creator-feature-stateset.md
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
To delete the Creator resource:
2. Select **Delete**.

>[!WARNING]
- >When you delete the Creator resource of your Azure Maps account, you also delete the conversions, datasets, tilesets, and feature statesets that were created using Creator services. Once a Creator resource is deleted, it cannot be undone.
+ >When you delete the Creator resource of your Azure Maps account, you also delete the conversions, datasets, and tilesets that were created using Creator services. Once a Creator resource is deleted, it cannot be undone.
:::image type="content" source="./media/how-to-manage-creator/creator-delete.png" alt-text="A screenshot of the Azure Maps Creator Resource page with the delete button highlighted.":::
Introduction to Creator services for indoor mapping:
> [!div class="nextstepaction"]
> [Tileset]

-> [!div class="nextstepaction"]
-> [Feature State set]
-
Learn how to use the Creator services to render indoor maps in your application:

> [!div class="nextstepaction"]
> [Azure Maps Creator tutorial]

-> [!div class="nextstepaction"]
-> [Indoor map dynamic styling]
-
> [!div class="nextstepaction"]
> [Use the Indoor Maps module]
Learn how to use the Creator services to render indoor maps in your application:
[Azure portal]: https://portal.azure.com [Data conversion]: creator-indoor-maps.md#convert-a-drawing-package [Dataset]: creator-indoor-maps.md#datasets
-[Feature State set]: creator-indoor-maps.md#feature-statesets
-[Indoor map dynamic styling]: indoor-map-dynamic-styling.md
[Manage authentication in Azure Maps]: how-to-manage-authentication.md [see Creator service geographic scope]: creator-geographic-scope.md [Tileset]: creator-indoor-maps.md#tilesets
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
const map = new atlas.Map("map-id", {
## Instantiate the Indoor Manager
-To load the indoor map style of the tiles, you must instantiate the *Indoor Manager*. Instantiate the *Indoor Manager* by providing the *Map object*. If you wish to support [dynamic map styling], you must pass the `statesetId`. The `statesetId` variable name is case-sensitive. Your code should look like the following JavaScript code snippet:
-
-```javascriptf
-const statesetId = "<statesetId>";
-
-const indoorManager = new atlas.indoor.IndoorManager(map, {
- statesetId: statesetId // Optional
-});
-```
-
-To enable polling of state data you provide, you must provide the `statesetId` and call `indoorManager.setDynamicStyling(true)`. Polling state data lets you dynamically update the state of dynamic properties or *states*. For example, a feature such as room can have a dynamic property (*state*) called `occupancy`. Your application may wish to poll for any *state* changes to reflect the change inside the visual map. The following code shows you how to enable state polling:
+To load the indoor map style of the tiles, you must instantiate the *Indoor Manager*. Instantiate the *Indoor Manager* by providing the *Map object*. Your code should look like the following JavaScript code snippet:
```javascript
-const statesetId = "<statesetId>";
-
const indoorManager = new atlas.indoor.IndoorManager(map, {
-  statesetId: statesetId // Optional
});
-
-if (statesetId.length > 0) {
-  indoorManager.setDynamicStyling(true);
-}
```

## Indoor level picker control
When you create an indoor map using Azure Maps Creator, default styles are appli
- `zoom` allows you to specify the min and max zoom levels for your map.
- `styleAPIVersion`: pass **'2023-03-01-preview'** (which is required while Custom Styling is in public preview)
-7. Next, create the *Indoor Manager* module with *Indoor Level Picker* control instantiated as part of *Indoor Manager* options, optionally set the `statesetId` option.
+7. Next, create the *Indoor Manager* module with the *Indoor Level Picker* control instantiated as part of the *Indoor Manager* options.
8. Add *Map object* event listeners.
Your file should now look similar to the following HTML:
<script>
  const subscriptionKey = "<Your Azure Maps Subscription Key>";
  const mapConfig = "<Your map configuration id or alias>";
-  const statesetId = "<Your statesetId>";
  const region = "<Your Creator resource region: us or eu>"
  atlas.setDomain(`${region}.atlas.microsoft.com`);
Your file should now look similar to the following HTML:
const indoorManager = new atlas.indoor.IndoorManager(map, {
  levelControl: levelControl, //level picker
-  statesetId: statesetId // Optional
});

-if (statesetId.length > 0) {
-  indoorManager.setDynamicStyling(true);
-}
-
map.events.add("levelchanged", indoorManager, (eventData) => {
  //put code that runs after a level has been changed
  console.log("The level has changed:", eventData);
Read about the APIs that are related to the *Azure Maps Indoor* module:
Learn more about how to add more data to your map:
-> [!div class="nextstepaction"]
-> [Indoor Maps dynamic styling]
-
> [!div class="nextstepaction"]
> [Code samples]

[Azure Content Delivery Network]: #embed-the-indoor-maps-module
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[Azure Maps Creator resource]: how-to-manage-creator.md
-[Indoor Maps]: https://www.npmjs.com/package/azure-maps-indoor
[Azure Maps service geographic scope]: geographic-scope.md [azure-maps-indoor package]: https://www.npmjs.com/package/azure-maps-indoor [Code samples]: /samples/browse/?products=azure-maps
Learn more about how to add more data to your map:
[Creator for indoor maps]: creator-indoor-maps.md [Creator Indoor Maps]: https://samples.azuremaps.com/?sample=creator-indoor-maps [Drawing package requirements]: drawing-requirements.md
-[dynamic map styling]: indoor-map-dynamic-styling.md
-[Indoor Maps dynamic styling]: indoor-map-dynamic-styling.md
+[How to use the Azure Maps map control npm package]: how-to-use-npm-package.md
+[Indoor Maps]: https://www.npmjs.com/package/azure-maps-indoor
[map configuration API]: /rest/api/maps-creator/map-configuration?view=rest-maps-creator-2023-03-01-preview&preserve-view=true [map configuration]: creator-indoor-maps.md#map-configuration [Style Rest API]: /rest/api/maps-creator/style?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
Learn more about how to add more data to your map:
[Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor [Webpack]: https://webpack.js.org
-[How to use the Azure Maps map control npm package]: how-to-use-npm-package.md
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
- Title: Implement dynamic styling for Azure Maps Creator indoor maps-
-description: Learn how to Implement dynamic styling for Creator indoor maps
-- Previously updated : 03/03/2023-----
-# Implement dynamic styling for Creator indoor maps
-
-You can use the Azure Maps Creator [Feature State service] to apply styles that are based on the dynamic properties of indoor map data features. For example, you can render facility meeting rooms with a specific color to reflect occupancy status. This article describes how to dynamically render indoor map features with the [Feature State service] and the [Indoor Web module].
-
-## Prerequisites
--- A `statesetId`. For more information, see [How to create a feature stateset].-- A web application. For more information, see [How to use the Indoor Map module].-
-This article uses the [Postman] application, but you may choose a different API development environment.
-
-## Implement dynamic styling
-
-After you complete the prerequisites, you should have a simple web application configured with your subscription key, and `statesetId`.
-
-### Select features
-
-You reference a feature, such as a meeting or conference room, by its ID to implement dynamic styling. Use the feature ID to update the dynamic property or *state* of that feature. To view the features defined in a dataset, use one of the following methods:
--- WFS API (Web Feature service). Use the [WFS API] to query datasets. WFS follows the Open Geospatial Consortium API Features. The WFS API is helpful for querying features within a dataset. For example, you can use WFS to find all mid-size meeting rooms of a specific facility and floor level.--- Implement customized code that a user can use to select features on a map using your web application, as demonstrated in this article. -
-The following script implements the mouse-click event. The code retrieves the feature ID based on the clicked point. In your application, you can insert the code after your Indoor Manager code block. Run your application, and then check the console to obtain the feature ID of the clicked point.
-
-```javascript
-/* Upon a mouse click, log the feature properties to the browser's console. */
-map.events.add("click", function(e){
-
- var features = map.layers.getRenderedShapes(e.position, "unit");
-
- features.forEach(function (feature) {
- if (feature.layer.id == 'indoor_unit_office') {
- console.log(feature);
- }
- });
-});
-```
-
-The [Create an indoor map] tutorial configured the feature stateset to accept state updates for `occupancy`.
-
-In the next section, you'll set the occupancy *state* of office `UNIT26` to `true` and office `UNIT27` to `false`.
-
-### Set occupancy status
-
-Update the state of the two offices, `UNIT26` and `UNIT27`:
-
-1. In the Postman app, select **New**.
-
-2. In the **Create New** window, select **HTTP Request**.
-
-3. Enter a **Request name** for the request, such as *POST Data Upload*.
-
-4. Enter the following URL to the [Feature Update States API] (replace `{Azure-Maps-Subscription-key}` with your Azure Maps subscription key and `statesetId` with the `statesetId`):
-
- ```http
- https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT26?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-5. Select the **Headers** tab.
-
-6. In the **KEY** field, select `Content-Type`. In the **VALUE** field, select `application/json`.
-
- :::image type="content" source="./media/indoor-map-dynamic-styling/stateset-header.png"alt-text="Header tab information for stateset creation.":::
-
-7. Select the **Body** tab.
-
-8. In the dropdown lists, select **raw** and **JSON**.
-
-9. Copy the following JSON style, and then paste it in the **Body** window:
-
- ```json
- {
- "states": [
- {
- "keyName": "occupied",
- "value": true,
- "eventTimestamp": "2020-11-14T17:10:20"
- }
- ]
- }
- ```
-
- >[!IMPORTANT]
- >The update will be saved only if the posted time stamp is after the time stamp used in previous feature state update requests for the same feature ID.
-
-10. Change the URL you used in step 7 by replacing `UNIT26` with `UNIT27`:
-
- ```http
- https://us.atlas.microsoft.com/featurestatesets/{statesetId}/featureStates/UNIT27?api-version=2.0&subscription-key={Your-Azure-Maps-Subscription-key}
- ```
-
-11. Copy the following JSON style, and then paste it in the **Body** window:
-
- ``` json
- {
- "states": [
- {
- "keyName": "occupied",
- "value": false,
- "eventTimestamp": "2020-11-14T17:10:20"
- }
- ]
- }
- ```
-
-### Visualize dynamic styles on a map
-
-The web application that you previously opened in a browser should now reflect the updated state of the map features:
--- Office `UNIT27`(142) should appear green.-- Office `UNIT26`(143) should appear red.-
-![Free room in green and Busy room in red](./media/indoor-map-dynamic-styling/room-state.png)
-
-[See live demo]
-
-## Next steps
-
-Learn more by reading:
-
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
-
-> [!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
-
-[Feature State service]: /rest/api/maps-creator/feature-state
-[Indoor Web module]: how-to-use-indoor-module.md
-<!--[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[A Creator resource]: how-to-manage-creator.md
-[Sample Drawing package]: https://github.com/Azure-Samples/am-creator-indoor-data-examples/tree/master/Drawing%20Package%202.0-->
-[How to use the Indoor Map module]: how-to-use-indoor-module.md
-[Postman]: https://www.postman.com/
-[How to create a feature stateset]: how-to-creator-feature-stateset.md
-[See live demo]: https://samples.azuremaps.com/?sample=creator-indoor-maps
-[Feature Update States API]: /rest/api/maps-creator/feature-state/update-states
-[Create an indoor map]: tutorial-creator-indoor-maps.md
-[WFS API]: /rest/api/maps-creator/wfs
-[Creator for indoor maps]: creator-indoor-maps.md
-[What is Azure Maps Creator?]: about-creator.md
azure-maps Schema Stateset Stylesobject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/schema-stateset-stylesobject.md
- Title: StylesObject Schema reference guide for Dynamic Azure Maps
-description: Reference guide to the dynamic Azure Maps StylesObject schema and syntax.
-- Previously updated : 02/17/2023-----
-# StylesObject Schema reference guide for dynamic Maps
-
- The `StylesObject` is a `StyleObject` array representing stateset styles. Use the Azure Maps Creator [Feature State service] to apply your stateset styles to indoor map data features. Once you've created your stateset styles and associated them with indoor map features, you can use them to create dynamic indoor maps. For more information on creating dynamic indoor maps, see [Implement dynamic styling for Creator indoor maps].
-
-## StyleObject
-
-A `StyleObject` is one of the following style rules:
-
-* [`BooleanTypeStyleRule`]
-* [`NumericTypeStyleRule`]
-* [`StringTypeStyleRule`]
-
-The following JSON shows example usage of each of the three style types. The `BooleanTypeStyleRule` is used to determine the dynamic style for features whose `occupied` property is true and false. The `NumericTypeStyleRule` is used to determine the style for features whose `temperature` property falls within a certain range. Finally, the `StringTypeStyleRule` is used to match specific styles to `meetingType`.
-
-```json
- "styles": [
- {
- "keyname": "occupied",
- "type": "boolean",
- "rules": [
- {
- "true": "#FF0000",
- "false": "#00FF00"
- }
- ]
- },
- {
- "keyname": "temperature",
- "type": "number",
- "rules": [
- {
- "range": {
- "minimum": 50,
- "exclusiveMaximum": 70
- },
- "color": "#343deb"
- },
- {
- "range": {
- "maximum": 70,
- "exclusiveMinimum": 30
- },
- "color": "#eba834"
- }
- ]
- },
- {
- "keyname": "meetingType",
- "type": "string",
- "rules": [
- {
- "private": "#FF0000",
- "confidential": "#FF00AA",
- "allHands": "#00FF00",
- "brownBag": "#964B00"
- }
- ]
- }
-]
-```
-
-## NumericTypeStyleRule
-
- A `NumericTypeStyleRule` is a [`StyleObject`] and consists of the following properties:
-
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `keyName` | string | The *state* or dynamic property name. A `keyName` should be unique inside the `StyleObject` array.| Yes |
-| `type` | string | Value is `numeric`. | Yes |
-| `rules` | [`NumberRuleObject`][]| An array of numeric style ranges with associated colors. Each range defines a color that's to be used when the *state* value satisfies the range.| Yes |
-
-### NumberRuleObject
-
-A `NumberRuleObject` consists of a [`RangeObject`](#rangeobject) and a `color` property. If the *state* value falls into the range, its color for display is the color specified in the `color` property.
-
-If you define multiple overlapping ranges, the color chosen will be the color that's defined in the first range that is satisfied.
-
-In the following JSON sample, both ranges hold true when the *state* value is between 50-60. However, the color that is used is `#343deb` because it's the first range in the list that has been satisfied.
-
-```json
-
- {
- "rules":[
- {
- "range": {
- "minimum": 50,
- "exclusiveMaximum": 70
- },
- "color": "#343deb"
- },
- {
- "range": {
- "minimum": 50,
- "maximum": 60
- },
- "color": "#eba834"
- }
- ]
- }
-]
-```
-
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `range` | [RangeObject] | The [RangeObject] defines a set of logical range conditions, which, if `true`, change the display color of the *state* to the color specified in the `color` property. If `range` is unspecified, then the color defined in the `color` property is always used. | No |
-| `color` | string | The color to use when state value falls into the range. The `color` property is a JSON string in any one of following formats: <ul><li> HTML-style hex values </li><li> RGB ("#ff0", "#ffff00", "rgb(255, 255, 0)")</li><li> RGBA ("rgba(255, 255, 0, 1)")</li><li> HSL("hsl(100, 50%, 50%)")</li><li> HSLA("hsla(100, 50%, 50%, 1)")</li><li> Predefined HTML colors names, like yellow, and blue.</li></ul> | Yes |
-
-### RangeObject
-
-The `RangeObject` defines a numeric range value of a [`NumberRuleObject`]. For the *state* value to fall into the range, all defined conditions must hold true.
-
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `minimum` | double | All the number x that x ≥ `minimum`.| No |
-| `maximum` | double | All the number x that x ≤ `maximum`. | No |
-| `exclusiveMinimum` | double | All the number x that x > `exclusiveMinimum`.| No |
-| `exclusiveMaximum` | double | All the number x that x < `exclusiveMaximum`.| No |
-
-### Example of NumericTypeStyleRule
-
-The following JSON illustrates a `NumericTypeStyleRule` *state* named `temperature`. In this example, the [`NumberRuleObject`] contains two defined temperature ranges and their associated color styles. If the temperature range is 50-69, the display should use the color `#343deb`. If the temperature range is 31-70, the display should use the color `#eba834`.
-
-```json
-{
- "keyname": "temperature",
- "type": "number",
- "rules":[
- {
- "range": {
- "minimum": 50,
- "exclusiveMaximum": 70
- },
- "color": "#343deb"
- },
- {
- "range": {
- "maximum": 70,
- "exclusiveMinimum": 30
- },
- "color": "#eba834"
- }
- ]
-}
-```
-
-## StringTypeStyleRule
-
-A `StringTypeStyleRule` is a [`StyleObject`] and consists of the following properties:
-
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `keyName` | string | The *state* or dynamic property name. A `keyName` should be unique inside the `StyleObject` array.| Yes |
-| `type` | string |Value is `string`. | Yes |
-| `rules` | [`StringRuleObject`][]| An array of N number of *state* values.| Yes |
-
-### StringRuleObject
-
-A `StringRuleObject` consists of up to N number of state values that are the possible string values of a feature's property. If the feature's property value doesn't match any of the defined state values, that feature won't have a dynamic style. If duplicate state values are given, the first one takes precedence.
-
-The string value matching is case-sensitive.
-
-| Property | Type | Description | Required |
-||--|--|-|
-| `stateValue1` | string | The color when value string is stateValue1.| No |
-| `stateValue2` | string | The color when value string is stateValue. | No |
-| `stateValueN` | string | The color when value string is stateValueN.| No |
-
-### Example of StringTypeStyleRule
-
-The following JSON illustrates a `StringTypeStyleRule` that defines styles associated with specific meeting types.
-
-```json
- {
- "keyname": "meetingType",
- "type": "string",
- "rules": [
- {
- "private": "#FF0000",
- "confidential": "#FF00AA",
- "allHands": "#00FF00",
- "brownBag": "#964B00"
- }
- ]
- }
-
-```
-
-## BooleanTypeStyleRule
-
-A `BooleanTypeStyleRule` is a [`StyleObject`] and consists of the following properties:
-
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `keyName` | string | The *state* or dynamic property name. A `keyName` should be unique inside the `StyleObject` array.| Yes |
-| `type` | string |Value is `boolean`. | Yes |
-| `rules` | [`BooleanRuleObject`]| A boolean pair with colors for `true` and `false` *state* values.| Yes |
-
-### BooleanRuleObject
-
-A `BooleanRuleObject` defines colors for `true` and `false` values.
-
-| Property | Type | Description | Required |
-|--|-|-|-|
-| `true` | string | The color to use when the *state* value is `true`. The `color` property is a JSON string in any one of following formats: <ul><li> HTML-style hex values </li><li> RGB ("#ff0", "#ffff00", "rgb(255, 255, 0)")</li><li> RGBA ("rgba(255, 255, 0, 1)")</li><li> HSL("hsl(100, 50%, 50%)")</li><li> HSLA("hsla(100, 50%, 50%, 1)")</li><li> Predefined HTML colors names, like yellow, and blue.</li></ul>| Yes |
-| `false` | string | The color to use when the *state* value is `false`. | Yes |
-
-### Example of BooleanTypeStyleRule
-
-The following JSON illustrates a `BooleanTypeStyleRule` *state* named `occupied`. The [`BooleanRuleObject`] defines colors for `true` and `false` values.
-
-```json
-{
- "keyname": "occupied",
- "type": "boolean",
- "rules": [
- {
- "true": "#FF0000",
- "false": "#00FF00"
- }
- ]
-}
-```
-
-## Next steps
-
-Learn more about Creator for indoor maps by reading:
-
-> [!div class="nextstepaction"]
-> [What is Azure Maps Creator?]
-
-> [!div class="nextstepaction"]
-> [Creator for indoor maps]
-
-[`BooleanRuleObject`]: #booleanruleobject
-[`BooleanTypeStyleRule`]: #booleantypestylerule
-[`NumberRuleObject`]: #numberruleobject
-[`NumericTypeStyleRule`]: #numerictypestylerule
-[`StringRuleObject`]: #stringruleobject
-[`StringTypeStyleRule`]: #stringtypestylerule
-[`StyleObject`]: #styleobject
-[Creator for indoor maps]: creator-indoor-maps.md
-[Feature State service]: /rest/api/maps-creator/feature-state
-[Implement dynamic styling for Creator indoor maps]: indoor-map-dynamic-styling.md
-[RangeObject]: #rangeobject
-[What is Azure Maps Creator?]: about-creator.md
azure-monitor Azure Monitor Agent Send Data To Event Hubs And Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md
The Azure Monitor Agent is the new, consolidated telemetry agent for collecting
## Prerequisites
-A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) associated with the following resources:
+A managed identity (system-assigned or user-assigned) associated with the following resources. We highly recommend using a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) for better scalability and performance.
- [Storage account](../../storage/common/storage-account-create.md)
- [Event Hubs namespace and event hub](../../event-hubs/event-hubs-create.md)
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The following table describes resource limits for Azure NetApp Files:
| Number of snapshots per volume | 255 | No |
| Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
| Minimum size of a single capacity pool | 1 TiB* | No |
-| Maximum size of a single capacity pool | 2048 TiB | Yes |
+| Maximum size of a single capacity pool | 2,048 TiB | Yes |
| Minimum size of a single regular volume | 100 GiB | No |
| Maximum size of a single regular volume | 100 TiB | No |
| Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No |
The following table describes resource limits for Azure NetApp Files:
| Maximum size of a single file | 16 TiB | No |
| Maximum size of directory metadata in a single directory | 320 MB | No |
| Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No |
-| Maximum number of files [`maxfiles`](#maxfiles) per volume | 106,255,630 | Yes |
+| Maximum number of files `maxfiles` per volume | See [`maxfiles`](#maxfiles) | Yes |
| Maximum number of export policy rules per volume | 5 | No |
| Maximum number of quota rules per volume | 100 | No |
| Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No |
For limits and constraints related to Azure NetApp Files network features, see [
You can use the `stat` command from a client to see whether a directory is approaching the maximum size limit for directory metadata (320 MB). If you reach the maximum size limit for a single directory for Azure NetApp Files, the error `No space left on device` occurs.
-For a 320-MB directory, the number of blocks is 655360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
+For a 320-MB directory, the number of blocks is 655,360, with each block size being 512 bytes. (That is, 320x1024x1024/512.) This number translates to approximately 4 million files maximum for a 320-MB directory. However, the actual number of maximum files might be lower, depending on factors such as the number of files with non-ASCII characters in the directory. As such, you should use the `stat` command as follows to determine whether your directory is approaching its limit.
Examples:
Size: 4096 Blocks: 8 IO Block: 65536 directory
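As a rough programmatic equivalent of the `stat` check, the following Python sketch computes how close a directory is to the 320-MB metadata limit; the mount path is hypothetical:

```python
import os

BLOCK_SIZE = 512                  # stat reports blocks of 512 bytes
LIMIT_BYTES = 320 * 1024 * 1024   # 320-MB directory metadata limit

def directory_metadata_usage(path: str) -> float:
    """Return the fraction of the 320-MB directory metadata limit in use."""
    blocks = os.stat(path).st_blocks  # same value as the Blocks field from stat
    return (blocks * BLOCK_SIZE) / LIMIT_BYTES

# Hypothetical mount path for an Azure NetApp Files volume.
usage = directory_metadata_usage("/mnt/anf-volume/big-directory")
print(f"Directory metadata is at {usage:.1%} of the 320-MB limit")
```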
## `Maxfiles` limits <a name="maxfiles"></a>
-Azure NetApp Files volumes have a limit called *`maxfiles`*. The `maxfiles` limit is the number of files a volume can contain. Linux file systems refer to the limit as *inodes*. The `maxfiles` limit for an Azure NetApp Files volume is indexed based on the size (quota) of the volume. The `maxfiles` limit for a volume increases or decreases at the rate of 21,251,126 files per TiB of provisioned volume size.
+Azure NetApp Files volumes have a value called `maxfiles` that refers to the maximum number of files and folders (also known as inodes) a volume can contain. When the `maxfiles` limit is reached, clients receive "out of space" messages when attempting to create new files or folders. If you experience this issue, contact Microsoft technical support.
-The service dynamically adjusts the `maxfiles` limit for a volume based on its provisioned size. For example, a volume configured initially with a size of 1 TiB would have a `maxfiles` limit of 21,251,126. Subsequent changes to the size of the volume would result in an automatic readjustment of the `maxfiles` limit based on the following rules:
+The `maxfiles` limit for an Azure NetApp Files volume is based on the size (quota) of the volume. The service dynamically adjusts the limit as the provisioned size changes, using the following guidelines.
-**For volumes up to 100 TiB in size:**
+- For regular volumes less than or equal to 683 GiB, the default `maxfiles` limit is 21,251,126.
+- For regular volumes greater than 683 GiB, the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a maximum of 2,147,483,632.
+- For [large volumes](large-volumes-requirements-considerations.md), the default `maxfiles` limit is approximately one file (or inode) per 32 KiB of allocated volume capacity up to a default maximum of 15,938,355,048.
-| Volume size (quota) | Automatic readjustment of the `maxfiles` limit |
-|-|-|
-| <= 1 TiB | 21,251,126 |
-| > 1 TiB but <= 2 TiB | 42,502,252 |
-| > 2 TiB but <= 3 TiB | 63,753,378 |
-| > 3 TiB but <= 4 TiB | 85,004,504 |
-| > 4 TiB but <= 100 TiB | 106,255,630 |
+The following table shows examples of the relationship between `maxfiles` values and volume sizes for regular volumes.
->[!IMPORTANT]
-> If your volume has a volume size (quota) of more than 4 TiB and you want to increase the `maxfiles` limit, you must initiate [a support request](#request-limit-increase).
-
-For volumes 100 TiB or under, if you've allocated at least 5 TiB of quota for a volume, you can initiate a support request to increase the `maxfiles` (inodes) limit beyond 106,255,630. For every 106,255,630 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 5 TiB. For example, if you increase the `maxfiles` limit from 106,255,630 files to 212,511,260 files (or any number in between), you need to increase the volume quota from 5 TiB to 10 TiB.
+| Volume size | Estimated maxfiles limit |
+| - | - |
+| 0–683 GiB | 21,251,126 |
+| 1 TiB (1,073,741,824 KiB) | 31,876,709 |
+| 10 TiB (10,737,418,240 KiB) | 318,767,099 |
+| 50 TiB (53,687,091,200 KiB) | 1,593,835,519 |
+| 100 TiB (107,374,182,400 KiB) | 2,147,483,632 |
-For volumes 100 TiB or under, you can increase the `maxfiles` limit up to 531,278,150 if your volume quota is at least 25 TiB.
+The following table shows examples of the relationship between `maxfiles` values and volume sizes for large volumes.
->[!IMPORTANT]
-> When files or folders are allocated to an Azure NetApp Files volume, they count against the `maxfiles` limit. If a file or folder is deleted, the internal data structures for `maxfiles` allocation remain the same. For instance, if the files used in a volume increase to 63,753,378 and 100,000 files are deleted, the `maxfiles` allocation remains at 63,753,378.
-> Once a volume has exceeded a `maxfiles` limit, you cannot reduce volume size below the quota corresponding to that `maxfiles` limit even if you have reduced the actual used file count. For example, the `maxfiles` limit for a 2 TiB volume is 63,753,378. If you create more than 63,753,378 files in that volume, the volume quota cannot be reduced below its corresponding index of 2 TiB.
+| Volume size | Estimated maxfiles limit |
+| - | - |
+| 50 TiB (53,687,091,200 KiB) | 1,593,835,512 |
+| 100 TiB (107,374,182,400 KiB) | 3,187,671,024 |
+| 200 TiB (214,748,364,800 KiB) | 6,375,342,024 |
+| 500 TiB (536,870,912,000 KiB) | 15,938,355,048 |
-**For [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes):**
+To see the `maxfiles` allocation for a specific volume size, check the **Maximum number of files** field in the volume's overview pane.
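As a rough cross-check against the tables above, the guidelines can be approximated in a few lines of code. This sketch applies the documented one-file-per-32-KiB guideline with the stated caps; because that guideline is approximate, intermediate sizes come out slightly higher than the service-computed values in the tables, so treat the volume's **Maximum number of files** field as authoritative:

```python
REGULAR_CAP = 2_147_483_632   # maximum maxfiles for a regular volume
LARGE_CAP = 15_938_355_048    # default maximum maxfiles for a large volume
BASE_LIMIT = 21_251_126       # default for regular volumes <= 683 GiB
KIB_PER_GIB = 1024 * 1024
KIB_PER_TIB = 1024 * KIB_PER_GIB

def estimate_maxfiles(size_kib: int, large_volume: bool = False) -> int:
    """Estimate maxfiles as roughly one file (inode) per 32 KiB of quota."""
    if not large_volume and size_kib <= 683 * KIB_PER_GIB:
        return BASE_LIMIT
    return min(size_kib // 32, LARGE_CAP if large_volume else REGULAR_CAP)

# A 100 TiB regular volume hits the regular-volume cap of 2,147,483,632.
print(estimate_maxfiles(100 * KIB_PER_TIB))
```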
-| Volume size (quota) | Automatic readjustment of the `maxfiles` limit |
-| - | - |
-| > 50 TiB | 2,550,135,120 |
-
-You can increase the `maxfiles` limit beyond 2,550,135,120 using a support request. For every 2,550,135,120 files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 120 TiB. For example, if you increase `maxfiles` limit from 2,550,135,120 to 5,100,270,240 files (or any number in between), you need to increase the volume quota to at least 240 TiB.
-
-The maximum `maxfiles` value for a 500 TiB volume is 10,625,563,000 files.
-You cannot set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens to a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume.
+You can't set `maxfiles` limits for data protection volumes via a quota request. Azure NetApp Files automatically increases the `maxfiles` limit of a data protection volume to accommodate the number of files replicated to the volume. When a failover happens on a data protection volume, the `maxfiles` limit remains the last value before the failover. In this situation, you can submit a `maxfiles` [quota request](#request-limit-increase) for the volume.
## Request limit increase
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
Previously updated : 11/07/2023 Last updated : 07/10/2024
This section shows you how to set the network features option when you create a
## Edit network features option for existing volumes
-You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same NIC for mounting the volume to the client or connecting to the SMB share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible.
+You can edit the network features option of existing volumes from *Basic* to *Standard* network features. The change you make applies to all volumes in the same *network sibling set* (or *siblings*). Siblings are determined by their network IP address relationship. They share the same NIC for mounting the volume to the client or connecting to the remote share of the volume. At the creation of a volume, its siblings are determined by a placement algorithm that aims for reusing the IP address where possible.
>[!IMPORTANT] >It's not recommended that you use the edit network features option with Terraform-managed volumes due to risks. You must follow separate instructions if you use Terraform-managed volumes. For more information, see [Update Terraform-managed Azure NetApp Files volume from Basic to Standard](#update-terraform-managed-azure-netapp-files-volume-from-basic-to-standard).
azure-portal Get Subscription Tenant Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/get-subscription-tenant-id.md
Title: Get subscription and tenant IDs in the Azure portal description: Learn how to locate and copy the IDs of Azure tenants and subscriptions. Last updated 09/27/2023-+ # Get subscription and tenant IDs in the Azure portal
azure-portal Alerts Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/alerts-notifications.md
Title: Azure mobile app alerts and notifications
+ Title: Manage alerts and notifications in the Azure mobile app
description: Use Azure mobile app notifications to get up-to-date alerts and information on your resources and services. Last updated 11/2/2023-+
-# Azure mobile app alerts and notifications
+# Manage alerts and notifications in the Azure mobile app
-Use Azure mobile app notifications to get up-to-date alerts and information on your resources and services.
+Use Azure mobile app notifications to get up-to-date alerts and information on your resources and services.
Azure mobile app notifications offer users flexibility in how they receive push notifications.
azure-portal Cloud Shell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/cloud-shell.md
Title: Use Cloud Shell in the Azure mobile app description: Use the Azure mobile app to execute commands in Cloud Shell. Last updated 05/21/2024-+ # Use Cloud Shell in the Azure mobile app
azure-portal Intune Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/intune-management.md
Title: Use Microsoft Intune MAM on devices that run the Azure mobile app description: Learn about setting and enforcing app protection policies on devices that run the Azure mobile app. Last updated 06/17/2024-+ - build-2024
azure-portal Learn Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/learn-training.md
Title: Learn about Azure in the Azure mobile app description: The Microsoft Learn features in the Azure mobile app help you learn Azure skills anytime, anywhere. Last updated 02/26/2024-+ # Learn about Azure in the Azure mobile app
azure-portal Microsoft Copilot In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/microsoft-copilot-in-azure.md
Title: Use Microsoft Copilot in Azure with the Azure mobile app description: You can use the Azure mobile app to access Microsoft Copilot in Azure (preview) and benefit from its features. Last updated 05/21/2024-+ - build-2024
azure-portal Microsoft Entra Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/microsoft-entra-id.md
Title: Use Microsoft Entra ID with the Azure mobile app description: Use the Azure mobile app to manage users and groups with Microsoft Entra ID. Last updated 04/04/2024-+ # Use Microsoft Entra ID with the Azure mobile app
azure-portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/mobile-app/overview.md
Title: What is the Azure mobile app? description: The Azure mobile app is a tool that allows you to monitor and manage your Azure resources and services from your mobile device. Last updated 06/06/2024-+ # What is the Azure mobile app?
azure-resource-manager Bicep Error Bcp033 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp033.md
+
+ Title: BCP033
+description: Error - Expected a value of type "{expectedType}" but the provided value is of type "{actualType}".
++ Last updated : 06/28/2024++
+# Bicep warning and error code - BCP033
+
+This error occurs when you assign a value of a mismatched data type.
+
+## Error description
+
+`Expected a value of type "{expectedType}" but the provided value is of type "{actualType}".`
+
+## Solution
+
+Use the expected data type.
+
+## Examples
+
+The following example raises the error because the expected data type is a string. The actual provided value is an integer:
+
+```bicep
+var myValue = 5
+
+output myString string = myValue
+```
+
+You can fix the error by providing a string value:
+
+```bicep
+var myValue = '5'
+
+output myString string = myValue
+```
+
+## Next steps
+
+For more information about Bicep warning and error codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp035 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp035.md
+
+ Title: BCP035
+description: Error - The specified <data-type> declaration is missing the following required properties.
++ Last updated : 06/28/2024++
+# Bicep warning and error code - BCP035
+
+This warning occurs when your resource definition is missing a required property.
+
+## Warning description
+
+`The specified <data-type> declaration is missing the following required properties: <name-of-the-property>.`
+
+## Solution
+
+Add the missing property to the resource definition.
+
+## Examples
+
+The following example raises the warning for **virtualNetworkGateway1** and **virtualNetworkGateway2**:
+
+```bicep
+var networkConnectionName = 'testConnection'
+var location = 'eastus'
+var vnetGwAId = 'gatewayA'
+var vnetGwBId = 'gatewayB'
+
+resource networkConnection 'Microsoft.Network/connections@2023-11-01' = {
+ name: networkConnectionName
+ location: location
+ properties: {
+ virtualNetworkGateway1: {
+ id: vnetGwAId
+ }
+ virtualNetworkGateway2: {
+ id: vnetGwBId
+ }
+
+ connectionType: 'Vnet2Vnet'
+ }
+}
+```
+
+The warning is:
+
+```warning
+The specified "object" declaration is missing the following required properties: "properties". If this is an inaccuracy in the documentation, please report it to the Bicep Team.
+```
+
+You can verify the missing properties from the [template reference](/azure/templates). If you see the warning from Visual Studio Code, hover the cursor over the resource symbolic name and select **View document** to open the template reference.
+
+You can fix the error by adding the missing properties:
+
+```bicep
+var networkConnectionName = 'testConnection'
+var location = 'eastus'
+var vnetGwAId = 'gatewayA'
+var vnetGwBId = 'gatewayB'
+
+resource networkConnection 'Microsoft.Network/connections@2023-11-01' = {
+ name: networkConnectionName
+ location: location
+ properties: {
+ virtualNetworkGateway1: {
+ id: vnetGwAId
+ properties:{}
+ }
+ virtualNetworkGateway2: {
+ id: vnetGwBId
+ properties:{}
+ }
+
+ connectionType: 'Vnet2Vnet'
+ }
+}
+```
+
+## Next steps
+
+For more information about Bicep warning and error codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Bcp072 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-bcp072.md
+
+ Title: BCP072
+description: Error - This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values.
++ Last updated : 07/02/2024++
+# Bicep warning and error code - BCP072
+
+This error occurs when you reference a variable in parameter default values.
+
+## Error description
+
+`This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values.`
+
+## Solution
+
+Reference another parameter instead.
+
+## Examples
+
+The following example raises the error because the parameter default value references a variable:
+
+```bicep
+param foo string = bar
+
+var bar = 'HelloWorld!'
+```
+
+You can fix the error by referencing another parameter:
+
+```bicep
+param foo string = bar
+param bar string = 'HelloWorld!'
+
+output outValue string = foo
+```
+
+## Next steps
+
+For more information about Bicep warning and error codes, see [Bicep warnings and errors](./bicep-error-codes.md).
azure-resource-manager Bicep Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-error-codes.md
+
+ Title: Bicep warnings and error codes
+description: Lists the warnings and error codes.
++ Last updated : 06/28/2024++
+# Bicep warning and error codes
+
+If you need more information about a particular warning or error code, select the **Feedback** button in the upper right corner of the page and specify the code.
+
+| Error Code | Error Description |
+|-|-|
+| BCP001 | The following token is not recognized: "{token}". |
+| BCP002 | The multi-line comment at this location is not terminated. Terminate it with the */ character sequence. |
+| BCP003 | The string at this location is not terminated. Terminate the string with a single quote character. |
+| BCP004 | The string at this location is not terminated due to an unexpected new line character. |
+| BCP005 | The string at this location is not terminated. Complete the escape sequence and terminate the string with a single unescaped quote character. |
+| BCP006 | The specified escape sequence is not recognized. Only the following escape sequences are allowed: {ToQuotedString(escapeSequences)}. |
+| BCP007 | This declaration type is not recognized. Specify a metadata, parameter, variable, resource, or output declaration. |
+| BCP008 | Expected the "=" token, or a newline at this location. |
+| BCP009 | Expected a literal value, an array, an object, a parenthesized expression, or a function call at this location. |
+| BCP010 | Expected a valid 64-bit signed integer. |
+| BCP011 | The type of the specified value is incorrect. Specify a string, boolean, or integer literal. |
+| BCP012 | Expected the "{keyword}" keyword at this location. |
+| BCP013 | Expected a parameter identifier at this location. |
+| BCP015 | Expected a variable identifier at this location. |
+| BCP016 | Expected an output identifier at this location. |
+| BCP017 | Expected a resource identifier at this location. |
+| BCP018 | Expected the "{character}" character at this location. |
+| BCP019 | Expected a new line character at this location. |
+| BCP020 | Expected a function or property name at this location. |
+| BCP021 | Expected a numeric literal at this location. |
+| BCP022 | Expected a property name at this location. |
+| BCP023 | Expected a variable or function name at this location. |
+| BCP024 | The identifier exceeds the limit of {LanguageConstants.MaxIdentifierLength}. Reduce the length of the identifier. |
+| BCP025 | The property "{property}" is declared multiple times in this object. Remove or rename the duplicate properties. |
+| BCP026 | The output expects a value of type "{expectedType}" but the provided value is of type "{actualType}". |
+| BCP028 | Identifier "{identifier}" is declared multiple times. Remove or rename the duplicates. |
+| BCP029 | The resource type is not valid. Specify a valid resource type of format "&lt;types>@&lt;apiVersion>". |
+| BCP030 | The output type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. |
+| BCP031 | The parameter type is not valid. Please specify one of the following types: {ToQuotedString(validTypes)}. |
+| BCP032 | The value must be a compile-time constant. |
+| [BCP033](./bicep-error-bcp033.md) | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". |
+| BCP034 | The enclosing array expected an item of type "{expectedType}", but the provided item was of type "{actualType}". |
+| [BCP035](./bicep-error-bcp035.md) | The specified "{blockName}" declaration is missing the following required properties{sourceDeclarationClause}: {ToQuotedString(properties)}.{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
+| BCP036 | The property "{property}" expected a value of type "{expectedType}" but the provided value{sourceDeclarationClause} is of type "{actualType}".{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
+| BCP037 | The property "{property}"{sourceDeclarationClause} is not allowed on objects of type "{type}".{permissiblePropertiesClause}{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
+| BCP040 | String interpolation is not supported for keys on objects of type "{type}"{sourceDeclarationClause}.{permissiblePropertiesClause} |
+| BCP041 | Values of type "{valueType}" cannot be assigned to a variable. |
+| BCP043 | This is not a valid expression. |
+| BCP044 | Cannot apply operator "{operatorName}" to operand of type "{type}". |
+| BCP045 | Cannot apply operator "{operatorName}" to operands of type "{type1}" and "{type2}".{(additionalInfo is null ? string.Empty : " " + additionalInfo)} |
+| BCP046 | Expected a value of type "{type}". |
+| BCP047 | String interpolation is unsupported for specifying the resource type. |
+| BCP048 | Cannot resolve function overload. For details, see the documentation. |
+| BCP049 | The array index must be of type "{LanguageConstants.String}" or "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". |
+| BCP050 | The specified path is empty. |
+| BCP051 | The specified path begins with "/". Files must be referenced using relative paths. |
+| BCP052 | The type "{type}" does not contain property "{badProperty}". |
+| BCP053 | The type "{type}" does not contain property "{badProperty}". Available properties include {ToQuotedString(availableProperties)}. |
+| BCP054 | The type "{type}" does not contain any properties. |
+| BCP055 | Cannot access properties of type "{wrongType}". An "{LanguageConstants.Object}" type is required. |
+| BCP056 | The reference to name "{name}" is ambiguous because it exists in namespaces {ToQuotedString(namespaces)}. The reference must be fully qualified. |
+| BCP057 | The name "{name}" does not exist in the current context. |
+| BCP059 | The name "{name}" is not a function. |
+| BCP060 | The "variables" function is not supported. Directly reference variables by their symbolic names. |
+| BCP061 | The "parameters" function is not supported. Directly reference parameters by their symbolic names. |
+| BCP062 | The referenced declaration with name "{name}" is not valid. |
+| BCP063 | The name "{name}" is not a parameter, variable, resource or module. |
+| BCP064 | Found unexpected tokens in interpolated expression. |
+| BCP065 | Function "{functionName}" is not valid at this location. It can only be used as a parameter default value. |
+| BCP066 | Function "{functionName}" is not valid at this location. It can only be used in resource declarations. |
+| BCP067 | Cannot call functions on type "{wrongType}". An "{LanguageConstants.Object}" type is required. |
+| BCP068 | Expected a resource type string. Specify a valid resource type of format "&lt;types>@&lt;apiVersion>". |
+| BCP069 | The function "{function}" is not supported. Use the "{@operator}" operator instead. |
+| BCP070 | Argument of type "{argumentType}" is not assignable to parameter of type "{parameterType}". |
+| BCP071 | Expected {expected}, but got {argumentCount}. |
+| [BCP072](./bicep-error-bcp072.md) | This symbol cannot be referenced here. Only other parameters can be referenced in parameter default values. |
+| BCP073 | The property "{property}" is read-only. Expressions cannot be assigned to read-only properties.{(showTypeInaccuracy ? TypeInaccuracyClause : string.Empty)} |
+| BCP074 | Indexing over arrays requires an index of type "{LanguageConstants.Int}" but the provided index was of type "{wrongType}". |
+| BCP075 | Indexing over objects requires an index of type "{LanguageConstants.String}" but the provided index was of type "{wrongType}". |
+| BCP076 | Cannot index over expression of type "{wrongType}". Arrays or objects are required. |
+| BCP077 | The property "{badProperty}" on type "{type}" is write-only. Write-only properties cannot be accessed. |
+| BCP078 | The property "{propertyName}" requires a value of type "{expectedType}", but none was supplied. |
+| BCP079 | This expression is referencing its own declaration, which is not allowed. |
+| BCP080 | The expression is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). |
+| BCP081 | Resource type "{resourceTypeReference.FormatName()}" does not have types available. Bicep is unable to validate resource properties prior to deployment, but this will not block the resource from being deployed. |
+| BCP082 | The name "{name}" does not exist in the current context. Did you mean "{suggestedName}"? |
+| BCP083 | The type "{type}" does not contain property "{badProperty}". Did you mean "{suggestedProperty}"? |
+| BCP084 | The symbolic name "{name}" is reserved. Please use a different symbolic name. Reserved namespaces are {ToQuotedString(namespaces.OrderBy(ns => ns))}. |
+| BCP085 | The specified file path contains one ore more invalid path characters. The following are not permitted: {ToQuotedString(forbiddenChars.OrderBy(x => x).Select(x => x.ToString()))}. |
+| BCP086 | The specified file path ends with an invalid character. The following are not permitted: {ToQuotedString(forbiddenPathTerminatorChars.OrderBy(x => x).Select(x => x.ToString()))}. |
+| BCP087 | Array and object literals are not allowed here. |
+| BCP088 | The property "{property}" expected a value of type "{expectedType}" but the provided value is of type "{actualStringLiteral}". Did you mean "{suggestedStringLiteral}"? |
+| BCP089 | The property "{property}" is not allowed on objects of type "{type}". Did you mean "{suggestedProperty}"? |
+| BCP090 | This module declaration is missing a file path reference. |
+| BCP091 | An error occurred reading file. {failureMessage} |
+| BCP092 | String interpolation is not supported in file paths. |
+| BCP093 | File path "{filePath}" could not be resolved relative to "{parentPath}". |
+| BCP094 | This module references itself, which is not allowed. |
+| BCP095 | The file is involved in a cycle ("{string.Join("\" -> \"", cycle)}"). |
+| BCP096 | Expected a module identifier at this location. |
+| BCP097 | Expected a module path string. This should be a relative path to another bicep file, e.g. 'myModule.bicep' or '../parent/myModule.bicep' |
+| BCP098 | The specified file path contains a "\" character. Use "/" instead as the directory separator character. |
+| BCP099 | The "{LanguageConstants.ParameterAllowedPropertyName}" array must contain one or more items. |
+| BCP100 | The function "if" is not supported. Use the "?:\" (ternary conditional) operator instead, e.g. condition ? ValueIfTrue : ValueIfFalse |
+| BCP101 | The "createArray" function is not supported. Construct an array literal using []. |
+| BCP102 | The "createObject" function is not supported. Construct an object literal using {}. |
+| BCP103 | The following token is not recognized: "{token}". Strings are defined using single quotes in bicep. |
+| BCP104 | The referenced module has errors. |
+| BCP105 | Unable to load file from URI "{fileUri}". |
+| BCP106 | Expected a new line character at this location. Commas are not used as separator delimiters. |
+| BCP107 | The function "{name}" does not exist in namespace "{namespaceType.Name}". |
+| BCP108 | The function "{name}" does not exist in namespace "{namespaceType.Name}". Did you mean "{suggestedName}"? |
+| BCP109 | The type "{type}" does not contain function "{name}". |
+| BCP110 | The type "{type}" does not contain function "{name}". Did you mean "{suggestedName}"? |
+| BCP111 | The specified file path contains invalid control code characters. |
+| BCP112 | The "{LanguageConstants.TargetScopeKeyword}" cannot be declared multiple times in one file. |
+| BCP113 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeTenant}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include tenant: tenant(), named management group: managementGroup(&lt;name>), named subscription: subscription(&lt;subId>), or named resource group in a named subscription: resourceGroup(&lt;subId>, &lt;name>). |
+| BCP114 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeManagementGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current management group: managementGroup(), named management group: managementGroup(&lt;name>), named subscription: subscription(&lt;subId>), tenant: tenant(), or named resource group in a named subscription: resourceGroup(&lt;subId>, &lt;name>). |
+| BCP115 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeSubscription}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current subscription: subscription(), named subscription: subscription(&lt;subId>), named resource group in same subscription: resourceGroup(&lt;name>), named resource group in different subscription: resourceGroup(&lt;subId>, &lt;name>), or tenant: tenant(). |
+| BCP116 | Unsupported scope for module deployment in a "{LanguageConstants.TargetScopeTypeResourceGroup}" target scope. Omit this property to inherit the current scope, or specify a valid scope. Permissible scopes include current resource group: resourceGroup(), named resource group in same subscription: resourceGroup(&lt;name>), named resource group in a different subscription: resourceGroup(&lt;subId>, &lt;name>), current subscription: subscription(), named subscription: subscription(&lt;subId>) or tenant: tenant(). |
+| BCP117 | An empty indexer is not allowed. Specify a valid expression. |
+| BCP118 | Expected the "{" character, the "[" character, or the "if" keyword at this location. |
+| BCP119 | Unsupported scope for extension resource deployment. Expected a resource reference. |
+| BCP120 | This expression is being used in an assignment to the "{propertyName}" property of the "{objectTypeName}" type, which requires a value that can be calculated at the start of the deployment. |
+| BCP121 | Resources: {ToQuotedString(resourceNames)} are defined with this same name in a file. Rename them or split into different modules. |
+| BCP122 | Modules: {ToQuotedString(moduleNames)} are defined with this same name and this same scope in a file. Rename them or split into different modules. |
+| BCP123 | Expected a namespace or decorator name at this location. |
+| BCP124 | The decorator "{decoratorName}" can only be attached to targets of type "{attachableType}", but the target has type "{targetType}". |
+| BCP125 | Function "{functionName}" cannot be used as a parameter decorator. |
+| BCP126 | Function "{functionName}" cannot be used as a variable decorator. |
+| BCP127 | Function "{functionName}" cannot be used as a resource decorator. |
+| BCP128 | Function "{functionName}" cannot be used as a module decorator. |
+| BCP129 | Function "{functionName}" cannot be used as an output decorator. |
+| BCP130 | Decorators are not allowed here. |
+| BCP132 | Expected a declaration after the decorator. |
+| BCP133 | The unicode escape sequence is not valid. Valid unicode escape sequences range from \\u{0} to \\u{10FFFF}. |
+| BCP134 | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} is not valid for this module. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
+| BCP135 | Scope {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(suppliedScope))} is not valid for this resource type. Permitted scopes: {ToQuotedString(LanguageConstants.GetResourceScopeDescriptions(supportedScopes))}. |
+| BCP136 | Expected a loop item variable identifier at this location. |
+| BCP137 | Loop expected an expression of type "{LanguageConstants.Array}" but the provided value is of type "{actualType}". |
+| BCP138 | For-expressions are not supported in this context. For-expressions may be used as values of resource, module, variable, and output declarations, or values of resource and module properties. |
+| BCP139 | A resource's scope must match the scope of the Bicep file for it to be deployable. You must use modules to deploy resources to a different scope. |
+| BCP140 | The multi-line string at this location is not terminated. Terminate it with "'''. |
+| BCP141 | The expression cannot be used as a decorator as it is not callable. |
+| BCP142 | Property value for-expressions cannot be nested. |
+| BCP143 | For-expressions cannot be used with properties whose names are also expressions. |
+| BCP144 | Directly referencing a resource or module collection is not currently supported here. Apply an array indexer to the expression. |
+| BCP145 | Output "{identifier}" is declared multiple times. Remove or rename the duplicates. |
+| BCP147 | Expected a parameter declaration after the decorator. |
+| BCP148 | Expected a variable declaration after the decorator. |
+| BCP149 | Expected a resource declaration after the decorator. |
+| BCP150 | Expected a module declaration after the decorator. |
+| BCP151 | Expected an output declaration after the decorator. |
+| BCP152 | Function "{functionName}" cannot be used as a decorator. |
+| BCP153 | Expected a resource or module declaration after the decorator. |
+| BCP154 | Expected a batch size of at least {limit} but the specified value was "{value}". |
+| BCP155 | The decorator "{decoratorName}" can only be attached to resource or module collections. |
+| BCP156 | The resource type segment "{typeSegment}" is invalid. Nested resources must specify a single type segment, and optionally can specify an API version using the format "&lt;type>@&lt;apiVersion>". |
+| BCP157 | The resource type cannot be determined due to an error in the containing resource. |
+| BCP158 | Cannot access nested resources of type "{wrongType}". A resource type is required. |
+| BCP159 | The resource "{resourceName}" does not contain a nested resource named "{identifierName}". Known nested resources are: {ToQuotedString(nestedResourceNames)}. |
+| BCP160 | A nested resource cannot appear inside of a resource with a for-expression. |
+| BCP162 | Expected a loop item variable identifier or "(" at this location. |
+| BCP164 | A child resource's scope is computed based on the scope of its ancestor resource. This means that using the "scope" property on a child resource is unsupported. |
+| BCP165 | A resource's computed scope must match that of the Bicep file for it to be deployable. This resource's scope is computed from the "scope" property value assigned to ancestor resource "{ancestorIdentifier}". You must use modules to deploy resources to a different scope. |
+| BCP166 | Duplicate "{decoratorName}" decorator. |
+| BCP167 | Expected the "{" character or the "if" keyword at this location. |
+| BCP168 | Length must not be a negative value. |
+| BCP169 | Expected resource name to contain {expectedSlashCount} "/" character(s). The number of name segments must match the number of segments in the resource type. |
+| BCP170 | Expected resource name to not contain any "/" characters. Child resources with a parent resource reference (via the parent property or via nesting) must not contain a fully-qualified name. |
+| BCP171 | Resource type "{resourceType}" is not a valid child resource of parent "{parentResourceType}". |
+| BCP172 | The resource type cannot be validated due to an error in parent resource "{resourceName}". |
+| BCP173 | The property "{property}" cannot be used in an existing resource declaration. |
+| BCP174 | Type validation is not available for resource types declared containing a "/providers/" segment. Please instead use the "scope" property. |
+| BCP176 | Values of the "any" type are not allowed here. |
+| BCP177 | This expression is being used in the if-condition expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP178 | This expression is being used in the for-expression, which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP179 | Unique resource or deployment name is required when looping. The loop item variable "{itemVariableName}" or the index variable "{indexVariableName}" must be referenced in at least one of the value expressions of the following properties in the loop body: {ToQuotedString(expectedVariantProperties)} |
+| BCP180 | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a module parameter with a secure decorator. |
+| BCP181 | This expression is being used in an argument of the function "{functionName}", which requires a value that can be calculated at the start of the deployment.{variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP182 | This expression is being used in the for-body of the variable "{variableName}", which requires values that can be calculated at the start of the deployment.{variableDependencyChainClause}{violatingPropertyNameClause}{accessiblePropertiesClause} |
+| BCP183 | The value of the module "params" property must be an object literal. |
+| BCP184 | File '{filePath}' exceeded maximum size of {maxSize} {unit}. |
+| BCP185 | Encoding mismatch. File was loaded with '{detectedEncoding}' encoding. |
+| BCP186 | Unable to parse literal JSON value. Please ensure that it is well-formed. |
+| BCP187 | The property "{property}" does not exist in the resource or type definition, although it might still be valid.{TypeInaccuracyClause} |
+| BCP188 | The referenced ARM template has errors. Please see [https://aka.ms/arm-template](https://aka.ms/arm-template) for information on how to diagnose and fix the template. |
+| BCP189 | (allowedSchemes.Contains(ArtifactReferenceSchemes.Local, StringComparer.Ordinal), allowedSchemes.Any(scheme => !string.Equals(scheme, ArtifactReferenceSchemes.Local, StringComparison.Ordinal))) switch { (false, false) => "Module references are not supported in this context.", (false, true) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a module reference using one of the following schemes: {FormatSchemes()}", (true, false) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a path to a local module file.", (true, true) => $"The specified module reference scheme \"{badScheme}\" is not recognized. Specify a path to a local module file or a module reference using one of the following schemes: {FormatSchemes()}"} |
+| BCP190 | The artifact with reference "{artifactRef}" has not been restored. |
+| BCP191 | Unable to restore the artifact with reference "{artifactRef}". |
+| BCP192 | Unable to restore the artifact with reference "{artifactRef}": {message} |
+| BCP193 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.Oci}:&lt;artifact-uri>:&lt;tag>", or "{ArtifactReferenceSchemes.Oci}/&lt;module-alias>:&lt;module-name-or-path>:&lt;tag>". |
+| BCP194 | {BuildInvalidTemplateSpecReferenceClause(aliasName, badRef)} Specify a reference in the format of "{ArtifactReferenceSchemes.TemplateSpecs}:&lt;subscription-ID>/&lt;resource-group-name>/&lt;template-spec-name>:&lt;version>", or "{ArtifactReferenceSchemes.TemplateSpecs}/&lt;module-alias>:&lt;template-spec-name>:&lt;version>". |
+| BCP195 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The artifact path segment "{badSegment}" is not valid. Each artifact name path segment must be a lowercase alphanumeric string optionally separated by a ".", "_", or \"-\"." |
+| BCP196 | The module tag or digest is missing. |
+| BCP197 | The tag "{badTag}" exceeds the maximum length of {maxLength} characters. |
+| BCP198 | The tag "{badTag}" is not valid. Valid characters are alphanumeric, ".", "_", or "-" but the tag cannot begin with ".", "_", or "-". |
+| BCP199 | Module path "{badRepository}" exceeds the maximum length of {maxLength} characters. |
+| BCP200 | The registry "{badRegistry}" exceeds the maximum length of {maxLength} characters. |
+| BCP201 | Expected a provider specification string of with a valid format at this location. Valid formats are "br:&lt;providerRegistryHost>/&lt;providerRepositoryPath>@&lt;providerVersion>" or "br/&lt;providerAlias>:&lt;providerName>@&lt;providerVersion>". |
+| BCP202 | Expected a provider alias name at this location. |
+| BCP203 | Using provider statements requires enabling EXPERIMENTAL feature "Extensibility". |
+| BCP204 | Provider namespace "{identifier}" is not recognized. |
+| BCP205 | Provider namespace "{identifier}" does not support configuration. |
+| BCP206 | Provider namespace "{identifier}" requires configuration, but none was provided. |
+| BCP207 | Namespace "{identifier}" is declared multiple times. Remove the duplicates. |
+| BCP208 | The specified namespace "{badNamespace}" is not recognized. Specify a resource reference using one of the following namespaces: {ToQuotedString(allowedNamespaces)}. |
+| BCP209 | Failed to find resource type "{resourceType}" in namespace "{@namespace}". |
+| BCP210 | Resource type belonging to namespace "{childNamespace}" cannot have a parent resource type belonging to different namespace "{parentNamespace}". |
+| BCP211 | The module alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". |
+| BCP212 | The Template Spec module alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. |
+| BCP213 | The OCI artifact module alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. |
+| BCP214 | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is in valid. The "subscription" property cannot be null or undefined. |
+| BCP215 | The Template Spec module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is in valid. The "resourceGroup" property cannot be null or undefined. |
+| BCP216 | The OCI artifact module alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. |
+| BCP217 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The subscription ID "{subscriptionId}" is not a GUID. |
+| BCP218 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" exceeds the maximum length of {maximumLength} characters. |
+| BCP219 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The resource group name "{resourceGroupName}" is invalid. Valid characters are alphanumeric, unicode characters, ".", "_", "-", "(", or ")", but the resource group name cannot end with ".". |
+| BCP220 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" exceeds the maximum length of {maximumLength} characters. |
+| BCP221 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec name "{templateSpecName}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name cannot end with ".". |
+| BCP222 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" exceeds the maximum length of {maximumLength} characters. |
+| BCP223 | {BuildInvalidTemplateSpecReferenceClause(aliasName, referenceValue)} The Template Spec version "{templateSpecVersion}" is invalid. Valid characters are alphanumeric, ".", "_", "-", "(", or ")", but the Template Spec name cannot end with ".". |
+| BCP224 | {BuildInvalidOciArtifactReferenceClause(aliasName, badRef)} The digest "{badDigest}" is not valid. The valid format is a string "sha256:" followed by exactly 64 lowercase hexadecimal digits. |
+| BCP225 | The discriminator property "{propertyName}" value cannot be determined at compilation time. Type checking for this object is disabled. |
+| BCP226 | Expected at least one diagnostic code at this location. Valid format is "#disable-next-line diagnosticCode1 diagnosticCode2 ...". |
+| BCP227 | The type "{resourceType}" cannot be used as a parameter or output type. Extensibility types are currently not supported as parameters or outputs. |
+| BCP229 | The parameter "{parameterName}" cannot be used as a resource scope or parent. Resources passed as parameters cannot be used as a scope or parent of a resource. |
+| BCP300 | Expected a type literal at this location. Please specify a concrete value or a reference to a literal type. |
+| BCP301 | The type name "{reservedName}" is reserved and may not be attached to a user-defined type. |
+| BCP302 | The name "{name}" is not a valid type. Please specify one of the following types: {ToQuotedString(validTypes)}. |
+| BCP303 | String interpolation is unsupported for specifying the provider. |
+| BCP304 | Invalid provider specifier string. Specify a valid provider of format "&lt;providerName>@&lt;providerVersion>". |
+| BCP305 | Expected the "with" keyword, "as" keyword, or a new line character at this location. |
+| BCP306 | The name "{name}" refers to a namespace, not to a type. |
+| BCP307 | The expression cannot be evaluated, because the identifier properties of the referenced existing resource including {ToQuotedString(runtimePropertyNames.OrderBy(x => x))} cannot be calculated at the start of the deployment. In this situation, {accessiblePropertyNamesClause}{accessibleFunctionNamesClause}. |
+| BCP308 | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a user-defined type. |
+| BCP309 | Values of type "{flattenInputType.Name}" cannot be flattened because "{incompatibleType.Name}" is not an array type. |
+| BCP311 | The provided index value of "{indexSought}" is not valid for type "{typeName}". Indexes for this type must be between 0 and {tupleLength - 1}. |
+| BCP315 | An object type may have at most one additional properties declaration. |
+| BCP316 | The "{LanguageConstants.ParameterSealedPropertyName}" decorator may not be used on object types with an explicit additional properties type declaration. |
+| BCP317 | Expected an identifier, a string, or an asterisk at this location. |
+| BCP318 | The value of type "{possiblyNullType}" may be null at the start of the deployment, which would cause this access expression (and the overall deployment with it) to fail. If you do not know whether the value will be null and the template would handle a null value for the overall expression, use a `.?` (safe dereference) operator to short-circuit the access expression if the base expression's value is null: {accessExpression.AsSafeAccess().ToString()}. If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. |
+| BCP319 | The type at "{errorSource}" could not be resolved by the ARM JSON template engine. Original error message: "{message}" |
+| BCP320 | The properties of module output resources cannot be accessed directly. To use the properties of this resource, pass it as a resource-typed parameter to another module and access the parameter's properties therein. |
+| BCP321 | Expected a value of type "{expectedType}" but the provided value is of type "{actualType}". If you know the value will not be null, use a non-null assertion operator to inform the compiler that the value will not be null: {SyntaxFactory.AsNonNullable(expression).ToString()}. |
+| BCP322 | The `.?` (safe dereference) operator may not be used on instance function invocations. |
+| BCP323 | The `[?]` (safe dereference) operator may not be used on resource or module collections. |
+| BCP325 | Expected a type identifier at this location. |
+| BCP326 | Nullable-typed parameters may not be assigned default values. They have an implicit default of 'null' that cannot be overridden. |
+| BCP327 | The provided value (which will always be greater than or equal to {sourceMin}) is too large to assign to a target for which the maximum allowable value is {targetMax}. |
+| BCP328 | The provided value (which will always be less than or equal to {sourceMax}) is too small to assign to a target for which the minimum allowable value is {targetMin}. |
+| BCP329 | The provided value can be as small as {sourceMin} and may be too small to assign to a target with a configured minimum of {targetMin}. |
+| BCP330 | The provided value can be as large as {sourceMax} and may be too large to assign to a target with a configured maximum of {targetMax}. |
+| BCP331 | A type's "{minDecoratorName}" must be less than or equal to its "{maxDecoratorName}", but a minimum of {minValue} and a maximum of {maxValue} were specified. |
+| BCP332 | The provided value (whose length will always be greater than or equal to {sourceMinLength}) is too long to assign to a target for which the maximum allowable length is {targetMaxLength}. |
+| BCP333 | The provided value (whose length will always be less than or equal to {sourceMaxLength}) is too short to assign to a target for which the minimum allowable length is {targetMinLength}. |
+| BCP334 | The provided value can have a length as small as {sourceMinLength} and may be too short to assign to a target with a configured minimum length of {targetMinLength}. |
+| BCP335 | The provided value can have a length as large as {sourceMaxLength} and may be too long to assign to a target with a configured maximum length of {targetMaxLength}. |
+| BCP337 | This declaration type is not valid for a Bicep Parameters file. Specify a "{LanguageConstants.UsingKeyword}", "{LanguageConstants.ParameterKeyword}" or "{LanguageConstants.VariableKeyword}" declaration. |
+| BCP338 | Failed to evaluate parameter "{parameterName}": {message} |
+| BCP339 | The provided array index value of "{indexSought}" is not valid. Array index should be greater than or equal to 0. |
+| BCP340 | Unable to parse literal YAML value. Please ensure that it is well-formed. |
+| BCP341 | This expression is being used inside a function declaration, which requires a value that can be calculated at the start of the deployment. {variableDependencyChainClause}{accessiblePropertiesClause} |
+| BCP342 | User-defined types are not supported in user-defined function parameters or outputs. |
+| BCP344 | Expected an assert identifier at this location. |
+| BCP345 | A test declaration can only reference a Bicep File |
+| BCP0346 | Expected a test identifier at this location. |
+| BCP0347 | Expected a test path string at this location. |
+| BCP348 | Using a test declaration statement requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.TestFramework)}". |
+| BCP349 | Using an assert declaration requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.Assertions)}". |
+| BCP350 | Value of type "{valueType}" cannot be assigned to an assert. Asserts can take values of type 'bool' only. |
+| BCP351 | Function "{functionName}" is not valid at this location. It can only be used when directly assigning to a parameter. |
+| BCP352 | Failed to evaluate variable "{name}": {message} |
+| BCP353 | The {itemTypePluralName} {ToQuotedString(itemNames)} differ only in casing. The ARM deployments engine is not case sensitive and will not be able to distinguish between them. |
+| BCP354 | Expected left brace ('{') or asterisk ('*') character at this location. |
+| BCP355 | Expected the name of an exported symbol at this location. |
+| BCP356 | Expected a valid namespace identifier at this location. |
+| BCP358 | This declaration is missing a template file path reference. |
+| BCP360 | The '{symbolName}' symbol was not found in (or was not exported by) the imported template. |
+| BCP361 | The "@export()" decorator must target a top-level statement. |
+| BCP362 | This symbol is imported multiple times under the names {string.Join(", ", importedAs.Select(identifier => $"'{identifier}'"))}. |
+| BCP363 | The "{LanguageConstants.TypeDiscriminatorDecoratorName}" decorator can only be applied to object-only union types with unique member types. |
+| BCP364 | The property "{discriminatorPropertyName}" must be a required string literal on all union member types. |
+| BCP365 | The value "{discriminatorPropertyValue}" for discriminator property "{discriminatorPropertyName}" is duplicated across multiple union member types. The value must be unique across all union member types. |
+| BCP366 | The discriminator property name must be "{acceptablePropertyName}" on all union member types. |
+| BCP367 | The "{featureName}" feature is temporarily disabled. |
+| BCP368 | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses a reference to a secret value in Azure Key Vault. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. |
+| BCP369 | The value of the "{targetName}" parameter cannot be known until the template deployment has started because it uses the default value defined in the template. Expressions that refer to the "{targetName}" parameter may be used in {LanguageConstants.LanguageFileExtension} files but not in {LanguageConstants.ParamsFileExtension} files. |
+| BCP372 | The "@export()" decorator may not be applied to variables that refer to parameters, modules, or resource, either directly or indirectly. The target of this decorator contains direct or transitive references to the following unexportable symbols: {ToQuotedString(nonExportableSymbols)}. |
+| BCP373 | Unable to import the symbol named "{name}": {message} |
+| BCP374 | The imported model cannot be loaded with a wildcard because it contains the following duplicated exports: {ToQuotedString(ambiguousExportNames)}. |
+| BCP375 | An import list item that identifies its target with a quoted string must include an 'as &lt;alias>' clause. |
+| BCP376 | The "{name}" symbol cannot be imported because imports of kind {exportMetadataKind} are not supported in files of kind {sourceFileKind}. |
+| BCP377 | The provider alias name "{aliasName}" is invalid. Valid characters are alphanumeric, "_", or "-". |
+| BCP378 | The OCI artifact provider alias "{aliasName}" in the {BuildBicepConfigurationClause(configFileUri)} is invalid. The "registry" property cannot be null or undefined. |
+| BCP379 | The OCI artifact provider alias name "{aliasName}" does not exist in the {BuildBicepConfigurationClause(configFileUri)}. |
+| BCP380 | Artifacts of type: "{artifactType}" are not supported. |
+| BCP381 | Declaring provider namespaces with the "import" keyword has been deprecated. Please use the "provider" keyword instead. |
+| BCP383 | The "{typeName}" type is not parameterizable. |
+| BCP384 | The "{typeName}" type requires {requiredArgumentCount} argument(s). |
+| BCP385 | Using resource-derived types requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ResourceDerivedTypes)}". |
+| BCP386 | The decorator "{decoratorName}" may not be used on statements whose declared type is a reference to a resource-derived type. |
+| BCP387 | Indexing into a type requires an integer greater than or equal to 0. |
+| BCP388 | Cannot access elements of type "{wrongType}" by index. A tuple type is required. |
+| BCP389 | The type "{wrongType}" does not declare an additional properties type. |
+| BCP390 | The array item type access operator ('[*]') can only be used with typed arrays. |
+| BCP391 | Type member access is only supported on a reference to a named type. |
+| BCP392 | "The supplied resource type identifier "{resourceTypeIdentifier}" was not recognized as a valid resource type name." |
+| BCP393 | "The type pointer segment "{unrecognizedSegment}" was not recognized. Supported pointer segments are: "properties", "items", "prefixItems", and "additionalProperties"." |
+| BCP394 | Resource-derived type expressions must derefence a property within the resource body. Using the entire resource body type is not permitted. |
+| BCP395 | Declaring provider namespaces using the '&lt;providerName>@&lt;version>' expression has been deprecated. Please use an identifier instead. |
+| BCP396 | The referenced provider types artifact has been published with malformed content. |
+| BCP397 | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is referenced in the "{RootConfiguration.ImplicitProvidersConfigurationKey}" section, but is missing corresponding configuration in the "{RootConfiguration.ProvidersConfigurationKey}" section." |
+| BCP398 | "Provider {name} is incorrectly configured in the {BuildBicepConfigurationClause(configFileUri)}. It is configured as built-in in the "{RootConfiguration.ProvidersConfigurationKey}" section, but no built-in provider exists." |
+| BCP399 | Fetching az types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.DynamicTypeLoading)}". |
+| BCP400 | Fetching types from the registry requires enabling EXPERIMENTAL feature "{nameof(ExperimentalFeaturesEnabled.ProviderRegistry)}". |
+
+## Next steps
+
+To learn about Bicep, see [Bicep overview](./overview.md).
baremetal-infrastructure Nc2 On Azure Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-on-azure-responsibility-matrix.md
Title: NC2 on Azure responsibility matrix-
-description: Defines who is responsible for what for NC2 on Azure
+description: Defines who's responsible for what for NC2 on Azure.
The following table color-codes areas of management, where:
* Microsoft NC2 team = blue
* Nutanix = purple
-* Customer = grey
-* Microsoft Azure = white
+* Customer = gray
:::image type="content" source="media/nc2-on-azure-responsibility-matrix.png" alt-text="A diagram showing the support responsibilities for Microsoft and partners.":::
-Microsoft manages the Azure BareMetal specialized compute hardware and its data and control plane platform for underlay network. Microsoft supports if the customers plan to bring their existing Azure Subscription, VNet, vWAN etc.
+Microsoft manages the Azure BareMetal specialized compute hardware and its data and control plane platform for the underlay network. Microsoft provides support if customers plan to bring their existing Azure subscription, VNet, vWAN, etc.
-Nutanix covers the life-cycle management of Nutanix software (MCM, Prism Central/Element etc.) and their licenses.
+Nutanix covers the life-cycle management of Nutanix software (MCM, Prism Central/Element, etc.) and their licenses.
**Monitoring and remediation**
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Agent-based faults are injected into **Azure Virtual Machines** or **Virtual Mac
| Windows, Linux | [CPU Pressure](#cpu-pressure) | Compute capacity loss, resource pressure |
| Windows, Linux | [Kill Process](#kill-process) | Dependency disruption |
| Windows | [Pause Process](#pause-process) | Dependency disruption, service disruption |
-| Windows, Linux | [Network Disconnect](#network-disconnect) | Network disruption |
-| Windows, Linux | [Network Latency](#network-latency) | Network performance degradation |
-| Windows, Linux | [Network Packet Loss](#network-packet-loss) | Network reliability issues |
+| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Disconnect](#network-disconnect) | Network disruption |
+| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Latency](#network-latency) | Network performance degradation |
+| Windows<sup>1</sup>, Linux<sup>2</sup> | [Network Packet Loss](#network-packet-loss) | Network reliability issues |
+| Windows | [DNS Failure](#dns-failure) | DNS resolution issues |
+| Windows | [Network Disconnect (Via Firewall)](#network-disconnect-via-firewall) | Network disruption |
| Windows, Linux | [Physical Memory Pressure](#physical-memory-pressure) | Memory capacity loss, resource pressure |
| Windows, Linux | [Stop Service](#stop-service) | Service disruption/restart |
| Windows, Linux | [Time Change](#time-change) | Time synchronization issues |
Agent-based faults are injected into **Azure Virtual Machines** or **Virtual Mac
| Linux | [Arbitrary Stress-ng Stressor](#arbitrary-stress-ng-stressor) | General system stress testing |
| Linux | [Linux DiskIO Pressure](#linux-disk-io-pressure) | Disk I/O performance degradation |
| Windows | [DiskIO Pressure](#disk-io-pressure) | Disk I/O performance degradation |
-| Windows | [DNS Failure](#dns-failure) | DNS resolution issues |
-| Windows | [Network Disconnect (Via Firewall)](#network-disconnect-via-firewall) | Network disruption |
+
+<sup>1</sup> TCP/UDP packets only. <sup>2</sup> Outbound network traffic only.
## App Service
These actions are building blocks for constructing effective experiments. Use th
|-|-|
| Capability name | NetworkDisconnect-1.1 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
-| Description | Blocks outbound network traffic for specified port range and network block. At least one destinationFilter or inboundDestinationFilter array must be provided. |
+| Supported OS types | Windows, Linux (outbound traffic only) |
+| Description | Blocks network traffic for specified port range and network block. At least one destinationFilter or inboundDestinationFilter array must be provided. |
| Prerequisites | **Windows:** The agent must run as administrator, which happens by default if installed as a VM extension. |
| | **Linux:** The `tc` (Traffic Control) package is used for network faults. If it isn't already installed, the agent automatically attempts to install it from the default package manager. |
| Urn | urn:csci:microsoft:agent:networkDisconnect/1.1 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target. Maximum of 16. |
| inboundDestinationFilters | Delimited JSON array of packet filters defining which inbound packets to target. Maximum of 16. |
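For illustration, here's a minimal Python sketch that builds a `destinationFilters` value as the delimited JSON array the fault expects. The filter field names other than `address` are assumptions for illustration (the full filter schema isn't reproduced here); confirm them against the fault's reference documentation before use.

```python
import json

# Hypothetical packet filter; field names besides "address" are assumed, not confirmed.
filters = [
    {"address": "10.0.0.0", "subnetMask": "255.255.255.0", "portLow": 443, "portHigh": 443},
]

# The parameter value is a delimited JSON array (maximum of 16 filters).
destination_filters = json.dumps(filters)
print(destination_filters)
```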
The parameters **destinationFilters** and **inboundDestinationFilters** use the
* The agent-based network faults currently only support IPv4 addresses.
* The network disconnect fault only affects new connections. Existing active connections continue to persist. You can restart the service or process to force connections to break.
* When running on Windows, the network disconnect fault currently only works with TCP or UDP packets.
+* When running on Linux, this fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
### Network Disconnect (Via Firewall)
The parameters **destinationFilters** and **inboundDestinationFilters** use the
| Description | Applies a Windows firewall rule to block outbound traffic for specified port range and network block. |
| Prerequisites | Agent must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. |
| Urn | urn:csci:microsoft:agent:networkDisconnectViaFirewall/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| destinationFilters | Delimited JSON array of packet filters that define which outbound packets to target for fault injection. |
| address | IP address that indicates the start of the IP range. |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
#### Limitations

* The agent-based network faults currently only support IPv4 addresses.
+* This fault currently only affects new connections. Existing active connections are unaffected. You can restart the service or process to force connections to break.
### Network Latency
The parameters **destinationFilters** and **inboundDestinationFilters** use the
| Prerequisites | **Windows:** The agent must run as administrator, which happens by default if installed as a VM extension. |
| | **Linux:** The `tc` (Traffic Control) package is used for network faults. If it isn't already installed, the agent automatically attempts to install it from the default package manager. |
| Urn | urn:csci:microsoft:agent:networkLatency/1.1 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| latencyInMilliseconds | Amount of latency to be applied in milliseconds. |
| destinationFilters | Delimited JSON array of packet filters defining which outbound packets to target. Maximum of 16. |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
* The agent-based network faults currently only support IPv4 addresses.
* When running on Linux, the network latency fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
* When running on Windows, the network latency fault currently only works with TCP or UDP packets.
+* This fault currently only affects new connections. Existing active connections are unaffected. You can restart the service or process to force connections to break.
### Network Packet Loss
The parameters **destinationFilters** and **inboundDestinationFilters** use the
|-|-|
| Capability name | NetworkPacketLoss-1.0 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux |
+| Supported OS types | Windows, Linux (outbound traffic only) |
| Description | Introduces packet loss for outbound traffic at a specified rate, between 0.0 (no packets lost) and 1.0 (all packets lost). This action can help simulate scenarios like network congestion or network hardware issues. |
| Prerequisites | **Windows:** The agent must run as administrator, which happens by default if installed as a VM extension. |
| | **Linux:** The `tc` (Traffic Control) package is used for network faults. If it isn't already installed, the agent automatically attempts to install it from the default package manager. |
| Urn | urn:csci:microsoft:agent:networkPacketLoss/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| packetLossRate | The rate at which packets matching the destination filters will be lost, ranging from 0.0 to 1.0. |
| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
The parameters **destinationFilters** and **inboundDestinationFilters** use the
* The agent-based network faults currently only support IPv4 addresses.
* When running on Windows, the network packet loss fault currently only works with TCP or UDP packets.
+* When running on Linux, this fault can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
+* This fault currently only affects new connections. Existing active connections are unaffected. You can restart the service or process to force connections to break.
### DNS Failure
The parameters **destinationFilters** and **inboundDestinationFilters** use the
| Description | Substitutes DNS lookup request responses with a specified error code. DNS lookup requests that are substituted must:<ul><li>Originate from the VM.</li><li>Match the defined fault parameters.</li></ul>DNS lookups that aren't made by the Windows DNS client aren't affected by this fault. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:dnsFailure/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| hosts | Delimited JSON array of host names to fail DNS lookup request for.<br><br>This property accepts wildcards (`*`), but only for the first subdomain in an address and only applies to the subdomain for which they're specified. For example:<ul><li>\*.microsoft.com is supported.</li><li>subdomain.\*.microsoft isn't supported.</li><li>\*.microsoft.com doesn't work for multiple subdomains in an address, such as subdomain1.subdomain2.microsoft.com.</li></ul> |
| dnsFailureReturnCode | DNS error code to be returned to the client for the lookup failure (FormErr, ServFail, NXDomain, NotImp, Refused, XDomain, YXRRSet, NXRRSet, NotAuth, NotZone). For more information on DNS return codes, see the [IANA website](https://www.iana.org/assignments/dns-parameters/dns-parameters.xml#dns-parameters-6). |
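To make the wildcard rules for `hosts` concrete, here's a small Python sketch that assembles a valid value, with the unsupported patterns noted as comments:

```python
import json

# Valid per the rules above: a literal host name and a first-subdomain wildcard.
hosts = ["www.contoso.com", "*.microsoft.com"]

# Not supported per the rules above:
#   "subdomain.*.microsoft"       - the wildcard isn't in the first subdomain
#   "sub1.sub2.microsoft.com"     - *.microsoft.com won't match two subdomains

print(json.dumps(hosts))  # The parameter value is a delimited JSON array.
```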
The parameters **destinationFilters** and **inboundDestinationFilters** use the
|-|-|
| Capability name | CPUPressure-1.0 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
+| Supported OS types | Windows, Linux |
| Description | Adds CPU pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial CPU pressure is removed at the end of the duration or if the experiment is canceled. On Windows, the **% Processor Utility** performance counter is used at fault start to determine current CPU percentage, which is subtracted from the `pressureLevel` defined in the fault so that **% Processor Utility** hits approximately the `pressureLevel` defined in the fault parameters. |
| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. Installation happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, including Azure Linux, you must install **stress-ng** manually. For more information, see the [upstream project repository](https://github.com/ColinIanKing/stress-ng). |
| | **Windows**: None. |
| Urn | urn:csci:microsoft:agent:cpuPressure/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| pressureLevel | An integer between 1 and 99 that indicates how much CPU pressure (%) is applied to the VM in terms of **% CPU Usage**. |
| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
Known issues on Linux:
|-|-|
| Capability name | PhysicalMemoryPressure-1.0 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
+| Supported OS types | Windows, Linux |
| Description | Adds physical memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial physical memory pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. Installation happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, including Azure Linux, you must install **stress-ng** manually. For more information, see the [upstream project repository](https://github.com/ColinIanKing/stress-ng). |
| | **Windows**: None. |
| Urn | urn:csci:microsoft:agent:physicalMemoryPressure/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| pressureLevel | An integer between 1 and 99 that indicates how much physical memory pressure (%) is applied to the VM. |
| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Description | Adds virtual memory pressure, up to the specified value, on the VM where this fault is injected during the fault action. The artificial virtual memory pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:virtualMemoryPressure/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| pressureLevel | An integer between 1 and 99 that indicates how much physical memory pressure (%) is applied to the VM. |
| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Description | Uses the [diskspd utility](https://github.com/Microsoft/diskspd/wiki) to add disk pressure to a Virtual Machine. Pressure is added to the primary disk by default, or the disk specified with the targetTempDirectory parameter. This fault has five different modes of execution. The artificial disk pressure is removed at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:diskIOPressure/1.1 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| pressureMode | The preset mode of disk pressure to add to the primary storage of the VM. Must be one of the `PressureModes` in the following table. |
| targetTempDirectory | (Optional) The directory to use for applying disk pressure. For example, `D:/Temp`. If the parameter is not included, pressure is added to the primary disk. |
Currently, the Windows agent doesn't reduce memory pressure when other applicati
| Description | Uses stress-ng to apply pressure to the disk. One or more worker processes are spawned that perform I/O processes with temporary files. Pressure is added to the primary disk by default, or the disk specified with the targetTempDirectory parameter. For information on how pressure is applied, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. |
| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. Installation happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, including Azure Linux, you must install **stress-ng** manually. For more information, see the [upstream project repository](https://github.com/ColinIanKing/stress-ng). |
| Urn | urn:csci:microsoft:agent:linuxDiskIOPressure/1.1 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| workerCount | Number of worker processes to run. Setting `workerCount` to 0 generates as many worker processes as there are processors. |
| fileSizePerWorker | Size of the temporary file that a worker performs I/O operations against. Integer plus a unit in bytes (b), kilobytes (k), megabytes (m), or gigabytes (g) (for example, `4m` for 4 megabytes and `256g` for 256 gigabytes). |
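The `fileSizePerWorker` syntax (an integer plus a `b`/`k`/`m`/`g` unit) can be sanity-checked with a short helper. Here's a Python sketch, assuming the units are the usual 1024-based multiples:

```python
import re

UNIT_BYTES = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_file_size_per_worker(value: str) -> int:
    """Convert a value like '4m' or '256g' to bytes (assumes 1024-based units)."""
    match = re.fullmatch(r"(\d+)([bkmg])", value.strip().lower())
    if not match:
        raise ValueError(f"Expected an integer plus b/k/m/g unit, got: {value!r}")
    number, unit = match.groups()
    return int(number) * UNIT_BYTES[unit]

print(parse_file_size_per_worker("4m"))    # 4194304
print(parse_file_size_per_worker("256g"))  # 274877906944
```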
These sample values produced ~100% disk pressure when tested on a `Standard_D2s_
|-|-|
| Capability name | StopService-1.0 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
+| Supported OS types | Windows, Linux |
| Description | Stops a Windows service or a Linux systemd service during the fault. Restarts it at the end of the duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:stopService/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| serviceName | Name of the Windows service or Linux systemd service you want to stop. |
| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
These sample values produced ~100% disk pressure when tested on a `Standard_D2s_
|-|-|
| Capability name | KillProcess-1.0 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows, Linux. |
+| Supported OS types | Windows, Linux |
| Description | Kills all the running instances of a process that matches the process name sent in the fault parameters. Within the duration set for the fault action, a process is killed repetitively based on the value of the kill interval specified. This fault is destructive: a system admin would need to manually recover the process if self-healing is configured for it. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:killProcess/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| processName | Name of a process to continuously kill (without the .exe). The process does not need to be running when the fault begins executing. |
| killIntervalInMilliseconds | Amount of time the fault waits in between successive kill attempts in milliseconds. |
These sample values produced ~100% disk pressure when tested on a `Standard_D2s_
|-|-|
| Capability name | PauseProcess-1.0 |
| Target type | Microsoft-Agent |
-| Supported OS types | Windows. |
+| Supported OS types | Windows |
| Description | Pauses (suspends) the specified processes for the specified duration. If there are multiple processes with the same name, this fault suspends all of those processes. Within the fault's duration, the processes are paused repetitively at the specified interval. At the end of the duration or if the experiment is canceled, the processes will resume. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:pauseProcess/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| processNames | Delimited JSON array of process names defining which processes are to be paused. Maximum of 4. The process name can optionally include the ".exe" extension. |
| pauseIntervalInMilliseconds | Amount of time the fault waits between successive pausing attempts, in milliseconds. |
Currently, a maximum of 4 process names can be listed in the processNames parame
| Description | Changes the system time of the virtual machine and resets the time at the end of the experiment or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:agent:timeChange/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| dateTime | A DateTime string in [ISO8601 format](https://www.cryptosys.net/pki/manpki/pki_iso8601datetime.html). If `YYYY-MM-DD` values are missing, they're defaulted to the current day when the experiment runs. If Thh:mm:ss values are missing, the default value is 12:00:00 AM. If a 2-digit year is provided (`YY`), it's converted to a 4-digit year (`YYYY`) based on the current century. If the timezone `<Z>` is missing, the default offset is the local timezone. `<Z>` must always include a sign symbol (negative or positive). |
| virtualMachineScaleSetInstances | An array of instance IDs when you apply this fault to a virtual machine scale set. Required for virtual machine scale sets in uniform orchestration mode. [Learn more about instance IDs](../virtual-machine-scale-sets/virtual-machine-scale-sets-instance-ids.md#scale-set-instance-id-for-uniform-orchestration-mode). |
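For example, this Python sketch produces a `dateTime` value in ISO8601 format with the required signed timezone offset:

```python
from datetime import datetime, timezone, timedelta

# A fixed date and time with an explicit signed offset (here, UTC-08:00).
target = datetime(2025, 1, 15, 12, 0, 0, tzinfo=timezone(timedelta(hours=-8)))
print(target.isoformat())  # 2025-01-15T12:00:00-08:00
```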
Currently, a maximum of 4 process names can be listed in the processNames parame
| Description | Runs any stress-ng command by passing arguments directly to stress-ng. Useful when one of the predefined faults for stress-ng doesn't meet your needs. |
| Prerequisites | **Linux**: The **stress-ng** utility needs to be installed. Installation happens automatically as part of agent installation, using the default package manager, on several operating systems including Debian-based (like Ubuntu), Red Hat Enterprise Linux, and OpenSUSE. For other distributions, including Azure Linux, you must install **stress-ng** manually. For more information, see the [upstream project repository](https://github.com/ColinIanKing/stress-ng). |
| Urn | urn:csci:microsoft:agent:stressNg/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| stressNgArguments | One or more arguments to pass to the stress-ng process. For information on possible stress-ng arguments, see the [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) article. **NOTE: Do NOT include the "-t " argument because it will cause an error. Experiment length is defined directly in the Azure chaos experiment UI, NOT in the stressNgArguments.** |
Currently, a maximum of 4 process names can be listed in the processNames parame
| Description | Causes an Azure Cosmos DB account with a single write region to fail over to a specified read region to simulate a [write region outage](../cosmos-db/high-availability.md). |
| Prerequisites | None. |
| Urn | `urn:csci:microsoft:cosmosDB:failover/1.0` |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| readRegion | The read region that should be promoted to write region during the failover, for example, `East US 2`. |
Currently, a maximum of 4 process names can be listed in the processNames parame
| Property | Value |
|-|-|
-| Capability name | SecurityRule-1.0 |
+| Capability name | SecurityRule-1.0, SecurityRule-1.1 |
| Target type | Microsoft-NetworkSecurityGroup |
| Description | Enables manipulation or rule creation in an existing Azure network security group (NSG) or set of Azure NSGs, assuming the rule definition is applicable across security groups. Useful for: <ul><li>Simulating an outage of a downstream or cross-region dependency/nondependency.<li>Simulating an event that's expected to trigger a logic to force a service failover.<li>Simulating an event that's expected to trigger an action from a monitoring or state management service.<li>Using as an alternative for blocking or allowing network traffic where Chaos Agent can't be deployed. |
| Prerequisites | None. |
-| Urn | urn:csci:microsoft:networkSecurityGroup:securityRule/1.0 |
+| Urn | urn:csci:microsoft:networkSecurityGroup:securityRule/1.0, urn:csci:microsoft:networkSecurityGroup:securityRule/1.1 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| name | A unique name for the security rule that's created. The fault fails if another rule already exists on the NSG with the same name. Must begin with a letter or number. Must end with a letter, number, or underscore. May contain only letters, numbers, underscores, periods, or hyphens. |
| protocol | Protocol for the security rule. Must be Any, TCP, UDP, or ICMP. |
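The naming constraints on `name` can be encoded as a simple validation check. Here's an unofficial Python sketch of those rules (begins with a letter or number; ends with a letter, number, or underscore; contains only letters, numbers, underscores, periods, or hyphens):

```python
import re

# First char: letter or digit; middle: letters, digits, _, ., -; last: letter, digit, or _.
NAME_PATTERN = re.compile(r"[A-Za-z0-9]([A-Za-z0-9_.-]*[A-Za-z0-9_])?")

for candidate in ["chaos-block-rule_1", "1rule", "-bad", "bad.", "ok"]:
    print(candidate, bool(NAME_PATTERN.fullmatch(candidate)))
```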
Currently, a maximum of 4 process names can be listed in the processNames parame
| Description | Shuts down a VM for the duration of the fault. Restarts it at the end of the experiment or if the experiment is canceled. Only Azure Resource Manager VMs are supported. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:virtualMachine:shutdown/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| abruptShutdown | (Optional) Boolean that indicates if the VM should be shut down gracefully or abruptly (destructive). |
This fault has two available versions that you can use, Version 1.0 and Version
| Description | Shuts down or kills a virtual machine scale set instance during the fault and restarts the VM at the end of the fault duration or if the experiment is canceled. |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:virtualMachineScaleSet:shutdown/1.0 |
+| Fault type | Continuous. |
| Parameters (key, value) | |
| abruptShutdown | (Optional) Boolean that indicates if the virtual machine scale set instance should be shut down gracefully or abruptly (destructive). |
| instances | A string that's a delimited array of virtual machine scale set instance IDs to which the fault is applied. |
This fault has two available versions that you can use, Version 1.0 and Version
| Description | Shuts down or kills a virtual machine scale set instance during the fault. Restarts the VM at the end of the fault duration or if the experiment is canceled. Supports [dynamic targeting](chaos-studio-tutorial-dynamic-target-cli.md). |
| Prerequisites | None. |
| Urn | urn:csci:microsoft:virtualMachineScaleSet:shutdown/2.0 |
+| Fault type | Continuous. |
| [filter](/azure/templates/microsoft.chaos/experiments?pivots=deployment-language-arm-template#filter-objects-1) | (Optional) Available starting with Version 2.0. Used to filter the list of targets in a selector. Currently supports filtering on a list of zones. The filter is only applied to virtual machine scale set resources within a zone:<ul><li>If no filter is specified, this fault shuts down all instances in the virtual machine scale set.</li><li>The experiment targets all virtual machine scale set instances in the specified zones.</li><li>If a filter results in no targets, the experiment fails.</li></ul> |
| Parameters (key, value) | |
| abruptShutdown | (Optional) Boolean that indicates if the virtual machine scale set instance should be shut down gracefully or abruptly (destructive). |
cloud-services Cloud Services Guestos Family 2 3 4 Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-family-2-3-4-retirement.md
foreach($subscription in Get-AzureSubscription) {
}
```
-Your cloud services are impacted by this retirement if the `osFamily` column in the script output contains a `2`, `3`, `4`, or is empty. If empty, the default `osFamily` column value is `5`.
+Your cloud services are impacted by this retirement if the `osFamily` column in the script output contains a `2`, `3`, or `4`, or is empty. If empty, the default `osFamily` attribute points to `osFamily` `5`.
## Recommendations
If you're affected, we recommend you migrate your Cloud Service or [Cloud Servic
## Important clarification regarding support
-The announcement of the retirement of Azure Guest OS Families 2, 3, and 4, effective May 2025, pertains specifically to the operating systems within these families. This retirement doesn't extend the overall support timeline for Azure Cloud Services (classic) beyond the scheduled deprecation in August 2024. [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) continues support with Guest OS Families 5 and newer.
+The announcement of the retirement of Azure Guest OS Families 2, 3, and 4, effective March 2025, pertains specifically to the operating systems within these families. This retirement doesn't extend the overall support timeline for Azure Cloud Services (classic) beyond the scheduled deprecation in August 2024. [Cloud Services Extended Support](../cloud-services-extended-support/overview.md) continues support with Guest OS Families 5 and newer.
Customers currently using Azure Cloud Services who wish to continue receiving support beyond August 2024 are encouraged to transition to [Cloud Services Extended Support](../cloud-services-extended-support/overview.md). This separate service offering ensures continued assistance and support. Cloud Services Extended Support requires a distinct enrollment and isn't automatically included with existing Azure Cloud Services subscriptions.
container-apps Azure Arc Create Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-create-container-app.md
Next, add the required Azure CLI extensions.
```azurecli-interactive
az extension add --upgrade --yes --name customlocation
-az extension remove --name containerapp
-az extension add -s https://aka.ms/acaarccli/containerapp-latest-py2.py3-none-any.whl --yes
+az extension add --name containerapp --upgrade --yes
```

## Create a resource group
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
Install the following Azure CLI extensions.
# [Azure CLI](#tab/azure-cli)

```azurecli-interactive
-az extension add --name connectedk8s --upgrade --yes
+az extension add --name connectedk8s --upgrade --yes
az extension add --name k8s-extension --upgrade --yes
az extension add --name customlocation --upgrade --yes
-az extension remove --name containerapp
-az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-none-any.whl --yes
+az extension add --name containerapp --upgrade --yes
```

# [PowerShell](#tab/azure-powershell)
az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-n
az extension add --name connectedk8s --upgrade --yes
az extension add --name k8s-extension --upgrade --yes
az extension add --name customlocation --upgrade --yes
-az extension remove --name containerapp
-az extension add --source https://aka.ms/acaarccli/containerapp-latest-py2.py3-none-any.whl --yes
+az extension add --name containerapp --upgrade --yes
```
container-instances Container Instances Volume Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-volume-azure-files.md
By default, Azure Container Instances are stateless. If the container is restart
>
> [!IMPORTANT]
-> If you are deploying container groups into an Azure Virtual Network, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to your Azure Storage Account.
+> If the outbound connection to the internet is blocked in the delegated subnet, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to Azure Storage on your delegated subnet.
## Create an Azure file share
copilot Analyze Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/analyze-cost-management.md
Title: Analyze, estimate and optimize cloud costs using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can use Microsoft Cost Management to help you manage your costs. Last updated 05/28/2024-+ - ignite-2023
copilot Author Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/author-api-management-policies.md
Title: Author API Management policies using Microsoft Copilot in Azure description: Learn about how Microsoft Copilot in Azure can generate Azure API Management policies based on your requirements. Last updated 05/28/2024-+ - ignite-2023
copilot Build Infrastructure Deploy Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/build-infrastructure-deploy-workloads.md
Title: Build infrastructure and deploy workloads using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can help you build custom infrastructure for your workloads and provide templates and scripts to help you deploy. Last updated 02/26/2024-+ - ignite-2023
copilot Deploy Vms Effectively https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/deploy-vms-effectively.md
Title: Deploy virtual machines effectively using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can help you deploy cost-efficient VMs. Last updated 05/28/2024-+ - ignite-2023
copilot Generate Cli Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-cli-scripts.md
Title: Generate Azure CLI scripts using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can generate Azure CLI scripts for you to customize and use. Last updated 04/25/2024-+
copilot Generate Kubernetes Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-kubernetes-yaml.md
Title: Create Kubernetes YAML files for AKS clusters using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can help you create Kubernetes YAML files for you to customize and use. Last updated 05/28/2024-+ - ignite-2023
copilot Generate Powershell Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-powershell-scripts.md
Title: Generate PowerShell scripts using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can generate PowerShell scripts for you to customize and use. Last updated 05/28/2024-+ - build-2024
copilot Get Information Resource Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-information-resource-graph.md
Title: Get resource information using Microsoft Copilot in Azure (preview) description: Learn about scenarios where Microsoft Copilot in Azure (preview) can help with Azure Resource Graph. Last updated 05/28/2024-+ - ignite-2023
copilot Get Monitoring Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md
Title: Get information about Azure Monitor metrics and logs using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can provide information about Azure Monitor metrics and logs. Last updated 07/03/2024-+ - ignite-2023
copilot Improve Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/improve-storage-accounts.md
Title: Improve security and resiliency of storage accounts using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can improve the security posture and data resiliency of storage accounts. Last updated 04/25/2024-+ - ignite-2023
copilot Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/manage-access.md
Title: Manage access to Microsoft Copilot in Azure description: Learn how administrators can manage user access to Microsoft Copilot in Azure. Last updated 05/28/2024-+ - build-2024
copilot Optimize Code Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/optimize-code-application-insights.md
Title: Discover performance recommendations with Code Optimizations using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can use Application Insight Code Optimizations to help optimize your apps. Last updated 11/20/2023-+ - ignite-2023
copilot Query Attack Surface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/query-attack-surface.md
Title: Query your attack surface with Defender EASM using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can help query Attack Surface Insights from Defender EASM. Last updated 04/25/2024-+ - ignite-2023
copilot Troubleshoot App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/troubleshoot-app-service.md
Title: Troubleshoot your apps faster with App Service using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can help you troubleshoot your web apps hosted with App Service. Last updated 05/28/2024-+ - build-2024
copilot Understand Service Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/understand-service-health.md
Title: Understand service health events and status using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can provide information about service health events. Last updated 05/28/2024-+ - ignite-2023
copilot Use Guided Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/use-guided-deployments.md
Title: Create resources using guided deployments from Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure (preview) can provide one-click or step-by-step deployment assistance. Last updated 05/28/2024-+
copilot Work Aks Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/work-aks-clusters.md
Title: Work with AKS clusters efficiently using Microsoft Copilot in Azure description: Learn how Microsoft Copilot in Azure can help you be more efficient when working with Azure Kubernetes Service (AKS). Last updated 05/28/2024-+ - build-2024
copilot Work Smarter Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/work-smarter-edge.md
Title: Work smarter with your Azure Stack HCI clusters using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can help you work with your Azure Stack HCI clusters. Last updated 05/28/2024-+ - ignite-2023
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/free-tier.md
After you create a free tier account, you can start building apps with Azure Cos
* [Build a console app using the .NET V4 SDK](create-sql-api-dotnet-v4.md) to manage Azure Cosmos DB resources.
* [Build a .NET web app using Azure Cosmos DB for MongoDB](mongodb/create-mongodb-dotnet.md)
-* [Create a Jupyter notebook](notebooks-overview.md) and analyze your data.
+* [Create a notebook](nosql/tutorial-create-notebook-vscode.md) and analyze your data.
* Learn more about [Understanding your Azure Cosmos DB bill](understand-your-bill.md)
cosmos-db How To Configure Vnet Service Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-vnet-service-endpoint.md
Here are some frequently asked questions about configuring access from virtual n
### Are Notebooks and Mongo/Cassandra Shell currently compatible with Virtual Network enabled accounts?
-At the moment the [Mongo shell](https://devblogs.microsoft.com/cosmosdb/preview-native-mongo-shell/) and [Cassandra shell](https://devblogs.microsoft.com/cosmosdb/announcing-native-cassandra-shell-preview/) integrations in the Azure Cosmos DB Data Explorer, and the [Jupyter Notebooks service](./notebooks-overview.md), aren't supported with VNET access. This integration is currently in active development.
+At the moment the [Mongo shell](https://devblogs.microsoft.com/cosmosdb/preview-native-mongo-shell/) and [Cassandra shell](https://devblogs.microsoft.com/cosmosdb/announcing-native-cassandra-shell-preview/) integrations in the Azure Cosmos DB Data Explorer aren't supported with VNET access. This integration is currently in active development.
### Can I specify both virtual network service endpoint and IP access control policy on an Azure Cosmos DB account?
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md
To create a vector index using the IVF (Inverted File) algorithm, use the follow
| `dimensions` | integer | Number of dimensions for vector similarity. The maximum number of supported dimensions is `2000`. |

> [!IMPORTANT]
-> Setting the _numLists_ parameter correctly is important for acheiving good accuracy and performance. We recommend that `numLists` is set to `documentCount/1000` for up to 1 million documents and to `sqrt(documentCount)` for more than 1 million documents.
+> Setting the _numLists_ parameter correctly is important for achieving good accuracy and performance. We recommend that `numLists` is set to `documentCount/1000` for up to 1 million documents and to `sqrt(documentCount)` for more than 1 million documents.
>
> As the number of items in your database grows, you should tune _numLists_ to be larger in order to achieve good latency performance for vector search.
>
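The guidance above translates directly into a small helper. Here's a quick Python sketch of the suggested starting point for `numLists`:

```python
import math

def recommended_num_lists(document_count: int) -> int:
    """Suggested starting value for numLists per the guidance above."""
    if document_count <= 1_000_000:
        return max(1, document_count // 1000)
    return round(math.sqrt(document_count))

print(recommended_num_lists(50_000))     # 50
print(recommended_num_lists(4_000_000))  # 2000
```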
cosmos-db Change Feed Processor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md
ms.devlang: csharp Previously updated : 04/19/2024 Last updated : 07/10/2024
The change feed processor can be hosted in any platform that supports long-runni
Although the change feed processor can run in short-lived environments because the lease container maintains the state, the startup cycle of these environments adds delays to the time it takes to receive notifications (due to the overhead of starting the processor every time the environment is started).
+## Role-based access requirements
+
+When using Microsoft Entra ID as the authentication mechanism, make sure the identity has the proper [permissions](../how-to-setup-rbac.md#permission-model), as illustrated in the sketch after this list:
+
+* On the monitored container:
+ * `Microsoft.DocumentDB/databaseAccounts/readMetadata`
+ * `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed`
+* On the lease container:
+ * `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read`
+ * `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/create`
+ * `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/replace`
+ * `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/delete`
+ * `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/executeQuery`
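For context, the `readChangeFeed` data action on the monitored container is what lets a client authenticated with Microsoft Entra ID read the change feed at all. Here's a minimal sketch using the Python SDK's change feed pull model, with placeholder account, database, and container names:

```python
from azure.identity import DefaultAzureCredential
from azure.cosmos import CosmosClient

# Placeholder endpoint and names; replace with your own resources.
client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential=DefaultAzureCredential(),  # Identity must hold the data actions listed above.
)
container = client.get_database_client("<database>").get_container_client("<monitored-container>")

# Reading the change feed requires the readChangeFeed data action on this container.
for item in container.query_items_change_feed(is_start_from_beginning=True):
    print(item["id"])
```

The change feed processor itself additionally reads and writes lease documents, which is why the lease container needs the item read, create, replace, delete, and query permissions listed above.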
+
## Additional resources

* [Azure Cosmos DB SDK](sdk-dotnet-v2.md)
cosmos-db Troubleshoot Dotnet Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-dotnet-sdk-slow-request.md
Previously updated : 08/02/2023 Last updated : 07/10/2024
For multiple store results for a single request, be aware of the following:
Show the time for the different stages of sending and receiving a request in the transport layer.

* `ChannelAcquisitionStarted`: The time to get or create a new connection. Connections can be created for numerous reasons such as: The previous connection was closed due to inactivity using [CosmosClientOptions.IdleTcpConnectionTimeout](sdk-connection-modes.md#volume-of-connections), the volume of concurrent requests exceeds the [CosmosClientOptions.MaxRequestsPerTcpConnection](sdk-connection-modes.md#volume-of-connections), the connection was closed due to a network error, or the application is not following the [Singleton pattern](#application-design) and new instances are constantly created. Once a connection is established, it is reused for subsequent requests, so this should not impact P99 latency unless the previously mentioned issues are happening.
-* `Pipelined` time is large might be caused by a large request.
-* `Transit time` is large, which leads to a networking problem. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service.
-* `Received` time is large might be caused by a thread starvation problem. This is the time between having the response and returning the result.
+* `Pipelined`: The time spent writing the request into the TCP socket. Requests can only be written to a TCP socket one at a time, so a large value indicates a bottleneck on the TCP socket, which is commonly associated with threads locked by application code or large request sizes.
+* `Transit time`: The time spent on the network after the request was written to the TCP socket. Compare this number to `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, not on the Azure Cosmos DB service. If the request failed with a timeout, this value indicates how long the client waited without a response, pointing to network latency as the source.
+* `Received`: The time between the response being received and being processed by the SDK. A large value is normally caused by thread starvation or locked threads.
### ServiceEndpointStatistics
cosmos-db Tutorial Create Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-create-notebook.md
- Title: |
- Tutorial: Create a Jupyter Notebook in Azure Cosmos DB for NoSQL to analyze and visualize data (preview)
-description: |
- Learn how to use built-in Jupyter notebooks to import data to Azure Cosmos DB for NoSQL, analyze the data, and visualize the output.
--- Previously updated : 09/29/2022-----
-# Tutorial: Create a Jupyter Notebook in Azure Cosmos DB for NoSQL to analyze and visualize data (preview)
--
-> [!WARNING]
-> The Jupyter Notebooks feature of Azure Cosmos DB will be retired March 30, 2024; you will not be able to use built-in Jupyter notebooks from the Azure Cosmos DB account. We recommend using [Visual Studio Code's support for Jupyter notebooks](../nosql/tutorial-create-notebook-vscode.md) or your preferred notebooks client.
-
-This tutorial walks through how to use the Jupyter Notebooks feature of Azure Cosmos DB to import sample retail data to an Azure Cosmos DB for NoSQL account. You'll see how to use the Azure Cosmos DB magic commands to run queries, analyze the data, and visualize the results.
-
-## Prerequisites
--- An existing Azure Cosmos DB for NoSQL account.
- - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-
-## Create a new notebook
-
-In this section, you'll create the Azure Cosmos database, container, and import the retail data to the container.
-
-1. Navigate to your Azure Cosmos DB account and open the **Data Explorer.**
-
-1. Select **New Notebook**.
-
- :::image type="content" source="media/tutorial-create-notebook/new-notebook-option.png" lightbox="media/tutorial-create-notebook/new-notebook-option.png" alt-text="Screenshot of the Data Explorer with the 'New Notebook' option highlighted.":::
-
-1. In the confirmation dialog that appears, select **Create**.
-
- > [!NOTE]
- > A temporary workspace will be created to enable you to work with Jupyter Notebooks. When the session expires, any notebooks in the workspace will be removed.
-
-1. Select the kernel you wish to use for the notebook.
-
-### [Python](#tab/python)
--
-### [C#](#tab/csharp)
----
-> [!TIP]
-> Now that the new notebook has been created, you can rename it to something like **VisualizeRetailData.ipynb**.
-
-## Create a database and container using the SDK
-
-### [Python](#tab/python)
-
-1. Start in the default code cell.
-
-1. Import any packages you require for this tutorial.
-
- ```python
- import azure.cosmos
- from azure.cosmos.partition_key import PartitionKey
- ```
-
-1. Create a database named **RetailIngest** using the built-in SDK.
-
- ```python
- database = cosmos_client.create_database_if_not_exists('RetailIngest')
- ```
-
-1. Create a container named **WebsiteMetrics** with a partition key of `/CartID`.
-
- ```python
- container = database.create_container_if_not_exists(id='WebsiteMetrics', partition_key=PartitionKey(path='/CartID'))
- ```
-
-1. Select **Run** to create the database and container resource.
-
- :::image type="content" source="media/tutorial-create-notebook/run-cell.png" alt-text="Screenshot of the 'Run' option in the menu.":::
-
-### [C#](#tab/csharp)
-
-1. Start in the default code cell.
-
-1. Import any packages you require for this tutorial.
-
- ```csharp
- using Microsoft.Azure.Cosmos;
- ```
-
-1. Create a new instance of the client type using the built-in SDK.
-
- ```csharp
- var cosmosClient = new CosmosClient(Cosmos.Endpoint, Cosmos.Key);
- ```
-
-1. Create a database named **RetailIngest**.
-
- ```csharp
- Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync("RetailIngest");
- ```
-
-1. Create a container named **WebsiteMetrics** with a partition key of `/CartID`.
-
- ```csharp
- Container container = await database.CreateContainerIfNotExistsAsync("WebsiteMetrics", "/CartID");
- ```
-
-1. Select **Run** to create the database and container resource.
-
- :::image type="content" source="media/tutorial-create-notebook/run-cell.png" alt-text="Screenshot of the 'Run' option in the menu.":::
---
-## Import data using magic commands
-
-1. Add a new code cell.
-
-1. Within the code cell, add the following magic command to upload, to your existing container, the JSON data from this url: <https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json>
-
- ```python
- %%upload --databaseName RetailIngest --containerName WebsiteMetrics --url https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
- :::image type="content" source="media/tutorial-create-notebook/run-active-cell.png" alt-text="Screenshot of the 'Run Active Cell' option in the menu.":::
-
- > [!NOTE]
- > The import command should take 5-10 seconds to complete.
-
-1. Observe the output from the run command. Ensure that **2,654** documents were imported.
-
- ```output
- Documents successfully uploaded to WebsiteMetrics
- Total number of documents imported:
- Success: 2654
- Failure: 0
- Total time taken : 00:00:04 hours
- Total RUs consumed : 27309.660000001593
- ```
-
-## Visualize your data
-
-### [Python](#tab/python)
-
-1. Create another new code cell.
-
-1. In the code cell, use a SQL query to populate a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html#pandas.DataFrame).
-
- ```python
- %%sql --database RetailIngest --container WebsiteMetrics --output df_cosmos
- SELECT c.Action, c.Price as ItemRevenue, c.Country, c.Item FROM c
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. Create another new code cell.
-
-1. In the code cell, output the top **10** items from the dataframe.
-
- ```python
- df_cosmos.head(10)
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. Observe the output of running the command.
-
- | | Action | ItemRevenue | Country | Item |
- | | | | | |
- | **0** | Purchased | 19.99 | Macedonia | Button-Up Shirt |
- | **1** | Viewed | 12.00 | Papua New Guinea | Necklace |
- | **2** | Viewed | 25.00 | Slovakia (Slovak Republic) | Cardigan Sweater |
- | **3** | Purchased | 14.00 | Senegal | Flip Flop Shoes |
- | **4** | Viewed | 50.00 | Panama | Denim Shorts |
- | **5** | Viewed | 14.00 | Senegal | Flip Flop Shoes |
- | **6** | Added | 14.00 | Senegal | Flip Flop Shoes |
- | **7** | Added | 50.00 | Panama | Denim Shorts |
- | **8** | Purchased | 33.00 | Palestinian Territory | Red Top |
- | **9** | Viewed | 30.00 | Malta | Green Sweater |
-
-1. Create another new code cell.
-
-1. In the code cell, import the **pandas** package to customize the output of the dataframe.
-
- ```python
- import pandas as pd
- pd.options.display.html.table_schema = True
- pd.options.display.max_rows = None
-
- df_cosmos.groupby("Item").size()
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. In the output, select the **Line Chart** option to view a different visualization of the data.
-
- :::image type="content" source="media/tutorial-create-notebook/pandas-python-line-chart.png" alt-text="Screenshot of the Pandas dataframe visualization for the data as a line chart.":::
-
-### [C#](#tab/csharp)
-
-1. Create a new code cell.
-
-1. In the code cell, create a new C# class to represent an item in the container.
-
- ```csharp
- public class Record
- {
- public string Action { get; set; }
- public decimal Price { get; set; }
- public string Country { get; set; }
- public string Item { get; set; }
- }
- ```
-
-1. Create another new code cell.
-
-1. In the code cell, add code to [execute a SQL query using the SDK](query/index.yml) storing the output of the query in a variable of type <xref:System.Collections.Generic.List%601> named **results**.
-
- ```csharp
- using System.Collections.Generic;
-
- var query = new QueryDefinition(
- query: "SELECT c.Action, c.Price, c.Country, c.Item FROM c"
- );
-
- FeedIterator<Record> feed = container.GetItemQueryIterator<Record>(
- queryDefinition: query
- );
-
- var results = new List<Record>();
- while (feed.HasMoreResults)
- {
- FeedResponse<Record> response = await feed.ReadNextAsync();
- foreach (Record result in response)
- {
- results.Add(result);
- }
- }
- ```
-
-1. Create another new code cell.
-
-1. In the code cell, create a dictionary by adding unique permutations of the **Item** field as the key and the data in the **Price** field as the value.
-
- ```csharp
- var dictionary = new Dictionary<string, decimal>();
-
- foreach(var result in results)
- {
- dictionary.TryAdd (result.Item, result.Price);
- }
-
- dictionary
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. Observe the output with unique combinations of the **Item** and **Price** fields.
-
- ```output
- ...
- Denim Jacket:31.99
- Fleece Jacket:65
- Sandals:12
- Socks:3.75
- Sandal:35.5
- Light Jeans:80
- ...
- ```
-
-1. Create another new code cell.
-
-1. In the code cell, output the **results** variable.
-
- ```csharp
- results
- ```
-
-1. Select **Run Active Cell** to only run the command in this specific cell.
-
-1. In the output, select the **Box Plot** option to view a different visualization of the data.
-
- :::image type="content" source="media/tutorial-create-notebook/pandas-csharp-box-plot.png" alt-text="Screenshot of the Pandas dataframe visualization for the data as a box plot.":::
---
-## Persist your notebook
-
-1. In the **Notebooks** section, open the context menu for the notebook you created for this tutorial and select **Download**.
-
- :::image type="content" source="media/tutorial-create-notebook/download-notebook.png" alt-text="Screenshot of the notebook context menu with the 'Download' option.":::
-
- > [!TIP]
- > To save your work permanently, save your notebooks to a GitHub repository or download the notebooks to your local machine before the session ends.
-
-## Next steps
--- [Learn about the Jupyter Notebooks feature in Azure Cosmos DB](../notebooks-overview.md)-- [Import notebooks from GitHub into an Azure Cosmos DB for NoSQL account](tutorial-import-notebooks.md)-- [Review the FAQ on Jupyter Notebook support](../notebooks-faq.yml)
cosmos-db Tutorial Import Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-import-notebooks.md
- Title: |
- Tutorial: Import Jupyter notebooks from GitHub into Azure Cosmos DB for NoSQL (preview)
-description: |
- Learn how to connect to GitHub and import the notebooks from a GitHub repository to your Azure Cosmos DB for NoSQL account.
--- Previously updated : 09/29/2022-----
-# Tutorial: Import Jupyter notebooks from GitHub into Azure Cosmos DB for NoSQL (preview)
--
-> [!WARNING]
-> The Jupyter Notebooks feature of Azure Cosmos DB will be retired March 30, 2024; you will not be able to use built-in Jupyter notebooks from the Azure Cosmos DB account. We recommend using [Visual Studio Code's support for Jupyter notebooks](../nosql/tutorial-create-notebook-vscode.md) or your preferred notebooks client.
-
-This tutorial walks through how to import Jupyter notebooks from a GitHub repository and run them in an Azure Cosmos DB for NoSQL account. After importing the notebooks, you can run, edit them, and persist your changes back to the same GitHub repository.
-
-## Prerequisites
-
-- An existing Azure Cosmos DB for NoSQL account.
- - If you have an existing Azure subscription, [create a new account](how-to-create-account.md?tabs=azure-portal).
- - No Azure subscription? You can [try Azure Cosmos DB free](../try-free.md) with no credit card required.
-
-## Create a copy of a GitHub repository
-
-1. Navigate to the [azure-samples/cosmos-db-nosql-notebooks](https://github.com/azure-samples/cosmos-db-nosql-notebooks) template repository.
-
-1. Create a new copy of the template repository in your own GitHub account or organization.
-
-## Pull notebooks from GitHub
-
-Instead of creating new notebooks each time you start a workspace, you can import existing notebooks from GitHub. In this section, you'll connect to an existing GitHub repository with sample notebooks.
-
-1. Navigate to your Azure Cosmos DB account and open the **Data Explorer**.
-
-1. Select **Connect to GitHub**.
-
- :::image type="content" source="media/tutorial-import-notebooks/connect-github-option.png" lightbox="media/tutorial-import-notebooks/connect-github-option.png" alt-text="Screenshot of the Data Explorer with the 'Connect to GitHub' option highlighted.":::
-
-1. In the **Connect to GitHub** dialog, select the access option appropriate to your GitHub repository and then select **Authorize access**.
-
- :::image type="content" source="media/tutorial-import-notebooks/authorize-access.png" alt-text="Screenshot of the 'Connect to GitHub' dialog with options for various levels of access.":::
-
-1. Complete the GitHub third-party authorization workflow, granting access to the organization(s) required to access your GitHub repository. For more information, see [Authorizing GitHub Apps](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/authorizing-github-apps).
-
-1. In the **Manage GitHub settings** dialog, select the GitHub repository you created earlier.
-
- :::image type="content" source="media/tutorial-import-notebooks/select-pinned-repositories.png" alt-text="Screenshot of the 'Manage GitHub settings' dialog with a list of unpinned and pinned repositories.":::
-
-1. Back in the Data Explorer, locate the new tree of nodes for your pinned repository and open the **website-metrics-python.ipynb** file.
-
- :::image type="content" source="media/tutorial-import-notebooks/open-notebook-pinned-repositories.png" alt-text="Screenshot of the pinned repositories in the Data Explorer.":::
-
-1. In the editor for the notebook, locate the following cell.
-
- ```python
- import pandas as pd
- pd.options.display.html.table_schema = True
- pd.options.display.max_rows = None
-
- df_cosmos.groupby("Item").size()
- ```
-
-1. The cell currently outputs the number of unique items. Replace the final line of the cell with a new line to output the number of unique actions in the dataset.
-
- ```python
- df_cosmos.groupby("Action").size()
- ```
-
-1. Run all the cells sequentially to see the new dataset. The new dataset should only include three potential values for the **Action** column. Optionally, you can select a data visualization for the results.
-
- :::image type="content" source="media/tutorial-import-notebooks/updated-visualization.png" alt-text="Screenshot of the Pandas dataframe visualization for the data.":::
-
-## Push notebook changes to GitHub
-
-> [!TIP]
-> Currently, temporary workspaces will be de-allocated if left idle for 20 minutes. The maximum amount of usage time per day is 60 minutes. These limits are subject to change in the future.
-
-To save your work permanently, save your notebooks back to the GitHub repository. In this section, you'll persist your changes from the temporary workspace to GitHub as a new commit.
-
-1. Select **Save** to create a commit for your change to the notebook.
-
- :::image type="content" source="media/tutorial-import-notebooks/save-option.png" alt-text="Screenshot of the 'Save' option in the Data Explorer menu.":::
-
-1. In the **Save** dialog, add a descriptive commit message.
-
- :::image type="content" source="media/tutorial-import-notebooks/commit-message-dialog.png" alt-text="Screenshot of the 'Save' dialog with an example of a commit message.":::
-
-1. Navigate to the GitHub repository you created using your browser. The new commit should now be visible in the online repository.
-
- :::image type="content" source="media/tutorial-import-notebooks/updated-github-repository.png" alt-text="Screenshot of the updated notebook on the GitHub website.":::
-
-## Next steps
-
-- [Learn about the Jupyter Notebooks feature in Azure Cosmos DB](../notebooks-overview.md)
-- [Create your first notebook in an Azure Cosmos DB for NoSQL account](tutorial-create-notebook.md)
-- [Review the FAQ on Jupyter Notebook support](../notebooks-faq.yml)
cosmos-db Notebooks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/notebooks-overview.md
- Title: Jupyter Notebooks in Azure Cosmos DB (preview)
-description: Create and use built-in Jupyter Notebooks in Azure Cosmos DB to interactively run queries.
-
-Previously updated : 09/29/2022
-
-# Jupyter Notebooks in Azure Cosmos DB (preview)
--
-> [!WARNING]
-> The Jupyter Notebooks feature of Azure Cosmos DB will be retired March 30, 2024; you will not be able to use built-in Jupyter notebooks from the Azure Cosmos DB account. We recommend using [Visual Studio Code's support for Jupyter notebooks](nosql/tutorial-create-notebook-vscode.md) or your preferred notebooks client.
-
-Jupyter Notebooks is an open-source interactive development environment (IDE) that's designed to create, execute, and share documents that contain live code, equations, visualizations, and narrative text.
-
-Azure Cosmos DB built-in Jupyter Notebooks are directly integrated into the Azure portal and your Azure Cosmos DB accounts, making them convenient and easy to use. Developers, data scientists, engineers, and analysts can use the familiar Jupyter Notebooks experience to perform common tasks. These common tasks include:
-
-- data exploration
-- data cleaning
-- data transformations
-- numerical simulations
-- statistical modeling
-- data visualization
-- machine learning
-
-Azure Cosmos DB supports both C# and Python notebooks for the APIs for NoSQL, Apache Cassandra, Apache Gremlin, Table, and MongoDB. Inside the notebook, you can take advantage of built-in commands and features that make it easy to create Azure Cosmos DB resources. You can also use the built-in commands to upload, query, and visualize your data in Azure Cosmos DB.
--
-## Benefits of Jupyter Notebooks
-
-Jupyter Notebooks were originally developed for data science applications written in Python and R. However, they can be used in various ways for different kinds of projects, including:
-
-### Data visualization
-
-Jupyter Notebooks allow you to visualize data in the form of a shared notebook that renders a data set as a graphic. You can create visualizations, make interactive changes to the shared code and data set, and share the results.
-
-### Code sharing
-
-Services like GitHub provide ways to share code, but they're largely non-interactive. With a Jupyter Notebook, you can view code, execute it, and display the results directly in the Azure portal.
-
-### Live interactions with code
-
-Code in a Jupyter Notebook is dynamic; you can edit it and run the updates incrementally in real time. You can also embed user controls (for example, sliders or text input fields) that are used as input sources for code, demos, or proofs of concept (POCs).
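-
-As a minimal sketch (assuming the `ipywidgets` package is available in the notebook environment; the control name, range, and description below are illustrative placeholders), an embedded slider control could look like this:
-
-```python
-# Minimal sketch: an embedded slider control built with ipywidgets.
-# The value range and description are illustrative placeholders.
-import ipywidgets as widgets
-from IPython.display import display
-
-slider = widgets.IntSlider(value=5, min=0, max=10, description="Threshold")
-display(slider)
-
-# Later cells can read slider.value and use it as an input source.
-```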
-
-### Documentation of code samples and outcomes of data exploration
-
-If you have a piece of code and you want to explain line-by-line how it works, you can embed it in a Jupyter Notebook. You can add interactivity along with the documentation at the same time.
-
-### Built-in commands for Azure Cosmos DB
-
-Azure Cosmos DB's built-in magic commands make it easy to interact with your account. You can use commands like `%%upload` and `%%sql` to upload data into a container and query it using [SQL API syntax](sql-query-getting-started.md). You don't need to write extra custom code.
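-
-For illustration, cells using these magics might look like the following sketch. The database name, container name, and source URL are hypothetical placeholders, and the exact flags can vary by notebooks version:
-
-```python
-%%upload --databaseName RetailDemo --containerName WebsiteData --url https://contoso.blob.core.windows.net/samples/websiteData.json
-```
-
-```python
-%%sql --database RetailDemo --container WebsiteData
-SELECT TOP 10 c.Action, c.Item, c.Price FROM c
-```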
-
-### All in one place environment
-
-Jupyter Notebooks combines multiple assets into a single document including:
-- code
-- rich text
-- images
-- videos
-- animations
-- mathematical equations
-- plots
-- maps
-- interactive figures
-- widgets
-- graphical user interfaces
-
-## Components of a Jupyter Notebook
-
-Jupyter Notebooks can include several types of components, each organized into discrete blocks or cells:
-
-### Text and HTML
-
-Plain text, or text annotated in the markdown syntax to generate HTML, can be inserted into the document at any point. CSS styling can also be included inline or added to the template used to generate the notebook.
-
-### Code and output
-
-Jupyter Notebooks support Python and C# code. The results of the executed code appear immediately after the code blocks, and the code blocks can be executed multiple times in any order you like.
-
-### Visualizations
-
-You can generate graphics and charts from the code by using modules like Matplotlib, Plotly, Bokeh, and others. Similar to the output, these visualizations appear inline next to the code that generates them.
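-
-A minimal sketch of an inline chart with Matplotlib (the item names and prices are illustrative placeholder values):
-
-```python
-# Minimal sketch: rendering an inline bar chart with Matplotlib.
-# The item names and prices are illustrative placeholder data.
-import matplotlib.pyplot as plt
-
-prices = {"Denim Jacket": 31.99, "Fleece Jacket": 65.00, "Sandals": 12.00, "Socks": 3.75}
-plt.bar(list(prices.keys()), list(prices.values()))
-plt.ylabel("Price (USD)")
-plt.title("Sample item prices")
-plt.show()
-```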
-
-### Multimedia
-
-Because Jupyter Notebooks are built on web technology, they can display all the types of multimedia supported by a web page. You can include them in a notebook as HTML elements, or you can generate them programmatically by using the `IPython.display` module.
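-
-As a small sketch (the image URL below is a hypothetical placeholder), generating a multimedia element programmatically looks like this:
-
-```python
-# Minimal sketch: embedding an image programmatically with IPython.display.
-# The URL is a hypothetical placeholder.
-from IPython.display import Image, display
-
-display(Image(url="https://example.com/sample-diagram.png", width=400))
-```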
-
-### Data
-
-You can import the data from Azure Cosmos containers or the results of queries into a Jupyter Notebook programmatically. Use built-in magic commands to upload or query data in Azure Cosmos DB.
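-
-For example, a sketch of pulling query results into a pandas DataFrame with the `%%sql` magic's `--output` flag (the database, container, and query are placeholders):
-
-```python
-%%sql --database RetailDemo --container WebsiteData --output df_cosmos
-SELECT c.Action, c.Item, c.Price FROM c
-```
-
-The resulting `df_cosmos` variable can then be inspected, filtered, or plotted like any other pandas DataFrame.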
-
-## Next steps
-
-To get started with built-in Jupyter Notebooks in Azure Cosmos DB, see the following articles:
-
-- [Create your first notebook in an Azure Cosmos DB for NoSQL account](nosql/tutorial-create-notebook.md)
-- [Import notebooks from GitHub into an Azure Cosmos DB for NoSQL account](nosql/tutorial-import-notebooks.md)
-- [Review the FAQ on Jupyter Notebook support](notebooks-faq.yml)
cosmos-db Try Free https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/try-free.md
After you create a Try Azure Cosmos DB sandbox account, you can start building a
* Use [API for NoSQL to build a console app using .NET](nosql/quickstart-dotnet.md) to manage data in Azure Cosmos DB.
* Use [API for MongoDB to build a sample app using Python](mongodb/quickstart-python.md) to manage data in Azure Cosmos DB.
-* [Create a Jupyter notebook](notebooks-overview.md) and analyze your data.
+* [Create a notebook](nosql/tutorial-create-notebook-vscode.md) and analyze your data.
* Learn more about [understanding your Azure Cosmos DB bill](understand-your-bill.md)
* Get started with Azure Cosmos DB with one of our quickstarts:
  * [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-portal.md#create-container-database)
cost-management-billing Prepare Buy Reservation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md
You can purchase reservations from Azure portal, APIs, PowerShell, CLI. Read the
- [Databricks](prepay-databricks-reserved-capacity.md)
- [Data Explorer](/azure/data-explorer/pricing-reserved-capacity?toc=/azure/cost-management-billing/reservations/toc.json)
- [Dedicated Host](../../virtual-machines/prepay-dedicated-hosts-reserved-instances.md)
+- [Defender for Cloud - Pre-Purchase](/azure/defender-for-cloud/prepurchase-plan?toc=/azure/cost-management-billing/reservations/toc.json)
- [Disk Storage](../../virtual-machines/disks-reserved-capacity.md)
- [Microsoft Fabric](fabric-capacity.md)
- [SAP HANA Large Instances](prepay-hana-large-instances-reserved-capacity.md)
- [Software plans](../../virtual-machines/linux/prepay-suse-software-charges.md?toc=/azure/cost-management-billing/reservations/toc.json)
- [SQL Database](/azure/azure-sql/database/reserved-capacity-overview?toc=/azure/cost-management-billing/reservations/toc.json)
- [Synapse Analytics - data warehouse](prepay-sql-data-warehouse-charges.md)
-- [Synapse Analytics - Prepurchase](synapse-analytics-pre-purchase-plan.md)
+- [Synapse Analytics - Pre-Purchase](synapse-analytics-pre-purchase-plan.md)
- [Virtual machines](../../virtual-machines/prepay-reserved-vm-instances.md?toc=/azure/cost-management-billing/reservations/toc.json)
- [Virtual machine software](buy-vm-software-reservation.md)
data-factory Configure Azure Ssis Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/configure-azure-ssis-integration-runtime-performance.md
$SubnetName = "[your subnet name or leave it empty]" # WARNING: Please use the s
### SSISDB info
$SSISDBServerEndpoint = "[your server name or managed instance name.DNS prefix].database.windows.net" # WARNING: Please ensure that there is no existing SSISDB, so we can prepare and manage one on your behalf
-# Authentication info: SQL or Azure Active Directory (AAD)
+# Authentication info: SQL or Entra ID
$SSISDBServerAdminUserName = "[your server admin username for SQL authentication or leave it empty for AAD authentication]"
$SSISDBServerAdminPassword = "[your server admin password for SQL authentication or leave it empty for AAD authentication]"
$SSISDBPricingTier = "[Basic|S0|S1|S2|S3|S4|S6|S7|S9|S12|P1|P2|P4|P6|P11|P15|…|ELASTIC_POOL(name = <elastic_pool_name>) for Azure SQL Database or leave it empty for SQL Managed Instance]"
data-factory Connector Snowflake Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake-legacy.md
Previously updated : 05/22/2024
Last updated : 07/02/2024

# Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics (legacy)
If your sink data store and format meet the criteria described in this section,
"typeProperties": { "source": { "type": "SnowflakeSource",
- "sqlReaderQuery": "SELECT * FROM MYTABLE",
+ "query": "SELECT * FROM MYTABLE",
"exportSettings": { "type": "SnowflakeExportCopyCommand", "additionalCopyOptions": {
To use this feature, create an [Azure Blob storage linked service](connector-azu
"typeProperties": { "source": { "type": "SnowflakeSource",
- "sqlReaderQuery": "SELECT * FROM MyTable",
+ "query": "SELECT * FROM MyTable",
"exportSettings": { "type": "SnowflakeExportCopyCommand" }
data-factory Connector Snowflake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md
Previously updated : 05/22/2024
Last updated : 06/24/2024

# Copy and transform data in Snowflake using Azure Data Factory or Azure Synapse Analytics
To copy data from Snowflake, the following properties are supported in the Copy
| exportSettings | Advanced settings used to retrieve data from Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | Yes |
| ***Under `exportSettings`:*** | | |
| type | The type of export command, set to **SnowflakeExportCopyCommand**. | Yes |
+| storageIntegration | Specify the name of your storage integration that you created in the Snowflake. For the prerequisite steps of using the storage integration, see [Configuring a Snowflake storage integration](https://docs.snowflake.com/en/user-guide/data-load-azure-config#option-1-configuring-a-snowflake-storage-integration). | No |
| additionalCopyOptions | Additional copy options, provided as a dictionary of key-value pairs. Examples: MAX_FILE_SIZE, OVERWRITE. For more information, see [Snowflake Copy Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#copy-options-copyoptions). | No |
| additionalFormatOptions | Additional file format options that are provided to COPY command as a dictionary of key-value pairs. Examples: DATE_FORMAT, TIME_FORMAT, TIMESTAMP_FORMAT. For more information, see [Snowflake Format Type Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#format-type-options-formattypeoptions). | No |
To copy data from Snowflake, the following properties are supported in the Copy
If your sink data store and format meet the criteria described in this section, you can use the Copy activity to directly copy from Snowflake to sink. The service checks the settings and fails the Copy activity run if the following criteria isn't met:

-- The **sink linked service** is [**Azure Blob storage**](connector-azure-blob-storage.md) with **shared access signature** authentication. If you want to directly copy data to Azure Data Lake Storage Gen2 in the following supported format, you can create an Azure Blob linked service with SAS authentication against your ADLS Gen2 account, to avoid using [staged copy from Snowflake](#staged-copy-from-snowflake).
+- When you specify `storageIntegration` in the source:
+
+ The sink data store is the Azure Blob Storage that you referred in the external stage in Snowflake. You need to complete the following steps before copying data:
+
+ 1. Create an [**Azure Blob Storage**](connector-azure-blob-storage.md) linked service for the sink Azure Blob Storage with any supported authentication types.
+
+ 2. Grant at least **Storage Blob Data Contributor** role to the Snowflake service principal in the sink Azure Blob Storage **Access Control (IAM)**.
+
+- When you don't specify `storageIntegration` in the source:
+
+ The **sink linked service** is [**Azure Blob storage**](connector-azure-blob-storage.md) with **shared access signature** authentication. If you want to directly copy data to Azure Data Lake Storage Gen2 in the following supported format, you can create an Azure Blob Storage linked service with SAS authentication against your Azure Data Lake Storage Gen2 account, to avoid using [staged copy from Snowflake](#staged-copy-from-snowflake).
- The **sink data format** is of **Parquet**, **delimited text**, or **JSON** with the following configurations:
If your sink data store and format meet the criteria described in this section,
"typeProperties": { "source": { "type": "SnowflakeV2Source",
- "sqlReaderQuery": "SELECT * FROM MYTABLE",
+ "query": "SELECT * FROM MYTABLE",
"exportSettings": { "type": "SnowflakeExportCopyCommand", "additionalCopyOptions": {
If your sink data store and format meet the criteria described in this section,
}, "additionalFormatOptions": { "DATE_FORMAT": "'MM/DD/YYYY'"
- }
+ },
+ "storageIntegration": "< Snowflake storage integration name >"
} }, "sink": {
When your sink data store or format isn't natively compatible with the Snowflake
To use this feature, create an [Azure Blob storage linked service](connector-azure-blob-storage.md#linked-service-properties) that refers to the Azure storage account as the interim staging. Then specify the `enableStaging` and `stagingSettings` properties in the Copy activity.
-> [!NOTE]
-> The staging Azure Blob storage linked service must use shared access signature authentication, as required by the Snowflake COPY command. Make sure you grant proper access permission to Snowflake in the staging Azure Blob storage. To learn more about this, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
+- When you specify `storageIntegration` in the source, the interim staging Azure Blob Storage should be the one that you referred in the external stage in Snowflake. Ensure that you create an [Azure Blob Storage](connector-azure-blob-storage.md) linked service for it with any supported authentication, and grant at least **Storage Blob Data Contributor** role to the Snowflake service principal in the staging Azure Blob Storage **Access Control (IAM)**.
+
+- When you don't specify `storageIntegration` in the source, the staging Azure Blob Storage linked service must use shared access signature authentication, as required by the Snowflake COPY command. Make sure you grant proper access permission to Snowflake in the staging Azure Blob Storage. To learn more about this, see this [article](https://docs.snowflake.com/en/user-guide/data-load-azure-config.html#option-2-generating-a-sas-token).
**Example:**
To use this feature, create an [Azure Blob storage linked service](connector-azu
"typeProperties": { "source": { "type": "SnowflakeV2Source",
- "sqlReaderQuery": "SELECT * FROM MyTable",
+ "query": "SELECT * FROM MyTable",
"exportSettings": {
- "type": "SnowflakeExportCopyCommand"
+ "type": "SnowflakeExportCopyCommand",
+ "storageIntegration": "< Snowflake storage integration name >"
} }, "sink": {
To copy data to Snowflake, the following properties are supported in the Copy ac
| importSettings | Advanced settings used to write data into Snowflake. You can configure the ones supported by the COPY into command that the service will pass through when you invoke the statement. | Yes |
| ***Under `importSettings`:*** | | |
| type | The type of import command, set to **SnowflakeImportCopyCommand**. | Yes |
+| storageIntegration | Specify the name of your storage integration that you created in the Snowflake. For the prerequisite steps of using the storage integration, see [Configuring a Snowflake storage integration](https://docs.snowflake.com/en/user-guide/data-load-azure-config#option-1-configuring-a-snowflake-storage-integration). | No |
| additionalCopyOptions | Additional copy options, provided as a dictionary of key-value pairs. Examples: ON_ERROR, FORCE, LOAD_UNCERTAIN_FILES. For more information, see [Snowflake Copy Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#copy-options-copyoptions). | No |
| additionalFormatOptions | Additional file format options provided to the COPY command, provided as a dictionary of key-value pairs. Examples: DATE_FORMAT, TIME_FORMAT, TIMESTAMP_FORMAT. For more information, see [Snowflake Format Type Options](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#format-type-options-formattypeoptions). | No |
To copy data to Snowflake, the following properties are supported in the Copy ac
If your source data store and format meet the criteria described in this section, you can use the Copy activity to directly copy from source to Snowflake. The service checks the settings and fails the Copy activity run if the following criteria isn't met:

-- The **source linked service** is [**Azure Blob storage**](connector-azure-blob-storage.md) with **shared access signature** authentication. If you want to directly copy data from Azure Data Lake Storage Gen2 in the following supported format, you can create an Azure Blob linked service with SAS authentication against your ADLS Gen2 account, to avoid using [staged copy to Snowflake](#staged-copy-to-snowflake).
+- When you specify `storageIntegration` in the sink:
+
+ The source data store is the Azure Blob Storage that you referred in the external stage in Snowflake. You need to complete the following steps before copying data:
+
+ 1. Create an [**Azure Blob Storage**](connector-azure-blob-storage.md) linked service for the source Azure Blob Storage with any supported authentication types.
+
+ 2. Grant at least **Storage Blob Data Reader** role to the Snowflake service principal in the source Azure Blob Storage **Access Control (IAM)**.
+
+- When you don't specify `storageIntegration` in the sink:
+
+ The **source linked service** is [**Azure Blob storage**](connector-azure-blob-storage.md) with **shared access signature** authentication. If you want to directly copy data from Azure Data Lake Storage Gen2 in the following supported format, you can create an Azure Blob Storage linked service with SAS authentication against your Azure Data Lake Storage Gen2 account, to avoid using [staged copy to Snowflake](#staged-copy-to-snowflake).
- The **source data format** is **Parquet**, **Delimited text**, or **JSON** with the following configurations:
If your source data store and format meet the criteria described in this section
}, "fileFormatOptions": { "DATE_FORMAT": "YYYY-MM-DD"
- }
+ },
+ "storageIntegration": "< Snowflake storage integration name >"
        }
    }
}
When your source data store or format isn't natively compatible with the Snowfla
To use this feature, create an [Azure Blob storage linked service](connector-azure-blob-storage.md#linked-service-properties) that refers to the Azure storage account as the interim staging. Then specify the `enableStaging` and `stagingSettings` properties in the Copy activity.
-> [!NOTE]
-> The staging Azure Blob storage linked service need to use shared access signature authentication as required by the Snowflake COPY command.
+- When you specify `storageIntegration` in the sink, the interim staging Azure Blob Storage should be the one that you referred in the external stage in Snowflake. Ensure that you create an [Azure Blob Storage](connector-azure-blob-storage.md) linked service for it with any supported authentication, and grant at least **Storage Blob Data Reader** role to the Snowflake service principal in the staging Azure Blob Storage **Access Control (IAM)**.
+
+- When you don't specify `storageIntegration` in the sink, the staging Azure Blob Storage linked service needs to use shared access signature authentication, as required by the Snowflake COPY command.
**Example:**
To use this feature, create an [Azure Blob storage linked service](connector-azu
"sink": { "type": "SnowflakeV2Sink", "importSettings": {
- "type": "SnowflakeImportCopyCommand"
+ "type": "SnowflakeImportCopyCommand",
+ "storageIntegration": "< Snowflake storage integration name >"
} }, "enableStaging": true,
The Snowflake connector offers new functionalities and is compatible with most f
| :-- | :- |
| Support Basic and Key pair authentication. | Support Basic authentication. |
| Script parameters are not supported in Script activity currently. As an alternative, utilize dynamic expressions for script parameters. For more information, see [Expressions and functions in Azure Data Factory and Azure Synapse Analytics](control-flow-expression-language-functions.md). | Support script parameters in Script activity. |
-| Multiple SQL statements execution in Script activity is not supported currently. To execute multiple SQL statements, divide the query into several script blocks. | Support multiple SQL statements execution in Script activity. |
| Support BigDecimal in Lookup activity. The NUMBER type, as defined in Snowflake, will be displayed as a string in Lookup activity. | BigDecimal is not supported in Lookup activity. |

## Related content
data-factory Copy Activity Performance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-features.md
Previously updated : 06/17/2024
Last updated : 06/24/2024
Configure the **enableStaging** setting in the copy activity to specify whether
| | | | | | enableStaging |Specify whether you want to copy data via an interim staging store. |False |No | | linkedServiceName |Specify the name of an [Azure Blob storage](connector-azure-blob-storage.md#linked-service-properties) or [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#linked-service-properties) linked service, which refers to the instance of Storage that you use as an interim staging store. |N/A |Yes, when **enableStaging** is set to TRUE |
-| path |Specify the path that you want to contain the staged data. If you don't provide a path, the service creates a container to store temporary data. |N/A |No |
+| path |Specify the path that you want to contain the staged data. If you don't provide a path, the service creates a container to store temporary data. |N/A |No (Yes when `storageIntegration` in Snowflake connector is specified) |
| enableCompression |Specifies whether data should be compressed before it's copied to the destination. This setting reduces the volume of data being transferred. |False |No |

>[!NOTE]
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md
Data Preview and Validation: During the Data Copy activity, tools are provided f
Customizable Data Flows: Azure Data Factory allows you to create customizable data flows. This feature allows you to add custom actions or steps for data processing.
-Integrated Security: Azure Data Factory offers integrated security features such as Azure Active Directory integration and role-based access control to control access to dataflows. This feature increases security in data processing and protects your data.
+Integrated Security: Azure Data Factory offers integrated security features such as Entra ID integration and role-based access control to control access to dataflows. This feature increases security in data processing and protects your data.
## Usage scenarios
data-factory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md
Be a part of Azure Data Factory studio preview features - Experience the latest
<tr><td><b>Region expansion</b></td><td>Data Factory is now available in West US3 and Jio India West</td><td>Data Factory is now available in two new regions: West US3 and Jio India West. You can colocate your ETL workflow in these new regions if you're using these regions to store and manage your modern data warehouse. You can also use these regions for business continuity and disaster recovery purposes if you need to fail over from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr>
-<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory (Azure AD) tenant</td><td>You can connect your Data Factory instance to an Azure DevOps account in a different Azure AD tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
+<tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Entra ID tenant</td><td>You can connect your Data Factory instance to an Azure DevOps account in a different Azure AD tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
</table>

## January 2022
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
To create an exemption rule:
## After creating the exemption
-After creating the exemption, it can take up to 30 minutes to take effect. After it takes effect:
+After creating the exemption, it can take up to 24 hours to take effect. After it takes effect:
- The recommendation or resources won't impact your secure score.
- If you exempted specific resources, they'll be listed in the **Not applicable** tab of the recommendation details page.
defender-for-cloud Release Notes Recommendations Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-recommendations-alerts.md
Title: New and upcoming changes in Defender for Cloud recommendations and alerts
+ Title: New and upcoming changes in recommendations and alerts
description: Get release notes for new and upcoming changes in recommendations and alerts in Microsoft Defender for Cloud.

#customer intent: As a Defender for Cloud admin, I want to stay up to date on the latest new and changed security recommendations and alerts.
Last updated 03/18/2024
This article summarizes what's new in security recommendations and alerts in Microsoft Defender for Cloud. It includes information about new, modified, and deprecated recommendations and alerts.
+<!-- Please don't adjust this next line without getting approval from the Defender for Cloud documentation team. It is necessary for proper RSS functionality. -->
+- This page is updated frequently with the latest recommendations and alerts in Defender for Cloud.
- Find the latest information about new and updated Defender for Cloud features in [What's new in Defender for Cloud features](release-notes.md).
- Find items older than six months in the [What's new archive](release-notes-archive.md).
+> [!TIP]
+> Get notified when this page is updated by copying and pasting the following URL into your feed reader:
+>
+> `https://aka.ms/mdc/rss-recommendations-alerts`
+ - Review a complete list of multicloud security recommendations and alerts:
- - [Compute recommendations](recommendations-reference-compute.md)
- - [Container recommendations](recommendations-reference-container.md)
- - [Data recommendations](recommendations-reference-data.md)
- - [DevOps recommendations](recommendations-reference-devops.md)
- - [Identity and access recommendations](recommendations-reference-identity-access.md)
- - [IoT recommendations](recommendations-reference-iot.md)
- - [Networking recommendations](recommendations-reference-networking.md)
- - [Deprecated recommendations](recommendations-reference-deprecated.md)
- - [Security alerts](alerts-reference.md).
+ - [Compute recommendations](recommendations-reference-compute.md)
+ - [Container recommendations](recommendations-reference-container.md)
+ - [Data recommendations](recommendations-reference-data.md)
+ - [DevOps recommendations](recommendations-reference-devops.md)
+ - [Identity and access recommendations](recommendations-reference-identity-access.md)
+ - [IoT recommendations](recommendations-reference-iot.md)
+ - [Networking recommendations](recommendations-reference-networking.md)
+ - [Deprecated recommendations](recommendations-reference-deprecated.md)
+ - [Security alerts](alerts-reference.md).
## Recommendations and alert updates
New and updated recommendations and alerts are added to the table in date order.
<!-- 6. If you're adding a new alert here, make sure you also add it to the alerts reference page--> <!-- 7. After adding the alert to the alerts reference page or adding the recommendation to the recommendations page, in Name, add the name of the alert or recommendation, and add a link to the relevant entry that you added in the alerts or recommendations reference page. Note that all details about the alert or recommendation should be on the reference page. This page should only have minimum information.-->
+| **Date** | **Type** | **State** | **Name** |
+| -- | | | |
+| June 28 | Recommendation | GA | [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](recommendations-reference-devops.md#preview-azure-devops-repositories-should-require-minimum-two-reviewer-approval-for-code-pushes) |
+| June 28 | Recommendation | GA | [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](recommendations-reference-devops.md#preview-azure-devops-repositories-should-not-allow-requestors-to-approve-their-own-pull-requests) |
+| June 28 | Recommendation | GA | [GitHub organizations should not make action secrets accessible to all repositories](recommendations-reference-devops.md#github-organizations-should-not-make-action-secrets-accessible-to-all-repositories) |
+| June 27 | Alert | Deprecation | `Security incident detected suspicious source IP activity`<br><br/> Severity: Medium/High |
+| June 27 | Alert | Deprecation | `Security incident detected on multiple resources`<br><br/> Severity: Medium/High |
+| June 27 | Alert | Deprecation | `Security incident detected compromised machine`<br><br/> Severity: Medium/High |
+| June 27 | Alert | Deprecation | `Security incident detected suspicious virtual machines activity`<br><br/> Severity: Medium/High |
+| May 30 | Recommendation | GA | [Linux virtual machines should enable Azure Disk Encryption (ADE) or EncryptionAtHost](recommendations-reference-compute.md#edr-solution-should-be-installed-on-virtual-machines). Assessment key a40cc620-e72c-fdf4-c554-c6ca2cd705c0 |
+| May 30 | Recommendation | GA | [Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](recommendations-reference-compute.md#edr-solution-should-be-installed-on-virtual-machines). Assessment key 0cb5f317-a94b-6b80-7212-13a9cc8826af |
+| May 28 | Recommendation | GA | [Machine should be configured securely (powered by MDVM)](recommendations-reference-compute.md#machines-should-be-configured-securely) |
+| May 1 | Recommendation | Upcoming deprecation | [System updates should be installed on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27).<br/><br/> Estimated deprecation: July 2024. |
+| May 1 | Recommendation | Upcoming deprecation | [System updates on virtual machine scale sets should be installed](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/bd20bd91-aaf1-7f14-b6e4-866de2f43146).<br/><br/> Estimated deprecation: July 2024. |
+| May 1 | Recommendation | Upcoming deprecation | [Log Analytics agent should be installed on Windows-based Azure Arc-enabled machines](recommendations-reference-compute.md#log-analytics-agent-should-be-installed-on-windows-based-azure-arc-enabled-machines)<br/><br/>Estimated deprecation: July 2024 |
+| May 1 | Recommendation | Upcoming deprecation | [Log Analytics agent should be installed on virtual machine scale sets](recommendations-reference-compute.md#log-analytics-agent-should-be-installed-on-virtual-machine-scale-sets)<br/><br/>Estimated deprecation: July 2024 |
+| May 1 | Recommendation | Upcoming deprecation | Auto provisioning of the Log Analytics agent should be enabled on subscriptions<br/><br/>Estimated deprecation: July 2024 |
+| May 1 | Recommendation | Upcoming deprecation | [Log Analytics agent should be installed on virtual machines](recommendations-reference-compute.md#log-analytics-agent-should-be-installed-on-virtual-machines)<br/><br/>Estimated deprecation: July 2024 |
+| May 1 | Recommendation | Upcoming deprecation | [Adaptive application controls for defining safe applications should be enabled on your machines](recommendations-reference-compute.md#adaptive-application-controls-for-defining-safe-applications-should-be-enabled-on-your-machines)<br/><br/>Estimated deprecation: July 2024 |
+| April 18 | Alert | Deprecation | `Fileless attack toolkit detected (VM_FilelessAttackToolkit.Windows)`<br/>`Fileless attack technique detected (VM_FilelessAttackTechnique.Windows)`<br/>`Fileless attack behavior detected (VM_FilelessAttackBehavior.Windows)`<br/>`Fileless Attack Toolkit Detected (VM_FilelessAttackToolkit.Linux)`<br/>`Fileless Attack Technique Detected (VM_FilelessAttackTechnique.Linux)`<br/>`Fileless Attack Behavior Detected (VM_FilelessAttackBehavior.Linux)`<br/><br/>Fileless attack alerts for Windows and Linux VMs will be discontinued. Instead, alerts will be generated by Defender for Endpoint. If you already have the Defender for Endpoint integration enabled in Defender for Servers, there's no action required on your part. In May 2024 you might experience a decrease in your alerts volume, but still remain protected. If you don't currently have integration enabled, enable it to maintain and improve alert coverage. All Defender for Server customers can access the full value of Defender for Endpoint's integration at no additional cost. [Learn more](enable-defender-for-endpoint.md). |
+| April 3 | Recommendation | Upcoming deprecation | [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](recommendations-reference-compute.md#virtual-machines-should-encrypt-temp-disks-caches-and-data-flows-between-compute-and-storage-resources)<br/><br/>Estimated deprecation date: May 2024. |
+| April 3 | Recommendation | Preview | [Container images in Azure registry should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-container-images-in-azure-registry-should-have-vulnerability-findings-resolved) |
+| April 3 | Recommendation | Preview | [Containers running in Azure should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-containers-running-in-azure-should-have-vulnerability-findings-resolved) |
+| April 3 | Recommendation | Preview | [Container images in AWS registry should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-container-images-in-aws-registry-should-have-vulnerability-findings-resolved) |
+| April 3 | Recommendation | Preview | [Containers running in AWS should have vulnerability findings resolved (Preview)](recommendations-reference-aws.md#preview-containers-running-in-aws-should-have-vulnerability-findings-resolved) |
+| April 3 | Recommendation | Preview | [Container images in GCP registry should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-container-images-in-gcp-registry-should-have-vulnerability-findings-resolved) |
+| April 3 | Recommendation | Preview | [Containers running in GCP should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-containers-running-in-gcp-should-have-vulnerability-findings-resolved) |
+| April 2 | Recommendation | Upcoming deprecation | [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/12018f4f-3d10-999b-e4c4-86ec25be08a1).<br/><br/> There's no effect since these resources no longer exist. Estimated date: July 30, 2024 |
+| April 2 | Recommendation | Update | [Azure AI Services should restrict network access](recommendations-reference-ai.md#azure-ai-services-resources-should-restrict-network-access). |
+| April 2 | Recommendation | Update | [Azure AI Services should have key access disabled (disable local authentication)](recommendations-reference-ai.md#azure-ai-services-resources-should-have-key-access-disabled-disable-local-authentication). |
+| April 2 | Recommendation | Update | [Diagnostic logs in Azure AI services resources should be enabled](recommendations-reference-ai.md#diagnostic-logs-in-azure-ai-services-resources-should-be-enabled). |
+| April 2 | Recommendation | Deprecation | Public network access should be disabled for Cognitive Services accounts. |
+| April 2 | Recommendation | GA | [Azure registry container images should have vulnerabilities resolved](recommendations-reference-container.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management) |
+| April 2 | Recommendation | Deprecation | [Public network access should be disabled for Cognitive Services accounts](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7) |
+| April 2 | Recommendation | GA | [Azure running container images should have vulnerabilities resolved](recommendations-reference-container.md#azure-running-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management) |
+| April 2 | Recommendation | GA | [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-container-images-in-aws-registry-should-have-vulnerability-findings-resolved) |
+| April 2 | Recommendation | GA | [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-containers-running-in-aws-should-have-vulnerability-findings-resolved) |
+| April 2 | Recommendation | GA | [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-container-images-in-gcp-registry-should-have-vulnerability-findings-resolved) |
+| April 2 | Recommendation | GA | [GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-containers-running-in-gcp-should-have-vulnerability-findings-resolved) |
+| March 28 | Recommendation | Upcoming | Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost (assessment key a40cc620-e72c-fdf4-c554-c6ca2cd705c0) |
+| March 28 | Recommendation | Upcoming | Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost (assessment key 0cb5f317-a94b-6b80-7212-13a9cc8826af)<br/><br/>Unified disk encryption recommendations will be available for GA in the Azure public cloud in April 2024, replacing the recommendation "Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources." |
+| March 18 | Recommendation | GA | [EDR solution should be installed on virtual machines](recommendations-reference-compute.md#edr-solution-should-be-installed-on-virtual-machines) |
+| March 18 | Recommendation | GA | [EDR configuration issues should be resolved on virtual machines](recommendations-reference-compute.md#edr-configuration-issues-should-be-resolved-on-virtual-machines) |
+| March 18 | Recommendation | GA | [EDR configuration issues should be resolved on EC2s](recommendations-reference-compute.md#edr-configuration-issues-should-be-resolved-on-ec2s) |
+| March 18 | Recommendation | GA | [EDR solution should be installed on EC2s](recommendations-reference-compute.md#edr-solution-should-be-installed-on-ec2s) |
+| March 18 | Recommendation | GA | [EDR configuration issues should be resolved on GCP virtual machines](recommendations-reference-compute.md#edr-configuration-issues-should-be-resolved-on-gcp-virtual-machines) |
+| March 18 | Recommendation | GA | [EDR solution should be installed on GCP virtual machines](recommendations-reference-compute.md#edr-solution-should-be-installed-on-gcp-virtual-machines) |
+| End March | Recommendation | Deprecation | [Endpoint protection should be installed on machines](recommendations-reference-deprecated.md#endpoint-protection-should-be-installed-on-machines). |
+| End March | Recommendation | Deprecation | [Endpoint protection health issues on machines should be resolved](recommendations-reference-deprecated.md#endpoint-protection-health-issues-on-machines-should-be-resolved) |
+| March 5 | Recommendation | Deprecation | Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI) |
+| March 5 | Recommendation | Deprecation | Over-provisioned identities in subscriptions should be investigated to reduce the Permission Creep Index (PCI) |
+| February 20 | Recommendation | Upcoming | [Azure AI Services resources should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) |
+| February 20 | Recommendation | Upcoming | [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6) |
+| February 12 | Recommendation | Deprecation | [`Public network access should be disabled for Cognitive Services accounts`](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7). Estimated deprecation: March 14 2024 |
+| February 8 | Recommendation | Preview | [(Preview) Azure Stack HCI servers should meet secured-core requirements](recommendations-reference-compute.md#preview-azure-stack-hci-servers-should-meet-secured-core-requirements) |
+| February 8 | Recommendation | Preview | [(Preview) Azure Stack HCI servers should have consistently enforced application control policies](recommendations-reference-compute.md#preview-azure-stack-hci-servers-should-have-consistently-enforced-application-control-policies) |
+| February 8 | Recommendation | Preview | [(Preview) Azure Stack HCI systems should have encrypted volumes](recommendations-reference-compute.md#preview-azure-stack-hci-systems-should-have-encrypted-volumes) |
+| February 8 | Recommendation | Preview | [(Preview) Host and VM networking should be protected on Azure Stack HCI systems](recommendations-reference-compute.md#preview-host-and-vm-networking-should-be-protected-on-azure-stack-hci-systems) |
+| February 1 | Recommendation | Upcoming | EDR solution should be installed on virtual machines<br/>EDR configuration issues should be resolved on virtual machines<br/>EDR solution should be installed on EC2s<br/>EDR configuration issues should be resolved on EC2s<br/>EDR configuration issues should be resolved on GCP virtual machines<br/>EDR solution should be installed on GCP virtual machines. |
+| January 25 | Alert (Container) | Deprecation | `Anomalous pod deployment (Preview) (K8S_AnomalousPodDeployment)` |
+| January 25 | Alert (Container) | Deprecation | `Excessive role permissions assigned in Kubernetes cluster (Preview) (K8S_ServiceAcountPermissionAnomaly)` |
+| January 25 | Alert (Container) | Deprecation | `Anomalous access to Kubernetes secret (Preview) (K8S_AnomalousSecretAccess)` |
+| January 25 | Alert (Windows machines) | Update to informational | `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlWindowsViolationAudited)` |
+| January 25 | Alert (Windows machines) | Update to informational | `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlLinuxViolationAudited)` |
+| January 25 | Alert (Container) | Update to informational | `Attempt to create a new Linux namespace from a container detected (K8S.NODE_NamespaceCreation)` |
+| January 25 | Alert (Container) | Update to informational | `Attempt to stop apt-daily-upgrade.timer service detected (K8S.NODE_TimerServiceDisabled)` |
+| January 25 | Alert (Container) | Update to informational | `Command within a container running with high privileges (K8S.NODE_PrivilegedExecutionInContainer)` |
+| January 25 | Alert (Container) | Update to informational | `Container running in privileged mode (K8S.NODE_PrivilegedContainerArtifacts)` |
+| January 25 | Alert (Container) | Update to informational | `Container with a sensitive volume mount detected (K8S_SensitiveMount)` |
+| January 25 | Alert (Container) | Update to informational | `Creation of admission webhook configuration detected (K8S_AdmissionController)` |
+| January 25 | Alert (Container) | Update to informational | `Detected suspicious file download (K8S.NODE_SuspectDownloadArtifacts)` |
+| January 25 | Alert (Container) | Update to informational | `Docker build operation detected on a Kubernetes node (K8S.NODE_ImageBuildOnNode)` |
+| January 25 | Alert (Container) | Update to informational | `New container in the kube-system namespace detected (K8S_KubeSystemContainer)` |
+| January 25 | Alert (Container) | Update to informational | `New high privileges role detected (K8S_HighPrivilegesRole)` |
+| January 25 | Alert (Container) | Update to informational | `Privileged container detected (K8S_PrivilegedContainer)` |
+| January 25 | Alert (Container) | Update to informational | `Process seen accessing the SSH authorized keys file in an unusual way (K8S.NODE_SshKeyAccess)` |
+| January 25 | Alert (Container) | Update to informational | `Role binding to the cluster-admin role detected (K8S_ClusterAdminBinding)` |
+| January 25 | Alert (Container) | Update to informational | `SSH server is running inside a container (K8S.NODE_ContainerSSH)` |
+| January 25 | Alert (DNS) | Update to informational | `Communication with suspicious algorithmically generated domain (AzureDNS_DomainGenerationAlgorithm)` |
+| January 25 | Alert (DNS) | Update to informational | `Communication with suspicious algorithmically generated domain (DNS_DomainGenerationAlgorithm)` |
+| January 25 | Alert (DNS) | Update to informational | `Communication with suspicious random domain name (Preview) (DNS_RandomizedDomain)` |
+| January 25 | Alert (DNS) | Update to informational | `Communication with suspicious random domain name (AzureDNS_RandomizedDomain)` |
+| January 25 | Alert (DNS) | Update to informational | `Communication with possible phishing domain (AzureDNS_PhishingDomain)` |
+| January 25 | Alert (DNS) | Update to informational | `Communication with possible phishing domain (Preview) (DNS_PhishingDomain)` |
+| January 25 | Alert (Azure App Service) | Update to informational | `NMap scanning detected (AppServices_Nmap)` |
+| January 25 | Alert (Azure App Service) | Update to informational | `Suspicious User Agent detected (AppServices_UserAgentInjection)` |
+| January 25 | Alert (Azure network layer) | Update to informational | `Possible incoming SMTP brute force attempts detected (Generic_Incoming_BF_OneToOne)` |
+| January 25 | Alert (Azure network layer) | Update to informational | `Traffic detected from IP addresses recommended for blocking (Network_TrafficFromUnrecommendedIP)` |
+| January 25 | Alert (Azure Resource Manager) | Update to informational | `Privileged custom role created for your subscription in a suspicious way (Preview)(ARM_PrivilegedRoleDefinitionCreation)` |
+| January 4 | Recommendation | Preview | [Cognitive Services accounts should have local authentication methods disabled](recommendations-reference-data.md#cognitive-services-accounts-should-have-local-authentication-methods-disabled)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Cognitive Services should use private link](recommendations-reference-data.md#cognitive-services-should-use-private-link)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Virtual machines and virtual machine scale sets should have encryption at host enabled](recommendations-reference-compute.md#virtual-machines-and-virtual-machine-scale-sets-should-have-encryption-at-host-enabled)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Azure Cosmos DB should disable public network access](recommendations-reference-data.md#azure-cosmos-db-should-disable-public-network-access)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Cosmos DB accounts should use private link](recommendations-reference-data.md#cosmos-db-accounts-should-use-private-link)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | VPN gateways should use only Azure Active Directory (Azure AD) authentication for point-to-site users<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Azure SQL Database should be running TLS version 1.2 or newer](recommendations-reference-data.md#azure-sql-database-should-be-running-tls-version-12-or-newer)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Azure SQL Managed Instances should disable public network access](recommendations-reference-data.md#azure-sql-managed-instances-should-disable-public-network-access)<br/> Microsoft Cloud Security Benchmark |
+| January 4 | Recommendation | Preview | [Storage accounts should prevent shared key access](recommendations-reference-data.md#storage-accounts-should-prevent-shared-key-access)<br/> Microsoft Cloud Security Benchmark |
+| December 14 | Recommendation | Preview | [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management)<br/><br/>Vulnerability assessment for Linux container images with Microsoft Defender Vulnerability Management. |
+| December 14 | Recommendation | GA | [Azure running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#azure-running-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management)<br/><br/> Vulnerability assessment for Linux container images with Microsoft Defender Vulnerability Management. |
+| December 14 | Recommendation | Rename | **New**: [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](recommendations-reference-container.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-qualys). Vulnerability assessment for container images using Qualys.<br/>**Old**: Container registry images should have vulnerability findings resolved (powered by Qualys) |
+| December 14 | Recommendation | Rename | **New**: [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](recommendations-reference-container.md#azure-running-container-images-should-have-vulnerabilities-resolvedpowered-by-qualys)<br/><br/> Vulnerability assessment for container images using Qualys.<br/>**Old**: Running container images should have vulnerability findings resolved (powered by Qualys) |
+| December 4 | Alert | Preview | `Malicious blob was downloaded from a storage account (Preview)`<br/><br/> MITRE tactics: Lateral movement |
-**Date** | **Type** | **State** | **Name**
- | | |
-June 28 | Recommendation | GA | [Azure DevOps repositories should require minimum two-reviewer approval for code pushes](recommendations-reference-devops.md#preview-azure-devops-repositories-should-require-minimum-two-reviewer-approval-for-code-pushes) |
-June 28 | Recommendation | GA | [Azure DevOps repositories should not allow requestors to approve their own Pull Requests](recommendations-reference-devops.md#preview-azure-devops-repositories-should-not-allow-requestors-to-approve-their-own-pull-requests) |
-June 28 | Recommendation | GA | [GitHub organizations should not make action secrets accessible to all repositories](recommendations-reference-devops.md#github-organizations-should-not-make-action-secrets-accessible-to-all repositories)
-June 27 | Alert | Deprecation | `Security incident detected suspicious source IP activity`<br><br/> Severity: Medium/High
-June 27 | Alert | Deprecation | `Security incident detected on multiple resources`<br><br/> Severity: Medium/High
-June 27 | Alert | Deprecation | `Security incident detected compromised machine`<br><br/> Severity: Medium/High
-June 27 | Alert | Deprecation | `Security incident detected suspicious virtual machines activity`<br><br/> Severity: Medium/High
-May 30 |Recommendation | GA | [Linux virtual machines should enable Azure Disk Encryption (ADE) or EncryptionAtHost](recommendations-reference-compute.md#edr-solution-should-be-installed-on-virtual-machines). Assessment key a40cc620-e72c-fdf4-c554-c6ca2cd705c0
-May 30 | Recommendation | GA | [Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost](recommendations-reference-compute.md#edr-solution-should-be-installed-on-virtual-machines). Assessment key 0cb5f317-a94b-6b80-7212-13a9cc8826af
-May 28 | Recommendation | GA| [Machine should be configured securely (powered by MDVM)](recommendations-reference-compute.md#machines-should-be-configured-securely) |
-May 1 | Recommendation | Upcoming deprecation | [System updates should be installed on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27s).<br/><br/> Estimated deprecation: July 2024.
-May 1 | Recommendation | Upcoming deprecation | [System updates on virtual machine scale sets should be installed](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/bd20bd91-aaf1-7f14-b6e4-866de2f43146).<br/><br/> Estimated deprecation: July 2024.
-May 1 | Recommendation | Upcoming deprecation | [Log Analytics agent should be installed on Windows-based Azure Arc-enabled machines](recommendations-reference-compute.md#log-analytics-agent-should-be-installed-on-windows-based-azure-arc-enabled-machines)<br/><br/>Estimated deprecation: July 2024
-May 1 | Recommendation | Upcoming deprecation | [Log Analytics agent should be installed on virtual machine scale sets](recommendations-reference-compute.md#log-analytics-agent-should-be-installed-on-virtual-machine-scale-sets)<br/><br/>Estimated deprecation: July 2024
-May 1 | Recommendation | Upcoming deprecation| Auto provisioning of the Log Analytics agent should be enabled on subscriptions<br/><br/>Estimated deprecation: July 2024
-May 1 | Recommendation | Upcoming deprecation | [Log Analytics agent should be installed on virtual machines](recommendations-reference-compute.md#log-analytics-agent-should-be-installed-on-virtual-machines)<br/><br/>Estimated deprecation: July 2024
-May 1 | Recommendation | Upcoming deprecation | [Adaptive application controls for defining safe applications should be enabled on your machines](recommendations-reference-compute.md#adaptive-application-controls-for-defining-safe-applications-should-be-enabled-on-your-machines)<br/><br/>Estimated deprecation: July 2024
-April 18 | Alert | Deprecation | `Fileless attack toolkit detected (VM_FilelessAttackToolkit.Windows)`<br/>`Fileless attack technique detected (VM_FilelessAttackTechnique.Windows)`<br/>`Fileless attack behavior detected (VM_FilelessAttackBehavior.Windows)`<br/>`Fileless Attack Toolkit Detected (VM_FilelessAttackToolkit.Linux)`<br/>`Fileless Attack Technique Detected (VM_FilelessAttackTechnique.Linux)`<br/>`Fileless Attack Behavior Detected (VM_FilelessAttackBehavior.Linux)`<br/><br/>Fileless attack alerts for Windows and Linux VMs will be discontinued. Instead, alerts will be generated by Defender for Endpoint. If you already have the Defender for Endpoint integration enabled in Defender for Servers, there's no action required on your part. In May 2024 you might experience a decrease in your alerts volume, but still remain protected. If you don't currently have integration enabled, enable it to maintain and improve alert coverage. All Defender for Server customers can access the full value of Defender for Endpoint's integration at no additional cost. [Learn more](enable-defender-for-endpoint.md).
-April 3 | Recommendation | Upcoming deprecation | [Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](recommendations-reference-compute.md#virtual-machines-should-encrypt-temp-disks-caches-and-data-flows-between-compute-and-storage-resources)<br/><br/>Estimated deprecation date: May 2024.
-April 3 | Recommendation | Preview | [Container images in Azure registry should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-container-images-in-azure-registry-should-have-vulnerability-findings-resolved)
-April 3 | Recommendation | Preview | [Containers running in Azure should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-containers-running-in-azure-should-have-vulnerability-findings-resolved)
-April 3 | Recommendation | Preview | [Container images in AWS registry should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-container-images-in-aws-registry-should-have-vulnerability-findings-resolved)
-April 3 | Recommendation | Preview | [Containers running in AWS should have vulnerability findings resolved (Preview)](recommendations-reference-aws.md#preview-containers-running-in-aws-should-have-vulnerability-findings-resolved)
-April 3 | Recommendation | Preview | [Container images in GCP registry should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-container-images-in-gcp-registry-should-have-vulnerability-findings-resolved)
-April 3 | Recommendation | Preview | [Containers running in GCP should have vulnerability findings resolved (Preview)](recommendations-reference-container.md#preview-containers-running-in-gcp-should-have-vulnerability-findings-resolved)
-April 2 | Recommendation | Upcoming deprecation| [Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/12018f4f-3d10-999b-e4c4-86ec25be08a1)The.<br/><br/> There's no effect since these resources no longer exist. Estimated date: July 30, 2024
-April 2 | Recommendation | Update | [Azure AI Services should restrict network access](recommendations-reference-ai.md#azure-ai-services-resources-should-restrict-network-access).
-April 2 | Recommendation | Update | [Azure AI Services should have key access disabled (disable local authentication)](recommendations-reference-ai.md#azure-ai-services-resources-should-have-key-access-disabled-disable-local-authentication).
-April 2 | Recommendation | Update | [Diagnostic logs in Azure AI services resources should be enabled](recommendations-reference-ai.md#diagnostic-logs-in-azure-ai-services-resources-should-be-enabled).
-April 2 | Recommendation | Deprecation | Public network access should be disabled for Cognitive Services accounts.
-April 2 | Recommendation | GA | [Azure registry container images should have vulnerabilities resolved](recommendations-reference-container.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management)
-April 2 | Recommendation | Deprecation | [Public network access should be disabled for Cognitive Services accounts](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7)
-April 2 | Recommendation | GA | [Azure running container images should have vulnerabilities resolved](recommendations-reference-container.md#azure-running-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management)
-April 2 | Recommendation | GA | [AWS registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-container-images-in-aws-registry-should-have-vulnerability-findings-resolved)
-April 2 | Recommendation | GA | [AWS running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-containers-running-in-aws-should-have-vulnerability-findings-resolved)|
-April 2 | Recommendation | GA | [GCP registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-container-images-in-gcp-registry-should-have-vulnerability-findings-resolved)|
-April 2 | Recommendation | GA | [GCP running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#preview-containers-running-in-gcp-should-have-vulnerability-findings-resolved)
-March 28 | Recommendation | Upcoming | Linux virtual machines should enable Azure Disk Encryption or EncryptionAtHost (assessment key a40cc620-e72c-fdf4-c554-c6ca2cd705c0)
-March 28 | Recommendation | Upcoming | Windows virtual machines should enable Azure Disk Encryption or EncryptionAtHost (assessment key 0cb5f317-a94b-6b80-7212-13a9cc8826af)<br/><br/>Unified disk encryption recommendations will be available for GA in the Azure public cloud in April 2024, replacing the recommendation "Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources."
-March 18 | Recommendation | GA | [EDR solution should be installed on virtual machines](recommendations-reference-compute.md#edr-solution-should-be-installed-on-virtual-machines)
-March 18 | Recommendation | GA | [EDR configuration issues should be resolved on virtual machines](recommendations-reference-compute.md#edr-configuration-issues-should-be-resolved-on-virtual-machines)
-March 18 | Recommendation | GA | [EDR configuration issues should be resolved on EC2s](recommendations-reference-compute.md#edr-configuration-issues-should-be-resolved-on-ec2s)
-March 18 | Recommendation | GA | [EDR solution should be installed on EC2s]
-March 18 | Recommendation | GA | [EDR configuration issues should be resolved on GCP virtual machines](recommendations-reference-compute.md#edr-configuration-issues-should-be-resolved-on-gcp-virtual-machines)
-March 18 | Recommendation | GA | [EDR solution should be installed on GCP virtual machines](recommendations-reference-compute.md#edr-solution-should-be-installed-on-gcp-virtual-machines)
-End March | Recommendation | Deprecation | [Endpoint protection should be installed on machines](recommendations-reference-deprecated.md#endpoint-protection-should-be-installed-on-machines) .
-End March | Recommendation | Deprecation | [Endpoint protection health issues on machines should be resolved](recommendations-reference-deprecated.md#endpoint-protection-health-issues-on-machines-should-be-resolved)
-March 5 | Recommendation | Deprecation | Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI)
-March 5 | Recommendation | Deprecation | Over-provisioned identities in subscriptions should be investigated to reduce the Permission Creep Index (PCI)
-February 20 | Recommendation | Upcoming | [Azure AI Services resources should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243)
-February 20 | Recommendation | Upcoming | [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6)
-February 12 | Recommendation | Deprecation | [`Public network access should be disabled for Cognitive Services accounts`](https://ms.portal.azure.com/?feature.msaljs=true#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/684a5b6d-a270-61ce-306e-5cea400dc3a7). Estimated deprecation: March 14 2024
-February 8 | Recommendation | Preview | [(Preview) Azure Stack HCI servers should meet secured-core requirements](recommendations-reference-compute.md#preview-azure-stack-hci-servers-should-meet-secured-core-requirements)
-February 8 | Recommendation | Preview| [(Preview) Azure Stack HCI servers should have consistently enforced application control policies](recommendations-reference-compute.md#preview-azure-stack-hci-servers-should-have-consistently-enforced-application-control-policies)
-February 8 |Recommendation | Preview | [(Preview) Azure Stack HCI systems should have encrypted volumes](recommendations-reference-compute.md#preview-azure-stack-hci-systems-should-have-encrypted-volumes)
-February 8 | Recommendation | Preview | [(Preview) Host and VM networking should be protected on Azure Stack HCI systems](recommendations-reference-compute.md#preview-host-and-vm-networking-should-be-protected-on-azure-stack-hci-systems)
-February 1 | Recommendation | Upcoming | EDR solution should be installed on virtual machines<br/>EDR configuration issues should be resolved on virtual machines<br/>EDR solution should be installed on EC2s<br/>EDR configuration issues should be resolved on EC2s<br/>EDR configuration issues should be resolved on GCP virtual machines<br/>EDR solution should be installed on GCP virtual machines.
-January 25 | Alert (Container) | Deprecation | `Anomalous pod deployment (Preview) (K8S_AnomalousPodDeployment)`
-January 25 | Alert (Container) | Deprecation| `Excessive role permissions assigned in Kubernetes cluster (Preview) (K8S_ServiceAcountPermissionAnomaly)`
-January 25 | Alert (Container) | Deprecation | `Anomalous access to Kubernetes secret (Preview) (K8S_AnomalousSecretAccess)`
-January 25 | Alert (Windows machines) | Update to informational | `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlWindowsViolationAudited)`
-January 25 | Alert (Windows machines) | Update to informational | `Adaptive application control policy violation was audited (VM_AdaptiveApplicationControlLinuxViolationAudited)`
-January 25 | Alert (Container) | Update to informational | `Attempt to create a new Linux namespace from a container detected (K8S.NODE_NamespaceCreation)`
-January 25 | Alert (Container) | Update to informational | `Attempt to stop apt-daily-upgrade.timer service detected (K8S.NODE_TimerServiceDisabled)`
-January 25 | Alert (Container) |Update to informational | `Command within a container running with high privileges (K8S.NODE_PrivilegedExecutionInContainer)`
-January 25 | Alert (Container) | Update to informational | `Container running in privileged mode (K8S.NODE_PrivilegedContainerArtifacts)`
-January 25 | Alert (Container) | Update to informational | `Container with a sensitive volume mount detected (K8S_SensitiveMount)`
-January 25 | Alert (Container) | Update to informational | `Creation of admission webhook configuration detected (K8S_AdmissionController)`
-January 25 | Alert (Container) | Update to informational | `Detected suspicious file download (K8S.NODE_SuspectDownloadArtifacts)`
-January 25 | Alert (Container) | Update to informational | `Docker build operation detected on a Kubernetes node (K8S.NODE_ImageBuildOnNode)`
-January 25 | Alert (Container) | Update to informational| `New container in the kube-system namespace detected (K8S_KubeSystemContainer)`
-January 25 | Alert (Container) | Update to informational | `New high privileges role detected (K8S_HighPrivilegesRole)`
-January 25 | Alert (Container) |Update to informational | `Privileged container detected (K8S_PrivilegedContainer)`
-January 25 | Alert (Container) | Update to informational | `Process seen accessing the SSH authorized keys file in an unusual way (K8S.NODE_SshKeyAccess)`
-January 25 | Alert (Container)| Update to informational | `Role binding to the cluster-admin role detected (K8S_ClusterAdminBinding)`
-January 25 | Alert (Container) | Update to informational | `SSH server is running inside a container (K8S.NODE_ContainerSSH)`
-January 25 | Alert (DNS)| Update to informational | `Communication with suspicious algorithmically generated domain (AzureDNS_DomainGenerationAlgorithm)`
-January 25 | Alert (DNS) | Update to informational |`Communication with suspicious algorithmically generated domain (DNS_DomainGenerationAlgorithm)`
-January 25 |Alert (DNS) | Update to informational |`Communication with suspicious random domain name (Preview) (DNS_RandomizedDomain)`
-January 25 | Alert (DNS)| Update to informational |`Communication with suspicious random domain name (AzureDNS_RandomizedDomain)`
-January 25 | Alert (DNS) | Update to informational |`Communication with possible phishing domain (AzureDNS_PhishingDomain)`
-January 25 | Alert (DNS) | Update to informational |`Communication with possible phishing domain (Preview) (DNS_PhishingDomain)`
-January 25 | Alert (Azure App Service) | Update to informational |`NMap scanning detected (AppServices_Nmap)`
-January 25 |Alert (Azure App Service) | Update to informational |`Suspicious User Agent detected (AppServices_UserAgentInjection)`
-January 25 | Alert (Azure network layer) | Update to informational |`Possible incoming SMTP brute force attempts detected (Generic_Incoming_BF_OneToOne)`
-January 25 | Alert (Azure network layer) | Update to informational |`Traffic detected from IP addresses recommended for blocking (Network_TrafficFromUnrecommendedIP)`
-January 25 |Alert (Azure Resource Manager) | Update to informational |`Privileged custom role created for your subscription in a suspicious way (Preview)(ARM_PrivilegedRoleDefinitionCreation)`
-January 4 | Recommendation | Preview | [Cognitive Services accounts should have local authentication methods disabled](recommendations-reference-data.md#cognitive-services-accounts-should-have-local-authentication-methods-disabled)<br/> Microsoft Cloud Security Benchmark
-January 4 | Recommendation preview | [Cognitive Services should use private link](recommendations-reference-data.md#cognitive-services-should-use-private-link)<br/> Microsoft Cloud Security Benchmark
-January 4 | Recommendation | Preview | [Virtual machines and virtual machine scale sets should have encryption at host enabled](recommendations-reference-compute.md#virtual-machines-and-virtual-machine-scale-sets-should-have-encryption-at-host-enabled)<br/> Microsoft Cloud Security Benchmark
-January 4 | Recommendation | Preview| [Azure Cosmos DB should disable public network access](recommendations-reference-data.md#azure-cosmos-db-should-disable-public-network-access)<br/> Microsoft Cloud Security Benchmark
-January 4 | Recommendation | Preview| [Cosmos DB accounts should use private link](recommendations-reference-data.md#cosmos-db-accounts-should-use-private-link)<br/> Microsoft Cloud Security Benchmark
-January 4 | Recommendation | Preview| VPN gateways should use only Azure Active Directory (Azure AD) authentication for point-to-site users<br/> Microsoft Cloud Security Benchmark |
-January 4 | Recommendation | Preview| [Azure SQL Database should be running TLS version 1.2 or newer](recommendations-reference-data.md#azure-sql-database-should-be-running-tls-version-12-or-newer)<br/> Microsoft Cloud Security Benchmark
-January 4 |Recommendation | Preview| [Azure SQL Managed Instances should disable public network access](recommendations-reference-data.md#azure-sql-managed-instances-should-disable-public-network-access)<br/> Microsoft Cloud Security Benchmark
-January 4 | Recommendation | Preview | [Storage accounts should prevent shared key access](recommendations-reference-data.md#storage-accounts-should-prevent-shared-key-access)<br/> Microsoft Cloud Security Benchmark
-December 14 | Recommendation | Preview | [Azure registry container images should have vulnerabilities resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management)<br/><br/>Vulnerability assessment for Linux container images with Microsoft Defender Vulnerability Management.
-December 14 | Recommendation | GA | [Azure running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](recommendations-reference-container.md#azure-running-container-images-should-have-vulnerabilities-resolved-powered-by-microsoft-defender-vulnerability-management)<br/><br/> Vulnerability assessment for Linux container images with Microsoft Defender Vulnerability Management.
-December 14 | Recommendation | Rename | **New**: [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](recommendations-reference-container.md#azure-registry-container-images-should-have-vulnerabilities-resolved-powered-by-qualys). Vulnerability assessment for container images using Qualys.<br/>**Old**: Container registry images should have vulnerability findings resolved (powered by Qualys)
-December 14 | Recommendation | Rename | **New**: [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](recommendations-reference-container.md#azure-running-container-images-should-have-vulnerabilities-resolvedpowered-by-qualys)<br/><br/> Vulnerability assessment for container images using Qualys.<br/>**Old**: Running container images should have vulnerability findings resolved (powered by Qualys)
-December 4 | Alert | Preview | `Malicious blob was downloaded from a storage account (Preview)`<br/><br/> MITRE tactics: Lateral movement
---
-## Next steps
+## Related content
For information about new features, see [What's new in Defender for Cloud features](release-notes.md).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: What's new in Microsoft Defender for Cloud features description: What's new and updated in Microsoft Defender for Cloud features Previously updated : 03/24/2024 Last updated : 07/10/2024 # What's new in Defender for Cloud features This article summarizes what's new in Microsoft Defender for Cloud. It includes information about new features in preview or in general availability (GA), feature updates, upcoming feature plans, and deprecated functionality. -- Find the latest information about security recommendations and alerts in [What's new in recommendations and alerts](release-notes.md).
+<!-- Please don't adjust this next line without getting approval from the Defender for Cloud documentation team. It is necessary for proper RSS functionality. -->
+- This page is updated frequently with the latest updates in Defender for Cloud.
+
+- Find the latest information about security recommendations and alerts in [What's new in recommendations and alerts](release-notes-recommendations-alerts.md).
- If you're looking for items older than six months, you can find them in the [What's new archive](release-notes-archive.md). > [!TIP]
This article summarizes what's new in Microsoft Defender for Cloud. It includes
<!-- 5. Under the relevant month, add a short paragraph about the new feature. Give the paragraph an H3 (###) heading. Keep the title short and not rambling. --> <!-- 6. In the Update column, add a bookmark to the H3 paragraph that you created (#<bookmark-name>) .-->
-## July 2024
+## July 2024
+ |Date | Category | Update| |--|--|--|
-| July 9 | Upcoming update | [Inventory experience improvement](#update-inventory-experience-improvement) |
+| July 10 | GA | [Compliance standards are now GA](#compliance-standards-are-now-ga) |
+| July 9 | Upcoming update | [Inventory experience improvement](#inventory-experience-improvement) |
|July 8 | Upcoming update | [Container mapping tool to run by default in GitHub](#container-mapping-tool-to-run-by-default-in-github) |
-### Update: Inventory experience improvement
+### Compliance standards are now GA
+
+July 10, 2024
+
+In March, we added preview versions of many new compliance standards for customers to validate their AWS and GCP resources against.
+
+Those standards included CIS Google Kubernetes Engine (GKE) Benchmark, ISO/IEC 27001 and ISO/IEC 27002, CRI Profile, CSA Cloud Controls Matrix (CCM), Brazilian General Personal Data Protection Law (LGPD), California Consumer Privacy Act (CCPA), and more.
+
+Those preview standards are now generally available (GA).
+
+Check out the [full list of supported compliance standards](concept-regulatory-compliance-standards.md#available-compliance-standards).
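+
+If you want to verify which standards your subscription already reports against, one option is the Azure CLI's regulatory compliance commands. A minimal sketch; standard names vary by environment, and `<standard-name>` is a placeholder:
+
+```azurecli-interactive
+# List the regulatory compliance standards currently reported for the subscription.
+az security regulatory-compliance-standards list --output table
+
+# Drill into the controls of a single standard.
+az security regulatory-compliance-controls list --standard-name "<standard-name>" --output table
+```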
+
+### Inventory experience improvement
July 9, 2024
-**Estimated date for change: July 11, 2024**
+**Estimated date for change**: July 11, 2024
The inventory experience will be updated to improve performance, including improvements to the blade's 'Open query' query logic in Azure Resource Graph. Updates to the logic behind Azure resource calculation may result in additional resources being counted and presented.
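
The inventory blade is built on Azure Resource Graph, so you can approximate the resource counts it surfaces by querying Resource Graph directly. A minimal sketch using the `resource-graph` CLI extension; the query is illustrative rather than the blade's internal logic:

```azurecli-interactive
# One-time setup for the Resource Graph commands.
az extension add --name resource-graph

# Count resources by type, similar in spirit to what the inventory blade presents.
az graph query -q "Resources | summarize count() by type | order by count_ desc"
```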
The inventory experience will be updated to improve performance, including impro
July 8, 2024
-**Estimated date for change: August 12, 2024**
+**Estimated date for change**: August 12, 2024
-With DevOps security capabilities in Microsoft Defender Cloud Security Posture Management (CSPM), you can map your cloud-native applications from code to cloud to easily kick off developer remediation workflows and reduce the time to remediation of vulnerabilities in your container images. Currently, you must manually configure the container image mapping tool to run in the Microsoft Secuity DevOps action in GitHub. With this change, container mapping will run by default as part of the Microsoft Security DevOps action. [Learn more about the Microsoft Security DevOps action](https://github.com/microsoft/security-devops-action/blob/main/README.md#advanced).
+With DevOps security capabilities in Microsoft Defender Cloud Security Posture Management (CSPM), you can map your cloud-native applications from code to cloud to easily kick off developer remediation workflows and reduce the time to remediation of vulnerabilities in your container images. Currently, you must manually configure the container image mapping tool to run in the Microsoft Security DevOps action in GitHub. With this change, container mapping will run by default as part of the Microsoft Security DevOps action. [Learn more about the Microsoft Security DevOps action](https://github.com/microsoft/security-devops-action/blob/main/README.md#advanced).
## June 2024
-|Date | Category | Update
+|Date | Category | Update |
|--|--|--| | June 27 | GA | [Checkov IaC Scanning in Defender for Cloud](#ga-checkov-iac-scanning-in-defender-for-cloud). | | June 24 | Update | [Change in pricing for multicloud Defender for Containers](#update-change-in-pricing-for-defender-for-containers-in-multicloud) |
With DevOps security capabilities in Microsoft Defender Cloud Security Posture M
| June 10 | Upcoming update |[SQL vulnerability assessment automatic enablement using express configuration on unconfigured servers](#update-sql-vulnerability-assessment-automatic-enablement).<br/><br/> Estimated update: July 10, 2024. | | June 3 | Upcoming update |[Changes in identity recommendations behavior](#update-changes-in-identity-recommendations-behavior)<br/><br/> Estimated update: July 10 2024. | -
-### GA: Checkov IaC Scanning in Defender for Cloud
+### GA: Checkov IaC Scanning in Defender for Cloud
June 27, 2024
-We are announcing the general availability of the Checkov integration for Infrasturcture-as-Code (IaC) scanning through [MSDO](azure-devops-extension.yml). As part of this release, Checkov will replace TerraScan as a default IaC analyzer that runs as part of the MSDO CLI. TerraScan may still be configured manually through MSDO's [environment variables](https://github.com/microsoft/security-devops-azdevops/wiki) but will not run by default.
+We are announcing the general availability of the Checkov integration for Infrastructure-as-Code (IaC) scanning through [MSDO](azure-devops-extension.yml). As part of this release, Checkov will replace TerraScan as a default IaC analyzer that runs as part of the MSDO CLI. TerraScan may still be configured manually through MSDO's [environment variables](https://github.com/microsoft/security-devops-azdevops/wiki) but will not run by default.
-Security findings from Checkov present as recommendations for both Azure DevOps and GitHub repositories under the assessments "Azure DevOps repositories should have infrastructure as code findings resolved" and "GitHub repositories should have infrastructure as code findings resolved".
+Security findings from Checkov present as recommendations for both Azure DevOps and GitHub repositories under the assessments "Azure DevOps repositories should have infrastructure as code findings resolved" and "GitHub repositories should have infrastructure as code findings resolved".
To learn more about DevOps security in Defender for Cloud, see the [DevOps Security Overview](defender-for-devops-introduction.md). To learn how to configure the MSDO CLI, see the [Azure DevOps](azure-devops-extension.yml) or [GitHub](github-action.md) documentation.
Since Defender for Containers in multicloud is now generally available, it's no
June 20, 2024
-**Estimated date for change: August, 2024**
+**Estimated date for change**: August 2024
As part of the [MMA deprecation and the Defender for Servers updated deployment strategy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), Defender for Servers security features will be provided through the Microsoft Defender for Endpoint (MDE) agent, or through the [agentless scanning capabilities](enable-agentless-scanning-vms.md). Neither option depends on the MMA or the Azure Monitor Agent (AMA).
Learn more about [Copilot for Security in Defender for Cloud](copilot-security-i
June 10, 2024
-**Estimated date for change: July 10, 2024**
+**Estimated date for change**: July 10, 2024
-Originally, SQL Vulnerability Assessment (VA) with Express Configuration was only automatically enabled on servers where Microsoft Defender for SQL was activated after the introduction of Express Configuration in December 2022.
+Originally, SQL Vulnerability Assessment (VA) with Express Configuration was only automatically enabled on servers where Microsoft Defender for SQL was activated after the introduction of Express Configuration in December 2022.
We will be updating all Azure SQL Servers that had Microsoft Defender for SQL activated before December 2022 and had no existing SQL VA policy in place, so that SQL Vulnerability Assessment (SQL VA) is automatically enabled with Express Configuration.
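
To check whether Microsoft Defender for SQL is already enabled at the subscription level before this rollout reaches you, one option is to inspect the Defender plan state from the Azure CLI. A sketch, assuming the standard `SqlServers` plan name:

```azurecli-interactive
# Show the Defender for SQL plan state for the current subscription.
# "SqlServers" is the plan that covers Azure SQL database servers.
az security pricing show --name SqlServers --query "{plan:name, tier:pricingTier}" --output table
```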
We will be updating all Azure SQL Servers that had Microsoft Defender for SQL ac
June 3, 2024
-**Estimated date for change: July 2024**
+**Estimated date for change**: July 2024
These changes:
These changes:
- The recommendations won't have 'sub-recommendations' anymore - The value of the 'assessmentKey' field in the API will be changed for those recommendations
-Will be applied to the following recommendations:
+Will be applied to the following recommendations:
- Accounts with owner permissions on Azure resources should be MFA enabled - Accounts with write permissions on Azure resources should be MFA enabled
Will be applied to the following recommendations:
- A maximum of 3 owners should be designated for your subscription - There should be more than one owner assigned to your subscription --- ## May 2024 |Date|Category|Update | |--|--|--| | May 30 | GA | [Agentless malware detection in Defender for Servers Plan 2](#ga-agentless-malware-detection-in-defender-for-servers-plan-2) | | May 22 | Update |[Configure email notifications for attack paths](#update-configure-email-notifications-for-attack-paths) |
-| May 21 | Update |[Advanced hunting in Microsoft Defender XDR includes Defender for Cloud alerts and incidents](#update-advanced-hunting-in-microsoft-defender-xdr-includes-defender-for-cloud-alerts-and-incidents) |
+| May 21 | Update |[Advanced hunting in Microsoft Defender XDR includes Defender for Cloud alerts and incidents](#update-advanced-hunting-in-microsoft-defender-xdr-includes-defender-for-cloud-alerts-and-incidents) |
| May 9 | Preview | [Checkov integration for IaC scanning in Defender for Cloud](#preview-checkov-integration-for-iac-scanning-in-defender-for-cloud) |
-| May 7 | GA | [Permissions management in Defender for Cloud](#ga-permissions-management-in-defender-for-cloud) |
+| May 7 | GA | [Permissions management in Defender for Cloud](#ga-permissions-management-in-defender-for-cloud) |
| May 6 | Preview | [AI multicloud security posture management is available for Azure and AWS](#preview-ai-multicloud-security-posture-management). |
-| May 6 | Limited preview | [Threat protection for AI workloads in Azure](#limited-preview-threat-protection-for-ai-workloads-in-azure). |
+| May 6 | Limited preview | [Threat protection for AI workloads in Azure](#limited-preview-threat-protection-for-ai-workloads-in-azure). |
| May 2 | Update |[Security policy management](#ga-security-policy-management). | | May 1 | Preview | [Defender for open-source databases is now available on AWS for Amazon instances](#preview-defender-for-open-source-databases-available-in-aws). | | May 1 | Upcoming deprecation |[Removal of FIM over AMA and release of new version over Defender for Endpoint](#deprecation-removal-of-fim-with-ama).<br/><br/> Estimated Deprecation August 2024. | - ### GA: Agentless malware detection in Defender for Servers Plan 2 May 30, 2024
Defender for Cloud's agentless malware detection for Azure VMs, AWS EC2 instance
Agentless malware detection uses the [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows) anti-malware engine to scan and detect malicious files. Detected threats trigger security alerts directly into Defender for Cloud and Defender XDR, where they can be investigated and remediated. Learn more about [agentless malware scanning](agentless-malware-scanning.md) for servers and [agentless scanning for VMs](concept-agentless-data-collection.md). - ### Update: Configure email notifications for attack paths May 22, 2024
May 21, 2024
Defender for Cloud's alerts and incidents are now integrated with Microsoft Defender XDR and can be accessed in the Microsoft Defender Portal. This integration provides richer context to investigations that span cloud resources, devices, and identities. Learn about [advanced hunting in XDR integration](concept-integration-365.md#advanced-hunting-in-xdr).
-### Preview: Checkov integration for IaC scanning in Defender for Cloud
+### Preview: Checkov integration for IaC scanning in Defender for Cloud
May 9, 2024
May 7, 2024
[Permissions management](permissions-management.md) is now generally available in Defender for Cloud.
-### Preview: AI multicloud security posture management
+### Preview: AI multicloud security posture management
May 6, 2024
Threat protection for AI workloads in Defender for Cloud is available in limited
Learn more about [threat protection for AI workloads](ai-threat-protection.md).
-### GA: Security policy management
+### GA: Security policy management
May 2, 2024
Security policy management across clouds (Azure, AWS, GCP) is now generally avai
Learn more about [security policies in Microsoft Defender for Cloud](security-policy-concept.md#working-with-security-standards).
-### Preview: Defender for open-source databases available in AWS
+### Preview: Defender for open-source databases available in AWS
May 1, 2024
Defender for open-source databases on AWS is now available in preview. It adds s
Learn more about [Defender for open-source databases](defender-for-databases-introduction.md) and how to [enable Defender for open-source databases on AWS](enable-defender-for-databases-aws.md).
-### Deprecation: Removal of FIM (with AMA)
+### Deprecation: Removal of FIM (with AMA)
May 1, 2024
-**Estimated date for change: August 2024**
+**Estimated date for change**: August 2024
As part of the [MMA deprecation and the Defender for Servers updated deployment strategy](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341), all Defender for Servers security features will be provided via a single agent (MDE), or via agentless scanning capabilities, and without dependency on either the MMA or AMA.
The new version of File Integrity Monitoring (FIM) over Microsoft Defender for E
As part of this release, FIM experience over AMA will no longer be available through the Defender for Cloud portal beginning August 2024. For more information, see [File Integrity Monitoring experience - changes and migration guidance](prepare-deprecation-log-analytics-mma-agent.md#file-integrity-monitoring-experiencechanges-and-migration-guidance). -- ## April 2024 |Date| Category | Update |
As part of this release, FIM experience over AMA will no longer be available thr
| April 3 | Update | [Risk prioritization is now the default experience in Defender for Cloud](#update-risk-prioritization) | | April 3 | Update | [Defender for open-source relational databases updates](#update-defender-for-open-source-relational-databases-updates). | -- ### Update: Change in CIEM assessment IDs April 16, 2024
-**Estimated date for change: May 2024**
+**Estimated date for change**: May 2024
The following recommendations are scheduled for remodeling, which will result in changes to their assessment IDs:
The following recommendations are scheduled for remodeling, which will result in
- `Super identities in your Azure environment should be removed` - `Unused identities in your Azure environment should be removed` - ### GA: Defender for Containers for AWS and GCP April 15, 2024
-Runtime threat detection and agentless discovery for AWS and GCP in Defender for Containers are now generally available. In addition, there's a new authentication capability in AWS which simplifies provisioning.
+Runtime threat detection and agentless discovery for AWS and GCP in Defender for Containers are now generally available. In addition, there's a new authentication capability in AWS that simplifies provisioning.
Learn more about [containers support matrix in Defender for Cloud](support-matrix-defender-for-containers.md) and how to [configure Defender for Containers components](/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-eks#deploying-the-defender-sensor).
-### Update: Risk prioritization
+### Update: Risk prioritization
April 3, 2024 Risk prioritization is now the default experience in Defender for Cloud. This feature helps you to focus on the most critical security issues in your environment by prioritizing recommendations based on the risk factors of each resource. The risk factors include the potential impact of the security issue being breached, the categories of risk, and the attack path that the security issue is part of. Learn more about [risk prioritization](risk-prioritization.md). - ### Update: Defender for Open-Source Relational Databases April 3, 2024
This release includes:
- Alert compatibility with existing alerts for Defender for MySQL Single Servers. - Enablement of individual resources. - Enablement at the subscription level.-- Updates for Azure Database for MySQL flexible servers are rolling out over the next few weeks. If you see the error `The server <servername> is not compatible with Advanced Threat Protection`, you can either wait for the update, or open a support ticket to update the server sooner to a supported version.
+- Updates for Azure Database for MySQL flexible servers are rolling out over the next few weeks. If you see the error `The server <servername> is not compatible with Advanced Threat Protection`, you can either wait for the update, or open a support ticket to update the server sooner to a supported version.
If you're already protecting your subscription with Defender for open-source relational databases, your flexible server resources are automatically enabled, protected, and billed. Specific billing notifications have been sent via email for affected subscriptions.
Learn more about [Microsoft Defender for open-source relational databases](defen
| March 25 | Update |[Continuous export now includes attack path data](#update-continuous-export-now-includes-attack-path-data) | | March 21 | Preview | [Agentless scanning supports CMK encrypted VMs in Azure](#preview-agentless-scanning-supports-cmk-encrypted-vms-in-azure) | | March 17 | Preview | [Custom recommendations based on KQL for Azure](#preview-custom-recommendations-based-on-kql-for-azure). |
-| March 13 | Update |[Inclusion of DevOps recommendations in the Microsoft cloud security benchmark](#update-inclusion-of-devops-recommendations-in-the-microsoft-cloud-security-benchmark)
+| March 13 | Update |[Inclusion of DevOps recommendations in the Microsoft cloud security benchmark](#update-inclusion-of-devops-recommendations-in-the-microsoft-cloud-security-benchmark) |
| March 13 | GA | [ServiceNow integration](#ga-servicenow-integration-is-now-generally-available). | | March 13 | Preview | [Critical assets protection in Microsoft Defender for Cloud](#preview-critical-assets-protection-in-microsoft-defender-for-cloud). | | March 12 | Update |[Enhanced AWS and GCP recommendations with automated remediation scripts](#update-enhanced-aws-and-gcp-recommendations-with-automated-remediation-scripts) |
Learn more about [Microsoft Defender for open-source relational databases](defen
| March 3 | Deprecation | [Defender for Cloud Containers Vulnerability Assessment powered by Qualys retirement](#deprecation-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys-retirement) | | March 3 | Upcoming update |[Changes in where you access Compliance offerings and Microsoft Actions](#update-changes-in-where-you-access-compliance-offerings-and-microsoft-actions).<br/><br/> Estimated deprecation: September 30, 2025. | --
-### GA: Windows container images scanning
+### GA: Windows container images scanning
March 31, 2024 We're announcing the general availability (GA) of the Windows container images support for scanning by Defender for Containers. -- ### Update: Continuous export now includes attack path data March 25, 2024
During public preview this capability isn't automatically enabled. If you're usi
- [Learn more on agentless scanning for VMs](concept-agentless-data-collection.md) - [Learn more on agentless scanning permissions](faq-permissions.yml#which-permissions-are-used-by-agentless-scanning-) -
-### Preview: Custom recommendations based on KQL for Azure
+### Preview: Custom recommendations based on KQL for Azure
March 17, 2024
The MCSB is a framework that defines fundamental cloud security principles based
Learn more about the [DevOps recommendations](recommendations-reference-devops.md) that will be included and the [Microsoft cloud security benchmark](concept-regulatory-compliance.md).
-### GA: ServiceNow integration is now generally available
+### GA: ServiceNow integration is now generally available
March 12, 2024
Learn how to [assign a security standard](update-regulatory-compliance-packages.
March 6, 2024
-**Estimated date for change: April, 2024**
+**Estimated date for change**: April 2024
**Defender for PostgreSQL Flexible Servers post-GA updates** - The update enables customers to enforce protection for existing PostgreSQL flexible servers at the subscription level, allowing complete flexibility to enable protection on a per-resource basis or for automatic protection of all resources at the subscription level.
Learn more about [Microsoft Defender for open-source relational databases](defen
March 3, 2024
-**Estimated date for change: September 30, 2025**
+**Estimated date for change**: September 30, 2025
On September 30, 2025, the locations where you access two preview features, Compliance offering and Microsoft Actions, will change.
For a subset of controls, Microsoft Actions was accessible from the **Microsoft
March 3, 2024
-**Estimated date for change: September 2025**
+**Estimated date for change**: September 2025
On September 30, 2025, the locations where you access two preview features, Compliance offering and Microsoft Actions, will change.
The table that lists the compliance status of Microsoft's products (accessed fro
For a subset of controls, Microsoft Actions was accessible from the **Microsoft Actions (Preview)** button in the controls details pane. After this button is removed, you can view Microsoft Actions by visiting Microsoft's [Service Trust Portal for FedRAMP](https://servicetrust.microsoft.com/viewpage/FedRAMP) and accessing the Azure System Security Plan document. -- ### Deprecation: Defender for Cloud Containers Vulnerability Assessment powered by Qualys retirement March 3, 2024
The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is
| February 13 | Deprecation |[AWS container vulnerability assessment powered by Trivy retired](#deprecation-aws-container-vulnerability-assessment-powered-by-trivy-retired). | | February 5 | Upcoming update |[Decommissioning of Microsoft.SecurityDevOps resource provider](#update-decommissioning-of-microsoftsecuritydevops-resource-provider)<br/><br/>Expected: March 6, 2024 | - ### Deprecation: Microsoft Security Code Analysis (MSCA) is no longer operational February 28, 2024
February 26, 2024
Azure Kubernetes Service (AKS) threat detection features in Defender for Containers are now fully supported in commercial, Azure Government, and Azure China 21Vianet clouds. [Review](support-matrix-defender-for-containers.md#azure) supported features. - ### Update: New version of Defender sensor for Defender for Containers February 20, 2024
The container vulnerability assessment powered by Trivy has been retired. Any cu
February 5, 2024
-**Estimated date of change: March 6, 2024**
+**Estimated date for change**: March 6, 2024
Microsoft Defender for Cloud is decommissioning the `Microsoft.SecurityDevOps` resource provider, which was used during the public preview of DevOps security, now that the functionality has migrated to the existing `Microsoft.Security` provider. The change improves the customer experience by reducing the number of resource providers associated with DevOps connectors.
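
If you registered the preview provider explicitly, you can confirm the registration state of both providers and, once nothing depends on it, unregister the retired one. A sketch using the standard Azure CLI provider commands:

```azurecli-interactive
# Check the registration state of the retired preview provider and its replacement.
az provider show --namespace Microsoft.SecurityDevOps --query registrationState --output tsv
az provider show --namespace Microsoft.Security --query registrationState --output tsv

# Optionally unregister the decommissioned preview provider.
az provider unregister --namespace Microsoft.SecurityDevOps
```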
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
Before proceeding with this article, start by setting up your individual Azure D
* For Azure Digital Twins: Follow the instructions in [Connect an end-to-end solution](./tutorial-end-to-end.md) to set up an Azure Digital Twins instance with a sample twin graph and simulated data flow. * In this article, you'll extend that solution with another endpoint and route. You'll also add another function to the function app from that tutorial.
-* For Azure Maps: Follow the instructions in [Use Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) and [Create a feature stateset](../azure-maps/tutorial-creator-feature-stateset.md) to create an Azure Maps indoor map with a *feature stateset*.
- * [Feature statesets](../azure-maps/creator-indoor-maps.md#feature-statesets) are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps instructions above, the feature stateset stores room status that you'll be displaying on a map.
+* For Azure Maps: Follow the instructions in [Use Creator to create indoor maps](../azure-maps/tutorial-creator-indoor-maps.md) and create an Azure Maps indoor map with a *feature stateset*.
+ * Feature statesets are collections of dynamic properties (states) assigned to dataset features such as rooms or equipment. In the Azure Maps instructions above, the feature stateset stores room status that you'll be displaying on a map.
* You'll need your Azure Maps **subscription key**, feature **stateset ID**, and **mapConfiguration**. ### Topology
Replace the function code with the following code. It will filter out only updat
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/updateMaps.cs":::
-You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-subscription-key-for-your-account), and one is your [Azure Maps stateset ID](../azure-maps/tutorial-creator-feature-stateset.md).
+You'll need to set two environment variables in your function app. One is your [Azure Maps primary subscription key](../azure-maps/quick-demo-map-app.md#get-the-subscription-key-for-your-account), and one is your Azure Maps stateset ID.
```azurecli-interactive
az functionapp config appsettings set --name <your-function-app-name> --resource-group <your-resource-group> --settings "subscription-key=<your-Azure-Maps-primary-subscription-key>"
```
firewall-manager Secure Cloud Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-cloud-network.md
In this tutorial, you learn how to:
> It is also possible to convert an existing hub to a secured hub using the Azure portal, as described in [Configure Azure Firewall in a Virtual WAN hub](../virtual-wan/howto-firewall.md). But like Azure Firewall Manager, you can't configure **Availability Zones**. > To upgrade an existing hub and specify **Availability Zones** for Azure Firewall (recommended) you must follow the upgrade procedure in [Tutorial: Secure your virtual hub using Azure PowerShell](secure-cloud-network-powershell.md). ## Prerequisites
firewall-manager Secure Hybrid Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/secure-hybrid-network.md
For this tutorial, you create three virtual networks:
- **VNet-Spoke** - the spoke virtual network represents the workload located on Azure. - **VNet-Onprem** - The on-premises virtual network represents an on-premises network. In an actual deployment, it can be connected using either a VPN or ExpressRoute connection. For simplicity, this tutorial uses a VPN gateway connection, and an Azure-located virtual network is used to represent an on-premises network.
-![Hybrid network](media/tutorial-hybrid-portal/hybrid-network-firewall.png)
In this tutorial, you learn how to:
governance Definition Structure Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure-basics.md
Title: Details of the policy definition structure basics
-description: Describes how policy definition basics are used to establish conventions for Azure resources in your organization.
Previously updated : 04/19/2024
+ Title: Details of Azure Policy definition structure basics
+description: Describes how Azure Policy definition basics are used to establish conventions for Azure resources in your organization.
Last updated : 07/10/2024
While the `policyType` property can't be set, there are three values returned by
- `Builtin`: Microsoft provides and maintains these policy definitions. - `Custom`: All policy definitions created by customers have this value.-- `Static`: Indicates a [Regulatory Compliance](./regulatory-compliance.md) policy definition with
- Microsoft **Ownership**. The compliance results for these policy definitions are the results of
- non-Microsoft audits of Microsoft infrastructure. In the Azure portal, this value is sometimes
- displayed as **Microsoft managed**. For more information, see
- [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+- `Static`: Indicates a [Regulatory Compliance](./regulatory-compliance.md) policy definition with Microsoft **Ownership**. The compliance results for these policy definitions are the results of non-Microsoft audits of Microsoft infrastructure. In the Azure portal, this value is sometimes displayed as **Microsoft managed**. For more information, see [Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
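+
+You can inspect the `policyType` of any definition, for example with the Azure CLI. A minimal sketch; `<definition-name>` is a placeholder for a real definition name or GUID:
+
+```azurecli-interactive
+# Show the policyType of a single definition.
+az policy definition show --name "<definition-name>" --query policyType --output tsv
+
+# List only the custom definitions visible at the current scope.
+az policy definition list --query "[?policyType=='Custom'].{name:name, displayName:displayName}" --output table
+```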
## Mode
We recommend that you set `mode` to `all` in most cases. All policy definitions
The following Resource Provider modes are fully supported: -- `Microsoft.Kubernetes.Data` for managing Kubernetes clusters and components such as pods, containers, and ingresses. Supported for Azure Kubernetes Service clusters and [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md). Definitions using this Resource Provider mode use the effects _audit_, _deny_, and _disabled_.
+- `Microsoft.Kubernetes.Data` for managing Kubernetes clusters and components such as pods, containers, and ingresses. Supported for Azure Kubernetes Service clusters and [Azure Arc-enabled Kubernetes clusters](../../../aks/what-is-aks.md). Definitions using this Resource Provider mode use the effects _audit_, _deny_, and _disabled_.
- `Microsoft.KeyVault.Data` for managing vaults and certificates in [Azure Key Vault](../../../key-vault/general/overview.md). For more information on these policy definitions, see [Integrate Azure Key Vault with Azure Policy](../../../key-vault/general/azure-policy.md). - `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.
The following Resource Provider modes are currently supported as a [preview](htt
- `Microsoft.MachineLearningServices.v2.Data` for managing [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) model deployments. This Resource Provider mode reports compliance for newly created and updated components. During public preview, compliance records remain for 24 hours. Model deployments that exist before these policy definitions are assigned don't report compliance. > [!NOTE]
->Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
+> Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component level.
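+
+Unlike Resource Provider modes, the `all` and `indexed` modes are set directly when you author a definition for regular ARM resources. A minimal sketch of creating a custom definition with `mode` set to `All`; the definition name and the `rules.json`/`params.json` files are placeholders:
+
+```azurecli-interactive
+# Create a custom policy definition with mode "All" (evaluates all resource types).
+# rules.json holds the policyRule block; params.json holds the parameters block.
+az policy definition create \
+  --name "audit-example-convention" \
+  --display-name "Audit example naming convention" \
+  --mode All \
+  --rules rules.json \
+  --params params.json
+```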
When Azure Policy versioning is released, the following Resource Provider modes won't support built-in versioning:
When Azure Policy versioning is released, the following Resource Provider modes
- `Microsoft.ManagedHSM.Data` ## Version (preview)
-Built-in policy definitions can host multiple versions with the same `definitionID`. If no version number is specified, all experiences will show the latest version of the definition. To see a specific version of a built-in, it must be specified in API, SDK or UI. To reference a specific version of a definition within an assignment, see [definition version within assignment](../concepts/assignment-structure.md#policy-definition-id-and-version-preview)
-The Azure Policy service uses `version`, `preview`, and `deprecated` properties to convey level of
-> change to a built-in policy definition or initiative and state. The format of `version` is:
-> `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the
-> `version` property or in another property as a **boolean**.
+Built-in policy definitions can host multiple versions with the same `definitionID`. If no version number is specified, all experiences show the latest version of the definition. To see a specific version of a built-in, it must be specified in the API, SDK, or UI. To reference a specific version of a definition within an assignment, see [definition version within assignment](../concepts/assignment-structure.md#policy-definition-id-and-version-preview).
+
+The Azure Policy service uses the `version`, `preview`, and `deprecated` properties to convey the level of change to a built-in policy definition or initiative and its state. The format of `version` is: `{Major}.{Minor}.{Patch}`. Specific states, such as _deprecated_ or _preview_, are appended to the `version` property or in another property as a **boolean**.
- Major Version (example: 2.0.0): introduce breaking changes such as major rule logic changes, removing parameters, adding an enforcement effect by default. - Minor Version (example: 2.1.0): introduce changes such as minor rule logic changes, adding new parameter allowed values, change to `roleDefinitionIds`, adding or moving definitions within an initiative. - Patch Version (example: 2.1.4): introduce string or metadata changes and break glass security scenarios (rare).
-> For more information about
-> Azure Policy versions built-ins, see
-> [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md).
-> To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
+For more information about versions of Azure Policy built-ins, see [Built-in versioning](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md). To learn more about what it means for a policy to be _deprecated_ or in _preview_, see [Preview and deprecated policies](https://github.com/Azure/azure-policy/blob/master/built-in-policies/README.md#preview-and-deprecated-policies).
## Metadata
For more information, see [Understand scope in Azure Policy](./scope.md#definiti
- For more information about policy definition structure, go to [parameters](./definition-structure-parameters.md), [policy rule](./definition-structure-policy-rule.md), and [alias](./definition-structure-alias.md). - For initiatives, go to [initiative definition structure](./initiative-definition-structure.md). - Review examples at [Azure Policy samples](../samples/index.md).-- Review [Understanding policy effects](effects.md).
+- Review [Understanding policy effects](effect-basics.md).
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
hdinsight Hdinsight Hadoop Use Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-blob-storage.md
Sharing one blob container as the default file system for multiple clusters isn'
## Access files from within cluster
-There are several ways you can access the files in Data Lake Storage from an HDInsight cluster. The URI scheme provides unencrypted access (with the *wasb:* prefix) and TLS encrypted access (with *wasbs*). We recommend using *wasbs* wherever possible, even when accessing data that lives inside the same region in Azure.
+> [!NOTE]
+> The Azure Storage team has discontinued active development on WASB and recommends that all customers use the ABFS driver to interact with Blob storage and ADLS Gen2. For more information, see [The Azure Blob Filesystem driver (ABFS): A dedicated Azure Storage driver for Hadoop](/azure/storage/blobs/data-lake-storage-abfs-driver).
* **Using the fully qualified name**. With this approach, you provide the full path to the file that you want to access.
The default Blob container stores cluster-specific information such as job histo
While creating an HDInsight cluster, you specify the Azure Storage account you want to associate with it. Also, you can add additional storage accounts from the same Azure subscription or different Azure subscriptions during the creation process or after a cluster has been created. For instructions about adding additional storage accounts, see [Create HDInsight clusters](hdinsight-hadoop-provision-linux-clusters.md). > [!WARNING]
-> Using an additional storage account in a different location than the HDInsight cluster is not supported.
+> Using an additional storage account in a location other than that of the HDInsight cluster is not supported.
## Next steps
hdinsight Hdinsight Retired Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-retired-versions.md
Title: Azure HDInsight retired versions
description: Learn about retired versions in Azure HDInsight. Previously updated : 02/12/2024 Last updated : 07/10/2024 # Retired HDInsight versions
-In this article, you learn about retired versions in HDInsight.
+This page lists all the versions of HDInsight that are retired or out of support. If you're currently on one of the versions mentioned on this page, we recommend that you immediately migrate to the latest version. If you choose not to migrate and continue using any of the following versions, be aware of the following terms and risks associated with your continued usage of retired and unsupported software:
+
+- As a retired and out-of-support version of an Azure service, HDInsight hasn't been shipping and won't ship any updates or security patches to these versions. Some of the OSS components in these versions haven't been updated for several years.
+
+- By continuing to use these versions, you accept security risks that may lead to vulnerabilities, system instability, and potential data loss for you and your customers.
+
+- HDInsight won't be able to provide support or help if a security compromise occurs, because there are no pipelines and patching mechanisms for these versions.
+
+- If there are any operational issues, HDInsight won't be able to provide support for root cause analysis, investigation of failures, or performance degradation issues.
+
+- There's no guarantee that all the existing functionality of your clusters continues to work as-is, because underlying dependencies determine the availability of the existing features in these versions. If there's a breaking change due to these dependencies, there's no way to recover the impacted clusters.
+
+- The new service capabilities developed by HDInsight won't be applicable to these versions.
+
+- In the extreme event of a serious security threat to the service caused by the outdated version you're using, HDInsight might choose to stop or delete your clusters immediately to secure the service. In such cases, there's no way to recover the impacted HDInsight clusters, but your data in Azure Storage and bring-your-own Azure SQL databases isn't deleted and can be used to migrate to the latest HDInsight version.
## Retired version list
The following table lists the retired versions of HDInsight.
| HDInsight 2.1 |HDP 1.3 |Windows Server 2012 R2 |October 28, 2013 |May 12, 2014 |May 31, 2015 |No | | HDInsight 1.6 |HDP 1.1 | |October 28, 2013 |April 26, 2014 |May 31, 2015 | No |
+## Call to action
+
+To maintain your security posture, migrate to [HDInsight 5.1](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x), which has been generally available since November 1, 2023. This release contains all the [latest versions of supported software](./hdinsight-5x-component-versioning.md) along with significant improvements to the security patches on open-source components.
+ ## Next steps - [Supported Apache components and versions in HDInsight](./hdinsight-component-versioning.md)
hdinsight Msi Support To Access Azure Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/msi-support-to-access-azure-services.md
+
+ Title: MSI Support to Access Azure services
+description: Learn how to provide MSI Support to Access Azure services.
+++ Last updated : 07/09/2024++
+# MSI Support to access Azure services
+
+Currently, in Azure HDInsight non-ESP clusters, user jobs access Azure resources such as Azure SQL Database, Azure Cosmos DB, Event Hubs, Key Vault, and Kusto by using either a username and password or an MSI certificate key. This approach isn't in line with Microsoft security guidelines.
+
+This article explains the HDInsight interface and code details to fetch OAuth tokens in a non-ESP cluster.
+
+## Prerequisites
+
+* This feature is available in the latest HDInsight 5.1, 5.0, and 4.0 versions. Make sure your cluster is created on, or re-created with, one of these versions.
+* The HDInsight cluster must use Azure Data Lake Storage Gen2 as its primary storage, which enables MSI-based access to that storage. The same MSI is used for access to all job resources. Ensure that this MSI is given the required IAM permissions to access your Azure resources.
+* The IMDS endpoint doesn't work on HDInsight worker nodes, so access tokens can be fetched only by using this HDInsight utility.
+
+There are two Java client implementations provided to fetch the access token.
+
+* Option 1: HDInsight utility and API usage to fetch access token.
+* Option 2: HDInsight utility, TokenCredential Implementation to fetch Access Token.
+
+> [!NOTE]
+> By default, the scope is ".default". A mechanism to pass a user-supplied scope argument will be provided in the utility API in the future.
+
+## How to download the utility jar from Maven Central
+
+Follow these steps to download client JARs from Maven Central.
+
+To download the JAR in a Maven build directly from Maven Central:
+
+1. Add Maven Central as one of your repositories to resolve Maven dependencies. Skip this step if it's already added.
+
+ Add the following code snippet to the `repositories` section of your pom.xml file:
+
+    ```xml
+    <repository>
+        <id>central</id>
+        <url>https://repo.maven.apache.org/maven2/</url>
+        <releases>
+            <enabled>true</enabled>
+        </releases>
+        <snapshots>
+            <enabled>true</enabled>
+        </snapshots>
+    </repository>
+    ```
+
+1. Following is the sample code snippet of HDInsight OAuth client utility library dependency, add the `dependency` section to your pom.xml
+
+```xml
+<dependency>
+    <groupId>com.microsoft.azure.hdinsight</groupId>
+    <artifactId>hdi-oauth-token-utils</artifactId>
+    <version>1.0.0</version>
+</dependency>
+```
+
+> [!IMPORTANT]
+>
+> Make sure the following items are in the class path.
+> - Hadoop's `core-site.xml`
+> - All the client jars from this cluster location `/usr/hdp/<hdi-version>/hadoop/client/*`
+> - `azure-core-1.49.0.jar`, `okhttp3-4.9.3`, and their transitive dependent JARs.
+
+### Structure of access token
+
+The access token structure is as follows:
+
+```java
+package com.azure.core.credential;
+
+import java.time.OffsetDateTime;
+
+/** Represents an immutable access token with a token string and an expiration time
+* in date format. By default, the expiration timeout is 24 hours.
+*/
+public class AccessToken {
+
+ public String getToken();
+
+ public OffsetDateTime getExpiresAt();
+}
+```
++
+## Option 1 - HDInsight utility and API usage to fetch access token
+
+A convenient Java utility class is provided to fetch an MSI access token for a given target resource URI, which can be Event Hubs, Key Vault, Kusto, SQL Database, Cosmos DB, and so on.
+
+### How to use the API
+
+To fetch the token, you can invoke the API in your job application code.
+
+```java
+import com.microsoft.azure.hdinsight.oauthtoken.utils.HdiIdentityTokenServiceUtils;
+import com.azure.core.credential.AccessToken;
+
+// The URI can be for Event Hubs, Kusto, and so on.
+// By default, the scope is ".default".
+// A mechanism to take a user-supplied scope will be provided in the future.
+String msiResourceUri = "https://vault.azure.net/";
+HdiIdentityTokenServiceUtils tokenUtils = new HdiIdentityTokenServiceUtils();
+AccessToken token = tokenUtils.getAccessToken(msiResourceUri);
+```
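+
+Because the returned `AccessToken` carries its expiry time, a job can cache the token and fetch a new one only when the cached token is close to expiring. The following is a minimal, illustrative sketch that uses only the `HdiIdentityTokenServiceUtils` and `AccessToken` types shown above; the wrapper class name and the 30-minute refresh window are assumptions for illustration, and the method declares `throws Exception` because the examples in this article handle a general `Exception` from `getAccessToken`.
+
+```java
+import java.time.OffsetDateTime;
+
+import com.azure.core.credential.AccessToken;
+import com.microsoft.azure.hdinsight.oauthtoken.utils.HdiIdentityTokenServiceUtils;
+
+// Hypothetical wrapper: caches one token per resource URI and refreshes it
+// when it is within 30 minutes of expiry.
+public class CachedHdiTokenProvider {
+
+    private final HdiIdentityTokenServiceUtils tokenUtils = new HdiIdentityTokenServiceUtils();
+    private final String resourceUri;
+    private AccessToken cached;
+
+    public CachedHdiTokenProvider(String resourceUri) {
+        this.resourceUri = resourceUri;
+    }
+
+    public synchronized String getToken() throws Exception {
+        // Refresh when nothing is cached yet, or the cached token expires within 30 minutes.
+        if (cached == null || OffsetDateTime.now().plusMinutes(30).isAfter(cached.getExpiresAt())) {
+            cached = tokenUtils.getAccessToken(resourceUri);
+        }
+        return cached.getToken();
+    }
+}
+```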
+
+## Option 2 - HDInsight utility, TokenCredential implementation to fetch access token
+
+The `HdiIdentityTokenCredential` Java class is provided as a standard implementation of the `com.azure.core.credential.TokenCredential` interface.
+
+> [!NOTE]
+> The HdiIdentityTokenCredential class can be used with various Azure SDK client libraries to authenticate requests and access Azure services without manual access token management.
+
+### Examples
+
+The following HDInsight OAuth utility examples can be used in job applications to fetch access tokens for a given target resource URI.
+
+**If the client is a Key Vault**
+
+For Azure Key Vault, the SecretClient instance uses a TokenCredential to authenticate and fetch the access token:
+
+```java
+import com.azure.core.credential.TokenCredential;
+import com.azure.security.keyvault.secrets.SecretClient;
+import com.azure.security.keyvault.secrets.SecretClientBuilder;
+import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
+import com.microsoft.azure.hdinsight.oauthtoken.credential.HdiIdentityTokenCredential;
+
+// Replace <resource-uri> with your Key Vault URI.
+TokenCredential hdiTokenCredential = new HdiIdentityTokenCredential("<resource-uri>");
+
+// Create a SecretClient to call the service.
+SecretClient secretClient = new SecretClientBuilder()
+ .vaultUrl("<your-key-vault-url>") // Replace with your Key Vault URL.
+ .credential(hdiTokenCredential) // Add HDI identity token credential.
+ .buildClient();
+
+// Retrieve a secret from the Key Vault.
+// Replace with your secret name.
+KeyVaultSecret secret = secretClient.getSecret("<your-secret-name>");
+```
+
+**If the client is an Event Hub**
+
+This Azure Event Hubs example doesn't fetch an access token directly. It uses a `TokenCredential` to authenticate, and the credential handles fetching the access token.
+
+```java
+import com.azure.messaging.eventhubs.EventHubClientBuilder;
+import com.azure.messaging.eventhubs.EventHubProducerClient;
+import com.azure.core.credential.TokenCredential;
+import com.microsoft.azure.hdinsight.oauthtoken.credential.HdiIdentityTokenCredential;
+
+HdiIdentityTokenCredential hdiTokenCredential = new HdiIdentityTokenCredential("https://eventhubs.azure.net");
+// Create a producer client
+EventHubProducerClient producer = new EventHubClientBuilder()
+ .credential("<fully-qualified-namespace>", "<event-hub-name>", hdiTokenCredential)
+ .buildProducerClient();
+
+// Use the producer client ....
+```
++
+**If the client is an Azure SQL Database**
+
+This Azure SQL Database example doesn't fetch an access token directly.
+
+**Connect using an access token callback:** The following example demonstrates implementing and setting the access token callback.
+
+```java
+ package com.microsoft.azure.hdinsight.oauthtoken;
+
+ import com.azure.core.credential.AccessToken;
+ import com.microsoft.azure.hdinsight.oauthtoken.utils.HdiIdentityTokenServiceUtils;
+ import com.microsoft.sqlserver.jdbc.SQLServerAccessTokenCallback;
+ import com.microsoft.sqlserver.jdbc.SqlAuthenticationToken;
+
+ public class HdiSQLAccessTokenCallback implements SQLServerAccessTokenCallback {
+
+ @Override
+ public SqlAuthenticationToken getAccessToken(String spn, String stsurl) {
+ try {
+ HdiIdentityTokenServiceUtils provider = new HdiIdentityTokenServiceUtils();
+                AccessToken token = provider.getAccessToken("https://database.windows.net/");
+                // Convert the OffsetDateTime expiry to epoch milliseconds for SqlAuthenticationToken.
+                return new SqlAuthenticationToken(token.getToken(), token.getExpiresAt().toInstant().toEpochMilli());
+ } catch (Exception e) {
+ // handle exception...
+ return null;
+ }
+ }
+ }
+
+
+
+ package com.microsoft.azure.hdinsight.oauthtoken;
+
+ import java.sql.DriverManager;
+
+ public class HdiTokenClassBasedConnectionWithDriver {
+
+ public static void main(String[] args) throws Exception {
+
+            // Below is sample code that uses the HDI SQL callback.
+            // Replace <dbserver> with your server name and <dbname> with your database name.
+ String connectionUrl = "jdbc:sqlserver://<dbserver>.database.windows.net;"
+ + "database=<dbname>;"
+ + "accessTokenCallbackClass=com.microsoft.azure.hdinsight.oauthtoken.HdiSQLAccessTokenCallback;"
+ + "encrypt=true;"
+ + "trustServerCertificate=false;"
+ + "loginTimeout=30;";
+
+ DriverManager.getConnection(connectionUrl);
+
+ }
+
+ }
+
+ package com.microsoft.azure.hdinsight.oauthtoken;
+
+ import com.microsoft.azure.hdinsight.oauthtoken.HdiSQLAccessTokenCallback;
+ import com.microsoft.sqlserver.jdbc.SQLServerDataSource;
+ import java.sql.Connection;
+
+ public class HdiTokenClassBasedConnectionWithDS {
+
+ public static void main(String[] args) throws Exception {
+
+ HdiSQLAccessTokenCallback callback = new HdiSQLAccessTokenCallback();
+
+ SQLServerDataSource ds = new SQLServerDataSource();
+            ds.setServerName("<db-server>"); // Replace <db-server> with your server name.
+ ds.setDatabaseName("<dbname>"); // Replace <dbname> with your database name.
+ ds.setAccessTokenCallback(callback);
+
+ ds.getConnection();
+ }
+ }
+```
+
+
+
+**If the client is Kusto**
+
+This Kusto example doesn't fetch an access token directly.
+
+**Connect using a token provider callback:**
+
+The following example demonstrates an access token callback provider.
+
+```java
+import java.util.concurrent.Callable;
+
+import com.azure.core.credential.AccessToken;
+import com.microsoft.azure.hdinsight.oauthtoken.utils.HdiIdentityTokenServiceUtils;
+// Plus the ConnectionStringBuilder import from the Kusto Java client library
+// (the package varies by SDK version).
+
+public void createConnection() {
+
+    final String clusterUrl = "https://xyz.eastus.kusto.windows.net";
+
+    ConnectionStringBuilder conStrBuilder = ConnectionStringBuilder.createWithAadTokenProviderAuthentication(clusterUrl, new Callable<String>() {
+
+        public String call() throws Exception {
+
+            // Call the HDI util class with the scope. This returns the AccessToken; get the token string from it and return it.
+            // AccessToken contains the expiry time, so you can cache the token once acquired and request a new one
+            // when it's about to expire (say, <= 30 minutes to expiry).
+            HdiIdentityTokenServiceUtils hdiUtil = new HdiIdentityTokenServiceUtils();
+
+ AccessToken token = hdiUtil.getAccessToken(clusterUrl);
+
+ return token.getToken();
+
+ }
+
+ });
+ }
+```
+
+**Connect using pre-fetched Access Token:**
+
+Fetch the access token explicitly and pass it as an option:
+
+```scala
+// Scala snippet for a Spark job; KustoSinkOptions comes from the Kusto Spark connector.
+val targetResourceUri = "https://<my-kusto-cluster>"
+val tokenUtils = new HdiIdentityTokenServiceUtils()
+val token = tokenUtils.getAccessToken(targetResourceUri)
+
+df.write
+  .format("com.microsoft.kusto.spark.datasource")
+  .option(KustoSinkOptions.KUSTO_CLUSTER, "MyCluster")
+  .option(KustoSinkOptions.KUSTO_DATABASE, "MyDatabase")
+  .option(KustoSinkOptions.KUSTO_TABLE, "MyTable")
+  .option(KustoSinkOptions.KUSTO_ACCESS_TOKEN, token.getToken())
+  .mode(SaveMode.Append)
+  .save()
+```
+
+
+### Troubleshooting
+
+You integrated the **HdiIdentityTokenCredential** utility into your Spark job, but you hit the following exception while accessing the token at runtime (job execution):
+
+```text
+User class threw exception: java.lang.NoSuchFieldError: Companion
+at okhttp3.internal.Util.<clinit>(Util.kt:70)
+at okhttp3.internal.concurrent.TaskRunner.<clinit>(TaskRunner.kt:309)
+at okhttp3.ConnectionPool.<init>(ConnectionPool.kt:41)
+at okhttp3.ConnectionPool.<init>(ConnectionPool.kt:47)
+at okhttp3.OkHttpClient$Builder.<init>(OkHttpClient.kt:471)
+at com.microsoft.azure.hdinsight.oauthtoken.utils.HdiIdentityTokenServiceUtils.getAccessToken(HdiIdentityTokenServiceUtils.java:142)
+at com.microsoft.azure.hdinsight.oauthtoken.credential.HdiIdentityTokenCredential.getTokenSync(HdiIdentityTokenCredential.java:83)
+```
+**Answer:**
+
+The following is the Maven dependency tree of the `hdi-oauth-token-utils` library (as produced by `mvn dependency:tree`). Make sure these JARs are available at runtime (in the job container).
+
+```text
+[INFO] +- com.azure:azure-core:jar:1.49.0:compile
+[INFO] | +- com.azure:azure-json:jar:1.1.0:compile
+[INFO] | +- com.azure:azure-xml:jar:1.0.0:compile
+[INFO] | +- com.fasterxml.jackson.core:jackson-annotations:jar:2.13.5:compile
+[INFO] | +- com.fasterxml.jackson.core:jackson-core:jar:2.13.5:compile
+[INFO] | +- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:jar:2.13.5:compile
+[INFO] | \- io.projectreactor:reactor-core:jar:3.4.36:compile
+[INFO] | \- org.reactivestreams:reactive-streams:jar:1.0.4:compile
+[INFO] \- com.squareup.okhttp3:okhttp:jar:4.9.3:compile
+[INFO] +- com.squareup.okio:okio:jar:2.8.0:compile
+[INFO] | \- org.jetbrains.kotlin:kotlin-stdlib-common:jar:1.4.0:compile
+[INFO] \- org.jetbrains.kotlin:kotlin-stdlib:jar:1.4.10:compile
+```
+
+When you build the Spark uber JAR, make sure these JARs are shaded and included in the uber JAR. You can refer to the following maven-shade-plugin configuration:
+
+```xml
+ <plugin>
+ <groupId>org.apache.maven.plugins</groupId>
+ <artifactId>maven-shade-plugin</artifactId>
+ <version>${maven.plugin.shade.version}</version>
+ <configuration>
+ <createDependencyReducedPom>false</createDependencyReducedPom>
+ <relocations>
+ <relocation>
+ <pattern>okio</pattern>
+ <shadedPattern>com.shaded.okio</shadedPattern>
+ </relocation>
+ <relocation>
+ <pattern>okhttp</pattern>
+ <shadedPattern>com.shaded.okhttp</shadedPattern>
+ </relocation>
+ <relocation>
+ <pattern>okhttp3</pattern>
+ <shadedPattern>com.shaded.okhttp3</shadedPattern>
+ </relocation>
+ <relocation>
+ <pattern>kotlin</pattern>
+ <shadedPattern>com.shaded.kotlin</shadedPattern>
+ </relocation>
+ <relocation>
+ <pattern>com.fasterxml.jackson</pattern>
+ <shadedPattern>com.shaded.com.fasterxml.jackson</shadedPattern>
+ </relocation>
+ <relocation>
+ <pattern>com.azure</pattern>
+ <shadedPattern>com.shaded.com.azure</shadedPattern>
+            </relocation>
+        </relocations>
+    </configuration>
+    </plugin>
+```
+```
iot-hub Create Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/create-connect-device.md
Create a device identity for your device to connect to Azure IoT Hub. This artic
## Prerequisites
-* An IoT hub in your subscription. If you don't have an IoT hub, follow the steps in [create an IoT hub](./iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* Depending on which tool you use, either have access to the [Azure portal](https://portal.azure.com) or [install the Azure CLI](/cli/azure/install-azure-cli).
If you want to keep a device in your IoT hub's identity registry, but want to pr
* To prevent a device from connecting, set the **Enable connection to IoT Hub** parameter to **Disable**.
+ :::image type="content" source="./media/create-connect-device/disable-device.png" alt-text="Screenshot that shows disabling a device in the Azure portal.":::
+ * To completely remove a device from your IoT hub's identity registry, select **Delete**.
+ :::image type="content" source="./media/create-connect-device/delete-device.png" alt-text="Screenshot that shows deleting a device in the Azure portal.":::
+ ### [Azure CLI](#tab/cli) To disable a device, use the [az iot hub device-identity update](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-update) command and change the `status` of the device. For example:
iot-hub Create Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/create-hub.md
+
+ Title: Create an Azure IoT hub
+
+description: How to create, manage, and delete Azure IoT hubs through the Azure portal, CLI, and PowerShell. Includes information about retrieving the service connection string.
+++++ Last updated : 07/10/2024+++
+# Create and manage Azure IoT hubs
+
+This article describes how to create and manage an IoT hub.
+
+## Prerequisites
+
+Prepare the following prerequisites, depending on which tool you use.
+
+### [Azure portal](#tab/portal)
+
+* Access to the [Azure portal](https://portal.azure.com).
+
+### [Azure CLI](#tab/cli)
+
+* The Azure CLI installed on your development machine. If you don't have the Azure CLI, follow the steps to [Install the Azure CLI](/cli/azure/install-azure-cli).
+
+* A resource group in your Azure subscription. If you want to create a new resource group, use the [az group create](/cli/azure/group#az-group-create) command:
+
+ ```azurecli-interactive
+ az group create --name <RESOURCE_GROUP_NAME> --location <REGION>
+ ```
+
+### [Azure PowerShell](#tab/powershell)
+
+* Azure PowerShell installed on your development machine. If you don't have Azure PowerShell, follow the steps to [Install Azure PowerShell](/powershell/azure/install-azure-powershell).
+
+* A resource group in your Azure subscription. If you want to create a new resource group, use the [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) command:
+
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name <RESOURCE_GROUP_NAME> -Location "<REGION>"
+ ```
+++
+## Create an IoT hub
+
+### [Azure portal](#tab/portal)
++
+### [Azure CLI](#tab/cli)
+
+Use the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub in your resource group, using a globally unique name for your IoT hub. For example:
+
+```azurecli-interactive
+az iot hub create --name <NEW_NAME_FOR_YOUR_IOT_HUB> --resource-group <RESOURCE_GROUP_NAME> --sku S1
+```
++
+The previous command creates an IoT hub in the S1 pricing tier. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
+
+### [Azure PowerShell](#tab/powershell)
+
+Use the [New-AzIotHub](/powershell/module/az.IotHub/New-azIotHub) command to create an IoT hub in your resource group. The name of the IoT hub must be globally unique. For example:
+
+```azurepowershell-interactive
+New-AzIotHub `
+ -ResourceGroupName <RESOURCE_GROUP_NAME> `
+ -Name <NEW_NAME_FOR_YOUR_IOT_HUB> `
+ -SkuName S1 -Units 1 `
+ -Location "<REGION>"
+```
++
+The previous command creates an IoT hub in the S1 pricing tier. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
+++
+## Connect to an IoT hub
+
+Provide access permissions to applications and services that use IoT Hub functionality.
+
+### Connect with a connection string
+
+Connection strings are tokens that grant devices and services permissions to connect to IoT Hub based on shared access policies. Connection strings are an easy way to get started with IoT Hub, and are used in many samples and tutorials, but aren't recommended for production scenarios.
+
+For most sample scenarios, the **service** policy is sufficient. The service policy grants **Service Connect** permissions to access service endpoints. For more information about the other built-in shared access policies, see [IoT Hub permissions](./iot-hub-dev-guide-sas.md#access-control-and-permissions).
+
+To get the IoT Hub connection string for the **service** policy, follow these steps:
+
+#### [Azure portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), select **Resource groups**. Select the resource group where your hub is located, and then select your hub from the list of resources.
+
+1. On the left-side pane of your IoT hub, select **Shared access policies**.
+
+1. From the list of policies, select the **service** policy.
+
+1. Copy the **Primary connection string** and save the value.
+
+#### [Azure CLI](#tab/cli)
+
+Use the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get a connection string for your IoT hub that grants the service policy permissions:
+
+```azurecli-interactive
+az iot hub connection-string show --hub-name <YOUR_IOT_HUB_NAME> --policy-name service
+```
+
+The service connection string should look similar to the following example:
+
+```text
+"HostName=<IOT_HUB_NAME>.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey=<SHARED_ACCESS_KEY>"
+```
+
+#### [Azure PowerShell](#tab/powershell)
+
+Use the [Get-AzIotHubConnectionString](/powershell/module/az.iothub/get-aziothubconnectionstring) command to get a connection string for your IoT hub that grants the service policy permissions.
+
+```azurepowershell-interactive
+Get-AzIotHubConnectionString -ResourceGroupName "<YOUR_RESOURCE_GROUP>" -Name "<YOUR_IOT_HUB_NAME>" -KeyName "service"
+```
+
+The service connection string should look similar to the following example:
+
+```text
+"HostName=<IOT_HUB_NAME>.azure-devices.net;SharedAccessKeyName=service;SharedAccessKey=<SHARED_ACCESS_KEY>"
+```
+++
+### Connect with role assignments
+
+Authenticating access by using Microsoft Entra ID and controlling permissions by using Azure role-based access control (RBAC) provides improved security and ease of use over security tokens. To minimize potential security issues inherent in security tokens, we recommend that you enforce Microsoft Entra authentication whenever possible. For more information, see [Control access to IoT Hub by using Microsoft Entra ID](./authenticate-authorize-azure-ad.md).
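+
+For example, a back-end application can obtain a Microsoft Entra token by using the Azure Identity library instead of a connection string. The following is a minimal, illustrative Java sketch: `DefaultAzureCredential` comes from the `azure-identity` library, while the commented-out service client call is only an assumption about an SDK client that accepts a `TokenCredential`; it isn't an API defined by this article.
+
+```java
+import com.azure.core.credential.TokenCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+
+public class EntraAuthSketch {
+    public static void main(String[] args) {
+        // DefaultAzureCredential tries several sources in order: environment
+        // variables, managed identity, developer tool logins, and so on.
+        TokenCredential credential = new DefaultAzureCredentialBuilder().build();
+
+        // Hypothetical usage: pass the credential to an IoT Hub service SDK client
+        // that supports TokenCredential instead of a shared access policy connection string.
+        // ServiceClient client = new ServiceClient("<IOT_HUB_NAME>.azure-devices.net", credential, protocol);
+    }
+}
+```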
+
+## Delete an IoT hub
+
+When you delete an IoT hub, you lose the associated device identity registry. If you want to move or upgrade an IoT hub, or delete an IoT hub but keep the devices, consider [migrating an IoT hub using the Azure CLI](./migrate-hub-state-cli.md).
+
+### [Azure portal](#tab/portal)
+
+To delete an IoT hub, open your IoT hub in the Azure portal, then choose **Delete**.
++
+### [Azure CLI](#tab/cli)
+
+To delete an IoT hub, run the [az iot hub delete](/cli/azure/iot/hub#az-iot-hub-delete) command:
+
+```azurecli-interactive
+az iot hub delete --name <IOT_HUB_NAME> --resource-group <RESOURCE_GROUP_NAME>
+```
+
+### [Azure PowerShell](#tab/powershell)
+
+To delete the IoT hub, use the [Remove-AzIotHub](/powershell/module/az.iothub/remove-aziothub) command.
+
+```azurepowershell-interactive
+Remove-AzIotHub `
+    -ResourceGroupName <RESOURCE_GROUP_NAME> `
+    -Name <IOT_HUB_NAME>
+```
+++
+## Other tools for managing IoT hubs
+
+In addition to the Azure portal and CLI, the following tools are available to help you work with IoT hubs in whichever way supports your scenario:
+
+* **IoT Hub resource provider REST API**
+
+ Use the [IoT Hub Resource](/rest/api/iothub/iot-hub-resource) set of operations.
+
+* **Azure resource manager templates, Bicep, or Terraform**
+
+ Use the [Microsoft.Devices/IoTHubs](/azure/templates/microsoft.devices/iothubs) resource type. For examples, see [IoT Hub sample templates](/samples/browse/?terms=iot%20hub&languages=bicep%2Cjson).
+
+* **Visual Studio Code**
+
+ Use the [Azure IoT Hub extension for Visual Studio Code](./reference-iot-hub-extension.md).
iot-hub Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-cli.md
This article shows you how to create two Azure CLI sessions:
* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
iot-hub Device Management Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-dotnet.md
This article shows you how to create:
* Visual Studio.
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Management Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-java.md
This article shows you how to create:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Management Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-node.md
This article shows you how to create:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Management Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-python.md
This article shows you how to create:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-cli.md
This article shows you how to create two Azure CLI sessions:
* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
iot-hub Device Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-dotnet.md
In this article, you create two .NET console apps:
* Visual Studio.
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Twins Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-java.md
In this article, you create two Java console apps:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-node.md
In this article, you create two Node.js console apps:
To complete this article, you need:
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Device Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-python.md
In this article, you create two Python console apps:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub File Upload Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-dotnet.md
At the end of this article, you run two .NET console apps:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub File Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-java.md
These files are typically batch processed in the cloud, using tools such as [Azu
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub File Upload Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-node.md
At the end of this article, you run two Node.js console apps:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub File Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-python.md
At the end of this article, you run the Python console app **FileUpload.py**, wh
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub How To Routing Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-azure-cli.md
This article uses the Azure CLI to work with IoT Hub and other Azure services. Y
### IoT Hub
-You need an IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the Azure CLI](iot-hub-create-using-cli.md).
+Have an IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
### Endpoint service
iot-hub How To Routing Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-portal.md
To create an IoT hub route, you need an IoT hub that you created by using Azure
Be sure to have the following hub resource to use when you create your IoT hub route:
-* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the Azure portal](./iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
### Endpoint service
iot-hub How To Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-powershell.md
To create an IoT hub route, you need an IoT hub that you created by using Azure
Be sure to have the following hub resource to use when you create your IoT hub route:
-* An IoT hub in your [Azure subscription](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). If you don't have a hub yet, you can follow the steps to [create an IoT hub by using the New-AzIotHub PowerShell cmdlet](./iot-hub-create-using-powershell.md).
+* An Azure IoT hub. If you don't have an IoT hub, you can use the [New-AzIoTHub cmdlet](/powershell/module/az.iothub/new-aziothub) to create one or follow the steps in [Create an IoT hub](create-hub.md).
### Endpoint service
iot-hub Iot Hub Automatic Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management-cli.md
Automatic configurations run for the first time shortly after the configuration
## CLI prerequisites
-* An [IoT hub](../iot-hub/iot-hub-create-using-cli.md) in your Azure subscription.
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* [Azure CLI](/cli/azure/install-azure-cli) in your environment. At a minimum, your Azure CLI version must be 2.0.70 or above. Use `az ΓÇô-version` to validate. This version supports az extension commands and introduces the Knack command framework.
iot-hub Iot Hub Configure File Upload Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-configure-file-upload-cli.md
To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.m
* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
-* An Azure IoT hub. If you don't have an IoT hub, you can use the [`az iot hub create` command](/cli/azure/iot/hub#az-iot-hub-create) to create one or [Create an IoT hub using the portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* An Azure Storage account. If you don't have an Azure Storage account, you can use the Azure CLI to create one. For more information, see [Create a storage account](../storage/common/storage-account-create.md).
iot-hub Iot Hub Configure File Upload Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-configure-file-upload-powershell.md
To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.m
* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
-* An Azure IoT hub. If you don't have an IoT hub, you can use the [New-AzIoTHub cmdlet](/powershell/module/az.iothub/new-aziothub) to create one or use the portal to [Create an IoT hub](iot-hub-create-through-portal.md).
+* An Azure IoT hub. If you don't have an IoT hub, you can use the [New-AzIoTHub cmdlet](/powershell/module/az.iothub/new-aziothub) to create one or follow the steps in [Create an IoT hub](create-hub.md).
* An Azure storage account. If you don't have an Azure storage account, you can use the [Azure Storage PowerShell cmdlets](/powershell/module/az.storage/) to create one or use the portal to [Create a storage account](../storage/common/storage-account-create.md)
iot-hub Iot Hub Configure File Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-configure-file-upload.md
To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.m
* An active Azure account. If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.
-* An Azure IoT hub. If you don't have an IoT hub, see [Create an IoT hub using the portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
## Configure your IoT hub
iot-hub Iot Hub Create Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md
- Title: Create an IoT hub using the Azure portal
-description: How to create, manage, and delete Azure IoT hubs through the Azure portal. Includes information about pricing tiers, scaling, security, and messaging configuration.
----- Previously updated : 06/10/2024---
-# Create an IoT hub using the Azure portal
--
-This article describes how to create and manage an IoT hub, using the [Azure portal](https://portal.azure.com).
-
-## Create an IoT hub
--
-## Update the IoT hub
-
-You can change the settings of an existing IoT hub after it's created from the IoT Hub pane. Here are some properties you can set for an IoT hub:
-
-**Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units.
-
-**IP Filter**: Specify a range of IP addresses for the IoT hub to accept or reject.
-
-**Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
-
-For a complete list of options to update an IoT hub, see the [**az iot hub update** commands](/cli/azure/iot/hub#az-iot-hub-update) reference page.
-
-### Shared access policies
-
-You can also view or modify the list of shared access policies by choosing **Shared access policies** in the **Security settings** section. These policies define the permissions for devices and services to connect to IoT Hub.
-
-Select **Add shared access policy** to open the **Add shared access policy** page. You can enter the new policy name and the permissions that you want to associate with this policy, as shown in the following screenshot:
--
-* The **Registry Read** and **Registry Write** policies grant read and write access rights to the identity registry. These permissions are used by back-end cloud services to manage device identities. Choosing the write option automatically includes the read option.
-
-* The **Service Connect** policy grants permission to access service endpoints. This permission is used by back-end cloud services to send and receive messages from devices. It's also used to update and read device twin and module twin data.
-
-* The **Device Connect** policy grants permissions for sending and receiving messages using the IoT Hub device-side endpoints. This permission is used by devices to send and receive messages from an IoT hub or update and read device twin and module twin data. It's also used for file uploads.
-
-Select **Add** to add your newly created policy to the existing list.
-
-For more detailed information about the access granted by specific permissions, see [IoT Hub permissions](./iot-hub-dev-guide-sas.md#access-control-and-permissions).
-
-## Delete an IoT hub
-
-To delete an IoT hub, open your IoT hub in the Azure portal, then choose **Delete**.
--
-## Next steps
-
-Learn more about managing Azure IoT Hub:
-
-* [Message routing with IoT Hub](how-to-routing-portal.md)
-* [Monitor your IoT hub](monitor-iot-hub.md)
iot-hub Iot Hub Create Use Iot Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-use-iot-toolkit.md
- Title: Create an Azure IoT hub using the Azure IoT Hub extension for Visual Studio Code
-description: Learn how to use the Azure IoT Hub extension for Visual Studio Code to create an Azure IoT hub in a resource group.
----- Previously updated : 01/04/2019--
-# Create an IoT hub using the Azure IoT Hub extension for Visual Studio Code
--
-This article shows you how to use the [Azure IoT Hub extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to create an Azure IoT hub.
--
-## Prerequisites
--- [Visual Studio Code](https://code.visualstudio.com/)--- [Azure IoT Hub extension](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) installed for Visual Studio Code--- An Azure subscription: [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin--- An Azure resource group: [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) in the Azure portal-
-## Create an IoT hub
-
-The following steps show how to create an IoT hub in Visual Studio Code (VS Code):
-
-1. In VS Code, open the **Explorer** view.
-
-2. At the bottom of the Explorer, expand the **Azure IoT Hub** section.
-
- :::image type="content" source="./media/iot-hub-create-use-iot-toolkit/azure-iot-hub-devices.png" alt-text="A screenshot that shows the location of the Azure IoT Hub section in Visual Studio Code." lightbox="./media/iot-hub-create-use-iot-toolkit/azure-iot-hub-devices.png":::
-
-3. Select **Create IoT Hub** from the list in the **Azure IoT Hub** section.
-
- :::image type="content" source="./media/iot-hub-create-use-iot-toolkit/create-iot-hub.png" alt-text="A screenshot that shows the location of the Create IoT Hub list item in Visual Studio Code." lightbox="./media/iot-hub-create-use-iot-toolkit/create-iot-hub.png":::
-
-4. If you're not signed into Azure, a pop-up notification is shown in the bottom right corner to let you sign in to Azure. Select **Sign In** and follow the instructions to sign into Azure.
-
-5. From the command palette at the top of VS Code, select your Azure subscription.
-
-6. Select your resource group.
-
-7. Select a region.
-
-8. Select a pricing tier.
-
-9. Enter a globally unique name for your IoT hub, and then select the Enter key.
-
-10. Wait a few minutes until the IoT hub is created and confirmation is displayed in the **Output** panel.
-
-> [!TIP]
-> There is no option to delete your IoT hub in Visual Studio Code, however you can [delete your hub in the Azure portal](iot-hub-create-through-portal.md#delete-an-iot-hub).
-
-## Next steps
-
-Now that you've deployed an IoT hub using the Azure IoT Hub extension for Visual Studio Code, explore these articles:
--- [Use the Azure IoT Hub extension for Visual Studio Code to send and receive messages between your device and an IoT hub](iot-hub-vscode-iot-toolkit-cloud-device-messaging.md).--- [Use the Azure IoT Hub extension for Visual Studio Code for Azure IoT Hub device management](iot-hub-device-management-iot-toolkit.md)--- [See the Azure IoT Hub extension for Visual Studio Code wiki page](https://github.com/microsoft/vscode-azure-iot-toolkit/wiki).
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
- Title: Create an IoT hub using the Azure CLI
-description: Learn how to use the Azure CLI commands to create a resource group and then create an IoT hub in the resource group. Also learn how to remove the hub.
------ Previously updated : 08/23/2018--
-# Create an IoT hub using the Azure CLI
--
-This article shows you how to create an IoT hub using Azure CLI.
--
-When you create an IoT hub, you must create it in a resource group. Either use an existing resource group, or run the following [command to create a resource group](/cli/azure/resource):
-
- ```azurecli-interactive
- az group create --name {your resource group name} --location westus
- ```
-
- > [!TIP]
- > The previous example creates the resource group in the West US location. You can view a list of available locations by running this command:
- >
- > ```azurecli-interactive
- > az account list-locations -o table
- > ```
-
-## Create an IoT hub
-
-Use the Azure CLI to create a resource group and then add an IoT hub.
-
-Run the following [command to create an IoT hub](/cli/azure/iot/hub#az-iot-hub-create) in your resource group, using a globally unique name for your IoT hub:
-
- ```azurecli-interactive
- az iot hub create --name {your iot hub name} \
- --resource-group {your resource group name} --sku S1
- ```
-
- [!INCLUDE [iot-hub-pii-note-naming-hub](../../includes/iot-hub-pii-note-naming-hub.md)]
-
-The previous command creates an IoT hub in the S1 pricing tier for which you're billed. For more information, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
-
-For more information on Azure IoT Hub commands, see the [`az iot hub`](/cli/azure/iot/hub) reference article.
-
-## Update the IoT hub
-
-You can change the settings of an existing IoT hub after it's created. Here are some properties you can set for an IoT hub:
-
-**Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units.
-
-**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub.
-
-**Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
-
-For a complete list of options to update an IoT hub, see the [**az iot hub update** commands](/cli/azure/iot/hub#az-iot-hub-update) reference page.
-
-## Register a new device in the IoT hub
-
-In this section, you create a device identity in the identity registry in your IoT hub. A device can't connect to a hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). This device identity is [IoT Edge](../iot-edge/index.yml) enabled.
-
-Run the following command to create a device identity. Use your IoT hub name and create a new device ID name in place of `{iothub_name}` and `{device_id}`. This command creates a device identity with default authorization (shared private key).
-
-```azurecli-interactive
-az iot hub device-identity create -n {iothub_name} -d {device_id} --ee
-```
-
-The result is a JSON printout which includes your keys and other information.
-
-Alternatively, there are several options to register a device using different kinds of authorization. To explore the options, see [Examples](/cli/azure/iot/hub/device-identity#az-iot-hub-device-identity-create-examples) on the **az iot hub device-identity** reference page.
-
-## Remove an IoT hub
-
-There are various commands to [delete an individual resource](/cli/azure/resource), such as an IoT hub.
-
-To [delete an IoT hub](/cli/azure/iot/hub#az-iot-hub-delete), run the following command:
-
-```azurecli-interactive
az iot hub delete --name {your iot hub name} \
    --resource-group {your resource group name}
-```
-
-## Next steps
-
-Learn more about the commands available in the Microsoft Azure IoT extension for Azure CLI:
-
-* [IoT Hub-specific commands (az iot hub)](/cli/azure/iot/hub)
-* [All commands (az iot)](/cli/azure/iot)
iot-hub Iot Hub Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-powershell.md
- Title: Create an Azure IoT Hub using a PowerShell cmdlet
-description: Learn how to use the PowerShell cmdlets to create a resource group and then create an IoT hub in the resource group. Also learn how to remove the hub.
- Previously updated : 08/29/2018
-# Create an IoT hub using the New-AzIotHub cmdlet
--
-You can use Azure PowerShell cmdlets to create and manage Azure IoT hubs. This tutorial shows you how to create an IoT hub with PowerShell.
--
-Alternatively, you can use Azure Cloud Shell, if you'd rather not install additional modules onto your machine. The following section gets you started with Azure Cloud Shell.
--
-## Prerequisites
-
-You need a resource group to deploy an IoT hub. You can use an existing resource group or create a new one.
-
-To create a new resource group for your IoT hub, use the [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup) command. This example creates a resource group called **MyIoTRG1** in the **East US** region:
-
-```azurepowershell-interactive
-New-AzResourceGroup -Name MyIoTRG1 -Location "East US"
-```
-
-## Connect to your Azure subscription
-
-If you're using Cloud Shell, you're already logged in to your subscription, so you can skip this section. If you're running PowerShell locally instead, enter the following command to sign in to your Azure subscription:
-
-```powershell
-# Log into Azure account.
-Login-AzAccount
-```
-
-## Create an IoT hub
-
-Create an IoT hub using your resource group. Use the [New-AzIotHub](/powershell/module/az.IotHub/New-azIotHub) command. This example creates an **S1** hub called **MyTestIoTHub** in the **East US** region:
-
-```azurepowershell-interactive
-New-AzIotHub `
- -ResourceGroupName MyIoTRG1 `
- -Name MyTestIoTHub `
- -SkuName S1 -Units 1 `
- -Location "East US"
-```
-
-The name of the IoT hub must be globally unique.
--
-To list all the IoT hubs in your subscription, use the [Get-AzIotHub](/powershell/module/az.IotHub/Get-azIotHub) command.
-
-This example shows the S1 Standard IoT Hub you created in the previous step.
-
-```azurepowershell-interactive
-Get-AzIotHub
-```
-
-To delete the IoT hub, use the [Remove-AzIotHub](/powershell/module/az.iothub/remove-aziothub) command.
-
-```azurepowershell-interactive
-Remove-AzIotHub `
- -ResourceGroupName MyIoTRG1 `
- -Name MyTestIoTHub
-```
-
-## Update the IoT hub
-
-You can change the settings of an existing IoT hub after it's created. Here are some properties you can set for an IoT hub:
-
-**Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units.
-
-**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub.
-
-**Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on.
-
-Explore the [**Set-AzIotHub** commands](/powershell/module/az.iothub/set-aziothub) for a complete list of update options.
-
-## Next steps
-
-Now that you've deployed an IoT hub using a PowerShell cmdlet, explore more articles:
-
-* [PowerShell cmdlets for working with your IoT hub](/powershell/module/az.iothub/).
-
-* [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource).
-
-Develop for IoT Hub:
-
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
-
-Explore the capabilities of IoT Hub:
-
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub How To Order Connection State Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-order-connection-state-events.md
The sequence number is a string representation of a hexadecimal number. You can
* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An IoT hub under your Azure subscription. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
## Create a logic app
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
The web application sample for this tutorial is written in Node.js. The steps in
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps to create an IoT hub using the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md
In IoT Hub, managed identities can be used for egress connectivity from IoT Hub
- Understand the managed identity differences between *system-assigned* and *user-assigned* in [What are managed identities for Azure resources?](./../active-directory/managed-identities-azure-resources/overview.md)
-- An [IoT hub](iot-hub-create-through-portal.md)
+- An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
## System-assigned managed identity
iot-hub Iot Hub Preview Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-preview-mode.md
These features are improvements at the IoT Hub protocol and authentication layer
1. Select **IoT Hub** from the search results, and then select **Create**.
-1. On the **Basics** tab, complete the fields [as you normally would](iot-hub-create-through-portal.md) except for **Region**. Select one of these regions:
+1. On the **Basics** tab, complete the fields [as you normally would](create-hub.md) except for **Region**. Select one of these regions:
- Central US
- West Europe
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
- Title: Create an Azure IoT hub using the resource provider REST API
-description: Learn how to use the resource provider C# REST API to create and manage an IoT hub programmatically.
- Previously updated : 08/08/2017
-# Create an IoT hub using the resource provider REST API (.NET)
-
-You can use the [IoT Hub Resource](/rest/api/iothub/iothubresource) REST API to create and manage Azure IoT hubs programmatically. This article shows you how to use the IoT Hub Resource to create an IoT hub using **Postman**. Alternatively, you can use **cURL**. If any of these REST commands fail, find help with the [IoT Hub API common error codes](/rest/api/iothub/common-error-codes).
--
-## Prerequisites
-
-* [Azure PowerShell module](/powershell/azure/install-azure-powershell) or [Azure Cloud Shell](../cloud-shell/overview.md)
-
-* [Postman](/rest/api/azure/#how-to-call-azure-rest-apis-with-postman) or [cURL](https://curl.se/)
-
-## Get an Azure access token
-
-1. In Azure PowerShell or Azure Cloud Shell, sign in and then retrieve an access token with the following command. (If you're using Cloud Shell, you're already signed in, so you can skip signing in.)
-
- ```azurecli-interactive
- az account get-access-token --resource https://management.azure.com
- ```
- You should see a response in the console similar to this JSON (except the access token is long):
-
- ```json
- {
- "accessToken": "eyJ ... pZA",
- "expiresOn": "2022-09-16 20:57:52.000000",
- "subscription": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
- "tenant": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
- "tokenType": "Bearer"
- }
- ```
-
-1. In a new **Postman** request, from the **Auth** tab, select the **Type** dropdown list and choose **Bearer Token**.
-
- :::image type="content" source="media/iot-hub-rm-rest/select-bearer-token.png" alt-text="Screenshot that shows how to select the Bearer Token type of authorization in **Postman**.":::
-
-1. Paste the access token into the field labeled **Token**.
-
-Keep in mind the access token expires after 5-60 minutes, so you may need to generate another one.
-
-## Create an IoT hub
-
-1. Select the REST command dropdown list and choose the PUT command. Copy the URL below, replacing the values in the `{}` with your own values. The `{resourceName}` value is the name you'd like for your new IoT hub. Paste the URL into the field next to the PUT command.
-
- ```rest
- PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2021-04-12
- ```
-
- :::image type="content" source="media/iot-hub-rm-rest/paste-put-command.png" alt-text="Screenshot that shows how to add a PUT command in Postman.":::
-
- See the [PUT command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/create-or-update?tabs=HTTP).
-
-1. From the **Body** tab, select **raw** and **JSON** from the dropdown lists.
-
- :::image type="content" source="media/iot-hub-rm-rest/add-body-for-put.png" alt-text="Screenshot that shows how to add JSON to the body of your request in Postman.":::
-
-1. Copy the following JSON, replacing values in `<>` with your own. Paste the JSON into the box in **Postman** on the **Body** tab. Make sure your IoT hub name matches the one in your PUT URL. Change the location to your location (the location assigned to your resource group).
-
- ```json
- {
- "name": "<my-iot-hub>",
- "location": "<region>",
- "tags": {},
- "properties": {},
- "sku": {
- "name": "S1",
- "tier": "Standard",
- "capacity": 1
- }
- }
- ```
-
- See the [PUT command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/create-or-update?tabs=HTTP).
-
-1. Select **Send** to send your request and create a new IoT hub. A successful request will return a **201 Created** response with a JSON printout of your IoT hub specifications. You can save your request if you're using **Postman**.
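If you'd rather use **cURL** than **Postman**, the same request can be sent from a shell. Here's a minimal sketch, assuming you've exported your subscription ID, resource group, hub name, and access token as environment variables:

```bash
curl -X PUT \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Devices/IotHubs/$HUB_NAME?api-version=2021-04-12" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"location": "eastus", "sku": {"name": "S1", "tier": "Standard", "capacity": 1}}'
```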
-
-## View an IoT hub
-
-To see all the specifications of your new IoT hub, use a GET request. You can use the same URL that you used with the PUT request, but must erase the **Body** of that request (if not already blank) because a GET request can't have a body. Here's the GET request template:
-
-```rest
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2018-04-01
-```
-
-See the [GET command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/get?tabs=HTTP).
-
-## Update an IoT hub
-
-To update the IoT hub, reuse the same PUT request that created it and edit the JSON body with parameters of your choosing. For example, edit the body of the request by adding a **tags** property, then run the PUT request.
-
-```json
-{
- "name": "<my-iot-hub>",
- "location": "westus2",
- "tags": {
- "Animal": "Cat"
- },
- "properties": {},
- "sku": {
- "name": "S1",
- "tier": "Standard",
- "capacity": 1
- }
-}
-```
-
-```rest
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2018-04-01
-```
-
-The response will show the new tag added in the console. Remember, you may need to refresh your access token if too much time has passed since the last time you generated one.
-
-See the [PUT command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/create-or-update?tabs=HTTP).
-
-Alternatively, use the [PATCH command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/update?tabs=HTTP) to update tags.
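As a sketch of the PATCH alternative, a tags-only body avoids resending the full resource definition (environment variables as in the cURL example earlier):

```bash
curl -X PATCH \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Devices/IotHubs/$HUB_NAME?api-version=2021-04-12" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"tags": {"Animal": "Cat"}}'
```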
-
-## Delete an IoT hub
-
-If you're only testing, you might want to clean up your resources by deleting your new IoT hub with a DELETE request. Be sure to replace the values in `{}` with your own values. The `{resourceName}` value is the name of your IoT hub.
-
-```rest
-DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Devices/IotHubs/{resourceName}?api-version=2018-04-01
-```
-
-See the [DELETE command in the IoT Hub Resource](/rest/api/iothub/iot-hub-resource/delete?tabs=HTTP).
-
-## Next steps
-
-Since you've deployed an IoT hub using the resource provider REST API, you may want to explore further:
-
-* Read about the capabilities of the [IoT Hub resource provider REST API](/rest/api/iothub/iothubresource).
-
-* Read [Azure Resource Manager overview](../azure-resource-manager/management/overview.md) to learn more about the capabilities of Azure Resource Manager.
-
-To learn more about developing for IoT Hub, see the following articles:
-
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
-
-To further explore the capabilities of IoT Hub, see:
-
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Rm Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template-powershell.md
- Title: Create an Azure IoT hub using a template (PowerShell)
-description: How to use an Azure Resource Manager template to create an IoT hub with Azure PowerShell.
- Previously updated : 04/02/2019
-# Create an IoT hub using Azure Resource Manager template (PowerShell)
--
-This article shows you how to use an Azure Resource Manager template to create an IoT Hub and a [consumer group](../event-hubs/event-hubs-features.md#consumer-groups), using Azure PowerShell. Resource Manager templates are JSON files that define the resources you need to deploy for your solution. For more information about developing Resource Manager templates, see the [Azure Resource Manager documentation](../azure-resource-manager/index.yml).
-
-## Prerequisites
-
-[Azure PowerShell module](/powershell/azure/install-azure-powershell) or [Azure Cloud Shell](../cloud-shell/overview.md)
-
-Azure Cloud Shell is useful if you don't want to install the PowerShell module locally, because Cloud Shell runs in a browser.
-
-## Create an IoT hub
-
-The [Resource Manager JSON template](https://azure.microsoft.com/resources/templates/iothub-with-consumergroup-create/) used in this article is one of many templates from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). The JSON template creates an Azure IoT hub with three endpoints (eventhub, cloud-to-device, and messaging) and a consumer group. For more information on the IoT Hub template schema, see [Microsoft.Devices (IoT Hub) resource types](/azure/templates/microsoft.devices/iothub-allversions).
-
-Use the following PowerShell command to create a resource group which is then used to create an IoT hub. The JSON template is used in `-TemplateUri`.
-
-To run the following PowerShell script, select **Try it** to open Azure Cloud Shell. Copy the script, paste it into your shell, and then press Enter. The prompts help you create a new resource group, choose a region, and create a new IoT hub. Once you've answered them, a confirmation of your IoT hub prints to the console.
-
-```azurepowershell-interactive
-$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
-$location = Read-Host -Prompt "Enter the location (for example: centralus)"
-$iotHubName = Read-Host -Prompt "Enter the IoT Hub name"
-
-New-AzResourceGroup -Name $resourceGroupName -Location "$location"
-New-AzResourceGroupDeployment `
- -ResourceGroupName $resourceGroupName `
- -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devices/iothub-with-consumergroup-create/azuredeploy.json" `
- -iotHubName $iotHubName
-```
-
-> [!NOTE]
-> To use your own template, upload your template file to the Cloud Shell, and then use the `-TemplateFile` switch to specify the file name. For example, see [Deploy the template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=PowerShell#deploy-the-template).
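If you prefer the Azure CLI to PowerShell, an equivalent deployment looks roughly like the following sketch (the `iotHubName` parameter name comes from the quickstart template):

```azurecli-interactive
az group create --name MyIoTRG1 --location centralus
az deployment group create \
    --resource-group MyIoTRG1 \
    --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.devices/iothub-with-consumergroup-create/azuredeploy.json" \
    --parameters iotHubName={your iot hub name}
```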
--
-## Next steps
-
-Since you've deployed an IoT hub, using an Azure Resource Manager template, you may want to explore:
-
-* Capabilities of the [IoT Hub resource provider REST API][lnk-rest-api]
-* Capabilities of the [Azure Resource Manager][lnk-azure-rm-overview]
-* JSON syntax and properties to use in templates: [Microsoft.Devices resource types](/azure/templates/microsoft.devices/iothub-allversions)
-
-To learn more about developing for IoT Hub, see the [Azure IoT SDKs][lnk-sdks].
-
-To explore more capabilities of IoT Hub, see:
-
-* [Deploying AI to edge devices with Azure IoT Edge][lnk-iotedge]
-
-<!-- Links -->
-[lnk-free-trial]: https://azure.microsoft.com/pricing/free-trial/
-[lnk-status]: https://azure.microsoft.com/status/
-[lnk-powershell-install]: /powershell/azure/install-Az-ps
-[lnk-rest-api]: /rest/api/iothub/iothubresource
-[lnk-azure-rm-overview]: ../azure-resource-manager/management/overview.md
-[lnk-powershell-arm]: ../azure-resource-manager/management/manage-resources-powershell.md
-
-[lnk-sdks]: iot-hub-devguide-sdks.md
-
-[lnk-iotedge]: ../iot-edge/quickstart-linux.md
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
For added security, configure your IoT Hubs to *only* allow client connections t
* US Gov Arizona
* US Gov Virginia (TLS 1.0/1.1 support isn't available in this region - TLS 1.2 enforcement must be enabled or IoT hub creation fails)
-To enable TLS 1.2 enforcement, follow the steps in [Create IoT hub in Azure portal](iot-hub-create-through-portal.md), except
+To enable TLS 1.2 enforcement, follow the steps in [Create an IoT hub in Azure portal](create-hub.md), except
- Choose a **Region** from the list above.
- Under **Management -> Advanced -> Transport Layer Security (TLS) -> Minimum TLS version**, select **1.2**. This setting only appears for IoT hubs created in supported regions.
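Alternatively, TLS 1.2 enforcement can be set when the hub is created with the Azure CLI. A sketch, assuming one of the supported regions above:

```azurecli-interactive
az iot hub create --name {your iot hub name} \
    --resource-group {your resource group name} \
    --location {a supported region} \
    --sku S1 --min-tls-version 1.2
```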
iot-hub Iot Hub Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-upgrade.md
- Previously updated : 02/07/2023
+ Last updated : 06/21/2024
When you have more devices and need more capabilities, there are three ways to a
* Change the size of the IoT hub. For example, migrate a hub from the B1 tier to the B2 tier to increase the number of messages that each unit can support per day from 400,000 to 6 million. Both these changes can occur without interrupting existing operations.
* Upgrade to a higher tier. For example, upgrade a hub from the B1 tier to the S1 tier for access to advanced features with the same messaging capacity.
+
+> [!Warning]
+> You cannot upgrade from a Free Hub to a Paid Hub through our upgrade function. You must create a Paid hub and migrate the configurations and devices from the Free hub to the Paid hub. This process is documented at [How to migrate an IoT hub](./migrate-hub-state-cli.md).
+
+> [!Tip]
+> When you are upgrading your IoT Hub to a higher tier, some messages may be received out of order for a short period of time. If your business logic relies on the order of messages, we recommend upgrading during non-business hours.
If you want to downgrade your IoT hub, you can remove units and reduce the size
These examples are meant to help you understand how to adjust your IoT hub as your solution changes. For specific information about each tier's capabilities, you should always refer to [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/).
+Get more details about [How to choose the right IoT Hub tier](iot-hub-scaling.md).
+ ## Upgrade your existing IoT hub
-If you want to upgrade an existing IoT hub, you can do so from the Azure portal.
+If you want to upgrade an existing IoT hub, you can do so from the Azure portal or Azure CLI.
+
+### [Azure portal](#tab/portal)
+
+In the Azure portal, navigate to your IoT hub to view and update its settings.
1. Sign in to the [Azure portal](https://portal.azure.com/) and navigate to your IoT hub.
If you want to upgrade an existing IoT hub, you can do so from the Azure portal.
:::image type="content" source="./media/iot-hub-upgrade/message-pricing-advanced-options.png" alt-text="Screenshot that shows how to upgrade the size or units of your IoT hub.":::
-The maximum limit of device-to-cloud partitions for basic tier and standard tier IoT hubs is 32. Most IoT hubs only need four partitions. You choose the number of partitions when you create the IoT hub. The number of partitions relates the device-to-cloud messages to the number of simultaneous readers of these messages. The number of partitions remains unchanged when you migrate from the basic tier to the standard tier.
+### [Azure CLI](#tab/cli)
+
+Use the [az iot hub show](/cli/azure/iot/hub#az-iot-hub-show) command to view the current details of an IoT hub.
+
+```bash
+az iot hub show --name <HUB_NAME>
+```
-## Next steps
+Use the [az iot hub update](/cli/azure/iot/hub#az-iot-hub-update) command to make changes to an existing IoT hub.
-Get more details about [How to choose the right IoT Hub tier](iot-hub-scaling.md).
+For example, the following command updates the IoT hub to the `S2` tier (standard tier, size 2).
+
+```bash
+az iot hub update --name <HUB_NAME> --sku S2
+```
+
+For example, the following command sets the number of units for an IoT hub. The type of unit in an IoT hub is determined by the size value of the tier (1, 2, or 3), but you can scale up or down by changing the number of units.
+
+```bash
+az iot hub update -n MyIotHub --unit 2
+```
+---
+The maximum limit of device-to-cloud partitions for basic tier and standard tier IoT hubs is 32. Most IoT hubs only need four partitions. You choose the number of partitions when you create the IoT hub. The number of partitions relates the device-to-cloud messages to the number of simultaneous readers of these messages. The number of partitions remains unchanged when you migrate from the basic tier to the standard tier.
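Because the partition count is fixed for the life of the hub, it has to be chosen at creation time. A sketch with the CLI (the count of 8 is illustrative; 4 is the default):

```azurecli-interactive
az iot hub create --name <HUB_NAME> --resource-group <RESOURCE_GROUP> \
    --sku S1 --partition-count 8
```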
iot-hub Iot Hubs Manage Device Twin Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hubs-manage-device-twin-tags.md
Device twin tags can be used as a powerful tool to help you organize your device
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* At least two registered devices. If you don't have devices in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
To try out some of the concepts described in this article, see the following IoT
* [How to use the device twin](device-twins-node.md) * [How to use device twin properties](tutorial-device-twins.md)
-* [Device management with the Azure IoT Hub extension for VS Code](iot-hub-device-management-iot-toolkit.md)
+
iot-hub Module Twins C https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-c.md
- Title: Get started with module identity and module twins (C)
-description: Learn how to create module identities and update module twins using the Azure IoT Hub device SDK for C.
- Previously updated : 06/25/2018
-# Get started with IoT Hub module identity and module twin (C)
--
-[Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provide visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system devices or firmware devices, they allow for isolated configuration and conditions for each component.
--
-At the end of this article, you have two C apps:
-
-* **CreateIdentities**: creates a device identity, a module identity, and the associated security keys to connect your device and module clients.
-
-* **UpdateModuleTwinReportedProperties**: sends updated module twin reported properties to your IoT Hub.
-
-> [!NOTE]
-> See [Azure IoT SDKs](iot-hub-devguide-sdks.md) for more information about the SDK tools available to build both device and back-end apps.
-
-## Prerequisites
-
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
-
-* The latest [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
-
-## Get the IoT hub connection string
---
-## Create a device identity and a module identity in IoT Hub
-
-In this section, you create a C app that creates a device identity and a module identity in the identity registry in your IoT hub. A device or module can't connect to IoT hub unless it has an entry in the identity registry. For more information, see [Understand the identity registry in your IoT hub](iot-hub-devguide-identity-registry.md). When you run this console app, it generates a unique ID and key for both device and module. Your device and module use these values to identify themselves when they send device-to-cloud messages to IoT Hub. The IDs are case-sensitive.
-
-Add the following code to your C file:
-
-```C
-#include <stdio.h>
-#include <stdlib.h>
-
-#include "azure_c_shared_utility/crt_abstractions.h"
-#include "azure_c_shared_utility/threadapi.h"
-#include "azure_c_shared_utility/platform.h"
-
-#include "iothub_service_client_auth.h"
-#include "iothub_registrymanager.h"
-
-static const char* hubConnectionString ="[your hub's connection string]"; // modify
-
-static void createDevice(IOTHUB_REGISTRYMANAGER_HANDLE
- iotHubRegistryManagerHandle, const char* deviceId)
-{
- IOTHUB_REGISTRY_DEVICE_CREATE_EX deviceCreateInfo;
- IOTHUB_REGISTRYMANAGER_RESULT result;
-
- (void)memset(&deviceCreateInfo, 0, sizeof(deviceCreateInfo));
- deviceCreateInfo.version = 1;
- deviceCreateInfo.deviceId = deviceId;
- deviceCreateInfo.primaryKey = "";
- deviceCreateInfo.secondaryKey = "";
- deviceCreateInfo.authMethod = IOTHUB_REGISTRYMANAGER_AUTH_SPK;
-
- IOTHUB_DEVICE_EX deviceInfoEx;
- memset(&deviceInfoEx, 0, sizeof(deviceInfoEx));
- deviceInfoEx.version = 1;
-
- // Create device
- result = IoTHubRegistryManager_CreateDevice_Ex(iotHubRegistryManagerHandle,
- &deviceCreateInfo, &deviceInfoEx);
- if (result == IOTHUB_REGISTRYMANAGER_OK)
- {
- (void)printf("IoTHubRegistryManager_CreateDevice: Device has been created successfully: deviceId=%s, primaryKey=%s\n", deviceInfoEx.deviceId, deviceInfoEx.primaryKey);
- }
- else if (result == IOTHUB_REGISTRYMANAGER_DEVICE_EXIST)
- {
- (void)printf("IoTHubRegistryManager_CreateDevice: Device already exists\n");
- }
- else if (result == IOTHUB_REGISTRYMANAGER_ERROR)
- {
- (void)printf("IoTHubRegistryManager_CreateDevice failed\n");
- }
- // You will need to Free the returned device information after it was created
- IoTHubRegistryManager_FreeDeviceExMembers(&deviceInfoEx);
-}
-
-static void createModule(IOTHUB_REGISTRYMANAGER_HANDLE iotHubRegistryManagerHandle, const char* deviceId, const char* moduleId)
-{
- IOTHUB_REGISTRY_MODULE_CREATE moduleCreateInfo;
- IOTHUB_REGISTRYMANAGER_RESULT result;
-
- (void)memset(&moduleCreateInfo, 0, sizeof(moduleCreateInfo));
- moduleCreateInfo.version = 1;
- moduleCreateInfo.deviceId = deviceId;
- moduleCreateInfo.moduleId = moduleId;
- moduleCreateInfo.primaryKey = "";
- moduleCreateInfo.secondaryKey = "";
- moduleCreateInfo.authMethod = IOTHUB_REGISTRYMANAGER_AUTH_SPK;
-
- IOTHUB_MODULE moduleInfo;
- memset(&moduleInfo, 0, sizeof(moduleInfo));
- moduleInfo.version = 1;
-
- // Create module
- result = IoTHubRegistryManager_CreateModule(iotHubRegistryManagerHandle, &moduleCreateInfo, &moduleInfo);
- if (result == IOTHUB_REGISTRYMANAGER_OK)
- {
- (void)printf("IoTHubRegistryManager_CreateModule: Module has been created successfully: deviceId=%s, moduleId=%s, primaryKey=%s\n", moduleInfo.deviceId, moduleInfo.moduleId, moduleInfo.primaryKey);
- }
- else if (result == IOTHUB_REGISTRYMANAGER_DEVICE_EXIST)
- {
- (void)printf("IoTHubRegistryManager_CreateModule: Module already exists\n");
- }
- else if (result == IOTHUB_REGISTRYMANAGER_ERROR)
- {
- (void)printf("IoTHubRegistryManager_CreateModule failed\n");
- }
- // You will need to Free the returned module information after it was created
- IoTHubRegistryManager_FreeModuleMembers(&moduleInfo);
-}
-
-int main(void)
-{
- (void)platform_init();
-
- const char* deviceId = "myFirstDevice";
- const char* moduleId = "myFirstModule";
- IOTHUB_SERVICE_CLIENT_AUTH_HANDLE iotHubServiceClientHandle = NULL;
- IOTHUB_REGISTRYMANAGER_HANDLE iotHubRegistryManagerHandle = NULL;
-
- if ((iotHubServiceClientHandle = IoTHubServiceClientAuth_CreateFromConnectionString(hubConnectionString)) == NULL)
- {
- (void)printf("IoTHubServiceClientAuth_CreateFromConnectionString failed\n");
- }
- else if ((iotHubRegistryManagerHandle = IoTHubRegistryManager_Create(iotHubServiceClientHandle)) == NULL)
- {
- (void)printf("IoTHubServiceClientAuth_CreateFromConnectionString failed\n");
- }
- else
- {
- createDevice(iotHubRegistryManagerHandle, deviceId);
- createModule(iotHubRegistryManagerHandle, deviceId, moduleId);
- }
-
- if (iotHubRegistryManagerHandle != NULL)
- {
- (void)printf("Calling IoTHubRegistryManager_Destroy...\n");
- IoTHubRegistryManager_Destroy(iotHubRegistryManagerHandle);
- }
-
- if (iotHubServiceClientHandle != NULL)
- {
- (void)printf("Calling IoTHubServiceClientAuth_Destroy...\n");
- IoTHubServiceClientAuth_Destroy(iotHubServiceClientHandle);
- }
-
- platform_deinit();
- return 0;
-}
-```
-
-This app creates a device identity with ID **myFirstDevice** and a module identity with ID **myFirstModule** under device **myFirstDevice**. (If an identity with that ID already exists in the identity registry, the app reports it and continues.) The app then displays the primary key for each identity. You use the module's primary key in the simulated module app to connect to your IoT hub.
-
-> [!NOTE]
-> The IoT Hub identity registry only stores device and module identities to enable secure access to the IoT hub. The identity registry stores device IDs and keys to use as security credentials. The identity registry also stores an enabled/disabled flag for each device that you can use to disable access for that device. If your application needs to store other device-specific metadata, it should use an application-specific store. There is no enabled/disabled flag for module identities. For more information, see [IoT Hub developer guide](iot-hub-devguide-identity-registry.md).
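If you only want to verify the identities, or create them without the C service app, the Azure IoT extension for the Azure CLI offers equivalent commands. A sketch using the sample IDs above (assumes `az extension add --name azure-iot` has been run):

```azurecli-interactive
az iot hub device-identity create -n {iothub_name} -d myFirstDevice
az iot hub module-identity create -n {iothub_name} -d myFirstDevice -m myFirstModule
```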
-
-## Update the module twin using C device SDK
-
-In this section, you create a C app on your simulated device that updates the module twin reported properties.
-
-1. Get your module connection string. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub and select **IoT devices**. Find **myFirstDevice**, open it, and verify that **myFirstModule** was successfully created. Copy the module connection string; you need it in the next step.
-
- ![Azure portal module detail](./media/module-twins-c/module-detail.png)
-
-2. **Create UpdateModuleTwinReportedProperties app**
-
- Add the following to your C file:
-
- ```C
- #include <stdio.h>
- #include <stdlib.h>
-
- #include "azure_c_shared_utility/crt_abstractions.h"
- #include "azure_c_shared_utility/threadapi.h"
- #include "azure_c_shared_utility/platform.h"
-
- #include "iothub_service_client_auth.h"
- #include "iothub_devicetwin.h"
-
- const char* deviceId = "myFirstDevice";
- const char* moduleId = "myFirstModule";
- static const char* hubConnectionString = "[your hub's connection string]"; // modify
- const char* testJson = "{\"properties\":{\"desired\":{\"integer_property\": -1234, \"string_property\": \"abcd\"}}}";
-
- int main(void)
- {
- (void)platform_init();
-
- IOTHUB_SERVICE_CLIENT_AUTH_HANDLE iotHubServiceClientHandle = NULL;
- IOTHUB_SERVICE_CLIENT_DEVICE_TWIN_HANDLE iothubDeviceTwinHandle = NULL;
-
- if ((iotHubServiceClientHandle = IoTHubServiceClientAuth_CreateFromConnectionString(hubConnectionString)) == NULL)
- {
- (void)printf("IoTHubServiceClientAuth_CreateFromConnectionString failed\n");
- }
- else if ((iothubDeviceTwinHandle = IoTHubDeviceTwin_Create(iotHubServiceClientHandle)) == NULL)
- {
- (void)printf("IoTHubServiceClientAuth_CreateFromConnectionString failed\n");
- }
- else
- {
- char *result = IoTHubDeviceTwin_UpdateModuleTwin(iothubDeviceTwinHandle, deviceId, moduleId, testJson);
- printf("IoTHubDeviceTwin_UpdateModuleTwin returned %s\n", result);
- }
-
- if (iothubDeviceTwinHandle != NULL)
- {
- (void)printf("Calling IoTHubDeviceTwin_Destroy...\n");
- IoTHubDeviceTwin_Destroy(iothubDeviceTwinHandle);
- }
-
- if (iotHubServiceClientHandle != NULL)
- {
- (void)printf("Calling IoTHubServiceClientAuth_Destroy...\n");
- IoTHubServiceClientAuth_Destroy(iotHubServiceClientHandle);
- }
-
- platform_deinit();
- return 0;
- }
- ```
-
-This code sample shows you how to retrieve the module twin and update reported properties.
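To confirm the update landed without writing more service-side code, you can inspect the module twin from the Azure CLI. A sketch, again assuming the Azure IoT extension is installed:

```azurecli-interactive
az iot hub module-twin show -n {iothub_name} -d myFirstDevice -m myFirstModule
```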
-
-## Get updates on the device side
-
-In addition to the previous code, you can add the following code block to get the twin update message on your device:
-
-```C
-#include <stdio.h>
-#include <stdlib.h>
-
-#include "azure_c_shared_utility/crt_abstractions.h"
-#include "azure_c_shared_utility/macro_utils.h"
-#include "azure_c_shared_utility/threadapi.h"
-#include "azure_c_shared_utility/platform.h"
-#include "iothub_module_client_ll.h"
-#include "iothub_client_options.h"
-#include "iothub_message.h"
-
-// The protocol you wish to use should be uncommented
-//
-//#define SAMPLE_MQTT
-//#define SAMPLE_MQTT_OVER_WEBSOCKETS
-#define SAMPLE_AMQP
-//#define SAMPLE_AMQP_OVER_WEBSOCKETS
-//#define SAMPLE_HTTP
-
-#ifdef SAMPLE_MQTT
- #include "iothubtransportmqtt.h"
-#endif // SAMPLE_MQTT
-#ifdef SAMPLE_MQTT_OVER_WEBSOCKETS
- #include "iothubtransportmqtt_websockets.h"
-#endif // SAMPLE_MQTT_OVER_WEBSOCKETS
-#ifdef SAMPLE_AMQP
- #include "iothubtransportamqp.h"
-#endif // SAMPLE_AMQP
-#ifdef SAMPLE_AMQP_OVER_WEBSOCKETS
- #include "iothubtransportamqp_websockets.h"
-#endif // SAMPLE_AMQP_OVER_WEBSOCKETS
-#ifdef SAMPLE_HTTP
- #include "iothubtransporthttp.h"
-#endif // SAMPLE_HTTP
-
-/* Paste in your module connection string */
-static const char* connectionString = "[Fill in connection string]";
-
-static bool g_continueRunning;
-#define DOWORK_LOOP_NUM 3
-
-static void deviceTwinCallback(DEVICE_TWIN_UPDATE_STATE update_state, const unsigned char* payLoad, size_t size, void* userContextCallback)
-{
- (void)userContextCallback;
-
- printf("Device Twin update received (state=%s, size=%zu): %s\r\n",
- MU_ENUM_TO_STRING(DEVICE_TWIN_UPDATE_STATE, update_state), size, payLoad);
-}
-
-static void reportedStateCallback(int status_code, void* userContextCallback)
-{
- (void)userContextCallback;
- printf("Device Twin reported properties update completed with result: %d\r\n", status_code);
-
- g_continueRunning = false;
-}
-
-void iothub_module_client_sample_device_twin_run(void)
-{
- IOTHUB_CLIENT_TRANSPORT_PROVIDER protocol;
- IOTHUB_MODULE_CLIENT_LL_HANDLE iotHubModuleClientHandle;
- g_continueRunning = true;
-
- // Select the Protocol to use with the connection
-#ifdef SAMPLE_MQTT
- protocol = MQTT_Protocol;
-#endif // SAMPLE_MQTT
-#ifdef SAMPLE_MQTT_OVER_WEBSOCKETS
- protocol = MQTT_WebSocket_Protocol;
-#endif // SAMPLE_MQTT_OVER_WEBSOCKETS
-#ifdef SAMPLE_AMQP
- protocol = AMQP_Protocol;
-#endif // SAMPLE_AMQP
-#ifdef SAMPLE_AMQP_OVER_WEBSOCKETS
- protocol = AMQP_Protocol_over_WebSocketsTls;
-#endif // SAMPLE_AMQP_OVER_WEBSOCKETS
-#ifdef SAMPLE_HTTP
- protocol = HTTP_Protocol;
-#endif // SAMPLE_HTTP
-
- if (platform_init() != 0)
- {
- (void)printf("Failed to initialize the platform.\r\n");
- }
- else
- {
- if ((iotHubModuleClientHandle = IoTHubModuleClient_LL_CreateFromConnectionString(connectionString, protocol)) == NULL)
- {
- (void)printf("ERROR: iotHubModuleClientHandle is NULL!\r\n");
- }
- else
- {
- bool traceOn = true;
- const char* reportedState = "{ \"device_property\": \"new_value\" }";
- size_t reportedStateSize = strlen(reportedState);
-
- (void)IoTHubModuleClient_LL_SetOption(iotHubModuleClientHandle, OPTION_LOG_TRACE, &traceOn);
-
- // Check the return of all API calls when developing your solution. Return checks omitted for sample simplification.
-
- (void)IoTHubModuleClient_LL_SetModuleTwinCallback(iotHubModuleClientHandle, deviceTwinCallback, iotHubModuleClientHandle);
- (void)IoTHubModuleClient_LL_SendReportedState(iotHubModuleClientHandle, (const unsigned char*)reportedState, reportedStateSize, reportedStateCallback, iotHubModuleClientHandle);
-
- do
- {
- IoTHubModuleClient_LL_DoWork(iotHubModuleClientHandle);
- ThreadAPI_Sleep(1);
- } while (g_continueRunning);
-
- for (size_t index = 0; index < DOWORK_LOOP_NUM; index++)
- {
- IoTHubModuleClient_LL_DoWork(iotHubModuleClientHandle);
- ThreadAPI_Sleep(1);
- }
-
- IoTHubModuleClient_LL_Destroy(iotHubModuleClientHandle);
- }
- platform_deinit();
- }
-}
-
-int main(void)
-{
- iothub_module_client_sample_device_twin_run();
- return 0;
-}
-```
-
-## Next steps
-
-To continue getting started with IoT Hub and to explore other IoT scenarios, see:
-
-* [Getting started with device management](device-management-node.md)
-* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Module Twins Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-cli.md
This article shows you how to create an Azure CLI session in which you:
* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
iot-hub Module Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md
At the end of this article, you have two .NET console apps:
* Visual Studio.
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
## Module authentication
iot-hub Module Twins Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-node.md
At the end of this article, you have two Node.js apps:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux.
iot-hub Module Twins Portal Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md
In this article, you will learn how to:
* Visual Studio.
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Module Twins Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-python.md
At the end of this article, you have three Python apps:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable.
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md
Azure CLI requires you to be logged into your Azure account. All communication b
In this section, you use the Azure CLI to create a resource group and an IoT hub. An Azure resource group is a logical container into which Azure resources are deployed and managed. An IoT hub acts as a central message hub for bi-directional communication between your IoT application and the devices.
-> [!TIP]
-> Optionally, you can create an Azure resource group, an IoT hub, and other resources by using the [Azure portal](iot-hub-create-through-portal.md), [Visual Studio Code](iot-hub-create-use-iot-toolkit.md), or other programmatic methods.
1. In the first CLI session, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *eastus* location.

   ```azurecli
   az group create --name MyResourceGroup --location eastus
   ```
iot-hub Raspberry Pi Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/raspberry-pi-get-started.md
This article provides basic steps for getting starting with connecting a Raspber
Have the following prerequisites prepared before starting this article: * An Azure subscription.
-* An IoT hub with a device registered to it. If you don't have a hub with a registered device already, see [Create an IoT hub using the Azure portal](./iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
+* A device registered in your IoT hub. If you don't have devices in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
## Use the online simulator
iot-hub Schedule Jobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-cli.md
This article shows you how to create two Azure CLI sessions:
* Azure CLI. You can also run the commands in this article using the [Azure Cloud Shell](../cloud-shell/overview.md), an interactive CLI shell that runs in your browser or in an app such as Windows Terminal. If you use the Cloud Shell, you don't need to install anything. If you prefer to use the CLI locally, this article requires Azure CLI version 2.36 or later. Run `az --version` to find the version. To locally install or upgrade Azure CLI, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* An IoT hub in your Azure subscription. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
iot-hub Schedule Jobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-dotnet.md
This article shows you how to create two .NET (C#) console apps:
* Visual Studio.
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Schedule Jobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-java.md
This article shows you how to create two Java apps:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Schedule Jobs Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-node.md
This article shows you how to create two Node.js apps:
## Prerequisites
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Schedule Jobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-python.md
This article shows you how to create two Python apps:
* An active Azure account. (If you don't have an account, you can create a [free account](https://azure.microsoft.com/pricing/free-trial/) in just a couple of minutes.)
-* An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* A device registered in your IoT hub. If you don't have a device in your IoT hub, follow the steps in [Register a device](create-connect-device.md#register-a-device).
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
In this tutorial, you perform the following tasks:
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* This tutorial uses sample code from [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp).
iot-hub Tutorial X509 Test Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-test-certs.md
The following tutorial uses [OpenSSL](https://www.openssl.org/) and the [OpenSSL
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](iot-hub-create-through-portal.md).
+* An IoT hub in your Azure subscription. If you don't have a hub yet, you can follow the steps in [Create an IoT hub](create-hub.md).
* The latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository.
Perform the following steps to:
## Next steps
-You can register your device with your IoT hub for testing the client certificate that you've created for that device. For more information about registering a device, see [Create and manage device identities](iot-hub-create-through-portal.md).
+You can register your device with your IoT hub for testing the client certificate that you've created for that device. For more information about registering a device, see [Create and manage device identities](create-connect-device.md).
If you have multiple related devices to test, you can use the Azure IoT Hub Device Provisioning Service to provision multiple devices in an enrollment group. For more information about using enrollment groups in the Device Provisioning Service, see [Tutorial: Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
lighthouse Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/architecture.md
Title: Azure Lighthouse architecture
description: Learn about the relationship between tenants in Azure Lighthouse, and the resources created in the customer's tenant that enable that relationship.
- Previously updated : 05/10/2023
+ Last updated : 07/10/2024
At a high level, here's how Azure Lighthouse works for the managing tenant:
2. Specify this access and onboard the customer to Azure Lighthouse either by [publishing a Managed Service offer to Azure Marketplace](../how-to/publish-managed-services-offers.md), or by [deploying an Azure Resource Manager template](../how-to/onboard-customer.md). This onboarding process creates the two resources described above (registration definition and registration assignment) in the customer's tenant. (A CLI sketch of the template deployment follows this list.)
3. Once the customer has been onboarded, authorized users sign in to your managing tenant and perform tasks at the specified customer scope (subscription or resource group) per the access that you defined. Customers can review all actions taken, and they can remove access at any time.
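As a sketch of the template-based onboarding path in step 2, the customer runs a subscription-level deployment in their own tenant (the deployment name and file names are placeholders):

```azurecli-interactive
az deployment sub create \
    --name LighthouseOnboarding \
    --location eastus \
    --template-file onboardCustomer.json \
    --parameters onboardCustomer.parameters.json
```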
-While in most cases only one service provider will be managing specific resources for a customer, it's possible for the customer to create multiple delegations for the same subscription or resource group, allowing multiple service providers to have access. This scenario also enables ISV scenarios that [project resources from the service provider's tenant to multiple customers](isv-scenarios.md#saas-based-multi-tenant-offerings).
+While in most cases only one service provider will be managing specific resources for a customer, it's possible for the customer to create multiple delegations for the same subscription or resource group, allowing multiple service providers to have access. This scenario also enables ISV scenarios that [project resources from the service provider's tenant to multiple customers](isv-scenarios.md#saas-based-multitenant-offerings).
## Next steps
lighthouse Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/enterprise.md
Title: Azure Lighthouse in enterprise scenarios
description: The capabilities of Azure Lighthouse can be used to simplify cross-tenant management within an enterprise which uses multiple Microsoft Entra tenants.
- Previously updated : 05/10/2023
+ Last updated : 07/10/2024
In most enterprise scenarios, you'll want to delegate a full subscription to A
Either way, be sure to [follow the principle of least privilege when defining which users will have access to delegated resources](recommended-security-practices.md#assign-permissions-to-groups-using-the-principle-of-least-privilege). Doing so helps to ensure that users only have the permissions needed to perform the required tasks and reduces the chance of inadvertent errors.
-Azure Lighthouse only provides logical links between a managing tenant and managed tenants, rather than physically moving data or resources. Furthermore, the access always goes in only one direction, from the managing tenant to the managed tenants. Users and groups in the managing tenant should continue to use multifactor authentication when performing management operations on managed tenant resources.
+Azure Lighthouse only provides logical links between a managing tenant and managed tenants, rather than physically moving data or resources. Furthermore, the access always goes in only one direction, from the managing tenant to the managed tenants. Users and groups in the managing tenant should use multifactor authentication when performing management operations on managed tenant resources.
-Enterprises with internal or external governance and compliance guardrails can use [Azure Activity logs](../../azure-monitor/essentials/platform-logs-overview.md) to meet their transparency requirements. When enterprise tenants have established managing and managed tenant relationships, users in each tenant can view logged activity to see actions taken by users in the managing tenant.
+Enterprises with internal or external governance and compliance guardrails can use [Azure Activity logs](../../azure-monitor/essentials/activity-log.md) to meet their transparency requirements. When enterprise tenants have established managing and managed tenant relationships, users in each tenant can view logged activity to see actions taken by users in the managing tenant.
## Onboarding considerations
lighthouse Isv Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/isv-scenarios.md
- Title: Azure Lighthouse in ISV scenarios
-description: The capabilities of Azure Lighthouse can be used by ISVs for more flexibility with customer offerings.
- Previously updated : 05/10/2023
+ Title: Azure Lighthouse in ISV scenarios
+description: ISVs can use the capabilities of Azure Lighthouse for more flexibility with customer offerings.
+ Last updated : 07/10/2024

# Azure Lighthouse in ISV scenarios
-A typical scenario for [Azure Lighthouse](../overview.md) involves a service provider that manages resources in its customers' Microsoft Entra tenants. However, the capabilities of Azure Lighthouse can also be used by Independent Software Vendors (ISVs) using SaaS-based offerings with their customers. Azure Lighthouse can be especially useful for ISVs who are offering managed services or support that require access to the subscription scope.
+A typical scenario for [Azure Lighthouse](../overview.md) involves a service provider that manages resources in its customers' Microsoft Entra tenants. Independent Software Vendors (ISVs) using SaaS-based offerings with their customers may also benefit from the capabilities of Azure Lighthouse. Using Azure Lighthouse can be especially helpful for ISVs who offer managed services that require access to a customer's subscription scope.
## Managed Service offers in Azure Marketplace
For more information, see [Publish a Managed Service offer to Azure Marketplace]
For more information, see [Azure Lighthouse and Azure managed applications](managed-applications.md).
-## SaaS-based multi-tenant offerings
+## SaaS-based multitenant offerings
An additional scenario is where the ISV hosts resources in a subscription in their own tenant, then uses Azure Lighthouse to let customers access those specific resources. Once this access is granted, the customer can log in to their own tenant and access the resources as needed. The ISV maintains their IP in their own tenant, and can use their own support plan to raise tickets related to the solution hosted in their tenant, rather than the customer's plan. Since the resources are in the ISV's tenant, all actions can be performed directly by the ISV, such as logging into VMs, installing apps, and performing maintenance tasks.
-In this scenario, users in the customer's tenant are essentially granted access as a "managing tenant", even though the customer is not managing the ISV's resources. Because they are accessing the ISV's tenant directly, it's important to grant only the minimum permissions necessary, so that customers can't inadvertently make changes to the solution or other ISV resources.
+In this scenario, users in the customer's tenant are essentially granted access as a "managing tenant," even though the customer isn't managing the ISV's resources. Because the customer is directly accessing the ISV's tenant, it's important to grant only the minimum permissions necessary, so that they can't make changes to the solution or access other ISV resources.
To enable this architecture, the ISV needs to obtain the object ID for a user group in the customer's Microsoft Entra tenant, along with their tenant ID. The ISV then builds an ARM template granting this user group the appropriate permissions, and [deploys it on the ISV's subscription](../how-to/onboard-customer.md) that contains the resources that the customer will access.
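For illustration, here's a minimal sketch of such a template, assuming hypothetical placeholder values for the customer's tenant ID and group object ID. The `roleDefinitionId` shown is the built-in Reader role; in practice you'd grant whatever minimal roles the solution requires, and pair this definition with a `Microsoft.ManagedServices/registrationAssignments` resource to complete the onboarding:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ManagedServices/registrationDefinitions",
      "apiVersion": "2022-10-01",
      "name": "[guid('customer-access-to-isv-solution')]",
      "properties": {
        "registrationDefinitionName": "Customer access to ISV-hosted solution",
        "description": "Grants the customer's user group read access to the hosted solution resources.",
        "managedByTenantId": "<customer-tenant-id>",
        "authorizations": [
          {
            "principalId": "<customer-user-group-object-id>",
            "principalIdDisplayName": "Customer solution users",
            "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7"
          }
        ]
      }
    }
  ]
}
```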
lighthouse Managed Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-applications.md
Title: Azure Lighthouse and Azure managed applications description: Understand how Azure Lighthouse and Azure managed applications can be used together. Previously updated : 12/07/2023 Last updated : 07/10/2024 # Azure Lighthouse and Azure managed applications
-Both Azure managed applications and Azure Lighthouse work by enabling a service provider to access resources that reside in the customer's tenant. It can be helpful to understand the differences in the way that they work, the scenarios that they help to enable, and how they can be used together.
+Both [Azure managed applications](../../azure-resource-manager/managed-applications/overview.md) and [Azure Lighthouse](../overview.md) work by enabling a service provider to access resources that reside in the customer's tenant. It can be helpful to understand the differences in the way that they work, the scenarios that they help to enable, and how they can be used together.
> [!TIP] > Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](enterprise.md) can use the same processes and tools.
In a managed application, the resources used by the application are bundled toge
Managed applications support [customized Azure portal experiences](../../azure-resource-manager/managed-applications/concepts-view-definition.md) and [integration with custom providers](../../azure-resource-manager/managed-applications/tutorial-create-managed-app-with-custom-provider.md). These options can be used to deliver a more customized and integrated experience, making it easier for customers to perform some management tasks themselves.
-Managed applications can be [published to Azure Marketplace](../../marketplace/azure-app-offer-setup.md), either as a private offer for a specific customer's use, or as public offers that multiple customers can purchase. They can also be delivered to users within your organization by [publishing managed applications to your service catalog](../../azure-resource-manager/managed-applications/publish-service-catalog-app.md). You can deploy both service catalog and Marketplace instances using ARM templates, which can include a Commercial Marketplace partner's unique identifier to track [customer usage attribution](../../marketplace/azure-partner-customer-usage-attribution.md).
+Managed applications can be [published to Azure Marketplace](../../marketplace/azure-app-offer-setup.md), either as a private offer for a specific customer's use, or as public offers that multiple customers can purchase. They can also be delivered to users within your organization by [publishing managed applications to your service catalog](../../azure-resource-manager/managed-applications/publish-service-catalog-app.md). You can deploy both service catalog and Marketplace instances using ARM templates, which can include a commercial marketplace partner's unique identifier to track [customer usage attribution](../../marketplace/azure-partner-customer-usage-attribution.md).
Azure managed applications are typically used for a specific customer need that can be achieved through a turnkey solution that is fully managed by the service provider.
lighthouse Tenants Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md
Title: Tenants, users, and roles in Azure Lighthouse scenarios description: Understand how Microsoft Entra tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 05/04/2023 Last updated : 07/10/2024
With either onboarding method, you'll need to define *authorizations*. Each auth
When creating your authorizations, we recommend the following best practices: -- In most cases, you'll want to assign permissions to a Microsoft Entra user group or service principal, rather than to a series of individual user accounts. This lets you add or remove access for individual users through your tenant's Microsoft Entra ID, rather than having to [update the delegation](../how-to/update-delegation.md) every time your individual access requirements change.-- Follow the principle of least privilege so that users only have the permissions needed to complete their job, helping to reduce the chance of inadvertent errors. For more information, see [Recommended security practices](../concepts/recommended-security-practices.md).-- Include an authorization with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) so that you can [remove access to the delegation](../how-to/remove-delegation.md) later if needed. If this role isn't assigned, access to delegated resources can only be removed by a user in the customer's tenant.
+- In most cases, you'll want to assign permissions to a Microsoft Entra user group or service principal, rather than to a series of individual user accounts. Doing so lets you add or remove access for individual users through your tenant's Microsoft Entra ID, without having to [update the delegation](../how-to/update-delegation.md) every time your individual access requirements change.
+- Follow the principle of least privilege. To reduce the chance of inadvertent errors, users should have only the permissions needed to perform their specific job. For more information, see [Recommended security practices](../concepts/recommended-security-practices.md).
+- Include an authorization with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) so that you can [remove access to the delegation](../how-to/remove-delegation.md) if needed. If this role isn't assigned, access to delegated resources can only be removed by a user in the customer's tenant.
- Be sure that any user who needs to [view the My customers page in the Azure portal](../how-to/view-manage-customers.md) has the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role that includes Reader access). > [!IMPORTANT]
When creating your authorizations, we recommend the following best practices:
## Role support for Azure Lighthouse
-When you define an authorization, each user account must be assigned one of the [Azure built-in roles](../../role-based-access-control/built-in-roles.md). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) are not supported.
+When you define an authorization, each user account must be assigned one of the [Azure built-in roles](../../role-based-access-control/built-in-roles.md). Custom roles and [classic subscription administrator roles](../../role-based-access-control/classic-administrators.md) aren't supported.
All [built-in roles](../../role-based-access-control/built-in-roles.md) are currently supported with Azure Lighthouse, with the following exceptions: -- The [Owner](../../role-based-access-control/built-in-roles.md#owner) role is not supported.
+- The [Owner](../../role-based-access-control/built-in-roles.md#owner) role isn't supported.
- The [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) role is supported, but only for the limited purpose of [assigning roles to a managed identity in the customer tenant](../how-to/deploy-policy-remediation.md#create-a-user-who-can-assign-roles-to-a-managed-identity-in-the-customer-tenant). No other permissions typically granted by this role will apply. If you define a user with this role, you must also specify the role(s) that this user can assign to managed identities.-- Any roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported.-- Roles that include any of the following [actions](../../role-based-access-control/role-definitions.md#actions) are not supported:
+- Any roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission aren't supported.
+- Roles that include any of the following [actions](../../role-based-access-control/role-definitions.md#actions) aren't supported:
- */write - */delete
All [built-in roles](../../role-based-access-control/built-in-roles.md) are curr
- Microsoft.Authorization/denyAssignments/delete > [!IMPORTANT]
-> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md#actions) specified for each role. In some cases, even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported, the actions included in a role may allow access to data, where data is exposed through access keys and not accessed via the user's identity. For example, the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data.
+> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md#actions) specified for each role. Even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission aren't supported, there are cases where actions included in a supported role may allow access to data. This generally occurs when data is exposed through access keys, not accessed via the user's identity. For example, the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data.
-In some cases, a role that was previously supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned the role will still be able to work on previously delegated resources, but they won't be able to perform tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission.
+In some cases, a role that was previously supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned that role will still be able to work on previously delegated resources, but they won't be able to perform any tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission.
-As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a while; however, you won't be able to publish new offers using such roles.
+As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a while, but you won't be able to publish new offers using such roles.
<a name='transferring-delegated-subscriptions-between-azure-ad-tenants'></a>
lighthouse Remove Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/remove-delegation.md
Title: Remove access to a delegation description: Learn how to remove access to resources that were delegated to a service provider for Azure Lighthouse. Previously updated : 03/02/2023 Last updated : 07/10/2024 # Remove access to a delegation
-After a customer's subscription or resource group has been delegated to a service provider for [Azure Lighthouse](../overview.md), the delegation can be removed if needed. Once a delegation is removed, the [Azure delegated resource management](../concepts/architecture.md) access that was previously granted to users in the service provider tenant will no longer apply.
+When a customer's subscription or resource group has been delegated to a service provider for [Azure Lighthouse](../overview.md), that delegation can be removed if needed. Once a delegation is removed, the [Azure delegated resource management](../concepts/architecture.md) access that was previously granted to users in the service provider tenant will no longer apply.
Removing a delegation can be done by a user in either the customer tenant or the service provider tenant, as long as the user has the appropriate permissions.
Removing a delegation can be done by a user in either the customer tenant or the
> Though we refer to service providers and customers in this topic, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same processes. > [!IMPORTANT]
-> When a customer subscription has multiple delegations from the same service provider, removing one delegation could cause users to lose access granted via the other delegations. This only occurs when the same `principalId` and `roleDefinitionId` combination is included in multiple delegations and then one of the delegations is removed. To fix this, repeat the [onboarding process](onboard-customer.md) for the delegations that you aren't removing.
+> When a customer subscription has multiple delegations from the same service provider, removing one delegation could cause users to lose access granted via the other delegations. This only occurs when the same `principalId` and `roleDefinitionId` combination is included in multiple delegations and then one of the delegations is removed. If this happens, you can fix the issue by repeating the [onboarding process](onboard-customer.md) for the delegations that you don't want to remove.
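For illustration, this is a sketch of an authorization entry (with hypothetical IDs; the `roleDefinitionId` shown is the built-in Contributor role) that would trigger this behavior if the identical entry appeared in more than one delegation on the same subscription:

```json
{
  "principalId": "<group-object-id>",
  "principalIdDisplayName": "Provider support team",
  "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
}
```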
## Customers
After confirming the deletion, no users in the service provider's tenant will be
## Service providers
-Users in a managing tenant can remove access to delegated resources if they were granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) for the customer's resources. If this role isn't assigned to any service provider users, the delegation can only be removed by a user in the customer's tenant.
+Users in a managing tenant can remove access to delegated resources if they were granted the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) during the onboarding process. If this role isn't assigned to any service provider users, the delegation can only be removed by a user in the customer's tenant.
This example shows an assignment granting the **Managed Services Registration Assignment Delete Role** that can be included in a parameter file during the [onboarding process](onboard-customer.md):
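A minimal sketch of such an authorization entry, assuming a hypothetical group object ID (the `roleDefinitionId` is the built-in ID of the Managed Services Registration Assignment Delete Role):

```json
{
  "principalId": "<service-provider-group-object-id>",
  "principalIdDisplayName": "Admins who can remove the delegation",
  "roleDefinitionId": "91c1777a-f3dc-4fae-b103-61d183457e46"
}
```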
A user with this permission can remove a delegation in one of the following ways
Login-AzAccount
-# Select the subscription that is delegated - or contains the delegated resource group(s)
+# Select the subscription that is delegated or that contains the delegated resource group(s)
Select-AzSubscription -SubscriptionName "<subscriptionName>"
Remove-AzManagedServicesAssignment -Name "<Assignmentname>" -Scope "/subscriptio
az login
-# Select the subscription that is delegated – or contains the delegated resource group(s)
+# Select the subscription that is delegated or that contains the delegated resource group(s)
az account set -s <subscriptionId/name>
lighthouse View Manage Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/view-manage-customers.md
Title: View and manage customers and delegated resources in the Azure portal
-description: As a service provider or enterprise using Azure Lighthouse, you can view all of your delegated resources and subscriptions by going to My customers in the Azure portal.
Previously updated : 03/01/2023
+description: As a service provider or enterprise using Azure Lighthouse, you can view delegated resources and subscriptions by going to My customers in the Azure portal.
Last updated : 07/10/2024
Service providers using [Azure Lighthouse](../overview.md) can use the **My customers** page in the [Azure portal](https://portal.azure.com) to view delegated customer resources and subscriptions.
+To view information about a customer, you must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role that includes Reader access) when that customer was onboarded.
+ > [!TIP] > While we'll refer to service providers and customers here, [enterprises managing multiple tenants](../concepts/enterprise.md) can use the same process to consolidate their management experience.
-To access the **My customers** page in the Azure portal, enter "My customers" in the search box near the top of the Azure portal. You can also select **All services**, then search for **Azure Lighthouse**, or search for "Azure Lighthouse". From the Azure Lighthouse page, select **Manage your customers**.
+To access the **My customers** page in the Azure portal, enter "My customers" in the search box near the top of the Azure portal. You can also access this page from the main **Azure Lighthouse** page in the Azure portal by selecting **Manage your customers**.
-Keep in mind that the top **Customers** section of the **My customers** page only shows info about customers who have delegated subscriptions or resource groups to your Microsoft Entra tenant through Azure Lighthouse. If you work with other customers (such as through the [Cloud Solution Provider (CSP) program](/partner-center/csp-overview)), you won't see info about those customers in the **Customers** section unless you [onboarded their resources to Azure Lighthouse](onboard-customer.md). However, you may see details about certain CSP customers in the [Cloud Solution Provider (Preview) section](#cloud-solution-provider-preview) lower on the page.
+The **Customers** section of the **My customers** page only shows information about customers who have delegated subscriptions or resource groups to your Microsoft Entra tenant through Azure Lighthouse. If you work with other customers (such as through the [Cloud Solution Provider (CSP) program](/partner-center/csp-overview)), you won't see those customers in the **Customers** section unless you [onboarded their resources to Azure Lighthouse](onboard-customer.md). However, you may see details about certain CSP customers in the [**Cloud Solution Provider (Preview)** section](#cloud-solution-provider-preview) lower on the page.
> [!NOTE]
-> Your customers can view info about service providers by navigating to **Service providers** in the Azure portal. For more info, see [View and manage service providers](view-manage-service-providers.md).
+> Your customers can view details about service providers by navigating to **Service providers** in the Azure portal. For more information, see [View and manage service providers](view-manage-service-providers.md).
## View and manage customer details
-To view customer details, select **Customers** on the left side of the **My customers** page.
-
-> [!IMPORTANT]
-> In order to see this information, users must have been granted the [Reader](../../role-based-access-control/built-in-roles.md#reader) role (or another built-in role that includes Reader access) in the onboarding process.
+To view customer details, select **Customers** from the service menu of the **My customers** page.
-For each customer, you'll see the customer's name, customer ID (tenant ID), and the **Offer ID** and **Offer version** associated with the engagement. In the **Delegations** column, you'll see the number of delegated subscriptions and/or the number of delegated resource groups.
+For each customer, you'll see the customer's name and customer ID (tenant ID), along with the **Offer ID** and **Offer version** associated with the engagement. In the **Delegations** column, you'll see the number of delegated subscriptions and/or resource groups.
Options at the top of the page let you sort, filter, and group your customer information by specific customers, offers, or keywords.
-You can view the following information from this page:
+To see additional details, use the following options:
- To see all of the subscriptions, offers, and delegations associated with a customer, select the customer's name.-- To see more details about an offer and its delegations, select the offer name.-- To view more details about role assignments for delegated subscriptions or resource groups, select the entry in the **Delegations** column.
+- To see details about an offer and its delegations, select the offer name.
+- To see details about role assignments for delegated subscriptions or resource groups, select the entry in the **Delegations** column.
> [!NOTE]
-> If a customer renames a subscription after it's been delegated, you'll see the updated subscription name. If they rename the tenant, you may still see the older tenant name in some places in the Azure portal.
+> If a customer renames a subscription after it's been delegated, you'll see the updated subscription name. However, if they rename their tenant, you may still see the older tenant name in some places in the Azure portal.
## View and manage delegations
Options at the top of the page let you sort, filter, and group this information
### View role assignments
-The users and permissions associated with each delegation appear in the **Role assignments** column. You can select each entry to view the full list of users, groups, and service principals that have been granted access to the subscription or resource group. From there, you can select a particular user, group, or service principal name to get more details.
+The users and permissions associated with each delegation appear in the **Role assignments** column. You can select each entry to view more details. After you do so, select **Role assignments** to see the full list of users, groups, and service principals that have been granted access to the subscription or resource group. From there, you can select a particular user, group, or service principal name to see more information.
### Remove delegations
-If you included users with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer to Azure Lighthouse, those users can remove a delegation by selecting the trash can icon that appears in the row for that delegation. When they do so, no users in the service provider's tenant will be able to access the resources that had been previously delegated.
+If you included users with the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer to Azure Lighthouse, those users can remove delegations by selecting the trash can icon that appears in the row for that delegation. When they do so, no users in the service provider's tenant will be able to access the resources that had been previously delegated.
For more information, see [Remove access to a delegation](remove-delegation.md). ## View delegation change activity
-The **Activity log** section of the **My customers** page keeps track of every time customer subscriptions or resource groups are delegated to your tenant, and every time previously delegated resources are removed. This information can only be viewed by users who have been [assigned the Monitoring Reader role at root scope](monitor-delegation-changes.md).
+The **Activity log** section of the **My customers** page keeps track of every time that a customer subscription or resource group is delegated to your tenant. It also records whenever any previously delegated resources are removed. This information can only be viewed by users who have been [assigned the Monitoring Reader role at root scope](monitor-delegation-changes.md).
For more information, see [View delegation changes in the Azure portal](monitor-delegation-changes.md#view-delegation-changes-in-the-azure-portal).
For more information, see [View delegation changes in the Azure portal](monitor-
You can work directly in the context of a delegated subscription within the Azure portal, without switching the directory you're signed in to. To do so:
-1. Select the **Directory + subscriptions** or **Settings** icon near the top of the Azure portal.
+1. Select the **Settings** icon near the top of the Azure portal.
1. In the [Directories + subscriptions settings page](../../azure-portal/set-preferences.md#directories--subscriptions), ensure that the **Advanced filters** toggle is [turned off](../../azure-portal/set-preferences.md#subscription-filters). 1. In the **Default subscription filter** section, select the appropriate directory and subscription. (If you've been granted access to one or more resource groups, rather than to an entire subscription, select the subscription to which that resource group belongs. You'll then work in the context of that subscription, but will only be able to access the designated resource group(s).)
You can work directly in the context of a delegated subscription within the Azur
After that, when you access a service that supports [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md), the service will default to the context of the delegated subscription that you included in your filter.
-You can change the default subscription at any time by following the steps above and choosing a different subscription, or selecting multiple subscriptions. You can also select **All directories**, then check the **Select all** box, if you want the filter to include all of the subscriptions to which you have access.
+You can change the default subscription at any time by following the steps above and choosing a different subscription (or multiple subscriptions). If you want the filter to include all of the subscriptions to which you have access, select **All directories**, then check the **Select all** box.
:::image type="content" source="../media/subscription-filter-all.png" alt-text="Screenshot of the default subscription filter with all directories and subscriptions selected":::
You can also work on delegated subscriptions or resource groups by selecting the
## Cloud Solution Provider (Preview)
-A separate **Cloud Solution Provider (Preview)** section of the **My customers** page shows billing info and resources for your CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more information, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md).
+A separate **Cloud Solution Provider (Preview)** section of the **My customers** page shows billing information and resources for your CSP customers who have [signed the Microsoft Customer Agreement (MCA)](/partner-center/confirm-customer-agreement) and are [under the Azure plan](/partner-center/azure-plan-get-started). For more information, see [Get started with your Microsoft Partner Agreement billing account](../../cost-management-billing/understand/mpa-overview.md).
These CSP customers appear in this section whether or not you also onboarded them to Azure Lighthouse. Similarly, a CSP customer doesn't have to appear in the **Cloud Solution Provider (Preview)** section of **My customers** in order for you to onboard them to Azure Lighthouse.
lighthouse Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/index.md
Title: Azure Lighthouse samples and templates description: These samples and Azure Resource Manager templates help you onboard customers and support Azure Lighthouse scenarios. Previously updated : 01/26/2023 Last updated : 07/10/2024 # Azure Lighthouse samples
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
This region doesn't affect how the traffic is routed. If a home region goes down
* US Gov Virginia * West Europe * West US
+* China North 2
> [!NOTE] > You can only deploy your cross-region load balancer or Public IP in Global tier in one of the listed Home regions.
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
After the serverless Spark compute resource tear-down happens, submission of the
### Session-level Conda packages A Conda dependency YAML file can define many session-level Conda packages in a session configuration. A session will time out if it needs more than 15 minutes to install the Conda packages defined in the YAML file. It becomes important to first check whether a required package is already available in the Azure Synapse base image. To do this, users should follow the link to determine *packages available in the base image for* the Apache Spark version in use:-- [Azure Synapse Runtime for Apache Spark 3.3](../synapse-analytics/spark/apache-spark-33-runtime.md#python-libraries-normal-vms)-- [Azure Synapse Runtime for Apache Spark 3.2](../synapse-analytics/spark/apache-spark-32-runtime.md#python-libraries-normal-vms)
+- [Azure Synapse Runtime for Apache Spark 3.3](https://github.com/microsoft/synapse-spark-runtime/tree/main/Synapse/spark3.3)
++
+- [Azure Synapse Runtime for Apache Spark 3.2](https://github.com/microsoft/synapse-spark-runtime/tree/main/Synapse/spark3.2)
> [!IMPORTANT] > Azure Synapse Runtime for Apache Spark: Announcements
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
__Docker images maintained by Azure Machine Learning__
| Microsoft Container Registry | mcr.microsoft.com</br>\*.data.mcr.microsoft.com | TCP | 443 | > [!TIP]
-> * __Azure Container Registry__ is required for any custom Docker image. This includes small modifications (such as additional packages) to base images provided by Microsoft. It is also required by the internal training job submission process of Azure Machine Learning.
-> * __Microsoft Container Registry__ is only needed if you plan on using the _default Docker images provided by Microsoft_, and _enabling user-managed dependencies_.
+> * __Azure Container Registry__ is required for any custom Docker image. This includes small modifications (such as additional packages) to base images provided by Microsoft. It is also required by the internal training job submission process of Azure Machine Learning. Furthermore, __Microsoft Container Registry__ is always needed regardless of the scenario.
> * If you plan on using federated identity, follow the [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs) article. Also, use the information in the [compute with public IP](#scenario-using-compute-cluster-or-compute-instance-with-a-public-ip) section to add IP addresses for `BatchNodeManagement` and `AzureMachineLearning`.
machine-learning How To Create Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-compute-instance.md
To further enhance security, when you create a compute instance on behalf of a d
:::image type="content" source="media/how-to-create-compute-instance/pobo-creation.png" alt-text="Screenshot shows SSO is disabled during creation of compute instance."::: The assigned to user needs to enable SSO on compute instance themselves after the compute is assigned to them by updating the SSO setting on the compute instance.
-Assigned to user needs to have the following permission/action in their role *MachineLearningServices/workspaces/computes/enableSso/action*.
+The assigned-to user needs to have the following permission/action in their role: *MachineLearningServices/workspaces/computes/enableSso/action*.
+The assigned-to user doesn't need compute write (create) permission to enable SSO.
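As a sketch (the role name and assignable scope are hypothetical), a custom role granting only this action could look like the following; note that the full action string carries the `Microsoft.MachineLearningServices` prefix:

```json
{
  "Name": "Compute Instance SSO Enabler (sample)",
  "IsCustom": true,
  "Description": "Allows the assigned-to user to enable SSO on a compute instance.",
  "Actions": [
    "Microsoft.MachineLearningServices/workspaces/computes/enableSso/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```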
Here are the steps the assigned-to user needs to take. Note that, for security reasons, the creator of a compute instance isn't allowed to enable SSO on that compute instance.
machine-learning How To Interactive Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-interactive-jobs.md
When you select on the endpoints to interact when your job, you're taken to the
- You can also interact with the job container within VS Code. To attach a debugger to a job during job submission and pause execution, [navigate here](./how-to-interactive-jobs.md#attach-a-debugger-to-a-job).
+ > [!NOTE]
+ > Private link-enabled workspaces are not currently supported when interacting with the job container with VS Code.
+ :::image type="content" source="./media/interactive-jobs/vs-code-open.png" alt-text="Screenshot of interactive jobs VS Code panel when first opened. This shows the sample python file that was created to print two lines."::: - If you have logged tensorflow events for your job, you can use TensorBoard to monitor the metrics when your job is running.
Once you're done with the interactive training, you can also go to the job detai
## Attach a debugger to a job To submit a job with a debugger attached and the execution paused, you can use debugpy, and VS Code (`debugpy` must be installed in your job environment).
+> [!NOTE]
+> Private link-enabled workspaces are not currently supported when attaching a debugger to a job in VS Code.
+ 1. During job submission (either through the UI, the CLI or the SDK) use the debugpy command to run your python script. For example, the below screenshot shows a sample command that uses debugpy to attach the debugger for a tensorflow script (`tfevents.py` can be replaced with the name of your training script). :::image type="content" source="./media/interactive-jobs/use-debugpy.png" alt-text="Screenshot of interactive jobs configuration of debugpy":::
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
The following list is of common deployment errors that are reported as part of t
* [ImageBuildFailure](#error-imagebuildfailure) * [Azure Container Registry (ACR) authorization failure](#container-registry-authorization-failure) * [Image build compute not set in a private workspace with VNet](#image-build-compute-not-set-in-a-private-workspace-with-vnet)
+ * [Image build timing out](#image-build-timing-out)
* [Generic or unknown failure](#generic-image-build-failure) * [OutOfQuota](#error-outofquota) * [CPU](#cpu-quota)
Container registries that are behind a virtual network may also encounter this e
If the error message mentions `"failed to communicate with the workspace's container registry"` and you're using virtual networks and the workspace's Azure Container Registry is private and configured with a private endpoint, you need to [enable Azure Container Registry](how-to-managed-network.md#configure-image-builds) to allow building images in the virtual network.
+### Image build timing out
+
+Image build timeouts often occur because an image has become too large to finish building within the time allowed for deployment creation.
+To verify whether this is the issue, check your image build logs at the location that the error message specifies. The logs are cut off at the point where the image build timed out.
+
+To resolve this issue, [build your image separately](https://learn.microsoft.com/azure/devops/pipelines/ecosystems/containers/publish-to-acr?view=azure-devops&tabs=javascript%2Cportal%2Cmsi) so that the image only needs to be pulled during deployment creation.
+ #### Generic image build failure As stated previously, you can check the build log for more information on the failure.
machine-learning How To Deploy Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-azure-container-instance.md
Previously updated : 11/04/2022 Last updated : 07/10/2024 # Deploy a model to Azure Container Instances with CLI (v1)
Last updated 11/04/2022
Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Container Instances (ACI). Use Azure Container Instances if you: - prefer not to manage your own Kubernetes cluster-- Are OK with having only a single replica of your service, which may impact uptime
+- Are OK with having only a single replica of your service, which might affect uptime
For information on quota and region availability for ACI, see [Quotas and region availability for Azure Container Instances](../../container-instances/container-instances-quotas.md) article. > [!IMPORTANT]
-> It is highly advised to debug locally before deploying to the web service, for more information see [Debug Locally](how-to-troubleshoot-deployment-local.md)
+> It's highly advised to debug locally before deploying to the web service. For more information, see [Debug Locally](how-to-troubleshoot-deployment-local.md)
> > You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml)
For information on quota and region availability for ACI, see [Quotas and region
## Limitations
-When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet is not supported. Instead, consider using a [Managed online endpoint with network isolation](../how-to-secure-online-endpoint.md).
+When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a virtual network isn't supported. Instead, consider using a [Managed online endpoint with network isolation](../how-to-secure-online-endpoint.md).
## Deploy to ACI
The entries in the `deploymentconfig.json` document map to the parameters for [A
| &emsp;&emsp;`memoryInGB` | `memory_gb` | The amount of memory (in GB) to allocate for this web service. Default, `0.5` | | `location` | `location` | The Azure region to deploy this Webservice to. If not specified the Workspace location will be used. More details on available regions can be found here: [ACI Regions](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=container-instances) | | `authEnabled` | `auth_enabled` | Whether to enable auth for this Webservice. Defaults to False |
-| `sslEnabled` | `ssl_enabled` | Whether to enable SSL for this Webservice. Defaults to False. |
+| `sslEnabled` | `ssl_enabled` | Whether to enable TLS for this Webservice. Defaults to False. |
| `appInsightsEnabled` | `enable_app_insights` | Whether to enable AppInsights for this Webservice. Defaults to False |
-| `sslCertificate` | `ssl_cert_pem_file` | The cert file needed if SSL is enabled |
-| `sslKey` | `ssl_key_pem_file` | The key file needed if SSL is enabled |
-| `cname` | `ssl_cname` | The cname for if SSL is enabled |
+| `sslCertificate` | `ssl_cert_pem_file` | The cert file needed if TLS is enabled |
+| `sslKey` | `ssl_key_pem_file` | The key file needed if TLS is enabled |
+| `cname` | `ssl_cname` | The CNAME for if TLS is enabled |
| `dnsNameLabel` | `dns_name_label` | The dns name label for the scoring endpoint. If not specified a unique dns name label will be generated for the scoring endpoint. | The following JSON is an example deployment configuration for use with the CLI:
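A minimal sketch consistent with the table above (the values are illustrative; adjust the resource requirements and flags to your service's needs):

```json
{
  "computeType": "aci",
  "containerResourceRequirements": {
    "cpu": 0.5,
    "memoryInGB": 1.0
  },
  "authEnabled": true,
  "sslEnabled": false,
  "appInsightsEnabled": false
}
```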
managed-grafana How To Sync Teams With Azure Ad Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-sync-teams-with-azure-ad-groups.md
Last updated 06/7/2024
# Configure Grafana teams with Microsoft Entra groups and Grafana team sync
-In this guide, you learn how to useMicrosoft Entra groups with [Grafana Team Sync](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-team-sync/) to manage dashboard permissions in Azure Managed Grafana.
+In this guide, you learn how to use Microsoft Entra groups with [Grafana Team Sync](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-team-sync/) to manage dashboard permissions in Azure Managed Grafana.
In Azure Managed Grafana, you can use Azure's role-based access control (RBAC) roles for Grafana to define access rights. These permissions apply to all resources in your Grafana workspace by default, not per folder or dashboard. If you assign a user to the Grafana Editor role, that user can edit any dashboard in your Grafana workspace. However, with Grafana's [granular permission model](https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-team-sync/), you can adjust a user's default permission level for specific dashboards or dashboard folders.
mysql Migrate Single Flexible Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md
Before you trigger the Azure Database for MySQL Import CLI command, consider the
| Single Server Pricing Tier | Single Server VCores | Flexible Server Tier | Flexible Server SKU Name | | | | :: | :: |
- | Basic | 1 | Burstable | Standard_B1s |
- | Basic | 2 | Burstable | Standard_B2s |
+ | Basic | 1 | Burstable | Standard_B2ms |
+ | Basic | 2 | Burstable | Standard_B2ms |
| General Purpose | 4 | GeneralPurpose | Standard_D4ds_v4 | | General Purpose | 8 | GeneralPurpose | Standard_D8ds_v4 | | General Purpose | 16 | GeneralPurpose | Standard_D16ds_v4 |
network-watcher Flow Logs Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/flow-logs-read.md
The concepts discussed in this article aren't limited to the PowerShell and are
# [**Network security group flow logs**](#tab/nsg)
-The following PowerShell script sets up the variables needed to query the network security group flow log blob and list the blocks within the [CloudBlockBlob](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob) block blob. Update the script to contain valid values for your environment.
+The following PowerShell script sets up the variables needed to query the network security group flow log blob and list the blocks within the [CloudBlockBlob](/dotnet/api/microsoft.azure.storage.blob.cloudblockblob) block blob. Update the script with valid values for your environment; specifically, replace the placeholder values "yourSubscriptionId", "FLOWLOGSVALIDATIONWESTCENTRALUS", "V2VALIDATIONVM-NSG", "yourStorageAccountName", "ml-rg", "000D3AF87856", and "11/11/2018 03:00". For example, replace "yourSubscriptionId" with your own subscription ID.
```powershell function Get-NSGFlowLogCloudBlockBlob {
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/intro-openshift.md
Previously updated : 04/17/2024 Last updated : 07/10/2024
The Microsoft *Azure Red Hat OpenShift* service enables you to deploy fully managed [OpenShift](https://www.openshift.com/) clusters. Azure Red Hat OpenShift extends [Kubernetes](https://kubernetes.io/). Running containers in production with Kubernetes requires additional tools and resources. This often includes needing to juggle image registries, storage management, networking solutions, and logging and monitoring tools - all of which must be versioned and tested together. Building container-based applications requires even more integration work with middleware, frameworks, databases, and CI/CD tools. Azure Red Hat OpenShift combines all this into a single platform, bringing ease of operations to IT teams while giving application teams what they need to execute.
-Azure Red Hat OpenShift is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated support experience. There are no virtual machines to operate, and no patching is required. Master, infrastructure, and application nodes are patched, updated, and monitored on your behalf by Red Hat and Microsoft. Your Azure Red Hat OpenShift clusters are deployed into your Azure subscription and are included on your Azure bill.
+Azure Red Hat OpenShift is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated support experience. There are no virtual machines to operate, and no patching is required. Control plane, infrastructure, and application nodes are patched, updated, and monitored on your behalf by Red Hat and Microsoft. Your Azure Red Hat OpenShift clusters are deployed into your Azure subscription and are included on your Azure bill.
You can choose your own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more. Azure Red Hat OpenShift provides an integrated sign-on experience through Microsoft Entra ID.
postgresql Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-identity.md
+
+ Title: Identity
+description: Learn about managed identities in the Flexible Server deployment option for Azure Database for PostgreSQL - Flexible Server.
+++ Last updated : 07/09/2024++++
+ - mvc
+ - mode-other
+ms.devlang: python
++
+# Managed Identity in Azure Database for PostgreSQL - Flexible Server
++
+A common challenge for developers is the management of secrets, credentials, certificates, and keys used to secure communication between services. Managed identities eliminate the need for developers to manage these credentials.
+
+While developers can securely store the secrets in Azure Key Vault, services need a way to access Azure Key Vault. Managed identities provide an automatically managed identity in Microsoft Entra ID for applications to use when connecting to resources that support Microsoft Entra authentication. Applications can use managed identities to obtain Microsoft Entra tokens without having to manage any credentials.
+
+Here are some of the benefits of using managed identities:
+
+- You don't need to manage credentials. Credentials aren't even accessible to you.
+- You can use managed identities to authenticate to any resource that supports Microsoft Entra authentication including your own applications.
+- Managed identities can be used at no extra cost.
+
+## Managed identity types
+
+There are two types of managed identities:
+
+- **System-assigned**. Some Azure resources, such as virtual machines and Azure Database for PostgreSQL Flexible Server, allow you to enable a managed identity directly on the resource. When you enable a system-assigned managed identity:
+ - A service principal of a special type is created in Microsoft Entra ID for the identity. The service principal is tied to the lifecycle of that Azure resource. When the Azure resource is deleted, Azure automatically deletes the service principal for you.
+ - By design, only that Azure resource can use this identity to request tokens from Microsoft Entra ID.
+ - You authorize the managed identity to have access to one or more services.
+ - The name of the system-assigned service principal is always the same as the name of the Azure resource it's created for.
+
+
+- **User-assigned**. You can also create a managed identity as a standalone Azure resource, then assign it to one or more Azure resources. When you enable a user-assigned managed identity:
+ - A service principal of a special type is created in Microsoft Entra ID for the identity. The service principal is managed separately from the resources that use it.
+ - Multiple resources can utilize user-assigned identities.
+ - You authorize the managed identity to have access to one or more services.
+++
+## How to enable System Assigned Managed Identity on your Flexible Server
+
+## Azure portal
+
+Follow these steps to enable System Assigned Managed Identity on your Azure Database for PostgreSQL flexible server instance.
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for PostgreSQL flexible server instance for which you want to enable system assigned managed identity.
+
+2. On the Azure Database for PostgreSQL flexible server page, select **Identity**.
+
+3. In the **Identity** section, select the **On** radio button.
+
+4. Select **Save** to apply the changes.
+
+![Screenshot showing system assigned managed identity.](./media/concepts-Identity/system-assigned-managed-identity.png)
+
+5. A notification confirms that system assigned managed identity is enabled.
+
+## ARM template
+
+Here's the ARM template to enable system assigned managed identity. You can use the 2023-06-01-preview API version or the latest available version.
+
+```json
+{
+ "resources": [
+ {
+ "apiVersion": "2023-06-01-preview",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "location": "Region name",
+ "name": "flexible server name",
+ "type": "Microsoft.DBforPostgreSQL/flexibleServers"
+ }
+ ]
+}
+ ```
+
+To disable system assigned managed identity, change the type to **None**:
+
+```json
+{
+ "resources": [
+ {
+ "apiVersion": "2023-06-01-preview",
+ "identity": {
+ "type": "None"
+ },
+ "location": "Region Name",
+ "name": "flexible server name",
+ "type": "Microsoft.DBforPostgreSQL/flexibleServers"
+ }
+ ]
+}
+ ```
+## How to verify the newly created System Assigned Managed Identity on your Flexible Server
+
+You can verify the newly created managed identity by going to **Enterprise Applications** in the Azure portal:
+
+1. Choose **Application Type == Managed Identity**
+
+2. Provide your flexible server name in **Search by application name or Identity** as shown in the screenshot.
+
+![Screenshot verifying system assigned managed identity.](./media/concepts-Identity/verify-managed-identity.png)
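+You can also confirm the identity on the server resource itself. When system assigned managed identity is enabled, the resource's `identity` property looks like the following sketch (the GUID values are placeholders for the values that Azure generates):
+
+```json
+{
+  "identity": {
+    "type": "SystemAssigned",
+    "principalId": "<generated-principal-id>",
+    "tenantId": "<your-tenant-id>"
+  }
+}
+```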
+++
+## Related content
+
+- [Microsoft Entra authentication](../concepts-aad-authentication.md)
+- [Firewall rules for IP addresses](concepts-firewall-rules.md)
+- [Private access networking with Azure Database for PostgreSQL - Flexible Server](concepts-networking.md)
postgresql Concepts User Roles Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/migration-service/concepts-user-roles-migration-service.md
Allowing unrestricted access to these system tables and views could lead to unau
### pg_pltemplate deprecation
-Another important consideration is the deprecation of the **pg_pltemplate** system table within the pg_catalog schema by the PostgreSQL community **starting from version 13.** Therefore, if you're migrating to Flexible Server versions 13 and above, and if you have granted permissions to users on the pg_pltemplate table, it is necessary to undo these permissions before initiating the migration process.
+Another important consideration is the deprecation of the **pg_pltemplate** system table within the pg_catalog schema by the PostgreSQL community **starting from version 13.** If you're migrating to Flexible Server versions 13 and above and have granted permissions to users on the pg_pltemplate table on your single server, you must revoke these permissions before initiating a new migration.
#### What is the impact?-- If your application is designed to directly query the affected tables and views, it will encounter issues upon migrating to the flexible server. We strongly advise you to refactor your application to avoid direct queries to these system tables.
+- If your application is designed to directly query the affected tables and views, it encounters issues upon migrating to the flexible server. We strongly advise you to refactor your application to avoid direct queries to these system tables.
-- If you have specifically granted or revoked privileges to any users or roles for the affected pg_catalog tables and views, you will encounter an error during the migration process. This error will be identified by the following pattern:
+- If you have granted or revoked privileges to any users or roles for the affected pg_catalog tables and views, you encounter an error during the migration process. The error is identified by the following pattern:
```sql pg_restore error: could not execute query <GRANT/REVOKE> <PRIVILEGES> on <affected TABLE/VIEWS> to <user>.
GROUP BY
**Step 2: Review the Output**
-The output of the above query will show the list of privileges granted to roles on the impacted tables and views.
+The output of the query shows the list of privileges granted to roles on the impacted tables and views.
For example: | Privileges | Relation name | Grantee |
-| : | : | : |
-| SELECT | pg_authid | adminuser1
-| SELECT, UPDATE | pg_shadow | adminuser2
+| : |: |: |
+| SELECT | pg_authid | adminuser1 |
+| SELECT, UPDATE |pg_shadow | adminuser2 |
+ **Step 3: Undo the privileges**
-To undo the privileges, run REVOKE statements for each privilege on the relation from the grantee. In the above example, you would run:
+To undo the privileges, run REVOKE statements for each privilege on the relation from the grantee. In this example, you would run:
```sql REVOKE SELECT ON pg_authid FROM adminuser1; REVOKE SELECT ON pg_shadow FROM adminuser2; REVOKE UPDATE ON pg_shadow FROM adminuser2; ```
+> [!NOTE]
+> Make sure you perform the above steps for all the databases included in the migration, to avoid any permission-related issues during the migration process.
-After completing these steps, you can proceed to initiate a new migration from the single server to the flexible server using the migration service. You should not encounter permission-related issues during this process.
+After completing these steps, you can proceed to initiate a new migration from the single server to the flexible server using the migration service. You shouldn't encounter permission-related issues during this process.
## Related content - [Migration service](concepts-migration-service-postgresql.md)
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
Previously updated : 03/18/2024 Last updated : 07/10/2024
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status | |:-|:--|:-|:--| | Azure SQL Database | All public regions <br/> All Government regions<br/>All China regions | Supported for Proxy [connection policy](/azure/azure-sql/database/connectivity-architecture#connection-policy) | GA <br/> [Learn how to create a private endpoint for Azure SQL](./tutorial-private-endpoint-sql-portal.md) |
-|Azure Cosmos DB| All public regions<br/> All Government regions</br> All China regions | |GA <br/> [Learn how to create a private endpoint for Azure Cosmos DB.](./tutorial-private-endpoint-cosmosdb-portal.md)|
-| Azure Database for PostgreSQL - Single server | All public regions <br/> All Government regions<br/>All China regions | Supported for General Purpose and Memory Optimized pricing tiers | GA <br/> [Learn how to create a private endpoint for Azure Database for PostgreSQL.](../postgresql/concepts-data-access-and-security-private-link.md) |
+| Azure Cosmos DB| All public regions<br/> All Government regions</br> All China regions | |GA <br/> [Learn how to create a private endpoint for Azure Cosmos DB.](./tutorial-private-endpoint-cosmosdb-portal.md)|
+| Azure Database for PostgreSQL - Single server | All public regions <br/> All Government regions<br/>All China regions | Supported for General Purpose and Memory Optimized pricing tiers | GA <br/> [Learn how to create a private endpoint for Azure Database for PostgreSQL Single Server.](../postgresql/concepts-data-access-and-security-private-link.md) |
+| Azure Database for PostgreSQL - Flexible server | All public regions <br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for PostgreSQL Flexible Server.](../postgresql/flexible-server/concepts-networking-private-link.md) |
| Azure Database for MySQL | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for MySQL.](../mysql/concepts-data-access-security-private-link.md) | | Azure Database for MariaDB | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Database for MariaDB.](../mariadb/concepts-data-access-security-private-link.md) | | Azure Cache for Redis | All public regions<br/> All Government regions<br/>All China regions | | GA <br/> [Learn how to create a private endpoint for Azure Cache for Redis.](../azure-cache-for-redis/cache-private-link.md) |
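If you script these deployments, the setup is the same two-step pattern for every service in the table: create a private link service connection to the target resource, then create the private endpoint in a subnet. A minimal Azure PowerShell sketch for an Azure Database for PostgreSQL flexible server follows; all resource names are placeholders, and the group ID is an assumption you should verify with `Get-AzPrivateLinkResource`.

```azurepowershell
# Placeholders throughout; verify the group ID for your resource type with
# Get-AzPrivateLinkResource before relying on it.
$serverId = (Get-AzResource -ResourceType "Microsoft.DBforPostgreSQL/flexibleServers" -Name "my-pg-server").ResourceId

$connection = New-AzPrivateLinkServiceConnection -Name "pg-connection" `
    -PrivateLinkServiceId $serverId `
    -GroupId "postgresqlServer"

$vnet = Get-AzVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
$subnet = $vnet.Subnets | Where-Object { $_.Name -eq "pe-subnet" }

New-AzPrivateEndpoint -ResourceGroupName "my-rg" -Name "pg-private-endpoint" `
    -Location "westeurope" -Subnet $subnet -PrivateLinkServiceConnection $connection
```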
reliability Availability Zones Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-baseline.md
If you require multi-region, or if your Azure region doesn't support availabilit
- Each data center in a region is assigned to a physical zone. Physical zones are mapped to the logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. You can use the dedicated ARM REST API, [listLocations](/rest/api/resources/subscriptions/list-locations?tabs=HTTP) and set the API version to 2022-12-01 to list the logical zone mapping to physical zone for your subscription. This information is important for critical application components that require co-location with Azure resources categorized as [Strategic services](/azure/reliability/availability-service-by-category#strategic-services) that may not be available in all physical zones. -- Inter-zone bandwidth charges apply when traffic moves across zones. To learn more about bandwidth pricing, see [Bandwidth pricing](https://azure.microsoft.com/pricing/details/bandwidth/). ## Next steps
route-server Tutorial Configure Route Server With Quagga https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-configure-route-server-with-quagga.md
Title: 'Tutorial: Configure peering between Azure Route Server and Network Virtual Appliance'
-description: This tutorial shows you how to configure an Azure Route Server and peer it with a Network Virtual Appliance (NVA) using the Azure portal.
+description: This tutorial shows you how to configure an Azure Route Server and peer it with a Quagga network virtual appliance (NVA) using the Azure portal.
search Search Create Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-create-service-portal.md
- references_regions - build-2024 Previously updated : 05/21/2024 Last updated : 07/09/2024 # Create an Azure AI Search service in the portal
-[**Azure AI Search**](search-what-is-azure-search.md) is a vector and full text information retrieval solution for the enterprise, and for traditional and generative AI scenarios.
+[**Azure AI Search**](search-what-is-azure-search.md) is an information retrieval platform for the enterprise. It supports traditional search and conversational AI-driven search for "chat with your data" experiences over your proprietary content.
-If you have an Azure subscription, including a [trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F), you can create a search service for free. Free services have limitations, but you can complete all of the quickstarts and most tutorials, except for those featuring semantic ranking (it requires a billable service).
+The easiest way to create a service is using the [Azure portal](https://portal.azure.com/), which is covered in this article.
-The easiest way to create a service is using the [Azure portal](https://portal.azure.com/), which is covered in this article. You can also use [Azure PowerShell](search-manage-powershell.md#create-or-delete-a-service), [Azure CLI](search-manage-azure-cli.md#create-or-delete-a-service), the [Management REST API](search-manage-rest.md#create-or-update-a-service), an [Azure Resource Manager service template](search-get-started-arm.md), a [Bicep file](search-get-started-bicep.md), or [Terraform](search-get-started-terraform.md).
+You can also use [Azure PowerShell](search-manage-powershell.md#create-or-delete-a-service), [Azure CLI](search-manage-azure-cli.md#create-or-delete-a-service), the [Management REST API](search-manage-rest.md#create-or-update-a-service), an [Azure Resource Manager service template](search-get-started-arm.md), a [Bicep file](search-get-started-bicep.md), or [Terraform](search-get-started-terraform.md).
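For comparison with the portal flow, here's a minimal Azure PowerShell sketch using the Az.Search module; the resource group and service name are placeholders, and the values shown assume a Basic tier with one partition and one replica.

```azurepowershell
# Requires the Az.Search module. All names and values are placeholders.
$params = @{
    ResourceGroupName = "my-rg"
    Name              = "my-search-service"   # becomes part of the URL endpoint
    Location          = "eastus"
    Sku               = "Basic"               # tier is fixed for the life of the service
    PartitionCount    = 1
    ReplicaCount      = 1
}
New-AzSearchService @params
```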
[![Animated GIF](./media/search-create-service-portal/AnimatedGif-AzureSearch-small.gif)](./media/search-create-service-portal/AnimatedGif-AzureSearch.gif#lightbox) ## Before you start
-The following service properties are fixed for the lifetime of the service. Consider their usage implications as you fill in each property:
+A few service properties are fixed for the lifetime of the service. Before creating the service, decide on a name, region, and tier.
-+ Service name becomes part of the URL endpoint ([review tips for helpful service names](#name-the-service)).
-+ [Tier](search-sku-tier.md) (Free, Basic, Standard, and so forth) determines the underlying physical hardware and billing. Some features are tier-constrained.
-+ [Service region](#choose-a-region) can determine the availability of certain scenarios and higher storage limits. If you need availability zones or [AI enrichment](cognitive-search-concept-intro.md) or more storage, create the resource in a region that provides the feature.
++ [Service name](#name-the-service) becomes part of the URL endpoint. The name must be unique and it must conform to naming rules.
-## Subscribe (free or paid)
++ [Region](search-region-support.md) determines data residency and the availability of certain features. Semantic ranking and Azure AI integration come with region requirements. Make sure your region of choice supports the features you need.
-To try search for free, [open a free Azure account](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for nonproduction applications. If you want to move forward with a production application, create a new search service on a billable tier.
++ [Service tier](search-sku-tier.md) determines infrastructure, service limits, and billing. Some features aren't available on lower or specialized tiers.
-Alternatively, you can use free credits to try out paid Azure services. With this approach, you can create your search service at **Basic** or higher to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
+## Subscribe (free or paid)
Paid (or billable) search occurs when you choose a billable tier (Basic or higher) when creating the resource on a billable Azure subscription.
+To try Azure AI Search for free, [open a trial subscription](https://azure.microsoft.com/pricing/free-trial/?WT.mc_id=A261C142F) and then create your search service by choosing the **Free** tier. You can have one free search service per Azure subscription. Free search services are intended for short-term evaluation of the product for nonproduction applications. Generally, you can complete all of the quickstarts and most tutorials, except for those featuring semantic ranking (it requires a billable service).
+
+Alternatively, you can use free credits to try out paid Azure services. With this approach, you can create your search service at **Basic** or higher to get more capacity. Your credit card is never charged unless you explicitly change your settings and ask to be charged. Another approach is to [activate Azure credits in a Visual Studio subscription](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). A Visual Studio subscription gives you credits every month you can use for paid Azure services.
+ ## Find the Azure AI Search offering 1. Sign in to the [Azure portal](https://portal.azure.com/).
Service name requirements:
> [!IMPORTANT] > Due to high demand, Azure AI Search is currently unavailable for new instances in West Europe. If you don't immediately need semantic ranker or skillsets, choose Sweden Central because it has the most data center capacity. Otherwise, North Europe is another option.
-Azure AI Search is available in most regions, as listed in the [**Products available by region**](https://azure.microsoft.com/global-infrastructure/services/?products=search) page.
-
-We strongly recommend the following regions because they provide [more storage per partition](search-limits-quotas-capacity.md#service-limits), three to seven times more depending on the tier, at the same billing rate. Extra capacity applies to search services created after specific dates.
-
-### Roll out on May 2024
-
-| Country | Regions providing extra capacity per partition |
-|||
-| **United States** | East US 2 EUAP/PPE |
-| **South Africa** | South Africa North |
-| **Germany** | Germany North, Germany West Central |
-| **Azure Government** | Texas, Arizona, Virginia |
-
-### Roll out on April 2024
-
-| Country | Regions providing extra capacity per partition |
-|||
-| **United States** | East US, East US 2, Central US, North Central US, South Central US, West US, West US 2, West US 3, West Central US |
-| **United Kingdom** | UK South, UK West |
-| **United Arab Emirates** | UAE North |
-| **Switzerland** | Switzerland West |
-| **Sweden** | Sweden Central |
-| **South Africa** | South Africa North |
-| **Poland** | Poland Central |
-| **Norway** | Norway East |
-| **Korea** | Korea Central, Korea South |
-| **Japan** | Japan East, Japan West |
-| **Italy** | Italy North |
-| **India** | Central India, Jio India West |
-| **France** | France Central |
-| **Europe** | North Europe |
-| **Canada** | Canada Central, Canada East |
-| **Brazil** | Brazil South |
-| **Asia Pacific** | East Asia, Southeast Asia |
-| **Australia** | Australia East, Australia Southeast |
+Review the [supported regions list](search-region-support.md) for supported regions at the service and feature level.
+
+Some features are subject to regional availability:
+
++ [AI enrichment](cognitive-search-concept-intro.md)
++ [Semantic ranker](semantic-search-overview.md)
++ [Availability Zones](search-reliability.md#availability-zones)
++ [Azure roles for data plane operations](search-security-rbac.md) (Azure public cloud only)
+
+AI enrichment refers to Azure AI services and Azure OpenAI, and integration is through an Azure AI multi-service account. The account must be in the same physical region as Azure AI Search. There are just a few regions that *don't* provide both.
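Because co-location is a hard requirement for AI enrichment, it can be worth verifying programmatically before you wire up a skillset. A minimal sketch, assuming both resources already exist and using placeholder names:

```azurepowershell
# Placeholder names; compares the physical regions of the two services.
$search = Get-AzResource -ResourceType "Microsoft.Search/searchServices" -Name "my-search-service"
$ai     = Get-AzResource -ResourceType "Microsoft.CognitiveServices/accounts" -Name "my-ai-account"

if ($search.Location -eq $ai.Location) {
    Write-Output "Co-located in $($search.Location): AI enrichment is supported."
} else {
    Write-Output "Different regions ($($search.Location) vs $($ai.Location)): skillset calls may fail."
}
```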
If you use multiple Azure services, putting all of them in the same region minimizes or voids bandwidth charges. There are no charges for data exchanges among same-region services.
Two notable exceptions might warrant provisioning Azure services in separate reg
+ Business continuity and disaster recovery (BCDR) requirements dictate creating multiple search services in [regional pairs](../availability-zones/cross-region-replication-azure.md#azure-paired-regions). For example, if you're operating in North America, you might choose East US and West US, or North Central US and South Central US, for each search service.
-Some features are subject to [regional availability](https://azure.microsoft.com/global-infrastructure/services/?products=search):
-
-+ [Availability Zones](search-reliability.md#availability-zones)
-+ [Azure roles for data plane operations](search-security-rbac.md) (Azure public cloud only)
-+ [Semantic ranker](semantic-search-overview.md), per the [**Products available by region**](https://azure.microsoft.com/global-infrastructure/services/?products=search) page.
-+ [AI enrichment](cognitive-search-concept-intro.md) requires Azure AI services to be in the same physical region as Azure AI Search. There are just a few regions that *don't* provide both.
-
-The [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page indicates a common regional presence by showing two stacked check marks. An unavailable combination has a missing check mark. The time piece icon indicates future availability.
-
- :::image type="content" source="media/search-create-service-portal/region-availability.png" lightbox="media/search-create-service-portal/region-availability.png" alt-text="Screenshot of the Regional availability page." border="true":::
- ## Choose a tier
-Azure AI Search is offered in [multiple pricing tiers](https://azure.microsoft.com/pricing/details/search/): Free, Basic, Standard, or Storage Optimized. Each tier has its own [capacity and limits](search-limits-quotas-capacity.md). There are also several [features that are tier-dependent](search-sku-tier.md#feature-availability-by-tier).
+Azure AI Search is offered in [multiple pricing tiers](https://azure.microsoft.com/pricing/details/search/): Free, Basic, Standard, or Storage Optimized. Each tier has its own [capacity and limits](search-limits-quotas-capacity.md). There are also several features that are tier-dependent.
+
+Review the [tier descriptions](search-sku-tier.md) for computing characteristics and feature availability.
Basic and Standard are the most common choices for production workloads, but many customers start with the Free service. Among the billable tiers, key differences are partition size and speed, and limits on the number of objects you can create.
search Search Features List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-features-list.md
Azure AI Search provides information retrieval and uses optional AI integration
The following table summarizes features by category. For more information about how Azure AI Search compares with other search technologies, see [Compare search options](search-what-is-azure-search.md#compare-search-options).
-There's feature parity in all Azure public, private, and sovereign clouds, but some features aren't supported in specific regions. For more information, see [product availability by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search&regions=all&rar=true).
+There's feature parity in all Azure public, private, and sovereign clouds, but some features aren't supported in specific regions. For more information, see [Choose a region](search-region-support.md).
> [!NOTE] > Looking for preview features? See the [preview features list](search-api-preview.md).
search Search Get Started Portal Image Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-image-search.md
Sample data consists of image files in the [azure-search-sample-data](https://gi
+ An [Azure AI services multiservice account](/azure/ai-services/multi-service-resource) to use for image vectorization and optical character recognition (OCR). The account must be in a region that provides Azure AI Vision multimodal embeddings.
- Currently, eligible regions are: SwedenCentral, EastUS, NorthEurope, WestEurope, WestUS, SoutheastAsia, KoreaCentral, FranceCentral, AustraliaEast, WestUS2, SwitzerlandNorth, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/how-to/image-retrieval) for an updated list.
+ Currently, those regions are: EastUS, WestUS, WestUS2, NorthEurope, WestEurope, FranceCentral, SwedenCentral, SwitzerlandNorth, SoutheastAsia, KoreaCentral, AustraliaEast, JapanEast. [Check the documentation](/azure/ai-services/computer-vision/overview-image-analysis#region-availability) for an updated list.
+ Azure AI Search for indexing and queries. It can be on any tier, but it must be in the same region as Azure AI services.
search Search Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-region-support.md
+
+ Title: Feature availability across cloud regions
+
+description: Shows supported regions and feature availability across regions for Azure AI Search.
+
+ Last updated : 07/09/2024
+
+# Azure AI Search feature availability across cloud regions
+
+This article identifies the cloud regions in which Azure AI Search is available. It also lists which premium features are available in each region:
+
+- [Semantic ranking](semantic-search-overview.md) depends on models hosted by Microsoft. These models are available in specific regions.
+- [AI enrichment](cognitive-search-concept-intro.md) refers to skills and vectorizers that make internal calls to Azure AI and Azure OpenAI. AI enrichment requires that Azure AI Search coexist with an [Azure AI multi-service account](/azure/ai-services/multi-service-resource) in the same physical region. The following tables indicate whether Azure AI is offered in the same region as Azure AI Search.
+- [Availability zones](search-reliability.md#availability-zone-support) are an Azure platform capability that divides a region's data centers into distinct physical location groups to provide high availability within the same region.
+
+We recommend that you check [Azure AI Studio region availability](/azure/ai-studio/reference/region-support) and [Azure OpenAI model region availability](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability) for the most current list of regions for those features.
+
+Also, if you're using Azure AI Vision 4.0 multimodal APIs for image vectorization, it's available on a more limited basis. [Check the Azure AI Vision region list for multimodal embeddings](/azure/ai-services/computer-vision/overview-image-analysis#region-availability) and be sure to create both your Azure AI multi-service account and Azure AI Search service in one of those supported regions.
+
+> [!NOTE]
+> Higher capacity partitions became available in selected regions starting in April 2024. A second wave of higher capacity partitions was released in May 2024. If you're using an older search service, consider creating a new search service to benefit from more capacity at the same billing rate as before. For more information, see [Service limits](search-limits-quotas-capacity.md#service-limits).
+
+## Azure Public regions
+
+You can create an Azure AI Search resource in any of the following Azure public regions. Almost all of these regions support [higher capacity tiers](search-limits-quotas-capacity.md#service-limits). Exceptions are noted where they apply.
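One way to confirm the current list against your own subscription is to ask the resource provider directly. A small Azure PowerShell sketch:

```azurepowershell
# Lists every region where the Microsoft.Search provider can place a search service.
$provider = Get-AzResourceProvider -ProviderNamespace "Microsoft.Search"
($provider.ResourceTypes | Where-Object { $_.ResourceTypeName -eq "searchServices" }).Locations
```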
+
+### Americas
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| Brazil South | ✅ | ✅ | |
+| Canada Central | ✅ | ✅ | ✅ |
+| Canada East | | ✅ | |
+| East US | ✅ | ✅ | ✅ |
+| East US 2 | ✅ | ✅ | ✅ |
+| Central US | ✅ | ✅ | ✅ |
+| North Central US | ✅ | ✅ | |
+| South Central US | ✅ | ✅ | ✅ |
+| West US | ✅ | ✅ | |
+| West US 2 | ✅ | ✅ | ✅ |
+| West US 3 | ✅ | ✅ | ✅ |
+| West Central US | ✅ | ✅ | |
+
+### Europe
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| North Europe | ✅ | ✅ | ✅ |
+| West Europe <sup>1</sup> | ✅ | ✅ | ✅ |
+| France Central | ✅ | ✅ | ✅ |
+| Germany West Central | ✅ | | ✅ |
+| Italy North | | | ✅ |
+| Norway East | ✅ | | ✅ |
+| Poland Central | | | |
+| Spain Central | | | ✅ |
+| Sweden Central | ✅ | | ✅ |
+| Switzerland North | ✅ | ✅ | ✅ |
+| Switzerland West | ✅ | ✅ | ✅ |
+| UK South | ✅ | ✅ | ✅ |
+| UK West | | ✅ | |
+
+<sup>1</sup> This region runs on older infrastructure that has lower capacity per partition at every tier. Choose a different region if you want [higher capacity](search-limits-quotas-capacity.md#service-limits).
+
+### Middle East
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| Israel Central <sup>1</sup> | | | ✅ |
+| Qatar Central <sup>1</sup> | | | ✅ |
+| UAE North | ✅ | | ✅ |
+
+<sup>1</sup> These regions run on older infrastructure that has lower capacity per partition at every tier. Choose a different region if you want [higher capacity](search-limits-quotas-capacity.md#service-limits).
+
+### Africa
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| South Africa North | ✅ | | ✅ |
+
+### Asia Pacific
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| Australia East | ✅ | ✅ | ✅ |
+| Australia Southeast | | ✅ | |
+| East Asia | ✅ | ✅ | ✅ |
+| Southeast Asia | ✅ | ✅ | ✅ |
+| Central India | ✅ | ✅ | ✅ |
+| Jio India West | ✅ | ✅ | |
+| South India <sup>1</sup> | | | ✅ |
+| Japan East | ✅ | ✅ | ✅ |
+| Japan West | ✅ | ✅ | |
+| Korea Central | ✅ | ✅ | ✅ |
+| Korea South | | ✅ | |
+
+<sup>1</sup> This region runs on older infrastructure that has lower capacity per partition at every tier. Choose a different region if you want [higher capacity](search-limits-quotas-capacity.md#service-limits).
+
+## Azure Government regions
+
+All of these regions support [higher capacity tiers](search-limits-quotas-capacity.md#service-limits).
+
+None of these regions support Azure [role-based access for data plane operations](search-security-rbac.md). You must use key-based authentication for indexing and query workloads.
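Key-based access means passing the `api-key` header on REST calls instead of a bearer token. A minimal sketch; the service name, index name, key, and DNS suffix are placeholders (the suffix differs between the Azure public and Azure Government clouds, so confirm it for your environment):

```azurepowershell
# Queries an index with a query key instead of a Microsoft Entra token.
$headers = @{ "api-key" = "<query-key>" }
$uri = "https://<service-name>.search.windows.net/indexes/<index-name>/docs?api-version=2023-11-01&search=*"
Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
```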
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| Arizona | ✅ | | |
+| Texas | | | |
+| Virginia | ✅ | | ✅ |
+
+## Azure operated by 21Vianet
+
+You can create an Azure AI Search service in any of the following regions. If you need semantic ranking or AI enrichment, choose a region that provides the feature.
+
+| Region | AI enrichment | Semantic ranking | Availability zones |
+|--|--|--|--|
+| China East <sup>1</sup> | | | |
+| China East 2 <sup>1</sup> | ✅ | | |
+| China East 3 | | | |
+| China North <sup>1</sup> | | | |
+| China North 2 <sup>1</sup> | | | |
+| China North 3 | | ✅ | ✅ |
+
+<sup>1</sup> These regions run on older infrastructure that has lower capacity per partition at every tier. Choose a different region if you want [higher capacity](search-limits-quotas-capacity.md#service-limits).
+
+
+## See also
+
+- [Azure AI Studio region availability](/azure/ai-studio/reference/region-support)
+- [Azure OpenAI model region availability](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability)
+- [Availability zone region availability](/azure/reliability/availability-zones-service-support#azure-regions-with-availability-zone-support)
+- [Azure product by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search)
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Last updated 06/03/2024-+ # Connect to Azure AI Search using roles
search Search Sku Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md
Previously updated : 05/22/2024 Last updated : 07/09/2024
Tiers determine the maximum storage of the service itself, plus the maximum num
Tier pricing includes details about per-partition storage that ranges from 15 GB for Basic, up to 2 TB for Storage Optimized (L2) tiers. Other hardware characteristics, such as speed of operations, latency, and transfer rates, aren't published, but tiers that are designed for specific solution architectures are built on hardware that has the features to support those scenarios. For more information about partitions, see [Estimate and manage capacity](search-capacity-planning.md) and [Reliability in Azure AI Search](search-reliability.md).
+> [!NOTE]
+> Higher capacity partitions became available in selected regions starting in April 2024. A second wave of higher capacity partitions released in May 2024. If you're using an older search service, consider creating a new search service to benefit from more capacity at the same billing rate. For more information, see [Service limits](search-limits-quotas-capacity.md#service-limits).
+ ## Billing rates Tiers have different billing rates, with higher rates for tiers that run on more expensive hardware or provide more expensive features. The tier billing rate can be found in the [Azure pricing pages](https://azure.microsoft.com/pricing/details/search/) for Azure AI Search.
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
In Azure AI Search, *semantic ranking* is a feature that measurably improves sea
Semantic ranker is a premium feature, billed by usage. We recommend this article for background, but if you'd rather get started, follow these steps: > [!div class="checklist"]
-> * [Check regional availability](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search)
+> * [Check regional availability](search-region-support.md)
> * [Sign in to Azure portal](https://portal.azure.com) to verify your search service is Basic or higher > * [Enable semantic ranking and choose a pricing plan](semantic-how-to-enable-disable.md) > * [Set up a semantic configuration in a search index](semantic-how-to-configure.md)
search Vector Search How To Generate Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-generate-embeddings.md
If you want resources in the same region, start with:
1. [A region for the similarity embedding model](/azure/ai-services/openai/concepts/models#embeddings-models-1), currently in Europe and the United States.
-1. [A region for Azure AI Search](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-search).
+1. [A region for Azure AI Search](search-region-support.md).
1. To support hybrid queries that include [semantic ranking](semantic-how-to-query-request.md), or if you want to try machine learning model integration using a [custom skill](cognitive-search-custom-skill-interface.md) in an [AI enrichment pipeline](cognitive-search-concept-intro.md), note the regions that provide those features.
search Vector Search Integrated Vectorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-integrated-vectorization.md
The diagram focuses on integrated vectorization, but your solution isn't limited
## Availability and pricing
-Integrated vectorization is available in all regions and tiers. However, if you're using Azure OpenAI and the AzureOpenAIEmbedding skill, check [regional availability]( https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services) of that service.
+Integrated vectorization is available in all regions and tiers. However, if you're using Azure OpenAI and Azure AI skills and vectorizers, make sure your Azure AI multi-service account is [available in the same regions as Azure AI Search](search-region-support.md).
-If you're using a custom skill and an Azure hosting mechanism (such as an Azure function app, Azure Web App, and Azure Kubernetes), check the [product by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) for feature availability.
+If you're using a custom skill and an Azure hosting mechanism (such as an Azure function app, Azure Web App, or Azure Kubernetes), check the [Azure product by region page](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search) for feature availability.
Data chunking (Text Split skill) is free and available on all Azure AI services in all regions.
sentinel Microsoft 365 Defender Sentinel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md
description: Learn how using Microsoft Defender XDR together with Microsoft Sent
Previously updated : 06/25/2024 Last updated : 07/08/2024 appliesto: - Microsoft Sentinel in the Azure portal and the Microsoft Defender portal
+#customer intent: As a SOC admin, I want to integrate Microsoft Defender XDR with Microsoft Sentinel so my security operations center can work in a unified incident queue.
# Microsoft Defender XDR integration with Microsoft Sentinel
-Integrate Microsoft Defender XDR with Microsoft Sentinel to stream all Defender XDR incidents and advanced hunting events into Microsoft Sentinel and keep the incidents and events synchronized between both portals. Incidents from Defender XDR include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Microsoft Sentinel. Once in Microsoft Sentinel, incidents remain bi-directionally synced with Defender XDR, allowing you to take advantage of the benefits of both portals in your incident investigation.
+Integrate Microsoft Defender XDR with Microsoft Sentinel to stream all Defender XDR incidents and advanced hunting events into Microsoft Sentinel and keep the incidents and events synchronized between the Azure and Microsoft Defender portals. Incidents from Defender XDR include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Microsoft Sentinel. Once in Microsoft Sentinel, incidents remain bi-directionally synced with Defender XDR, allowing you to take advantage of the benefits of both portals in your incident investigation.
+Alternatively, onboard Microsoft Sentinel with Defender XDR to the unified security operations platform in the Defender portal. The unified security operations platform brings together the full capabilities of Microsoft Sentinel, Defender XDR, and generative AI built specifically for cybersecurity. For more information, see the following resources:
+
+- [Unified security operations platform with Microsoft Sentinel and Defender XDR](https://aka.ms/unified-soc-announcement)
+- [Microsoft Sentinel in the Microsoft Defender portal](microsoft-sentinel-defender-portal.md)
+- [Microsoft Copilot in Microsoft Defender](/defender-xdr/security-copilot-in-microsoft-365-defender)
+
+## Microsoft Sentinel and Defender XDR
+
+Use one of the following methods to integrate Microsoft Sentinel with Microsoft Defender XDR:
+
+- Ingest Microsoft Defender XDR service data into Microsoft Sentinel and view Microsoft Sentinel data in the Azure portal. Enable the Defender XDR connector in Microsoft Sentinel.
+
+- Integrate Microsoft Sentinel and Defender XDR into a single, unified security operations platform in the Microsoft Defender portal. In this case, view Microsoft Sentinel data directly in the Microsoft Defender portal with the rest of your Defender incidents, alerts, vulnerabilities, and other security data. Enable the Defender XDR connector in Microsoft Sentinel and onboard Microsoft Sentinel to the unified security operations platform in the Defender portal.
+
+Select the appropriate tab to see what the Microsoft Sentinel integration with Defender XDR looks like depending on which integration method you use.
+
+## [Azure portal](#tab/azure-portal)
+
+The following illustration shows how Microsoft's XDR solution seamlessly integrates with Microsoft Sentinel.
++
+In this diagram:
+
+- Insights from signals across your entire organization feed into Microsoft Defender XDR and Microsoft Defender for Cloud.
+- Microsoft Defender XDR and Microsoft Defender for Cloud send SIEM log data through Microsoft Sentinel connectors.
+- SecOps teams can then analyze and respond to threats identified in Microsoft Sentinel and Microsoft Defender XDR.
+- Microsoft Sentinel provides support for multicloud environments and integrates with third-party apps and partners.
+
+## [Defender portal](#tab/defender-portal)
+
+The following illustration shows how Microsoft's XDR solution seamlessly integrates with Microsoft Sentinel with the unified security operations platform.
++
+In this diagram:
+
+- Insights from signals across your entire organization feed into Microsoft Defender XDR and Microsoft Defender for Cloud.
+- Microsoft Sentinel provides support for multicloud environments and integrates with third-party apps and partners.
+- Microsoft Sentinel data is ingested together with your organization's data into the Microsoft Defender portal.
+- SecOps teams can then analyze and respond to threats identified by Microsoft Sentinel and Microsoft Defender XDR in the Microsoft Defender portal.
++ ## Incident correlation and alerts
-The integration gives Defender XDR security incidents the visibility to be managed from within Microsoft Sentinel, as part of the primary incident queue across the entire organization. See and correlate Defender XDR incidents together with incidents from all of your other cloud and on-premises systems. At the same time, this integration allows you to take advantage of the unique strengths and capabilities of Defender XDR for in-depth investigations and a Defender-specific experience across the Microsoft 365 ecosystem. Defender XDR enriches and groups alerts from multiple Microsoft Defender products, both reducing the size of the SOC's incident queue and shortening the time to resolve. Alerts from the following Microsoft Defender products and services are also included in the integration of Defender XDR to Microsoft Sentinel:
+With the integration of Defender XDR with Microsoft Sentinel, Defender XDR incidents are visible and manageable from within Microsoft Sentinel. This gives you a primary incident queue across the entire organization. See and correlate Defender XDR incidents together with incidents from all of your other cloud and on-premises systems. At the same time, this integration allows you to take advantage of the unique strengths and capabilities of Defender XDR for in-depth investigations and a Defender-specific experience across the Microsoft 365 ecosystem.
+
+Defender XDR enriches and groups alerts from multiple Microsoft Defender products, both reducing the size of the SOC's incident queue and shortening the time to resolve. Alerts from the following Microsoft Defender products and services are also included in the integration of Defender XDR to Microsoft Sentinel:
- Microsoft Defender for Endpoint - Microsoft Defender for Identity
In addition to collecting alerts from these components and other services, Defen
Consider integrating Defender XDR with Microsoft Sentinel for the following use cases and scenarios: -- Onboard Microsoft Sentinel to the unified security operations platform in the Microsoft Defender portal. Enabling the Defender XDR connector is a prerequisite. For more information, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard).
+- Onboard Microsoft Sentinel to the unified security operations platform in the Microsoft Defender portal. Enabling the Defender XDR connector is a prerequisite.
- Enable one-click connect of Defender XDR incidents, including all alerts and entities from Defender XDR components, into Microsoft Sentinel.
For more information about the capabilities of the Microsoft Sentinel integratio
## Connecting to Microsoft Defender XDR <a name="microsoft-defender-xdr-incidents-and-microsoft-incident-creation-rules"></a>
-Install the **Microsoft Defender XDR** solution for Microsoft Sentinel from the **Content hub**. Then, enable the **Microsoft Defender XDR** data connector to collect incidents and alerts. For more information, see [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md).
+Enable the Microsoft Defender XDR connector in Microsoft Sentinel to send all Defender XDR incidents and alerts information to Microsoft Sentinel and keep the incidents synchronized.
+
+- First, install the **Microsoft Defender XDR** solution for Microsoft Sentinel from the **Content hub**. Then, enable the **Microsoft Defender XDR** data connector to collect incidents and alerts. For more information, see [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md).
+
+- After you enable alert and incident collection in the Defender XDR data connector, Defender XDR incidents appear in the Microsoft Sentinel incidents queue shortly after they're generated in Defender XDR. It can take up to 10 minutes from the time an incident is generated in Defender XDR to the time it appears in Microsoft Sentinel. In these incidents, the **Alert product name** field contains **Microsoft Defender XDR** or one of the component Defender services' names.
+
+- To onboard your Microsoft Sentinel workspace to the unified security operations platform in the Defender portal, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard).
+
+### Ingestion costs
+
+Alerts and incidents from Defender XDR, including items that populate the *SecurityAlert* and *SecurityIncident* tables, are ingested into and synchronized with Microsoft Sentinel at no charge. For all other data types from individual Defender components such as the *Advanced hunting* tables *DeviceInfo*, *DeviceFileEvents*, *EmailEvents*, and so on, ingestion is charged. For more information, see [Plan costs and understand Microsoft Sentinel pricing and billing](billing.md).
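To see how this split shows up in your workspace, you can compare billable and non-billable volume in the `Usage` table. A sketch using the Az.OperationalInsights module; the workspace ID and the table names in the filter are placeholders:

```azurepowershell
# Summarizes ingestion volume (MB) per table over the last week,
# split by whether the data is billable.
$query = @"
Usage
| where TimeGenerated > ago(7d)
| where DataType in ("SecurityAlert", "SecurityIncident", "DeviceInfo", "EmailEvents")
| summarize VolumeMB = sum(Quantity) by DataType, IsBillable
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
```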
+
+### Data ingestion behavior
+
+When the Defender XDR connector is enabled, alerts created by Defender XDR-integrated products are sent to Defender XDR and grouped into incidents. Both the alerts and the incidents flow to Microsoft Sentinel through the Defender XDR connector. If you enabled any of the individual component connectors beforehand, they appear to remain connected, though no data flows through them.
-To onboard Microsoft Sentinel to the unified security operations platform in the Defender portal, see [Connect Microsoft Sentinel to Microsoft Defender XDR](/defender-xdr/microsoft-sentinel-onboard).
+The exception to this process is Microsoft Defender for Cloud. Although its integration with Defender XDR means that you receive Defender for Cloud *incidents* through Defender XDR, you need to also have a Microsoft Defender for Cloud connector enabled in order to receive Defender for Cloud *alerts*. For the available options and more information, see the following articles:
-After you enable alert and incident collection in the Defender XDR data connector, Defender XDR incidents appear in the Microsoft Sentinel incidents queue shortly after they're generated in Defender XDR. In these incidents, the **Alert product name** field contains **Microsoft Defender XDR** or one of the component Defender services' names.
-- It can take up to 10 minutes from the time an incident is generated in Defender XDR to the time it appears in Microsoft Sentinel.
+- [Microsoft Defender for Cloud in the Microsoft Defender portal](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud)
+- [Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration](ingest-defender-for-cloud-incidents.md)
-- Alerts and incidents from Defender XDR (those items that populate the *SecurityAlert* and *SecurityIncident* tables) are ingested into and synchronized with Microsoft Sentinel at no charge. For all other data types from individual Defender components (such as the *Advanced hunting* tables *DeviceInfo*, *DeviceFileEvents*, *EmailEvents*, and so on), ingestion is charged.
+### Microsoft incident creation rules
-- When the Defender XDR connector is enabled, alerts created by Defender XDR-integrated products are sent to Defender XDR and grouped into incidents. Both the alerts and the incidents flow to Microsoft Sentinel through the Defender XDR connector. If you enabled any of the individual component connectors beforehand, they appear to remain connected, though no data flows through them.
+To avoid creating *duplicate incidents for the same alerts*, the **Microsoft incident creation rules** setting is turned off for Defender XDR-integrated products when connecting Defender XDR. Defender XDR-integrated products include Microsoft Defender for Identity, Microsoft Defender for Office 365, and more. Also, Microsoft incident creation rules aren't supported in the unified security operations platform. Defender XDR has its own incident creation rules. This change has the following potential impacts:
- The exception to this process is Microsoft Defender for Cloud. Although its integration with Defender XDR means that you receive Defender for Cloud *incidents* through Defender XDR, you need to also have a Microsoft Defender for Cloud connector enabled in order to receive Defender for Cloud *alerts*. For the available options and more information, see the following articles:
- - [Microsoft Defender for Cloud in the Microsoft Defender portal](/microsoft-365/security/defender/microsoft-365-security-center-defender-cloud)
- - [Ingest Microsoft Defender for Cloud incidents with Microsoft Defender XDR integration](ingest-defender-for-cloud-incidents.md)
+- Microsoft Sentinel's incident creation rules allowed you to filter the alerts that would be used to create incidents. With these rules disabled, preserve the alert filtering capability by configuring [alert tuning in the Microsoft Defender portal](/microsoft-365/security/defender/investigate-alerts), or by using [automation rules](automate-incident-handling-with-automation-rules.md#incident-suppression) to suppress or close incidents you don't want.
-- Similarly, to avoid creating *duplicate incidents for the same alerts*, the **Microsoft incident creation rules** setting is turned off for Defender XDR-integrated products when connecting Defender XDR. This is because Defender XDR has its own incident creation rules. This change has the following potential impacts:
+- After you enable the Defender XDR connector, you can no longer predetermine the titles of incidents. The Defender XDR correlation engine presides over incident creation and automatically names the incidents it creates. This change is liable to affect any automation rules you created that use the incident name as a condition. To avoid this pitfall, use criteria other than the incident name as conditions for [triggering automation rules](automate-incident-handling-with-automation-rules.md#conditions). We recommend using *tags*.
- - Microsoft Sentinel's incident creation rules allowed you to filter the alerts that would be used to create incidents. With these rules disabled, you can preserve the alert filtering capability by configuring [alert tuning in the Microsoft Defender portal](/microsoft-365/security/defender/investigate-alerts), or by using [automation rules](automate-incident-handling-with-automation-rules.md#incident-suppression) to suppress (close) incidents you don't want.
+- If you use Microsoft Sentinel's incident creation rules for other Microsoft security solutions or products not integrated into Defender XDR, such as Microsoft Purview Insider Risk Management, and you plan to onboard to the unified security operations platform in the Defender portal, replace your incident creation rules with [scheduled analytic rules](create-analytics-rule-from-template.md).
- - You can no longer predetermine the titles of incidents, since the Defender XDR correlation engine presides over incident creation and automatically names the incidents it creates. This change is liable to affect any automation rules you created that use the incident name as a condition. To avoid this pitfall, use criteria other than the incident name as conditions for [triggering automation rules](automate-incident-handling-with-automation-rules.md#conditions). We recommend using *tags*.
## Working with Microsoft Defender XDR incidents in Microsoft Sentinel and bi-directional sync
-Defender XDR incidents appear in the Microsoft Sentinel incidents queue with the product name **Microsoft Defender XDR**, and with similar details and functionality to any other Microsoft Sentinel incidents. Each incident contains a link back to the parallel incident in the Microsoft Defender Portal.
+Defender XDR incidents appear in the Microsoft Sentinel incidents queue with the product name **Microsoft Defender XDR**, and with similar details and functionality to any other Microsoft Sentinel incidents. Each incident contains a link back to the parallel incident in the Microsoft Defender portal.
As the incident evolves in Defender XDR, and more alerts or entities are added to it, the Microsoft Sentinel incident gets updated accordingly.
In Defender XDR, all alerts from one incident can be transferred to another, res
## Advanced hunting event collection
-The Defender XDR connector also lets you stream **advanced hunting** events&mdash;a type of raw event data&mdash;from Defender XDR and its component services into Microsoft Sentinel. Collect [advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) events from all Defender XDR components, and stream them straight into purpose-built tables in your Microsoft Sentinel workspace. These tables are built on the same schema that is used in the Defender portal. This gives you complete access to the full set of advanced hunting events, and allows you to do the following tasks:
+The Defender XDR connector also lets you stream **advanced hunting** events&mdash;a type of raw event data&mdash;from Defender XDR and its component services into Microsoft Sentinel. Collect [advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) events from all Defender XDR components, and stream them straight into purpose-built tables in your Microsoft Sentinel workspace. These tables are built on the same schema that is used in the Defender portal, giving you complete access to the full set of advanced hunting events and allowing for the following tasks:
- Easily copy your existing Microsoft Defender for Endpoint/Office 365/Identity/Cloud Apps advanced hunting queries into Microsoft Sentinel.
The Defender XDR connector also lets you stream **advanced hunting** events&mdas
- Store the logs with increased retention, beyond Defender XDR's or its components' default retention of 30 days. You can do so by configuring the retention of your workspace or by configuring per-table retention in Log Analytics, as in the sketch after this list.
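For the per-table option, one approach is the `Update-AzOperationalInsightsTable` cmdlet from a recent Az.OperationalInsights release. A sketch with placeholder names; the retention values are illustrative only:

```azurepowershell
# Keeps DeviceFileEvents interactively queryable for 180 days and retains
# older data (long-term retention) for a total of 365 days.
Update-AzOperationalInsightsTable -ResourceGroupName "my-rg" `
    -WorkspaceName "my-sentinel-workspace" `
    -TableName "DeviceFileEvents" `
    -RetentionInDays 180 `
    -TotalRetentionInDays 365
```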
-## Next steps
+## Related content
-In this document, you learned the benefit of using Defender XDR together with Microsoft Sentinel, by enabling the Defender XDR connector in Microsoft Sentinel.
+In this document, you learned the benefits of enabling the Defender XDR connector in Microsoft Sentinel.
- [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md) - To use the unified security operations platform in the Defender portal, see [Connect data from Microsoft Defender XDR to Microsoft Sentinel](connect-microsoft-365-defender.md). - Check [availability of different Microsoft Defender XDR data types](microsoft-365-defender-cloud-support.md) in the different Microsoft 365 and Azure clouds.-- Create [custom alerts](detect-threats-custom.md) and [investigate incidents](investigate-incidents.md).
site-recovery Azure To Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md
When enabling zone to zone replication, only one fabric will be created. But the
```azurepowershell $primaryProtectionContainer = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric -Name "asr-a2a-default-westeurope-container"
-$recoveryPprotectionContainer = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric -Name "asr-a2a-default-westeurope-t-container"
+$recoveryProtectionContainer = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric -Name "asr-a2a-default-westeurope-t-container"
``` ### Create a replication policy
Remove-AzRecoveryServicesAsrReplicationProtectedItem -ReplicationProtectedItem $
## Next steps
-View the [Azure Site Recovery PowerShell reference](/powershell/module/az.RecoveryServices) to learn how you can do other tasks such as creating recovery plans and testing failover of recovery plans with PowerShell.
+View the [Azure Site Recovery PowerShell reference](/powershell/module/az.RecoveryServices) to learn how you can do other tasks such as creating recovery plans and testing failover of recovery plans with PowerShell.
site-recovery Delete Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/delete-appliance.md
+
+ Title: Remove an Azure Site Recovery replication appliance
+description: Learn how to remove an Azure Site Recovery replication appliance using the Azure portal.
+
+ Last updated : 07/04/2024
+
+# How to delete the replication appliance
++
+## Overview
+
+The Azure Site Recovery replication appliance is a virtual machine that runs on-premises and replicates data from your on-premises servers to Azure for disaster recovery purposes. You can delete the appliance from the Azure portal when you no longer need it.
+
+This article provides a step-by-step guide for removing the Azure Site Recovery replication appliance from the Azure portal.
++
+## Before you begin
+
+There are two ways to remove the replication appliance: **delete** and **reset**. If all the components of the appliance are in a healthy state and the appliance is still accessible, you can only *reset* the appliance. Resetting moves the appliance to its factory state, enabling it to be associated with any Recovery Services vault again.
+
+If all the appliance components are in a critical state and there's no connectivity with the appliance, it can be *deleted* from the Azure portal. Before deleting the Recovery Services vault, you must also *remove infrastructure* to ensure that all the resources created in the background for replication and appliance registration are also removed. However, before you delete the Azure Site Recovery replication appliance, you must complete some preparatory steps to avoid errors.
++
+## Prerequisites
+
+Before you delete the Azure Site Recovery replication appliance, ensure that you *disable replication of all servers* using the Azure Site Recovery replication appliance. To do this, go to the Azure portal and select the Recovery Services vault > *Replicated items* blade. Select the servers you want to stop replicating, select **Stop replication**, and confirm the action.
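If you prefer to script the same cleanup, here's a minimal sketch with Azure PowerShell; it assumes the vault context is already set with `Set-AzRecoveryServicesAsrVaultContext`, that `$fabric` holds the relevant ASR fabric, and that the container name is a placeholder:

```azurepowershell
# Illustrative only: disable replication for every protected item in a container.
$container = Get-AzRecoveryServicesAsrProtectionContainer -Fabric $fabric -Name "<container-name>"
$items = Get-AzRecoveryServicesAsrReplicationProtectedItem -ProtectionContainer $container

foreach ($item in $items) {
    Remove-AzRecoveryServicesAsrReplicationProtectedItem -ReplicationProtectedItem $item
}
```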
++
+### Delete an unhealthy appliance
+
+You can only delete the Azure Site Recovery replication appliance from the Azure portal if all components are in a critical state and the appliance is no longer accessible.
+
+> [!IMPORTANT]
+> The appliance must be unhealthy (in a critical state) for at least 30 minutes before it is eligible for deletion. If the appliance is healthy, you can only reset it. Ensure that you have disabled replication for all servers before deleting the appliance.
++
+To delete an appliance, follow these steps:
+
+1. Sign in to the Azure portal.
+2. Go to the *Recovery Services vault* > *Site Recovery infrastructure* (under **Manage**), then select Azure Site Recovery *replication appliances* under **For VMware & Physical machines**.
+3. For the Azure Site Recovery replication appliance you want to delete, select **Delete** from its menu.
+
+ :::image type="content" source="./media/delete-appliance/delete.png" alt-text="Screenshot of Site Recovery appliance page.":::
+
+1. Confirm that no replicated items are associated with the replication appliance. If any replicated items are still associated, a pop-up appears to block the appliance deletion.
+
+ :::image type="content" source="./media/delete-appliance/notification.png" alt-text="Screenshot of pop-up notification.":::
+
+1. If no replicated items are associated with the appliance, a pop-up appears to inform you about the Microsoft Entra apps that must be deleted. Note these app IDs and proceed with deletion.
++
+### Post delete appliance
+
+After successfully deleting the Azure Site Recovery replication appliance, you can:
+
+- Free up resources used by the Azure Site Recovery replication appliance, such as the storage account, network interface, and public IP address.
+- Delete the Recovery Services vault if it is no longer needed.
+- Remove the Microsoft Entra apps associated with the Azure Site Recovery replication appliance (see the sketch after this list). To do this, go to the Azure portal > *Microsoft Entra ID* > *App registrations* under the *Manage* blade. Select the app that you want to delete, select **Delete**, and then confirm the action. You can find the app names by following the steps in [Delete an unhealthy appliance](#delete-an-unhealthy-appliance).
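The app cleanup in the last bullet can also be scripted; a minimal sketch, where the application ID is a placeholder for one of the IDs you noted during deletion:

```azurepowershell
# Illustrative only: delete a Microsoft Entra app registration by its application (client) ID.
Remove-AzADApplication -ApplicationId "00000000-0000-0000-0000-000000000000"
```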
++
+## Reset a healthy appliance
+
+You can only reset the Azure Site Recovery replication appliance if all components are in a healthy state. To reset the appliance, follow these steps:
+
+1. On the Azure portal, go to the appliance you want to reset.
+ Ensure that no replicated items are associated with this appliance.
+1. On **Microsoft Azure Appliance Configuration Manager**, go to the *Reset appliance* section and select **Reset**.
+1. If no machines are associated with the appliance, the reset begins.
+1. Once the reset completes successfully, do the following:
+    - Open `Services.msc` and restart the `World Wide Web Publishing Service` (a PowerShell alternative is sketched after this list).
+    - Clear the cache for Microsoft Edge or any other browser you use, and restart the browser after the cache cleanup. [Learn more about clearing the cache](https://www.microsoft.com/edge/learning-center/how-to-manage-and-clear-your-cache-and-cookies).
+ - Restart the machine.
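For the first bullet, a console alternative to `Services.msc`; `W3SVC` is the Windows service name behind *World Wide Web Publishing Service*:

```azurepowershell
# Run from an elevated PowerShell session.
Restart-Service -Name W3SVC
```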
++
+## Next steps
+
+In this article, you learned how to delete the Azure Site Recovery replication appliance from the Azure portal. You can now free up the associated resources and delete the Recovery Services vault as needed.
site-recovery Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-log-analytics.md
We recommend that you review [common monitoring questions](monitoring-common-que
## Event logs available for Azure Site Recovery
-Azure Site Recovery provides the following Resource specific and legacy tables. Each event provides detailed data on a specific set of site recovery related artifacts.
+Azure Site Recovery provides the following resource-specific and legacy tables. Each event provides detailed data on a specific set of site recovery related artifacts.
-**Resource Specific tables**:
+**Resource-specific tables**:
- [AzureSiteRecoveryJobs](/azure/azure-monitor/reference/tables/asrjobs)
- [Azure Site Recovery Replicated Items Details](/azure/azure-monitor/reference/tables/ASRReplicatedItems)

**Legacy tables**:

- Azure Site Recovery Events
- Azure Site Recovery Replicated Items
-- Azure Site Recovery Replication Stats
-- Azure Site Recovery Points
-- Azure Site Recovery Replication Data Upload Rate
+- Azure Site Recovery Replication Stats
+- Azure Site Recovery Points
+- Azure Site Recovery Replication Data Upload Rate
- Azure Site Recovery Protected Disk Data Churn
-- Azure Site Recovery Replicated Item Details
-
+- Azure Site Recovery Replicated Item Details
## Configure Site Recovery to send logs
site-recovery Monitor Site Recovery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-site-recovery-reference.md
Note that some of the following logs apply to Azure Backup and others apply to A
[!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)]

### Recovery Services Vaults
+
Microsoft.RecoveryServices/Vaults

- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
Microsoft.RecoveryServices/Vaults
- [ASRReplicatedItems](/azure/azure-monitor/reference/tables/ASRReplicatedItems#columns)
- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics#columns)
+### Event logs available for Azure Site Recovery
+
+Azure Site Recovery provides the following resource-specific and legacy tables. Each event provides detailed data on a specific set of site recovery related artifacts.
+
+**Resource-specific tables**:
+
+- [AzureSiteRecoveryJobs](/azure/azure-monitor/reference/tables/asrjobs)
+- [ASRReplicatedItems](/azure/azure-monitor/reference/tables/ASRReplicatedItems)
+
+**Legacy tables**:
+
+- Azure Site Recovery Events
+- Azure Site Recovery Replicated Items
+- Azure Site Recovery Replication Stats
+- Azure Site Recovery Points
+- Azure Site Recovery Replication Data Upload Rate
+- Azure Site Recovery Protected Disk Data Churn
+- Azure Site Recovery Replicated Item Details
+
+## Log Analytics data model
+
+This section describes the Log Analytics data model for Azure Site Recovery that's added to the Azure Diagnostics table (if your vaults are configured with diagnostics settings to send data to a Log Analytics workspace in Azure Diagnostics mode). You can use this data model to write queries on Log Analytics data to create custom alerts or reporting dashboards.
+
+To understand the fields of each Site Recovery table in Log Analytics, review the details for the Azure Site Recovery Replicated Item Details and Azure Site Recovery Jobs tables. You can find information about the [diagnostic tables](/azure/azure-monitor/reference/tables/azurediagnostics).
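As an illustration of such a query, here's a minimal sketch run from PowerShell; the workspace GUID is a placeholder, and the `Category` value matches the table that follows:

```azurepowershell
# Illustrative only: pull recent Site Recovery job events from the AzureDiagnostics table.
$query = @"
AzureDiagnostics
| where Category == 'AzureSiteRecoveryJobs'
| where TimeGenerated > ago(7d)
| project TimeGenerated, OperationName, ResultDescription
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```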
+
+> [!TIP]
+> Expand this table for better readability.
+
+| Category | Category Display Name | Log Table | [Supports basic log plan](../azure-monitor/logs/basic-logs-configure.md#compare-the-basic-and-analytics-log-data-plans) | [Supports ingestion-time transformation](../azure-monitor/essentials/data-collection-transformations.md) | Example queries | Costs to export |
+| | | | | | | |
+| *ASRReplicatedItems* | Azure Site Recovery Replicated Item Details | [ASRReplicatedItems](/azure/azure-monitor/reference/tables/asrreplicateditems) <br> This table contains details of Azure Site Recovery replicated items, such as associated vault, policy, replication health, failover readiness, etc. Data is pushed once a day to this table for all replicated items, to provide the latest information for each item. | No | No | [Queries](/azure/azure-monitor/reference/queries/asrreplicateditems) | Yes |
+| *AzureSiteRecoveryJobs* | Azure Site Recovery Jobs | [ASRJobs](/azure/azure-monitor/reference/tables/asrjobs) <br> This table contains records of Azure Site Recovery jobs such as failover, test failover, reprotection, etc., with key details for monitoring and diagnostics, such as the replicated item information, duration, status, description, and so on. Whenever an Azure Site Recovery job is completed (that is, succeeded or failed), a corresponding record for the job is sent to this table. You can view history of Azure Site Recovery jobs by querying this table over a larger time range, provided your workspace has the required retention configured. | No | No | [Queries](/azure/azure-monitor/reference/queries/asrjobs) | No |
+| *AzureSiteRecoveryEvents* | Azure Site Recovery Events | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics) | No |
+| *AzureSiteRecoveryProtectedDiskDataChurn* | Azure Site Recovery Protected Disk Data Churn | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
+| *AzureSiteRecoveryRecoveryPoints* | Azure Site Recovery Points | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
+| *AzureSiteRecoveryReplicatedItems* | Azure Site Recovery Replicated Items | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
+| *AzureSiteRecoveryReplicationDataUploadRate* | Azure Site Recovery Replication Data Upload Rate | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
+| *AzureSiteRecoveryReplicationStats* | Azure Site Recovery Replication Stats | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
+
+### ASRReplicatedItems
+
+This is a resource-specific table that contains details of Azure Site Recovery replicated items, such as associated vault, policy, replication health, and failover readiness. Data is pushed once a day to this table for all replicated items, to provide the latest information for each item.
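A minimal sketch of a query against this table, for example to spot items that aren't ready for failover; the workspace GUID is a placeholder, and the `'Healthy'` literal is an assumed readiness value:

```azurepowershell
# Illustrative only: list replicated items whose failover readiness reports issues.
$query = @"
ASRReplicatedItems
| where FailoverReadiness != 'Healthy'
| project ReplicatedItemFriendlyName, FailoverReadiness, ReplicationHealthErrors
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query -Timespan (New-TimeSpan -Days 1)
```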
+
+#### Fields
+
+| Attribute | Value |
+|-|-|
+| Resource types | microsoft.recoveryservices/vaults |
+| Categories |Audit |
+| Solutions | LogManagement |
+| Basic log | No |
+| Ingestion-time transformation | No |
+| Sample Queries | Yes |
+
+#### Columns
+
+| Column Name | Type | Description |
+|-|-|-|
+| ActiveLocation | string | Current active location for the replicated item. If the item is in failed over state, the active location is the secondary (target) region. Otherwise, it is the primary region. |
+| BilledSize | real | The record size in bytes |
+| Category | string | The category of the log. |
+| DatasourceFriendlyName | string | Friendly name of the datasource being replicated. |
+| DatasourceType | string | ARM type of the resource configured for replication. |
+| DatasourceUniqueId | string | Unique ID of the datasource being replicated. |
+| FailoverReadiness | string | Denotes whether there are any configuration issues that could affect the failover operation success for the Azure Site Recovery replicated item. |
+| IRProgressPercentage | int | Progress percentage of the initial replication phase for the replicated item. |
+| IsBillable | string | Specifies whether ingesting the data is billable. When _IsBillable is false, ingestion isn't billed to your Azure account. |
+| LastHeartbeat | datetime | Time at which the Azure Site Recovery agent associated with the replicated item last made a call to the Azure Site Recovery service. Useful for debugging error scenarios where you wish to identify the time at which issues started arising. |
+| LastRpoCalculatedTime | datetime | Time at which the RPO was last calculated by the Azure Site Recovery service for the replicated item. |
+| LastSuccessfulTestFailoverTime | datetime | Time of the last successful failover performed on the replicated item. |
+| MultiVMGroupId | string | For scenarios where multi-VM consistency feature is enabled for replicated virtual machines, this field specifies the ID of the multi-VM group associated with the replicated virtual machine. |
+| OperationName | string | The name of the operation. |
+| OSFamily | string | OS family of the resource being replicated. |
+| PolicyFriendlyName | string | Friendly name of the replication policy applied to the replicated item. |
+| PolicyId | string | ARM ID of the replication policy applied to the replicated item. |
+| PolicyUniqueId | string | Unique ID of the replication policy applied for the replicated item. |
+| PrimaryFabricName | string | Represents the source region of the replicated item. By default, the value is the name of the source region, however if you have specified a custom name for the primary fabric while enabling replication, then that custom name shows up under this field. |
+| PrimaryFabricType | string | Fabric type associated with the source region of the replicated item. Depending on whether the replicated item is an Azure virtual machine, Hyper-V virtual machine or VMware virtual machine, the value for this field varies. |
+| ProtectionInfo | string | Protection status of the replicated item. |
+| RecoveryFabricName | string | Represents the target region of the replicated item. By default, the value is the name of the target region. However, if you specify a custom name for the recovery fabric while enabling replication, then that custom name shows up under this field. |
+| RecoveryFabricType | string | Fabric type associated with the target region of the replicated item. Depending on whether the replicated item is an Azure virtual machine, Hyper-V virtual machine or VMware virtual machine, the value for this field varies. |
+| RecoveryRegion | string | Target region to which the resource is replicated. |
+| ReplicatedItemFriendlyName | string | Friendly name of the resource being replicated. |
+| ReplicatedItemId | string | ARM ID of the replicated item. |
+| ReplicatedItemUniqueId | string | Unique ID of the replicated item. |
+| ReplicationHealthErrors | string | List of issues that might be affecting the recovery point generation for the replicated item. |
+| ReplicationStatus | string | Status of replication for the Azure Site Recovery replicated item. |
+| _ResourceId | string | A unique identifier for the resource that the record is associated with |
+| SourceResourceId | string | ARM ID of the datasource being replicated. |
+| SourceSystem | string | The agent type that collected the event. For example, OpsManager for Windows agent, either direct connect or Operations Manager, Linux for all Linux agents, or Azure for Azure Diagnostics |
+| _SubscriptionId | string | A unique identifier for the subscription that the record is associated with |
+| TenantId | string | The Log Analytics workspace ID |
+| TimeGenerated | datetime | The timestamp (UTC) when the log was generated. |
+| Type | string | The name of the table |
+| VaultLocation | string | Location of the vault associated with the replicated item. |
+| VaultName | string | Name of the vault associated with the replicated item. |
+| VaultType | string | Type of the vault associated with the replicated item. |
+| Version | string | The API version. |
+
+### AzureSiteRecoveryJobs
+
+This table contains records of Azure Site Recovery jobs, such as failover, test failover, and reprotection, with key details for monitoring and diagnostics, such as the replicated item information, duration, status, description, and so on. Whenever an Azure Site Recovery job is completed (that is, succeeded or failed), a corresponding record for the job is sent to this table. You can view the history of Azure Site Recovery jobs by querying this table over a larger time range, provided your workspace has the required retention configured.
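A minimal sketch of a history query against this table; the workspace GUID is a placeholder:

```azurepowershell
# Illustrative only: summarize the past week's Site Recovery jobs by type and outcome.
$query = @"
ASRJobs
| where TimeGenerated > ago(7d)
| summarize JobCount = count() by OperationName, Status
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query
```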
+
+#### Fields
+
+| Attribute | Value |
+|-|-|
+| Resource types | microsoft.recoveryservices/vaults |
+| Categories | Audit |
+| Solutions | LogManagement |
+| Basic log | No |
+| Ingestion-time transformation | No |
+| Sample Queries | Yes |
+
+#### Columns
+
+| Column Name | Type | Description |
+|-|-|-|
+| _BilledSize | real | The record size in bytes |
+| Category | string | The category of the log. |
+| CorrelationId | string | Correlation ID associated with the Azure Site Recovery job for debugging purposes. |
+| DurationMs | int | Duration of the Azure Site Recovery job. |
+| EndTime | datetime | End time of the Azure Site Recovery job. |
+| _IsBillable | string | Specifies whether ingesting the data is billable. When _IsBillable is false, ingestion isn't billed to your Azure account. |
+| JobUniqueId | string | Unique ID of the Azure Site Recovery job. |
+| OperationName | string | Type of Azure Site Recovery job, for example, Test failover. |
+| PolicyFriendlyName | string | Friendly name of the replication policy applied to the replicated item (if applicable). |
+| PolicyId | string | ARM ID of the replication policy applied to the replicated item (if applicable). |
+| PolicyUniqueId | string | Unique ID of the replication policy applied to the replicated item (if applicable). |
+| ReplicatedItemFriendlyName | string | Friendly name of replicated item associated with the Azure Site Recovery job (if applicable). |
+| ReplicatedItemId | string | ARM ID of the replicated item associated with the Azure Site Recovery job (if applicable). |
+| ReplicatedItemUniqueId | string | Unique ID of the replicated item associated with the Azure Site Recovery job (if applicable). |
+| ReplicationScenario | string | Field used to identify whether the replication is being done for an Azure resource or an on-premises resource. |
+| _ResourceId | string | A unique identifier for the resource that the record is associated with |
+| ResultDescription | string | Result of the Azure Site Recovery job. |
+| SourceFriendlyName | string | Friendly name of the resource on which the Azure Site Recovery job was executed. |
+| SourceResourceGroup | string | Resource Group of the source. |
+| SourceResourceId | string | ARM ID of the resource on which the Azure Site Recovery job was executed. |
+| SourceSystem | string | The agent type that collected the event. For example, OpsManager for Windows agent, either direct connect or Operations Manager, Linux for all Linux agents, or Azure for Azure Diagnostics |
+| SourceType | string | Type of resource on which the Azure Site Recovery job was executed. |
+| StartTime | datetime | Start time of the Azure Site Recovery job. |
+| Status | string | Status of the Azure Site Recovery job. |
+| _SubscriptionId | string | A unique identifier for the subscription that the record is associated with |
+| TenantId | string | The Log Analytics workspace ID |
+| TimeGenerated | datetime | The timestamp (UTC) when the log was generated. |
+| Type | string | The name of the table |
+| VaultLocation | string | Location of the vault associated with the Azure Site Recovery job. |
+| VaultName | string | Name of the vault associated with the Azure Site Recovery job. |
+| VaultType | string | Type of the vault associated with the Azure Site Recovery job. |
+| Version | string | The API version. |
+
[!INCLUDE [horz-monitor-ref-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-activity-log.md)]

- [Microsoft.RecoveryServices](/azure/role-based-access-control/permissions/management-and-governance#microsoftrecoveryservices)
site-recovery Monitoring Common Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitoring-common-questions.md
Only machines for which initial replication has completed are included in the co
### How long is data kept in Azure Monitor logs?
-By default, retention is for 31 days. You can increase the period in the **Usage and Estimated Cost** section in the Log Analytics workspace. Click on **Data Retention**, and choose the range.
+For information on data retention, see [Data retention and archive in Azure Monitor logs](/azure/azure-monitor/logs/data-retention-archive).
+
+You can modify the default retention period in the **Usage and Estimated Cost** section in the Log Analytics workspace. Select **Data Retention**, and choose the range.
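The workspace-level setting can also be scripted; a minimal sketch, assuming the Az.OperationalInsights module, with placeholder names:

```azurepowershell
# Illustrative only: raise the workspace's interactive retention to 90 days.
Set-AzOperationalInsightsWorkspace -ResourceGroupName "contoso-rg" -Name "contoso-law" -RetentionInDays 90
```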
### What's the size of the resource logs?
site-recovery Report Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/report-site-recovery.md
To set up a Log Analytics workspace, [follow these steps](../azure-monitor/logs/
### Configure diagnostics settings for your vaults
-Azure Resource Manager resources like Recovery Services vaults, record information about site recovery jobs and replicated items as diagnostics data. To configure diagnostics settings for your vaults, follow these steps:
+Azure Resource Manager resources, like Recovery Services vaults, record information about site recovery jobs and replicated items as diagnostics data.
-1. On the Azure portal, navigate to the chosen the Recovery Services vault of concern
-1. Select **Monitoring** > **Diagnostic settings**.
-1. Specify the target for the Recovery Services Vault's diagnostic data. Learn more about [using diagnostic events](../backup/backup-azure-diagnostic-events.md) for Recovery Services vaults.
+To learn how to configure diagnostics settings, see [Diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings).
+
+You can also configure diagnostics settings for your vaults using the following steps in the Azure portal.
+
+1. Navigate to the chosen Recovery Services vault, then select **Monitoring** > **Diagnostic settings**.
+1. Specify the target for the Recovery Services Vault's diagnostic data. Learn more about [using diagnostic events](../backup/backup-azure-diagnostic-events.md) for Recovery Services vaults.
1. Select **Azure Site Recovery Jobs** and **Azure Site Recovery Replicated Item Details** options to populate the reports.

   :::image type="content" source="./media/report-site-recovery/logs.png" alt-text="Screenshot of logs options.":::
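The same diagnostics configuration can be scripted; a minimal sketch, assuming the newer Az.Monitor diagnostic-settings cmdlets, with placeholder resource IDs:

```azurepowershell
# Illustrative only: send the two report categories from a vault to a Log Analytics workspace.
$logs = @(
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category 'AzureSiteRecoveryJobs'
    New-AzDiagnosticSettingLogSettingsObject -Enabled $true -Category 'ASRReplicatedItems'
)
New-AzDiagnosticSetting -Name "asr-reports" `
    -ResourceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.RecoveryServices/vaults/<vault>" `
    -WorkspaceId "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" `
    -Log $logs
```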
site-recovery Reporting Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/reporting-log-analytics.md
- Title: Log Analytics data model for Azure Site Recovery
-description: In this article, learn about the Azure Monitor Log Analytics data model details for Azure Site Recovery data.
-- Previously updated : 05/13/2024----
-# Log Analytics data model for Azure Site Recovery
-
-This article describes the Log Analytics data model for Azure Site Recover that's added to the Azure Diagnostics table (if your vaults are configured with diagnostics settings to send data to a Log Analytics workspace in Azure Diagnostics mode). You can use this data model to write queries on Log Analytics data to create custom alerts or reporting dashboards.
-
-To understand the fields of each Site Recovery table in Log Analytics, review the details for the Azure Site Recovery Replicated Item Details and Azure Site Recovery Jobs tables. You can find information about the diagnostic tables [here](/azure/azure-monitor/reference/tables/azurediagnostics).
-
-> [!TIP]
-> Expand this table for better readability.
-
-| Category | Category Display Name | Log Table | [Supports basic log plan](../azure-monitor/logs/basic-logs-configure.md#compare-the-basic-and-analytics-log-data-plans) | [Supports ingestion-time transformation](../azure-monitor/essentials/data-collection-transformations.md) | Example queries | Costs to export |
-| | | | | | | |
-| *ASRReplicatedItems* | Azure Site Recovery Replicated Item Details | [ASRReplicatedItems](/azure/azure-monitor/reference/tables/asrreplicateditems) <br> This table contains details of Azure Site Recovery replicated items, such as associated vault, policy, replication health, failover readiness. etc. Data is pushed once a day to this table for all replicated items, to provide the latest information for each item. | No | No | [Queries](/azure/azure-monitor/reference/queries/asrreplicateditems) | Yes |
-| *AzureSiteRecoveryJobs* | Azure Site Recovery Jobs | [ASRJobs](/azure/azure-monitor/reference/tables/asrjobs) <br> This table contains records of Azure Site Recovery jobs such as failover, test failover, reprotection etc., with key details for monitoring and diagnostics, such as the replicated item information, duration, status, description, and so on. Whenever an Azure Site Recovery job is completed (that is, succeeded or failed), a corresponding record for the job is sent to this table. You can view history of Azure Site Recovery jobs by querying this table over a larger time range, provided your workspace has the required retention configured. | No | No | [Queries](/azure/azure-monitor/reference/queries/asrjobs) | No |
-| *AzureSiteRecoveryEvents* | Azure Site Recovery Events | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics) | No |
-| *AzureSiteRecoveryProtectedDiskDataChurn* | Azure Site Recovery Protected Disk Data Churn | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
-| *AzureSiteRecoveryRecoveryPoints* | Azure Site Recovery Points | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
-| *AzureSiteRecoveryReplicatedItems* | Azure Site Recovery Replicated Items | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
-| *AzureSiteRecoveryReplicationDataUploadRate* | Azure Site Recovery Replication Data Upload Rate | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
-| *AzureSiteRecoveryReplicationStats* | Azure Site Recovery Replication Stats | [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) <br> Logs from multiple Azure resources. | No | No | [Queries](/azure/azure-monitor/reference/queries/azurediagnostics#queries-for-microsoftrecoveryservices) | No |
--
-## ASRReplicatedItems
-
-This is a resource specific table that contains details of Azure Site Recovery replicated items, such as associated vault, policy, replication health, failover readiness. etc. Data is pushed once a day to this table for all replicated items, to provide the latest information for each item.
-
-### Fields
-
-| Attribute | Value |
-|-|-|
-| Resource types | microsoft.recoveryservices/vaults |
-| Categories |Audit |
-| Solutions | LogManagement |
-| Basic log | No |
-| Ingestion-time transformation | No |
-| Sample Queries | Yes |
-
-### Columns
-
-| Column Name | Type | Description |
-|-|-|-|
-| ActiveLocation | string | Current active location for the replicated item. If the item is in failed over state, the active location is the secondary (target) region. Otherwise, it is the primary region. |
-| BilledSize | real | The record size in bytes |
-| Category | string | The category of the log. |
-| DatasourceFriendlyName | string | Friendly name of the datasource being replicated. |
-| DatasourceType | string | ARM type of the resource configured for replication. |
-| DatasourceUniqueId | string | Unique ID of the datasource being replicated. |
-| FailoverReadiness | string | Denotes whether there are any configuration issues that could affect the failover operation success for the Azure Site Recovery replicated item. |
-| IRProgressPercentage | int | Progress percentage of the initial replication phase for the replicated item. |
-| IsBillable | string | Specifies whether ingesting the data is billable. When _IsBillable is false ingestion isn't billed to your Azure account |
-| LastHeartbeat | datetime | Time at which the Azure Site Recovery agent associated with the replicated item last made a call to the Azure Site Recovery service. Useful for debugging error scenarios where you wish to identify the time at which issues started arising. |
-| LastRpoCalculatedTime | datetime | Time at which the RPO was last calculated by the Azure Site Recovery service for the replicated item. |
-| LastSuccessfulTestFailoverTime | datetime | Time of the last successful failover performed on the replicated item. |
-| MultiVMGroupId | string | For scenarios where multi-VM consistency feature is enabled for replicated virtual machines, this field specifies the ID of the multi-VM group associated with the replicated virtual machine. |
-| OperationName | string | The name of the operation. |
-| OSFamily | string | OS family of the resource being replicated. |
-| PolicyFriendlyName | string | Friendly name of the replication policy applied to the replicated item. |
-| PolicyId | string | ARM ID of the replication policy applied to the replicated item. |
-| PolicyUniqueId | string | Unique ID of the replication policy applied for the replicated item. |
-| PrimaryFabricName | string | Represents the source region of the replicated item. By default, the value is the name of the source region, however if you have specified a custom name for the primary fabric while enabling replication, then that custom name shows up under this field. |
-| PrimaryFabricType | string | Fabric type associated with the source region of the replicated item. Depending on whether the replicated item is an Azure virtual machine, Hyper-V virtual machine or VMware virtual machine, the value for this field varies. |
-| ProtectionInfo | string | Protection status of the replicated item. |
-| RecoveryFabricName | string | Represents the target region of the replicated item. By default, the value is the name of the target region. However, if you specify a custom name for the recovery fabric while enabling replication, then that custom name shows up under this field. |
-| RecoveryFabricType | string | Fabric type associated with the target region of the replicated item. Depending on whether the replicated item is an Azure virtual machine, Hyper-V virtual machine or VMware virtual machine, the value for this field varies. |
-| RecoveryRegion | string | Target region to which the resource is replicated. |
-| ReplicatedItemFriendlyName | string | Friendly name of the resource being replicated. |
-| ReplicatedItemId | string | ARM ID of the replicated item. |
-| ReplicatedItemUniqueId | string | Unique ID of the replicated item. |
-| ReplicationHealthErrors | string | List of issues that might be affecting the recovery point generation for the replicated item. |
-| ReplicationStatus | string | Status of replication for the Azure Site Recovery replicated item. |
-| _ResourceId | string | A unique identifier for the resource that the record is associated with |
-| SourceResourceId | string | ARM ID of the datasource being replicated. |
-| SourceSystem | string | The agent type that collected the event. For example, OpsManager for Windows agent, either direct connect or Operations Manager, Linux for all Linux agents, or Azure for Azure Diagnostics |
-| _SubscriptionId | string | A unique identifier for the subscription that the record is associated with |
-| TenantId | string | The Log Analytics workspace ID |
-| TimeGenerated | datetime | The timestamp (UTC) when the log was generated. |
-| Type | string | The name of the table |
-| VaultLocation | string | Location of the vault associated with the replicated item. |
-| VaultName | string | Name of the vault associated with the replicated item. |
-| VaultType | string | Type of the vault associated with the replicated item. |
-| Version | string | The API version. |
-
-## AzureSiteRecoveryJobs
-
-This table contains records of Azure Site Recovery jobs such as failover, test failover, reprotection etc., with key details for monitoring and diagnostics, such as the replicated item information, duration, status, description, and so on. Whenever an Azure Site Recovery job is completed (that is, succeeded or failed), a corresponding record for the job is sent to this table. You can view history of Azure Site Recovery jobs by querying this table over a larger time range, provided your workspace has the required retention configured.
-
-### Fields
-
-| Attribute | Value |
-|-|-|
-| Resource types | microsoft.recoveryservices/vaults |
-| Categories | Audit |
-| Solutions | LogManagement |
-| Basic log | No |
-| Ingestion-time transformation | No |
-| Sample Queries | Yes |
-
-### Columns
-
-| Column Name | Type | Description |
-|-|-|-|
-| _BilledSize | real | The record size in bytes |
-| Category | string | The category of the log. |
-| CorrelationId | string | Correlation ID associated with the Azure Site Recovery job for debugging purposes. |
-| DurationMs | int | Duration of the Azure Site Recovery job. |
-| EndTime | datetime | End time of the Azure Site Recovery job. |
-| _IsBillable | string | Specifies whether ingesting the data is billable. When _IsBillable is false ingestion isn't billed to your Azure account |
-| JobUniqueId | string | Unique ID of the Azure Site Recovery job. |
-| OperationName | string | Type of Azure Site Recovery job, for example, Test failover. |
-| PolicyFriendlyName | string | Friendly name of the replication policy applied to the replicated item (if applicable). |
-| PolicyId | string | ARM ID of the replication policy applied to the replicated item (if applicable). |
-| PolicyUniqueId | string | Unique ID of the replication policy applied to the replicated item (if applicable). |
-| ReplicatedItemFriendlyName | string | Friendly name of replicated item associated with the Azure Site Recovery job (if applicable). |
-| ReplicatedItemId | string | ARM ID of the replicated item associated with the Azure Site Recovery job (if applicable). |
-| ReplicatedItemUniqueId | string | Unique ID of the replicated item associated with the Azure Site Recovery job (if applicable). |
-| ReplicationScenario | string | Field used to identify whether the replication is being done for an Azure resource or an on-premises resource. |
-| _ResourceId | string | A unique identifier for the resource that the record is associated with |
-| ResultDescription | string | Result of the Azure Site Recovery job. |
-| SourceFriendlyName | string | Friendly name of the resource on which the Azure Site Recovery job was executed. |
-| SourceResourceGroup | string | Resource Group of the source. |
-| SourceResourceId | string | ARM ID of the resource on which the Azure Site Recovery job was executed. |
-| SourceSystem | string | The agent type that collected the event. For example, OpsManager for Windows agent, either direct connect or Operations Manager, Linux for all Linux agents, or Azure for Azure Diagnostics |
-| SourceType | string | Type of resource on which the Azure Site Recovery job was executed. |
-| StartTime | datetime | Start time of the Azure Site Recovery job. |
-| Status | string | Status of the Azure Site Recovery job. |
-| _SubscriptionId | string | A unique identifier for the subscription that the record is associated with |
-| TenantId | string | The Log Analytics workspace ID |
-| TimeGenerated | datetime | The timestamp (UTC) when the log was generated. |
-| Type | string | The name of the table |
-| VaultLocation | string | Location of the vault associated with the Azure Site Recovery job. |
-| VaultName | string | Name of the vault associated with the Azure Site Recovery job. |
-| VaultType | string | Type of the vault associated with the Azure Site Recovery job. |
-| Version | string | The API version. |
--
-## Next steps
--- To learn more about the Azure Monitor Log Analytics data model, see [Azure Monitor Log Analytics data model](/azure/azure-monitor/log-query/log-query-overview)
site-recovery Site Recovery Monitor And Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md
-# Monitor Site Recovery
+# Use the Recovery Services dashboard
In this article, learn how to monitor Azure [Site Recovery](site-recovery-overview.md) using Site Recovery's built-in monitoring. You can monitor:
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
description: This article provides reference information for the azcopy copy com
Previously updated : 05/31/2024
Last updated : 07/09/2024
Copies source data to a destination location.
Copies source data to a destination location. The supported directions are:

- local <-> Azure Blob (SAS or OAuth authentication)
-- local <-> Azure Files (Share/directory SAS authentication)
+- local <-> Azure Files (Share/directory SAS authentication or OAuth authentication)
- local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication)
- Azure Blob (SAS or public) -> Azure Blob (SAS or OAuth authentication)
- Azure Data Lake Storage Gen2 (SAS or public) -> Azure Data Lake Storage Gen2 (SAS or OAuth authentication)
Copies source data to a destination location. The supported directions are:
- Azure Data Lake Storage Gen2 (SAS or OAuth authentication) <-> Azure Data Lake Storage Gen2 (SAS or OAuth authentication)
- Azure Data Lake Storage Gen2 (SAS or OAuth authentication) <-> Azure Blob (SAS or OAuth authentication)
- Azure Blob (SAS or public) -> Azure Files (SAS)
-- Azure Files (SAS) -> Azure Files (SAS)
+- Azure Files (SAS or OAuth authentication) <-> Azure Files (SAS or OAuth authentication)
- Azure Files (SAS) -> Azure Blob (SAS or OAuth authentication)
- AWS S3 (Access Key) -> Azure Block Blob (SAS or OAuth authentication)
- Google Cloud Storage (Service Account Key) -> Azure Block Blob (SAS or OAuth authentication)
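For instance, an upload to a file share over one of these directions might look like the following sketch; the account, share, and local path are placeholders, and OAuth requires `azcopy login` (or a SAS appended to the URL) beforehand:

```azurepowershell
# Illustrative only: copy a local folder to an Azure file share recursively.
azcopy copy "C:\local\data" "https://<account>.file.core.windows.net/<share>/data" --recursive
```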
storage Storage Files Prevent File Share Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-prevent-file-share-deletion.md
Azure Files offers soft delete, which allows you to recover your file share when
When soft delete for Azure file shares is enabled on a storage account, if a file share is deleted, it transitions to a soft deleted state instead of being permanently erased. You can configure the amount of time soft deleted data is recoverable before it's permanently deleted, and undelete the share anytime during this retention period. After being undeleted, the share and all of its contents, including snapshots, will be restored to the state it was in prior to deletion. Soft delete only works on a file share level. Individual files that are deleted will still be permanently erased.
-Soft delete can be enabled on either new or existing file shares. Soft delete is also backwards compatible, so you don't have to make any changes to your applications to take advantage of the protections of soft delete. Soft delete doesn't work for NFS shares, even if it's enabled for the storage account.
+Soft delete can be enabled on either new or existing file shares. Soft delete is also backwards compatible, so you don't have to make any changes to your applications to take advantage of the protections of soft delete.
-To permanently delete a file share in a soft delete state before its expiry time, you must undelete the share, disable soft delete, and then delete the share again. Then you should re-enable soft delete, since any other file shares in that storage account will be vulnerable to accidental deletion while soft delete is off.
+To permanently delete a file share in a soft delete state before its expiry time, you must undelete the share, disable soft delete, and then delete the share again. Then you should re-enable soft delete, because any other file shares in that storage account will be vulnerable to accidental deletion while soft delete is off.
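As a scripted illustration of that sequence, a minimal sketch with the Az.Storage management cmdlets; names and the deleted-share version are placeholders:

```azurepowershell
# Illustrative only.
# 1. Undelete the soft-deleted share.
Restore-AzRmStorageShare -ResourceGroupName "contoso-rg" -StorageAccountName "contosofiles" `
    -Name "share1" -DeletedShareVersion "<deleted-share-version>"

# 2. Disable soft delete, then permanently delete the share.
Update-AzStorageFileServiceProperty -ResourceGroupName "contoso-rg" -StorageAccountName "contosofiles" `
    -EnableShareDeleteRetentionPolicy $false
Remove-AzRmStorageShare -ResourceGroupName "contoso-rg" -StorageAccountName "contosofiles" -Name "share1"

# 3. Re-enable soft delete to protect the remaining shares.
Update-AzStorageFileServiceProperty -ResourceGroupName "contoso-rg" -StorageAccountName "contosofiles" `
    -EnableShareDeleteRetentionPolicy $true -ShareRetentionDays 14
```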
For soft-deleted premium file shares, the file share quota (the provisioned size of a file share) is used in the total storage account quota calculation until the soft-deleted share expiry date, when the share is fully deleted.
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/container-solutions/partner-overview.md
This article highlights Microsoft partner solutions that enable automation, data
| Partner | Description | Website/product link |
| - | -- | -- |
-| ![Kasten company logo](./media/kasten-logo.png) |**Kasten**<br>Kasten by Veeam provides a solution for Kubernetes backup and disaster recovery. Kasten helps enterprises overcome Day 2 data management challenges to confidently run applications on Kubernetes.<br><br>The Kasten K10 data management software platform provides enterprise operations teams a scalable and secure system for BCDR and mobility of Kubernetes applications.|[Partner page](https://docs.kasten.io/latest/install/azure/azure.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/veeam.kasten_k10_by_veeam_byol?tab=Overview)|
+| ![CloudCasa by Catalogic logo](./media/cloudcasa-logo.png)| **CloudCasa**<br>CloudCasa by Catalogic is an award-winning backup, recovery, migration, and replication service, built specifically for Kubernetes, and cloud native applications. It supports AKS, and all other major Kubernetes distributions, and managed services. <br>From a single dashboard, CloudCasa makes managing cross-cluster, cross-tenant, cross-region, and cross-cloud backup and recovery easy. With CloudCasa's Azure integration, cluster recoveries, and migrations include the ability to automatically re-create entire AKS clusters along with their vNETs, add-ons, and load balancers.|[Partner page](https://cloudcasa.io/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/catalogicsoftware1625626770507.cloudcasa-aks-app)|
+| ![Kasten company logo](./media/kasten-logo.png) |**Kasten**<br>Kasten by Veeam provides a solution for Kubernetes backup and disaster recovery. Kasten helps enterprises overcome Day 2 data management challenges to confidently run applications on Kubernetes.<br><br>The Kasten K10 data management software platform provides enterprise operations teams a scalable and secure system for BCDR and mobility of Kubernetes applications.|[Partner page](https://docs.kasten.io/latest/install/azure/azure.html)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.kasten_k10_by_veeam_byol?tab=Overview)|
+| ![NetApp company logo](./media/astra-logo.jpg) |**NetApp**<br>NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation.<br><br>NetApp Astra Control Service is a fully managed service that makes it easier for customers to manage, protect, and move their data-rich containerized workloads running on Kubernetes within, and across public clouds, and on-premises. Astra Control provides persistent container storage with Azure NetApp Files offering advanced application-aware data management functionality (like snapshot-revert, backup-restore, activity log, and active cloning) for data protection, disaster recovery, data audit, and migration use-cases for your modern apps. |[Partner page](https://cloud.netapp.com/astra)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.netapp-astra-acs)|
| ![Portworx company logo](./media/portworx-logo.png) |**Portworx**<br>Portworx by Pure Storage is the Kubernetes Data Services Platform enterprises trust to run mission-critical applications in containers in production.<br><br>Portworx provides a fully integrated solution for persistent storage, data protection, disaster recovery, data security, cross-cloud and data migrations, and automated capacity management for applications running on Kubernetes.|[Partner page](https://portworx.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.portworx-enterprise)|
+| ![Rackware company logo](./media/rackware-logo.png) |**Rackware**<br>RackWare provides an intelligent highly automated Hybrid Cloud Management Platform that extends across physical and virtual environments.<br><br>RackWare SWIFT is a converged disaster recovery, backup, and migration solution for Kubernetes and OpenShift. It's a cross-platform, cross-cloud, and cross-version solution that enables you to move and protect your stateful Kubernetes applications from any on-premises or cloud environment to Azure Kubernetes Service (AKS) and Azure Storage.|[Partner page](https://www.rackwareinc.com/rackware-swift-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=rackware%20swift&page=1&filters=virtual-machine-images)|
| ![Robin.io company logo](./media/robin-logo.png) |**Robin.io**<br>Robin.io provides an application and data management platform that enables enterprises and 5G service providers to deliver complex application pipelines as a service.<br><br>Robin Cloud Native Storage (CNS) brings advanced data management capabilities to Azure Kubernetes Service. Robin CNS seamlessly integrates with Azure Disk Storage to simplify management of stateful applications. Developers and DevOps teams can deploy Robin CNS as a standard Kubernetes operator on AKS. Robin Cloud Native Storage helps simplify data management operations such as BCDR and cloning of entire applications. |[Partner page](https://robin.io/robin-cloud-native-storage-for-microsoft-aks/)|
-| ![NetApp company logo](./media/astra-logo.jpg) |**NetApp**<br>NetApp is a global cloud-led, data-centric software company that empowers organizations to lead with data in the age of accelerated digital transformation.<br><br>NetApp Astra Control Service is a fully managed service that makes it easier for customers to manage, protect, and move their data-rich containerized workloads running on Kubernetes within and across public clouds and on-premises. Astra Control provides persistent container storage with Azure NetApp Files offering advanced application-aware data management functionality (like snapshot-revert, backup-restore, activity log, and active cloning) for data protection, disaster recovery, data audit, and migration use-cases for your modern apps. |[Partner page](https://cloud.netapp.com/astra)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/netapp.netapp-astra-acs)|
-| ![Rackware company logo](./media/rackware-logo.png) |**Rackware**<br>RackWare provides an intelligent highly automated Hybrid Cloud Management Platform that extends across physical and virtual environments.<br><br>RackWare SWIFT is a converged disaster recovery, backup and migration solution for Kubernetes and OpenShift. It is a cross-platform, cross-cloud and cross-version solution that enables you to move and protect your stateful Kubernetes applications from any on-premises or cloud environment to Azure Kubernetes Service (AKS) and Azure Storage.|[Partner page](https://www.rackwareinc.com/rackware-swift-microsoft-azure)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=rackware%20swift&page=1&filters=virtual-machine-images)|
-Are you a storage partner but your solution is not listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
+Are you a storage partner but your solution isn't listed yet? Send us your info [here](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR3i8TQB_XnRAsV3-7XmQFpFUQjY4QlJYUzFHQ0ZBVDNYWERaUlNRVU5IMyQlQCN0PWcu).
## Next steps To learn more about some of our other partners, see:
synapse-analytics Apache Spark 32 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-32-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.2.
-> [!WARNING]
+
+> [!CAUTION]
> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.2 > * End of Support announced for Azure Synapse Runtime for Apache Spark 3.2 July 8, 2023. > * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
-> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired and disabled as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace.
+> * In accordance with the Synapse runtime for Apache Spark lifecycle policy, Azure Synapse runtime for Apache Spark 3.2 will be retired as of July 8, 2024. After the End of Support date, the retired runtimes are unavailable for new Spark pools and existing workflows can't execute. Metadata will temporarily remain in the Synapse workspace.
> * **We strongly recommend that you upgrade your Apache Spark 3.2 workloads to [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md) before July 8, 2024.**

## Component versions
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
[Synapse-Python38-CPU.yml](https://github.com/Azure-Samples/Synapse/blob/main/Spark/Python/Synapse-Python38-CPU.yml) contains the list of libraries shipped in the default Python 3.8 environment in Azure Synapse Spark.
-## Scala and Java libraries
-
-HikariCP-2.5.1.jar
-
-JLargeArrays-1.5.jar
-
-JTransforms-3.1.jar
-
-RoaringBitmap-0.9.0.jar
-
-ST4-4.0.4.jar
-
-SparkCustomEvents-3.2.0-1.0.0.jar
-
-TokenLibrary-assembly-3.0.0.jar
-
-VegasConnector-1.1.01_2.12_3.2.0-SNAPSHOT.jar
-
-activation-1.1.1.jar
-
-adal4j-1.6.3.jar
-
-aircompressor-0.21.jar
-
-algebra_2.12-2.0.1.jar
-
-aliyun-java-sdk-core-3.4.0.jar
-
-aliyun-java-sdk-ecs-4.2.0.jar
-
-aliyun-java-sdk-ram-3.0.0.jar
-
-aliyun-java-sdk-sts-3.0.0.jar
-
-aliyun-sdk-oss-3.4.1.jar
-
-annotations-17.0.0.jar
-
-antlr-runtime-3.5.2.jar
-
-antlr4-runtime-4.8.jar
-
-aopalliance-repackaged-2.6.1.jar
-
-apache-log4j-extras-1.2.17.jar
-
-apiguardian-api-1.1.0.jar
-
-arpack-2.2.1.jar
-
-arpack_combined_all-0.1.jar
-
-arrow-format-2.0.0.jar
-
-arrow-memory-core-2.0.0.jar
-
-arrow-memory-netty-2.0.0.jar
-
-arrow-vector-2.0.0.jar
-
-audience-annotations-0.5.0.jar
-
-avro-1.10.2.jar
-
-avro-ipc-1.10.2.jar
-
-avro-mapred-1.10.2.jar
-
-aws-java-sdk-bundle-1.11.901.jar
-
-azure-data-lake-store-sdk-2.3.6.jar
-
-azure-eventhubs-3.3.0.jar
-
-azure-eventhubs-spark_2.12-2.3.21.jar
-
-azure-keyvault-core-1.0.0.jar
-
-azure-storage-7.0.1.jar
-
-azure-synapse-ml-pandas_2.12-1.0.0.jar
-
-azure-synapse-ml-predict_2.12-1.0.jar
-
-blas-2.2.1.jar
-
-bonecp-0.8.0.RELEASE.jar
-
-breeze-macros_2.12-1.2.jar
-
-breeze_2.12-1.2.jar
-
-cats-kernel_2.12-2.1.1.jar
-
-chill-java-0.10.0.jar
-
-chill_2.12-0.10.0.jar
-
-client-sdk-1.14.0.jar
-
-cntk-2.4.jar
-
-commons-cli-1.2.jar
-
-commons-codec-1.15.jar
-
-commons-collections-3.2.2.jar
-
-commons-compiler-3.0.16.jar
-
-commons-compress-1.21.jar
-
-commons-crypto-1.1.0.jar
-
-commons-dbcp-1.4.jar
-
-commons-io-2.8.0.jar
-
-commons-lang-2.6.jar
-
-commons-lang3-3.12.0.jar
-
-commons-logging-1.1.3.jar
-
-commons-math3-3.4.1.jar
-
-commons-net-3.1.jar
-
-commons-pool-1.5.4.jar
-
-commons-pool2-2.6.2.jar
-
-commons-text-1.6.jar
-
-compress-lzf-1.0.3.jar
-
-config-1.3.4.jar
-
-core-1.1.2.jar
-
-cos_api-bundle-5.6.19.jar
-
-cosmos-analytics-spark-3.2.1-connector-1.6.0.jar
-
-cudf-22.02.0-cuda11.jar
-
-curator-client-2.13.0.jar
-
-curator-framework-2.13.0.jar
-
-curator-recipes-2.13.0.jar
-
-datanucleus-api-jdo-4.2.4.jar
-
-datanucleus-core-4.1.17.jar
-
-datanucleus-rdbms-4.1.19.jar
-
-delta-core_2.12-1.2.1.2.jar
-
-derby-10.14.2.0.jar
-
-dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar
-
-flatbuffers-java-1.9.0.jar
-
-fluent-logger-jar-with-dependencies-jdk8.jar
-
-gson-2.8.6.jar
-
-guava-14.0.1.jar
-
-hadoop-aliyun-3.3.1.5.0-57088972.jar
-
-hadoop-annotations-3.3.1.5.0-57088972.jar
-
-hadoop-aws-3.3.1.5.0-57088972.jar
-
-hadoop-azure-3.3.1.5.0-57088972.jar
-
-hadoop-azure-datalake-3.3.1.5.0-57088972.jar
-
-hadoop-client-api-3.3.1.5.0-57088972.jar
-
-hadoop-client-runtime-3.3.1.5.0-57088972.jar
-
-hadoop-cloud-storage-3.3.1.5.0-57088972.jar
-
-hadoop-cos-3.3.1.5.0-57088972.jar
-
-hadoop-openstack-3.3.1.5.0-57088972.jar
-
-hadoop-shaded-guava-1.1.0.jar
-
-hadoop-yarn-server-web-proxy-3.3.1.5.0-57088972.jar
-
-hdinsight-spark-metrics-3.2.0-1.0.0.jar
-
-hive-beeline-2.3.9.jar
-
-hive-cli-2.3.9.jar
-
-hive-common-2.3.9.jar
-
-hive-exec-2.3.9-core.jar
-
-hive-jdbc-2.3.9.jar
-
-hive-llap-common-2.3.9.jar
-
-hive-metastore-2.3.9.jar
-
-hive-serde-2.3.9.jar
-
-hive-service-rpc-3.1.2.jar
-
-hive-shims-0.23-2.3.9.jar
-
-hive-shims-2.3.9.jar
-
-hive-shims-common-2.3.9.jar
-
-hive-shims-scheduler-2.3.9.jar
-
-hive-storage-api-2.7.2.jar
-
-hive-vector-code-gen-2.3.9.jar
-
-hk2-api-2.6.1.jar
-
-hk2-locator-2.6.1.jar
-
-hk2-utils-2.6.1.jar
-
-htrace-core4-4.1.0-incubating.jar
-
-httpclient-4.5.13.jar
-
-httpclient-4.5.6.jar
-
-httpcore-4.4.14.jar
-
-httpmime-4.5.13.jar
-
-httpmime-4.5.6.jar
-
-hyperspace-core-spark3.2_2.12-0.5.1-synapse.jar
-
-impulse-core_spark3.2_2.12-0.1.8.jar
-
-impulse-telemetry-mds_spark3.2_2.12-0.1.8.jar
-
-isolation-forest_3.2.0_2.12-2.0.8.jar
-
-istack-commons-runtime-3.0.8.jar
-
-ivy-2.5.0.jar
-
-jackson-annotations-2.12.3.jar
-
-jackson-core-2.12.3.jar
-
-jackson-core-asl-1.9.13.jar
-
-jackson-databind-2.12.3.jar
-
-jackson-dataformat-cbor-2.12.3.jar
-
-jackson-mapper-asl-1.9.13.jar
-
-jackson-module-scala_2.12-2.12.3.jar
-
-jakarta.annotation-api-1.3.5.jar
-
-jakarta.inject-2.6.1.jar
-
-jakarta.servlet-api-4.0.3.jar
-
-jakarta.validation-api-2.0.2.jar
-
-jakarta.ws.rs-api-2.1.6.jar
-
-jakarta.xml.bind-api-2.3.2.jar
-
-janino-3.0.16.jar
-
-javassist-3.25.0-GA.jar
-
-javatuples-1.2.jar
-
-javax.jdo-3.2.0-m3.jar
-
-javolution-5.5.1.jar
-
-jaxb-api-2.2.11.jar
-
-jaxb-runtime-2.3.2.jar
-
-jcl-over-slf4j-1.7.30.jar
-
-jdo-api-3.0.1.jar
-
-jdom-1.1.jar
-
-jersey-client-2.34.jar
-
-jersey-common-2.34.jar
-
-jersey-container-servlet-2.34.jar
-
-jersey-container-servlet-core-2.34.jar
-
-jersey-hk2-2.34.jar
-
-jersey-server-2.34.jar
-
-jettison-1.1.jar
-
-jetty-util-9.4.43.v20210629.jar
-
-jetty-util-ajax-9.4.43.v20210629.jar
-
-jline-2.14.6.jar
-
-joda-time-2.10.10.jar
-
-jodd-core-3.5.2.jar
-
-jpam-1.1.jar
-
-jsch-0.1.54.jar
-
-json-1.8.jar
-
-json-20090211.jar
-
-json-20210307.jar
-
-json-simple-1.1.jar
-
-json4s-ast_2.12-3.7.0-M11.jar
-
-json4s-core_2.12-3.7.0-M11.jar
-
-json4s-jackson_2.12-3.7.0-M11.jar
-
-json4s-scalap_2.12-3.7.0-M11.jar
-
-jsr305-3.0.0.jar
-
-jta-1.1.jar
-
-jul-to-slf4j-1.7.30.jar
-
-junit-jupiter-5.5.2.jar
-
-junit-jupiter-api-5.5.2.jar
-
-junit-jupiter-engine-5.5.2.jar
-
-junit-jupiter-params-5.5.2.jar
-
-junit-platform-commons-1.5.2.jar
-
-junit-platform-engine-1.5.2.jar
-
-kafka-clients-2.8.0.jar
-
-kryo-shaded-4.0.2.jar
-
-kusto-data-2.7.0.jar
-
-kusto-ingest-2.7.0.jar
-
-kusto-spark_3.0_2.12-2.7.5.jar
-
-lapack-2.2.1.jar
-
-leveldbjni-all-1.8.jar
-
-libfb303-0.9.3.jar
-
-libshufflejni.so
-
-libthrift-0.12.0.jar
-
-libvegasjni.so
-
-lightgbmlib-3.2.110.jar
-
-log4j-1.2.17.jar
-
-lz4-java-1.7.1.jar
-
-macro-compat_2.12-1.1.1.jar
-
-mdsdclientdynamic-2.0.jar
-
-metrics-core-4.2.0.jar
-
-metrics-graphite-4.2.0.jar
-
-metrics-jmx-4.2.0.jar
-
-metrics-json-4.2.0.jar
-
-metrics-jvm-4.2.0.jar
-
-microsoft-catalog-metastore-client-1.0.63.jar
-
-microsoft-log4j-etwappender-1.0.jar
-
-microsoft-spark.jar
-
-minlog-1.3.0.jar
-
-mmlspark-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mmlspark-cognitive-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mmlspark-core-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mmlspark-deep-learning-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mmlspark-lightgbm-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mmlspark-opencv-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mmlspark-vw-1.0.0-rc3-194-14bef9b1-SNAPSHOT.jar
-
-mssql-jdbc-8.4.1.jre8.jar
-
-mysql-connector-java-8.0.18.jar
-
-netty-all-4.1.68.Final.jar
-
-notebook-utils-3.2.0-20220208.5.jar
-
-objenesis-2.6.jar
-
-onnxruntime_gpu-1.8.1.jar
-
-opencsv-2.3.jar
-
-opencv-3.2.0-1.jar
-
-opentest4j-1.2.0.jar
-
-orc-core-1.6.12.jar
-
-orc-mapreduce-1.6.12.jar
-
-orc-shims-1.6.12.jar
-
-oro-2.0.8.jar
-
-osgi-resource-locator-1.0.3.jar
-
-paranamer-2.8.jar
-
-parquet-column-1.12.2.jar
-
-parquet-common-1.12.2.jar
-
-parquet-encoding-1.12.2.jar
-
-parquet-format-structures-1.12.2.jar
-
-parquet-hadoop-1.12.2.jar
-
-parquet-jackson-1.12.2.jar
-
-peregrine-spark-0.10.jar
-
-postgresql-42.2.9.jar
-
-protobuf-java-2.5.0.jar
-
-proton-j-0.33.8.jar
-
-py4j-0.10.9.3.jar
-
-pyrolite-4.30.jar
-
-qpid-proton-j-extensions-1.2.4.jar
-
-rapids-4-spark_2.12-22.02.0-SNAPSHOT.jar
-
-rocksdbjni-6.20.3.jar
-
-scala-collection-compat_2.12-2.1.1.jar
-
-scala-compiler-2.12.15.jar
-
-scala-java8-compat_2.12-0.9.0.jar
-
-scala-library-2.12.15.jar
-
-scala-parser-combinators_2.12-1.1.2.jar
-
-scala-reflect-2.12.15.jar
-
-scala-xml_2.12-1.2.0.jar
-
-scalactic_2.12-3.0.5.jar
-
-shapeless_2.12-2.3.3.jar
-
-shims-0.9.0.jar
-
-slf4j-api-1.7.30.jar
-
-slf4j-log4j12-1.7.16.jar
-
-snappy-java-1.1.8.4.jar
-
-spark-3.2-rpc-history-server-app-listener_2.12-1.0.0.jar
-
-spark-3.2-rpc-history-server-core_2.12-1.0.0.jar
-
-spark-avro_2.12-3.2.1.5.0-57088972.jar
-
-spark-catalyst_2.12-3.2.1.5.0-57088972.jar
-
-spark-cdm-connector-assembly-1.19.2.jar
-
-spark-core_2.12-3.2.1.5.0-57088972.jar
-
-spark-enhancement_2.12-3.2.1.5.0-57088972.jar
-
-spark-enhancementui_2.12-3.0.0.jar
-
-spark-graphx_2.12-3.2.1.5.0-57088972.jar
-
-spark-hadoop-cloud_2.12-3.2.1.5.0-57088972.jar
-
-spark-hive-thriftserver_2.12-3.2.1.5.0-57088972.jar
-
-spark-hive_2.12-3.2.1.5.0-57088972.jar
-
-spark-kusto-synapse-connector_3.1_2.12-1.0.0.jar
-
-spark-kvstore_2.12-3.2.1.5.0-57088972.jar
-
-spark-launcher_2.12-3.2.1.5.0-57088972.jar
-
-spark-microsoft-tools_2.12-3.2.1.5.0-57088972.jar
-
-spark-mllib-local_2.12-3.2.1.5.0-57088972.jar
-
-spark-mllib_2.12-3.2.1.5.0-57088972.jar
-
-spark-mssql-connector-1.2.0.jar
-
-spark-network-common_2.12-3.2.1.5.0-57088972.jar
-
-spark-network-shuffle_2.12-3.2.1.5.0-57088972.jar
-
-spark-repl_2.12-3.2.1.5.0-57088972.jar
-
-spark-sketch_2.12-3.2.1.5.0-57088972.jar
-
-spark-sql-kafka-0-10_2.12-3.2.1.5.0-57088972.jar
-
-spark-sql_2.12-3.2.1.5.0-57088972.jar
-
-spark-streaming-kafka-0-10-assembly_2.12-3.2.1.5.0-57088972.jar
-
-spark-streaming-kafka-0-10_2.12-3.2.1.5.0-57088972.jar
-
-spark-streaming_2.12-3.2.1.5.0-57088972.jar
-
-spark-tags_2.12-3.2.1.5.0-57088972.jar
-
-spark-token-provider-kafka-0-10_2.12-3.2.1.5.0-57088972.jar
-
-spark-unsafe_2.12-3.2.1.5.0-57088972.jar
-
-spark-yarn_2.12-3.2.1.5.0-57088972.jar
-
-spark_diagnostic_cli-1.0.11_spark-3.2.0.jar
-
-spire-macros_2.12-0.17.0.jar
-
-spire-platform_2.12-0.17.0.jar
-
-spire-util_2.12-0.17.0.jar
-
-spire_2.12-0.17.0.jar
-
-spray-json_2.12-1.3.2.jar
-
-sqlanalyticsconnector_3.2.0-1.0.0.jar
-
-stax-api-1.0.1.jar
-
-stream-2.9.6.jar
-
-structuredstreamforspark_2.12-3.0.1-2.1.3.jar
-
-super-csv-2.2.0.jar
-
-synapse-spark-telemetry_2.12-0.0.6.jar
-
-synfs-3.2.0-20220208.5.jar
-
-threeten-extra-1.5.0.jar
-
-tink-1.6.0.jar
-
-transaction-api-1.1.jar
-
-univocity-parsers-2.9.1.jar
-
-velocity-1.5.jar
-
-vw-jni-8.9.1.jar
-
-wildfly-openssl-1.0.7.Final.jar
-
-xbean-asm9-shaded-4.20.jar
-
-xz-1.8.jar
-
-zookeeper-3.6.2.5.0-57088972.jar
-
-zookeeper-jute-3.6.2.5.0-57088972.jar
-
-zstd-jni-1.5.0-4.jar
-
-## Python libraries (Normal VMs)
-
-_libgcc_mutex=0.1
-
-_openmp_mutex=4.5
-
-_py-xgboost-mutex=2.0
-
-abseil-cpp=20210324.0
-
-absl-py=0.13.0
-
-adal=1.2.7
-
-adlfs=0.7.7
-
-aiohttp=3.7.4.post0
-
-alsa-lib=1.2.3
-
-appdirs=1.4.4
-
-arrow-cpp=3.0.0
-
-astor=0.8.1
-
-astunparse=1.6.3
-
-async-timeout=3.0.1
-
-attrs=21.2.0
-
-aws-c-cal=0.5.11
-
-aws-c-common=0.6.2
-
-aws-c-event-stream=0.2.7
-
-aws-c-io=0.10.5
-
-aws-checksums=0.1.11
-
-aws-sdk-cpp=1.8.186
-
-azure-datalake-store=0.0.51
-
-azure-identity=2021.03.15b1
-
-azure-storage-blob=12.8.1
-
-backcall=0.2.0
-
-backports=1.0
-
-backports.functools_lru_cache=1.6.4
-
-beautifulsoup4=4.9.3
-
-blas=2.109
-
-blas-devel=3.9.0=9_mkl
-
-blinker=1.4
-
-blosc=1.21.0
-
-bokeh=2.3.2
-
-brotli=1.0.9
-
-brotli-bin=1.0.9
-
-brotli-python=1.0.9
-
-brotlipy=0.7.0
-
-brunsli=0.1
-
-bzip2=1.0.8
-
-c-ares=1.17.1
-
-ca-certificates=2021.7.5
-
-cachetools=4.2.2
-
-cairo=1.16.0
-
-certifi=2021.5.30
-
-cffi=1.14.5
-
-chardet=4.0.0
-
-charls=2.2.0
-
-click=8.0.1
-
-cloudpickle=1.6.0
-
-conda=4.9.2
-
-conda-package-handling=1.7.3
-
-configparser=5.0.2
-
-cryptography=3.4.7
-
-cudatoolkit=11.1.1
-
-cycler=0.10.0
-
-cython=0.29.23
-
-cytoolz=0.11.0
-
-dash=1.20.0
-
-dash-core-components=1.16.0
-
-dash-html-components=1.1.3
-
-dash-renderer=1.9.1
-
-dash-table=4.11.3
-
-dash_cytoscape=0.2.0
-
-dask-core=2021.6.2
-
-databricks-cli=0.12.1
-
-dataclasses=0.8
-
-dbus=1.13.18
-
-debugpy=1.3.0
-
-decorator=4.4.2
-
-dill=0.3.4
-
-entrypoints=0.3
-
-et_xmlfile=1.1.0
-
-expat=2.4.1
-
-fire=0.4.0
-
-flask=2.0.1
-
-flask-compress=1.10.1
-
-fontconfig=2.13.1
-
-freetype=2.10.4
-
-fsspec=2021.6.1
-
-future=0.18.2
-
-gast=0.3.3
-
-gensim=3.8.3
-
-geographiclib=1.52
-
-geopy=2.1.0
-
-gettext=0.21.0
-
-gevent=21.1.2
-
-gflags=2.2.2
-
-giflib=5.2.1
-
-gitdb=4.0.7
-
-gitpython=3.1.18
-
-glib=2.68.3
-
-glib-tools=2.68.3
-
-glog=0.5.0
-
-gobject-introspection=1.68.0
-
-google-auth=1.32.1
-
-google-auth-oauthlib=0.4.1
-
-google-pasta=0.2.0
-
-greenlet=1.1.0
-
-grpc-cpp=1.37.1
-
-grpcio=1.37.1
-
-gst-plugins-base=1.18.4
-
-gstreamer=1.18.4
-
-h5py=2.10.0
-
-hdf5=1.10.6
-
-html5lib=1.1
-
-hummingbird-ml=0.4.0
-
-icu=68.1
-
-idna=2.10
-
-imagecodecs=2021.3.31
-
-imageio=2.9.0
-
-importlib-metadata=4.6.1
-
-intel-openmp=2021.2.0
-
-interpret=0.2.4
-
-interpret-core=0.2.4
-
-ipykernel=6.0.1
-
-ipython=7.23.1
-
-ipython_genutils=0.2.0
-
-isodate=0.6.0
-
-itsdangerous=2.0.1
-
-jdcal=1.4.1
-
-jedi=0.18.0
-
-jinja2=3.0.1
-
-joblib=1.0.1
-
-jpeg=9d
-
-jupyter_client=6.1.12
-
-jupyter_core=4.7.1
-
-jxrlib=1.1
-
-keras-applications=1.0.8
-
-keras-preprocessing=1.1.2
-
-keras2onnx=1.6.5
-
-kiwisolver=1.3.1
-
-koalas=1.8.0
-
-krb5=1.19.1
-
-lcms2=2.12
-
-ld_impl_linux-64=2.36.1
-
-lerc=2.2.1
-
-liac-arff=2.5.0
-
-libaec=1.0.5
-
-libblas=3.9.0=9_mkl
-
-libbrotlicommon=1.0.9
-
-libbrotlidec=1.0.9
-
-libbrotlienc=1.0.9
-
-libcblas=3.9.0=9_mkl
-
-libclang=11.1.0
-
-libcurl=7.77.0
-
-libdeflate=1.7
-
-libedit=3.1.20210216
-
-libev=4.33
-
-libevent=2.1.10
-
-libffi=3.3
-
-libgcc-ng=9.3.0
-
-libgfortran-ng=9.3.0
-
-libgfortran5=9.3.0
-
-libglib=2.68.3
-
-libiconv=1.16
-
-liblapack=3.9.0=9_mkl
-
-liblapacke=3.9.0=9_mkl
-
-libllvm10=10.0.1
-
-libllvm11=11.1.0
-
-libnghttp2=1.43.0
-
-libogg=1.3.5
-
-libopus=1.3.1
-
-libpng=1.6.37
-
-libpq=13.3
-
-libprotobuf=3.15.8
-
-libsodium=1.0.18
-
-libssh2=1.9.0
-
-libstdcxx-ng=9.3.0
-
-libthrift=0.14.1
-
-libtiff=4.2.0
-
-libutf8proc=2.6.1
-
-libuuid=2.32.1
-
-libuv=1.41.1
-
-libvorbis=1.3.7
-
-libwebp-base=1.2.0
-
-libxcb=1.14
-
-libxgboost=1.4.0
-
-libxkbcommon=1.0.3
-
-libxml2=2.9.12
-
-libzopfli=1.0.3
-
-lightgbm=3.2.1
-
-lime=0.2.0.1
-
-llvm-openmp=11.1.0
-
-llvmlite=0.36.0
-
-locket=0.2.1
-
-lz4-c=1.9.3
-
-markdown=3.3.4
-
-markupsafe=2.0.1
-
-matplotlib=3.4.2
-
-matplotlib-base=3.4.2
-
-matplotlib-inline=0.1.2
-
-mkl=2021.2.0
-
-mkl-devel=2021.2.0
-
-mkl-include=2021.2.0
-
-mleap=0.17.0
-
-mlflow-skinny=1.18.0
-
-msal=2021.06.08
-
-msal-extensions=2021.06.08
-
-msrest=2021.06.01
-
-multidict=5.1.0
-
-mysql-common=8.0.25
-
-mysql-libs=8.0.25
-
-ncurses=6.2
-
-networkx=2.5.1
-
-ninja=1.10.2
-
-nltk=3.6.2
-
-nspr=4.30
-
-nss=3.67
-
-numba=0.53.1
-
-numpy=1.19.4
-
-oauthlib=3.1.1
-
-olefile=0.46
-
-onnx=1.9.0
-
-onnxconverter-common=1.7.0
-
-onnxmltools=1.7.0
-
-onnxruntime=1.7.2
-
-openjpeg=2.4.0
-
-openpyxl=3.0.7
-
-openssl=1.1.1k
-
-opt_einsum=3.3.0
-
-orc=1.6.7
-
-packaging=21.0
-
-pandas=1.2.3
-
-parquet-cpp=1.5.1=1
-
-parso=0.8.2
-
-partd=1.2.0
-
-patsy=0.5.1
-
-pcre=8.45
-
-pexpect=4.8.0
-
-pickleshare=0.7.5
-
-pillow=8.2.0
-
-pip=21.1.1
-
-pixman=0.40.0
-
-plotly=4.14.3
-
-pmdarima=1.8.2
-
-pooch=1.4.0
-
-portalocker=1.7.1
-
-prompt-toolkit=3.0.19
-
-protobuf=3.15.8
-
-psutil=5.8.0
-
-ptyprocess=0.7.0
-
-py-xgboost=1.4.0
-
-py4j=0.10.9
-
-pyarrow=3.0.0
-
-pyasn1=0.4.8
-
-pyasn1-modules=0.2.8
-
-pycairo=1.20.1
-
-pycosat=0.6.3
-
-pycparser=2.20
-
-pygments=2.9.0
-
-pygobject=3.40.1
-
-pyjwt=2.1.0
-
-pyodbc=4.0.30
-
-pyopenssl=20.0.1
-
-pyparsing=2.4.7
-
-pyqt=5.12.3
-
-pyqt-impl=5.12.3
-
-pyqt5-sip=4.19.18
-
-pyqtchart=5.12
-
-pyqtwebengine=5.12.1
-
-pysocks=1.7.1
-
-python=3.8.10
-
-python-dateutil=2.8.1
-
-python-flatbuffers=1.12
-
-python_abi=3.8=2_cp38
-
-pytorch=1.8.1.8_cuda11.1_cudnn8.0.5_0
-
-pytz=2021.1
-
-pyu2f=0.1.5
-
-pywavelets=1.1.1
-
-pyyaml=5.4.1
-
-pyzmq=22.1.0
-
-qt=5.12.9
-
-re2=2021.04.01
-
-readline=8.1
-
-regex=2021.7.6
-
-requests=2.25.1
-
-requests-oauthlib=1.3.0
-
-retrying=1.3.3
-
-rsa=4.7.2
-
-ruamel_yaml=0.15.100
-
-s2n=1.0.10
-
-salib=1.3.11
-
-scikit-image=0.18.1
-
-scikit-learn=0.23.2
-
-scipy=1.5.3
-
-seaborn=0.11.1
-
-seaborn-base=0.11.1
-
-setuptools=49.6.0
-
-shap=0.39.0
-
-six=1.16.0
-
-skl2onnx=1.8.0.1
-
-sklearn-pandas=2.2.0
-
-slicer=0.0.7
-
-smart_open=5.1.0
-
-smmap=3.0.5
-
-snappy=1.1.8
-
-soupsieve=2.2.1
-
-sqlite=3.36.0
-
-statsmodels=0.12.2
-
-tabulate=0.8.9
-
-tenacity=7.0.0
-
-tensorboard=2.4.1
-
-tensorboard-plugin-wit=1.8.0
-
-tensorflow=2.4.1
-
-tensorflow-base=2.4.1
-
-tensorflow-estimator=2.4.0
-
-termcolor=1.1.0
-
-textblob=0.15.3
-
-threadpoolctl=2.1.0
-
-tifffile=2021.4.8
-
-tk=8.6.10
-
-toolz=0.11.1
-
-tornado=6.1
-
-tqdm=4.61.2
-
-traitlets=5.0.5
-
-typing-extensions=3.10.0.0
-
-typing_extensions=3.10.0.0
-
-unixodbc=2.3.9
-
-urllib3=1.26.4
-
-wcwidth=0.2.5
-
-webencodings=0.5.1
-
-werkzeug=2.0.1
-
-wheel=0.36.2
-
-wrapt=1.12.1
-
-xgboost=1.4.0
-
-xorg-kbproto=1.0.7
-
-xorg-libice=1.0.10
-
-xorg-libsm=1.2.3
-
-xorg-libx11=1.7.2
-
-xorg-libxext=1.3.4
-
-xorg-libxrender=0.9.10
-
-xorg-renderproto=0.11.1
-
-xorg-xextproto=7.3.0
-
-xorg-xproto=7.0.31
-
-xz=5.2.5
-
-yaml=0.2.5
-
-yarl=1.6.3
-
-zeromq=4.3.4
-
-zfp=0.5.5
-
-zipp=3.5.0
-
-zlib=1.2.11
-
-zope.event=4.5.0
-
-zope.interface=5.4.0
-
-zstd=1.4.9
-
-applicationinsights==0.11.10
-
-argon2-cffi==21.3.0
-
-argon2-cffi-bindings==21.2.0
-
-azure-common==1.1.27
-
-azure-core==1.16.0
-
-azure-graphrbac==0.61.1
-
-azure-identity==1.4.1
-
-azure-mgmt-authorization==0.61.0
-
-azure-mgmt-containerregistry==8.0.0
-
-azure-mgmt-core==1.3.0
-
-azure-mgmt-keyvault==2.2.0
-
-azure-mgmt-resource==13.0.0
-
-azure-mgmt-storage==11.2.0
-
-azureml-core==1.34.0
-
-azureml-dataprep==2.22.2
-
-azureml-dataprep-native==38.0.0
-
-azureml-dataprep-rslex==1.20.2
-
-azureml-dataset-runtime==1.34.0
-
-azureml-mlflow==1.34.0
-
-azureml-opendatasets==1.34.0
-
-azureml-telemetry==1.34.0
-
-backports-tempfile==1.0
-
-backports-weakref==1.0.post1
-
-bleach==5.0.1
-
-contextlib2==0.6.0.post1
-
-defusedxml==0.7.1
-
-distlib==0.3.6
-
-distro==1.7.0
-
-docker==4.4.4
-
-dotnetcore2==2.1.23
-
-fastjsonschema==2.16.1
-
-filelock==3.8.0
-
-fusepy==3.0.1
-
-importlib-resources==5.9.0
-
-ipywidgets==7.6.3
-
-jeepney==0.6.0
-
-jmespath==0.10.0
-
-jsonpickle==2.0.0
-
-jsonschema==4.15.0
-
-jupyterlab-pygments==0.2.2
-
-jupyterlab-widgets==3.0.3
-
-kqlmagiccustom==0.1.114.post8
-
-lxml==4.6.5
-
-mistune==2.0.4
-
-msal-extensions==0.2.2
-
-msrestazure==0.6.4
-
-mypy==0.780
-
-mypy-extensions==0.4.3
-
-nbclient==0.6.7
-
-nbconvert==7.0.0
-
-nbformat==5.4.0
-
-ndg-httpsclient==0.5.1
-
-nest-asyncio==1.5.5
-
-notebook==6.4.12
-
-pandasql==0.7.3
-
-pandocfilters==1.5.0
-
-pathspec==0.8.1
-
-pkgutil-resolve-name==1.3.10
-
-platformdirs==2.5.2
-
-prettytable==2.4.0
-
-prometheus-client==0.14.1
-
-pyperclip==1.8.2
-
-pyrsistent==0.18.1
-
-pyspark==3.2.1
-
-ruamel-yaml==0.17.4
-
-ruamel-yaml-clib==0.2.6
-
-secretstorage==3.3.1
-
-send2trash==1.8.0
-
-sqlalchemy==1.4.20
-
-terminado==0.15.0
-
-tinycss2==1.1.1
-
-torchvision==0.9.1
-
-traitlets==5.3.0
-
-typed-ast==1.4.3
-
-virtualenv==20.14.0
-
-websocket-client==1.1.0
-
-widgetsnbextension==3.5.2
-
-## Python libraries (GPU Accelerated VMs)
-
-_libgcc_mutex=0.1
-
-_openmp_mutex=4.5
-
-_py-xgboost-mutex=2.0
-
-_tflow_select=2.3.0
-
-abseil-cpp=20210324.1
-
-absl-py=0.13.0
-
-adal=1.2.7
-
-adlfs=0.7.7
-
-aiohttp=3.7.4.post0
-
-appdirs=1.4.4
-
-arrow-cpp=3.0.0
-
-astor=0.8.1
-
-astunparse=1.6.3
-
-async-timeout=3.0.1
-
-attrs=21.2.0
-
-aws-c-cal=0.5.11
-
-aws-c-common=0.6.2
-
-aws-c-event-stream=0.2.7
-
-aws-c-io=0.10.5
-
-aws-checksums=0.1.11
-
-aws-sdk-cpp=1.8.186
-
-azure-datalake-store=0.0.51
-
-azure-storage-blob=12.9.0
-
-backcall=0.2.0
-
-backports=1.0
-
-backports.functools_lru_cache=1.6.4
-
-beautifulsoup4=4.9.3
-
-blas=2.111
-
-blas-devel=3.9.0
-
-blinker=1.4
-
-bokeh=2.3.2
-
-brotli=1.0.9
-
-brotli-bin=1.0.9
-
-brotli-python=1.0.9
-
-brotlipy=0.7.0
-
-bzip2=1.0.8
-
-c-ares=1.17.2
-
-ca-certificates=2021.5.30
-
-cachetools=4.2.2
-
-cairo=1.16.0
-
-certifi=2021.5.30
-
-cffi=1.14.6
-
-chardet=4.0.0
-
-click=8.0.1
-
-colorama=0.4.4
-
-conda=4.9.2
-
-conda-package-handling=1.7.3
-
-configparser=5.0.2
-
-cryptography=3.4.7
-
-cudatoolkit=11.1.1
-
-cycler=0.10.0
-
-cython=0.29.24
-
-cytoolz=0.11.0
-
-dash=1.21.0
-
-dash-core-components=1.17.1
-
-dash-html-components=1.1.4
-
-dash-renderer=1.9.1
-
-dash-table=4.12.0
-
-dash_cytoscape=0.2.0
-
-dask-core=2021.9.1
-
-databricks-cli=0.12.1
-
-dataclasses=0.8
-
-dbus=1.13.18
-
-debugpy=1.4.1
-
-decorator=5.1.0
-
-dill=0.3.4
-
-entrypoints=0.3
-
-et_xmlfile=1.0.1001
-
-expat=2.4.1
-
-ffmpeg=4.3
-
-fire=0.4.0
-
-flask=2.0.1
-
-flask-compress=1.10.1
-
-fontconfig=2.13.1
-
-freetype=2.10.4
-
-fsspec=2021.8.1
-
-future=0.18.2
-
-g-ir-build-tools=1.68.0
-
-g-ir-host-tools=1.68.0
-
-gensim=3.8.3
-
-geographiclib=1.52
-
-geopy=2.1.0
-
-gettext=0.19.8.1
-
-gevent=21.8.0
-
-gflags=2.2.2
-
-gitdb=4.0.7
-
-gitpython=3.1.23
-
-glib=2.68.4
-
-glib-tools=2.68.4
-
-glog=0.5.0
-
-gmp=6.2.1
-
-gnutls=3.6.13
-
-gobject-introspection=1.68.0
-
-google-auth=1.35.0
-
-google-auth-oauthlib=0.4.6
-
-google-pasta=0.2.0
-
-greenlet=1.1.1
-
-grpc-cpp=1.37.1
-
-gst-plugins-base=1.14.0
-
-gstreamer=1.14.0
-
-h5py=2.10.0
-
-hdf5=1.10.6
-
-html5lib=1.1
-
-hummingbird-ml=0.4.0
-
-icu=58.2
-
-idna=2.10
-
-imagecodecs-lite=2019.12.3
-
-imageio=2.9.0
-
-importlib-metadata=4.8.1
-
-interpret=0.2.4
-
-interpret-core=0.2.4
-
-ipykernel=6.4.1
-
-ipython=7.23.1
-
-ipython_genutils=0.2.0
-
-isodate=0.6.0
-
-itsdangerous=2.0.1
-
-jdcal=1.4.1
-
-jedi=0.18.0
-
-jinja2=3.0.1
-
-joblib=1.0.1
-
-jpeg=9b
-
-jupyter_client=7.0.3
-
-jupyter_core=4.8.1
-
-keras=2.4.3
-
-keras-applications=1.0.8
-
-keras-preprocessing=1.1.2
-
-keras2onnx=1.6.5
-
-kiwisolver=1.3.2
-
-koalas=1.8.0
-
-krb5=1.19.2
-
-lame=3.100
-
-ld_impl_linux-64=2.36.1
-
-liac-arff=2.5.0
-
-libblas=3.9.0
-
-libbrotlicommon=1.0.9
-
-libbrotlidec=1.0.9
-
-libbrotlienc=1.0.9
-
-libcblas=3.9.0
-
-libcurl=7.79.1
-
-libedit=3.1.20191231
-
-libev=4.33
-
-libevent=2.1.10
-
-libffi=3.4.2
-
-libgcc-ng=11.2.0
-
-libgfortran-ng=11.2.0
-
-libgfortran5=11.2.0
-
-libgirepository=1.68.0
-
-libglib=2.68.4
-
-libiconv=1.16
-
-liblapack=3.9.0
-
-liblapacke=3.9.0
-
-libllvm11=11.1.0
-
-libnghttp2=1.43.0
-
-libpng=1.6.37
-
-libprotobuf=3.16.0
-
-libsodium=1.0.18
-
-libssh2=1.10.0
-
-libstdcxx-ng=11.2.0
-
-libthrift=0.14.1
-
-libtiff=4.2.0
-
-libutf8proc=2.6.1
-
-libuuid=2.32.1
-
-libuv=1.42.0
-
-libwebp-base=1.2.1
-
-libxcb=1.13
-
-libxgboost=1.4.0
-
-libxml2=2.9.9
-
-lightgbm=3.2.1
-
-lime=0.2.0.1
-
-llvm-openmp=12.0.1
-
-llvmlite=0.37.0
-
-locket=0.2.0
-
-lz4-c=1.9.3
-
-markdown=3.3.4
-
-markupsafe=2.0.1
-
-matplotlib=3.4.2
-
-matplotlib-base=3.4.2
-
-matplotlib-inline=0.1.3
-
-mkl=2021.3.0
-
-mkl-devel=2021.3.0
-
-mkl-include=2021.3.0
-
-mleap=0.17.0
-
-mlflow-skinny=1.18.0
-
-msal=2021.09.01
-
-msrest=2021.09.01
-
-multidict=5.1.0
-
-multiprocess=0.70.12.2
-
-ncurses=6.2
-
-nest-asyncio=1.5.1
-
-nettle=3.6
-
-networkx=2.5
-
-ninja=1.10.2
-
-nltk=3.6.2
-
-numba=0.54.0
-
-numpy=1.19.4
-
-oauthlib=3.1.1
-
-olefile=0.46
-
-onnx=1.9.0
-
-onnxconverter-common=1.7.0
-
-onnxmltools=1.7.0
-
-onnxruntime=1.7.2
-
-openh264=2.1.1
-
-openpyxl=3.0.7
-
-openssl=1.1.1l
-
-opt_einsum=3.3.0
-
-orc=1.6.7
-
-packaging=21.0
-
-pandas=1.2.3
-
-parquet-cpp=1.5.1
-
-parso=0.8.2
-
-partd=1.2.0
-
-pathos=0.2.8
-
-patsy=0.5.1
-
-pcre=8.45
-
-pexpect=4.8.0
-
-pickleshare=0.7.5003
-
-pillow=7.1.2
-
-pip=21.1.1
-
-pixman=0.38.0
-
-pkg-config=0.29.2
-
-plotly=4.14.3
-
-pmdarima=1.8.2
-
-pooch=1.5.1
-
-portalocker=1.7.1
-
-pox=0.3.0
-
-ppft=1.6.6.4
-
-prompt-toolkit=3.0.20
-
-protobuf=3.16.0
-
-psutil=5.8.0
-
-pthread-stubs=0.4
-
-ptyprocess=0.7.0
-
-py-xgboost=1.4.0
-
-py4j=0.10.9
-
-pyarrow=3.0.0
-
-pyasn1=0.4.8
-
-pyasn1-modules=0.2.7
-
-pycairo=1.20.1
-
-pycosat=0.6.3
-
-pycparser=2.20
-
-pydeprecate=0.3.1
-
-pygments=2.10.0
-
-pygobject=3.40.1
-
-pyjwt=2.1.0
-
-pyodbc=4.0.30
-
-pyopenssl=20.0.1
-
-pyparsing=2.4.7
-
-pyqt=5.9.2
-
-pysocks=1.7.1
-
-pyspark=3.1.2
-
-python=3.8.12
-
-python-dateutil=2.8.2
-
-python_abi=3.8
-
-pytorch=1.8.1
-
-pytorch-lightning=1.4.2
-
-pytz=2021.1
-
-pyu2f=0.1.5
-
-pywavelets=1.1.1
-
-pyyaml=5.4.1
-
-pyzmq=22.3.0
-
-qt=5.9.7
-
-re2=2021.04.01
-
-readline=8.1
-
-regex=2021.8.28
-
-requests=2.25.1
-
-requests-oauthlib=1.3.0
-
-retrying=1.3.3
-
-rsa=4.7.2
-
-ruamel_yaml=0.15.80
-
-s2n=1.0.10
-
-salib=1.4.5
-
-scikit-image=0.18.1
-
-scikit-learn=0.23.2
-
-scipy=1.7.1
-
-seaborn=0.11.1
-
-seaborn-base=0.11.1
-
-setuptools=49.6.0
-
-shap=0.39.0
-
-sip=4.19.13
-
-skl2onnx=1.8.0.1
-
-sklearn-pandas=2.2.0
-
-slicer=0.0.7
-
-smart_open=5.2.1
-
-smmap=3.0.5
-
-snappy=1.1.8
-
-soupsieve=2.0.1
-
-sqlite=3.36.0
-
-statsmodels=0.12.2
-
-tabulate=0.8.9
-
-tbb=2021.3.0
-
-tenacity=8.0.1
-
-tensorboard=2.6.0
-
-tensorboard-data-server=0.6.0
-
-tensorboard-plugin-wit=1.8.0
-
-tensorflow=2.4.1
-
-tensorflow-base=2.4.1
-
-termcolor=1.1.0
-
-textblob=0.15.3
-
-threadpoolctl=2.2.0
-
-tifffile=2020.6.3
-
-tk=8.6.11
-
-toolz=0.11.1
-
-torchaudio=0.8.1
-
-torchmetrics=0.5.1
-
-torchvision=0.9.1
-
-tornado=6.1
-
-tqdm=4.62.3
-
-traitlets=5.1.0
-
-unixodbc=2.3.9
-
-urllib3=1.26.4
-
-wcwidth=0.2.5
-
-webencodings=0.5.1
-
-werkzeug=2.0.1
-
-wheel=0.37.0
-
-wrapt=1.12.1
-
-xgboost=1.4.0
-
-xorg-kbproto=1.0.7
-
-xorg-libice=1.0.10
-
-xorg-libsm=1.2.3
-
-xorg-libx11=1.7.2
-
-xorg-libxau=1.0.9
-
-xorg-libxdmcp=1.1.3
-
-xorg-libxext=1.3.4
-
-xorg-libxrender=0.9.10
-
-xorg-renderproto=0.11.1
-
-xorg-xextproto=7.3.0
-
-xorg-xproto=7.0.31
-
-xz=5.2.5
-
-yaml=0.2.5
-
-yarl=1.6.3
-
-zeromq=4.3.4
-
-zipp=3.5.0
-
-zlib=1.2.11
-
-zope.event=4.5.0
-
-zope.interface=5.4.0
-
-zstd=1.4.9
-
-applicationinsights==0.11.10
-
-argon2-cffi==21.1.0
-
-azure-common==1.1.27
-
-azure-core==1.18.0
-
-azure-graphrbac==0.61.1
-
-azure-identity==1.4.1
-
-azure-mgmt-authorization==0.61.0
-
-azure-mgmt-containerregistry==8.1.0
-
-azure-mgmt-core==1.3.0
-
-azure-mgmt-keyvault==9.1.0
-
-azure-mgmt-resource==13.0.0
-
-azure-mgmt-storage==11.2.0
-
-azureml-core==1.34.0
-
-azureml-dataprep==2.22.2
-
-azureml-dataprep-native==38.0.0
-
-azureml-dataprep-rslex==1.20.2
-
-azureml-dataset-runtime==1.34.0
-
-azureml-mlflow==1.34.0
-
-azureml-opendatasets==1.34.0
-
-azureml-telemetry==1.34.0
-
-backports-tempfile==1.0
-
-backports-weakref==1.0.post1
-
-bleach==4.1.0
-
-cloudpickle==1.6.0
-
-contextlib2==21.6.0
-
-defusedxml==0.7.1
-
-diskcache==5.2.1
-
-distro==1.6.0
-
-docker==5.0.2
-
-dotnetcore2==2.1.21
-
-flatbuffers==1.12
-
-fusepy==3.0.1
-
-gast==0.3.3
-
-grpcio==1.32.0
-
-ipywidgets==7.6.5
-
-jeepney==0.7.1
-
-jmespath==0.10.0
-
-jsonpickle==2.0.0
-
-jsonschema==4.1.2
-
-jupyterlab-pygments==0.1.2
-
-jupyterlab-widgets==1.0.2
-
-kqlmagiccustom==0.1.114.post8
-
-lxml==4.7.1
-
-mistune==0.8.4
-
-msal-extensions==0.2.2
-
-msrestazure==0.6.4
-
-mypy==0.780
-
-mypy-extensions==0.4.3
-
-nbclient==0.5.4
-
-nbconvert==6.2.0
-
-nbformat==5.1.3
-
-ndg-httpsclient==0.5.1
-
-notebook==6.4.5
-
-opencv-python==4.5.3.56
-
-pandasql==0.7.3
-
-pandocfilters==1.5.0
-
-pathspec==0.9.0
-
-petastorm==0.11.3
-
-prettytable==3.0.0
-
-prometheus-client==0.12.0
-
-pyperclip==1.8.2
-
-pyrsistent==0.18.0
-
-ruamel-yaml==0.17.4
-
-ruamel-yaml-clib==0.2.6
-
-secretstorage==3.3.1
-
-send2trash==1.8.0
-
-six==1.15.0
-
-sqlalchemy==1.4.25
-
-tensorflow-estimator==2.4.0
-
-tensorflow-gpu==2.4.1
-
-terminado==0.12.1
-
-testpath==0.5.0
-
-typed-ast==1.4.3
-
-typing-extensions==3.7.4.3
-
-websocket-client==1.2.1
-
-widgetsnbextension==3.5.2
-
-## R libraries (Preview)
-
-| **Library** | **Version** | **Library** | **Version** | **Library** | **Version** |
-|:-:|:--:|:-:|:--:|:-:|:--:|
-| askpass | 1.1 | highcharter | 0.9.4 | readr | 2.1.3 |
-| assertthat | 0.2.1 | highr | 0.9 | readxl | 1.4.1 |
-| backports | 1.4.1 | hms | 1.1.2 | recipes | 1.0.3 |
-| base64enc | 0.1-3 | htmltools | 0.5.3 | rematch | 1.0.1 |
-| bit | 4.0.5 | htmlwidgets | 1.5.4 | rematch2 | 2.1.2 |
-| bit64 | 4.0.5 | httpcode | 0.3.0 | remotes | 2.4.2 |
-| blob | 1.2.3 | httpuv | 1.6.6 | reprex | 2.0.2 |
-| brew | 1.0-8 | httr | 1.4.4 | reshape2 | 1.4.4 |
-| brio | 1.1.3 | ids | 1.0.1 | rjson | 0.2.21 |
-| broom | 1.0.1 | igraph | 1.3.5 | rlang | 1.0.6 |
-| bslib | 0.4.1 | infer | 1.0.3 | rlist | 0.4.6.2 |
-| cachem | 1.0.6 | ini | 0.3.1 | rmarkdown | 2.18 |
-| callr | 3.7.3 | ipred | 0.9-13 | RODBC | 1.3-19 |
-| caret | 6.0-93 | isoband | 0.2.6 | roxygen2 | 7.2.2 |
-| cellranger | 1.1.0 | iterators | 1.0.14 | rprojroot | 2.0.3 |
-| cli | 3.4.1 | jquerylib | 0.1.4 | rsample | 1.1.0 |
-| clipr | 0.8.0 | jsonlite | 1.8.3 | rstudioapi | 0.14 |
-| clock | 0.6.1 | knitr | 1.41 | rversions | 2.1.2 |
-| colorspace | 2.0-3 | labeling | 0.4.2 | rvest | 1.0.3 |
-| commonmark | 1.8.1 | later | 1.3.0 | sass | 0.4.4 |
-| config | 0.3.1 | lava | 1.7.0 | scales | 1.2.1 |
-| conflicted | 1.1.0 | lazyeval | 0.2.2 | selectr | 0.4-2 |
-| coro | 1.0.3 | lhs | 1.1.5 | sessioninfo | 1.2.2 |
-| cpp11 | 0.4.3 | lifecycle | 1.0.3 | shiny | 1.7.3 |
-| crayon | 1.5.2 | lightgbm | 3.3.3 | slider | 0.3.0 |
-| credentials | 1.3.2 | listenv | 0.8.0 | sourcetools | 0.1.7 |
-| crosstalk | 1.2.0 | lobstr | 1.1.2 | sparklyr | 1.7.8 |
-| crul | 1.3 | lubridate | 1.9.0 | SQUAREM | 2021.1 |
-| curl | 4.3.3 | magrittr | 2.0.3 | stringi | 1.7.8 |
-| data.table | 1.14.6 | maps | 3.4.1 | stringr | 1.4.1 |
-| DBI | 1.1.3 | memoise | 2.0.1 | sys | 3.4.1 |
-| dbplyr | 2.2.1 | mime | 0.12 | systemfonts | 1.0.4 |
-| desc | 1.4.2 | miniUI | 0.1.1.1 | testthat | 3.1.5 |
-| devtools | 2.4.5 | modeldata | 1.0.1 | textshaping | 0.3.6 |
-| dials | 1.1.0 | modelenv | 0.1.0 | tibble | 3.1.8 |
-| DiceDesign | 1.9 | ModelMetrics | 1.2.2.2 | tidymodels | 1.0.0 |
-| diffobj | 0.3.5 | modelr | 0.1.10 | tidyr | 1.2.1 |
-| digest | 0.6.30 | munsell | 0.5.0 | tidyselect | 1.2.0 |
-| downlit | 0.4.2 | numDeriv | 2016.8-1.1 | tidyverse | 1.3.2 |
-| dplyr | 1.0.10 | openssl | 2.0.4 | timechange | 0.1.1 |
-| dtplyr | 1.2.2 | parallelly | 1.32.1 | timeDate | 4021.106 |
-| e1071 | 1.7-12 | parsnip | 1.0.3 | tinytex | 0.42 |
-| ellipsis | 0.3.2 | patchwork | 1.1.2 | torch | 0.9.0 |
-| evaluate | 0.18 | pillar | 1.8.1 | triebeard | 0.3.0 |
-| fansi | 1.0.3 | pkgbuild | 1.4.0 | TTR | 0.24.3 |
-| farver | 2.1.1 | pkgconfig | 2.0.3 | tune | 1.0.1 |
-| fastmap | 1.1.0 | pkgdown | 2.0.6 | tzdb | 0.3.0 |
-| fontawesome | 0.4.0 | pkgload | 1.3.2 | urlchecker | 1.0.1 |
-| forcats | 0.5.2 | plotly | 4.10.1 | urltools | 1.7.3 |
-| foreach | 1.5.2 | plyr | 1.8.8 | usethis | 2.1.6 |
-| forge | 0.2.0 | praise | 1.0.0 | utf8 | 1.2.2 |
-| fs | 1.5.2 | prettyunits | 1.1.1 | uuid | 1.1-0 |
-| furrr | 0.3.1 | pROC | 1.18.0 | vctrs | 0.5.1 |
-| future | 1.29.0 | processx | 3.8.0 | viridisLite | 0.4.1 |
-| future.apply | 1.10.0 | prodlim | 2019.11.13 | vroom | 1.6.0 |
-| gargle | 1.2.1 | profvis | 0.3.7 | waldo | 0.4.0 |
-| generics | 0.1.3 | progress | 1.2.2 | warp | 0.2.0 |
-| gert | 1.9.1 | progressr | 0.11.0 | whisker | 0.4 |
-| ggplot2 | 3.4.0 | promises | 1.2.0.1 | withr | 2.5.0 |
-| gh | 1.3.1 | proxy | 0.4-27 | workflows | 1.1.2 |
-| gistr | 0.9.0 | pryr | 0.1.5 | workflowsets | 1.0.0 |
-| gitcreds | 0.1.2 | ps | 1.7.2 | xfun | 0.35 |
-| globals | 0.16.2 | purrr | 0.3.5 | xgboost | 1.6.0.1 |
-| glue | 1.6.2 | quantmod | 0.4.20 | XML | 3.99-0.12 |
-| googledrive | 2.0.0 | r2d3 | 0.2.6 | xml2 | 1.3.3 |
-| googlesheets4 | 1.0.1 | R6 | 2.5.1 | xopen | 1.0.0 |
-| gower | 1.0.0 | ragg | 1.2.4 | xtable | 1.8-4 |
-| GPfit | 1.0-8 | rappdirs | 0.3.3 | xts | 0.12.2 |
-| gtable | 0.3.1 | rbokeh | 0.5.2 | yaml | 2.3.6 |
-| hardhat | 1.2.0 | rcmdcheck | 1.4.0 | yardstick | 1.1.0 |
-| haven | 2.5.1 | RColorBrewer | 1.1-3 | zip | 2.2.2 |
-| hexbin | 1.28.2 | Rcpp | 1.0.9 | zoo | 1.8-11 |
+## Libraries
+To check the libraries included in Azure Synapse Runtime for Apache Spark 3.2 for Java/Scala, Python, and R, go to the [Azure Synapse Runtime for Apache Spark 3.2 Release Notes](https://github.com/microsoft/synapse-spark-runtime/tree/main/Synapse/spark3.2).
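You can also confirm what a given Spark pool actually has installed by listing packages from a notebook session. The following is a minimal sketch, assuming it runs in a Synapse notebook cell where the standard `pkg_resources` module is available; it isn't part of the release notes themselves:

```Python
import pkg_resources

# Print every Python package visible to the notebook's interpreter, sorted by
# name, so the output can be compared against the published release notes.
for dist in sorted(pkg_resources.working_set, key=lambda d: d.project_name.lower()):
    print(f"{dist.project_name}=={dist.version}")
```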
## Next steps
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
The runtimes have the following advantages:
## Supported Azure Synapse runtime releases

> [!WARNING]
-> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4 and Apache Spark 3.1.
+> End of Support Notification for Azure Synapse Runtime for Apache Spark 2.4, Apache Spark 3.1 and Apache Spark 3.2.
> * Effective September 29, 2023, Azure Synapse will discontinue official support for Spark 2.4 Runtimes.
-> * Effective January 26, 2024, Azure Synapse will discontinue official support for Spark 3.1 Runtimes.
-> * After these dates, we will not be addressing any support tickets related to Spark 2.4 or 3.1. There will be no release pipeline in place for bug or security fixes for Spark 2.4 and 3.1. **Utilizing Spark 2.4 or 3.1 post the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.**
+> * Effective January 26, 2024, Azure Synapse will discontinue official support for Spark 3.1 Runtimes.
+> * Effective July 8, 2024, Azure Synapse will discontinue official support for Spark 3.2 Runtimes.
+> * After these dates, we will not be addressing any support tickets related to Spark 2.4, 3.1, or 3.2. There will be no release pipeline in place for bug or security fixes for Spark 2.4, 3.1, or 3.2. **Utilizing Spark 2.4, 3.1, or 3.2 past the support cutoff dates is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.**
> [!TIP]
> We strongly recommend proactively upgrading workloads to a more recent GA version of the runtime (for example, [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md)). Refer to the [Apache Spark migration guide](https://spark.apache.org/docs/latest/sql-migration-guide.html).
The following table lists the runtime name, Apache Spark version, and release da
| --- | --- | --- | --- | --- |
| [Azure Synapse Runtime for Apache Spark 3.4](./apache-spark-34-runtime.md) | Nov 21, 2023 | GA (as of Apr 8, 2024) | Q2 2025 | Q1 2026 |
| [Azure Synapse Runtime for Apache Spark 3.3](./apache-spark-33-runtime.md) | Nov 17, 2022 | GA (as of Feb 23, 2023) | Q2/Q3 2024 | Q1 2025 |
-| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __deprecated__ | July 8, 2023 | July 8, 2024 |
-| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __deprecated__ | January 26, 2023 | January 26, 2024 |
-| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __deprecated__ | __July 29, 2022__ | __September 29, 2023__ |
+| [Azure Synapse Runtime for Apache Spark 3.2](./apache-spark-32-runtime.md) | July 8, 2022 | __deprecated and soon disabled__ | July 8, 2023 | __July 8, 2024__ |
+| [Azure Synapse Runtime for Apache Spark 3.1](./apache-spark-3-runtime.md) | May 26, 2021 | __deprecated and soon disabled__ | January 26, 2023 | __January 26, 2024__ |
+| [Azure Synapse Runtime for Apache Spark 2.4](./apache-spark-24-runtime.md) | December 15, 2020 | __deprecated and soon disabled__ | July 29, 2022 | __September 29, 2023__ |
## Runtime release stages
trusted-signing Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/quickstart.md
To create an identity validation request:
| :- | :- |
| Onboarding | Trusted Signing at this time can onboard only legal business entities that have a verifiable tax history of three or more years. For a quicker onboarding process, ensure that public records for the legal business entity that you're validating are up to date. |
| Accuracy | Ensure that you provide the correct information for public identity validation. If you need to make any changes after it is created, you must complete a new identity validation request. This change affects the associated certificates that are being used for signing. |
-| More documentation | If we need more documentation to process the identity validation request, you're notified through email. You can upload the documents in the Azure portal. The documentation request email contains information about file size requirements. Ensure that any documents you provide are the most current. |
| Failed email verification | If email verification fails, you must initiate a new identity validation request. |
| Identity validation status | You're notified through email when there's an update to the identity validation status. You can also check the status in the Azure portal at any time. |
| Processing time | Processing your identity validation request takes from 1 to 7 business days (possibly longer if we need to request more documentation from you). |
+| More documentation | If we need more documentation to process the identity validation request, you're notified through email. You can upload the documents in the Azure portal. The documentation request email contains information about file size requirements. Ensure that any documents you provide are the most current. <br>- All documents submitted must have been issued within the previous 12 months, or must have an expiration date that is at least two months in the future. <br>- If it isn't possible to provide additional documentation, update your account information to match any legal documents already provided or your official company registration details. <br>- Provide an official business document, such as a business registration form, business charter, or articles of incorporation, that lists the company name and address as they're provided at the time of the identity validation request creation. <br>- Ensure that the domain registration or domain invoice from registration or renewal lists the entity/contact name and domain as stated on the request. |
+
+
## Create a certificate profile
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
# Use Microsoft Teams on Azure Virtual Desktop
-Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality by redirecting it to the local device when using Windows App or the Remote Desktop client on a supported platform. You can still use Microsoft Teams on Azure Virtual Desktop with other clients without optimized calling and meetings. Teams chat and collaboration features are supported on all platforms.
+Microsoft Teams on Azure Virtual Desktop supports chat and collaboration. With media optimizations, it also supports calling and meeting functionality by redirecting it to the local device when using Windows App or the Remote Desktop client on a supported platform. You can still use Microsoft Teams on Azure Virtual Desktop on other platforms without optimized calling and meetings. Teams chat and collaboration features are supported on all platforms.
There are two versions of Teams, *Classic Teams* and *[New Teams](/microsoftteams/new-teams-desktop-admin)*, and you can use either with Azure Virtual Desktop. New Teams has feature parity with Classic Teams, but improves performance, reliability, and security.
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients
Previously updated : 06/11/2024 Last updated : 07/10/2024
# What's new in the Remote Desktop client for Windows
virtual-machines Enable Nvme Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/enable-nvme-interface.md
Title: OS Images Supported
-description: OS Image Support List for Remote NVMe
+ Title: Supported OS Images
+description: Get a list of supported operating system images for remote NVMe.
Last updated 06/25/2024
-# OS Images Supported with Remote NVMe
+# Supported OS images for remote NVMe
> [!NOTE]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> This article references CentOS, a Linux distribution that reached the end of support. Consider your use and plan accordingly. For more information, see the [guidance for CentOS end of support](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
-The following lists provide up-to-date information on which OS images are tagged as NVMe supported. These lists will be updated when new OS images are made available with remote NVMe support.
+The following lists provide up-to-date information on which OS images are tagged as supported for remote NVM Express (NVMe).
-Always check the [detailed product pages for specifics](/azure/virtual-machines/sizes) about which VM generations support which storage types.
+For specifics about which virtual machine (VM) generations support which storage types, check the [documentation about VM sizes in Azure](/azure/virtual-machines/sizes).
-For more information about enabling the NVMe interface on virtual machines created in Azure, be sure to review the [Remote NVMe Disks FAQ](/azure/virtual-machines/enable-nvme-remote-faqs).
+For more information about enabling the NVMe interface on virtual machines created in Azure, review the [FAQ for remote NVMe disks](/azure/virtual-machines/enable-nvme-remote-faqs).
-## OS Images supported
-
-### Linux
+## Supported Linux OS images
| Distribution | Image |
|--|--|
For more information about enabling the NVMe interface on virtual machines creat
| SLES 15.4 | SUSE:sles-15-sp4:gen2:latest |
| SLES 15.5 | SUSE:sles-15-sp5:gen2:latest |
-
-### Windows
+## Supported Windows OS images
- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore)
- [Azure portal - Plan ID: 2019-datacenter-core-smalldisk-g2](https://portal.azure.com/#create/Microsoft.smalldiskWindowsServer2019DatacenterServerCore2019-datacenter-core-smalldisk-g2)
For more information about enabling the NVMe interface on virtual machines creat
- [Azure portal - Plan ID: 2022-datacenter-azure-edition-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-smalldisk)
- [Azure portal - Plan ID: 2022-datacenter-azure-edition](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition)
- [Azure portal - Plan ID: 2022-datacenter-azure-edition-core](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core)
-- [Azure portal - Plan 2022-datacenter-azure-edition-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core-smalldisk)
+- [Azure portal - Plan ID: 2022-datacenter-azure-edition-core-smalldisk](https://portal.azure.com/#create/microsoftwindowsserver.windowsserver2022-datacenter-azure-edition-core-smalldisk)
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
Previously updated : 04/16/2024 Last updated : 07/10/2024
By default, service tags reflect the ranges for the entire cloud. Some service t
| **[SerialConsole](/troubleshoot/azure/virtual-machines/linux/serial-console-linux#use-serial-console-with-custom-boot-diagnostics-storage-account-firewall-enabled)** | Limit access to boot diagnostics storage accounts from only Serial Console service tag | Inbound | No | Yes |
| **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes |
| **[ServiceFabric](/azure/service-fabric/how-to-managed-cluster-networking#bring-your-own-virtual-network)** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for control plane per region. This enables customers to perform management operations for their Service Fabric clusters from their VNET endpoint. (For example, https://westus.servicefabric.azure.com). | Both | No | Yes |
-| **Sql** | Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Synapse Analytics.<br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure SQL Database service, but not a specific SQL database or server. This tag doesn't apply to SQL managed instance. | Outbound | Yes | Yes |
+| **Sql** | Azure SQL Database, Azure Database for MySQL Single Server, Azure Database for PostgreSQL Single Server, Azure Database for MariaDB, and Azure Synapse Analytics.<br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure SQL Database service, but not a specific SQL database or server. This tag doesn't apply to SQL managed instance. | Outbound | Yes | Yes |
| **SqlManagement** | Management traffic for SQL-dedicated deployments. | Both | No | Yes |
| **[Storage](/azure/storage/file-sync/file-sync-networking-overview#configuring-firewalls-and-service-tags)** | Azure Storage. <br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure Storage service, but not a specific Azure Storage account. | Outbound | Yes | Yes |
| **[StorageSyncService](/azure/storage/file-sync/file-sync-networking-overview#configuring-firewalls-and-service-tags)** | Storage Sync Service. | Both | No | Yes |
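Service tags such as `Sql` stand in for the service's IP ranges when you author network security group (NSG) rules. As a hedged sketch only — the resource names, port, and priority below are illustrative assumptions, and the `azure-mgmt-network` client is one possible way to create the rule, not part of the table above — an outbound allow rule keyed to the `Sql` tag might look like this:

```Python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Hypothetical subscription and resource names, for illustration only.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow outbound SQL traffic by service tag instead of maintaining IP lists.
rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="VirtualNetwork",
    destination_address_prefix="Sql",  # service tag from the table above
    source_port_range="*",
    destination_port_range="1433",     # assumed SQL port
    access="Allow",
    direction="Outbound",
    priority=200,
)
poller = client.security_rules.begin_create_or_update(
    "my-resource-group", "my-nsg", "allow-sql-outbound", rule
)
poller.result()  # wait for the rule creation to complete
```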
vpn-gateway Vpn Gateway Vpn Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vpn-faq.md
description: Learn about frequently asked questions for VPN Gateway cross-premis
Previously updated : 06/19/2024 Last updated : 07/10/2024
If you specified a DNS server or servers when you created your virtual network,
### Can I connect to multiple sites from a single virtual network?
-You can connect to multiple sites by using Windows PowerShell and the Azure REST APIs. See the [Multi-Site and VNet-to-VNet Connectivity](#V2VMulti) FAQ section.
+You can connect to multiple sites by using Windows PowerShell and the Azure REST APIs. See the [Multi-site and VNet-to-VNet Connectivity](#V2VMulti) FAQ section.
### Is there an additional cost for setting up a VPN gateway as active-active?
You can configure your virtual network to use both site-to-site and point-to-sit
For normal functioning, the Azure VPN Gateway must establish a secure, mandatory connection with the Azure control plane, facilitated through Public IPs. This connection relies on resolving communication endpoints via public URLs. By default, Azure Virtual Networks (VNets) utilize the built-in Azure DNS (168.63.129.16) to resolve these public URLs, ensuring seamless communication between the Azure VPN Gateway and the Azure control plane.
-In implementation of a custom DNS within the VNet, it is crucial to configure a DNS forwarder that points to the Azure native DNS (168.63.129.16), to maintain uninterrupted communication between the VPN Gateway and control plane. Failure to set up a DNS forwarder to the native Azure DNS can prevent Microsoft from performing operations and maintenance on the Azure VPN Gateway, posing a security risk.
+In implementation of a custom DNS within the VNet, it's crucial to configure a DNS forwarder that points to the Azure native DNS (168.63.129.16), to maintain uninterrupted communication between the VPN Gateway and control plane. Failure to set up a DNS forwarder to the native Azure DNS can prevent Microsoft from performing operations and maintenance on the Azure VPN Gateway, posing a security risk.
To ensure proper functionality and a healthy state for your VPN gateway, consider one of the following DNS configurations in the VNet:

1. Revert to the default native Azure DNS by removing the custom DNS within the VNet settings (recommended configuration).
-2. Add in your custom DNS configuration a DNS forwarder pointing to the native Azure DNS (IP address: 168.63.129.16). Considering the specific rules and nature of your custom DNS, this setup may not resolve and fix the issue as expected.
+2. Add in your custom DNS configuration a DNS forwarder pointing to the native Azure DNS (IP address: 168.63.129.16). Considering the specific rules and nature of your custom DNS, this setup might not resolve and fix the issue as expected.
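One quick way to confirm that a custom DNS setup can still resolve public Azure endpoints is to test resolution from a VM inside the VNet. This is a minimal standard-library sketch; the hostname is an arbitrary example of a public Azure URL, not a specific endpoint that the gateway requires:

```Python
import socket

# Resolve a public Azure URL from inside the VNet. If the custom DNS lacks a
# forwarder to Azure DNS (168.63.129.16), resolution of Azure endpoints can fail.
hostname = "management.azure.com"  # example public Azure endpoint
try:
    addresses = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
    print(f"{hostname} resolves to: {sorted(addresses)}")
except socket.gaierror as err:
    print(f"Resolution failed ({err}); check the DNS forwarder configuration.")
```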
+
+### Could point-to-site VPN connections be affected by a potential vulnerability known as "tunnel vision"?
+
+Microsoft is aware of reports discussing a network technique that bypasses VPN encapsulation. This is an industry-wide issue impacting any operating system that implements a DHCP client according to its RFC specification and has support for DHCP option 121 routes, including Windows.
+As the research notes, mitigations include running the VPN inside a VM that obtains a lease from a virtualized DHCP server, to prevent the local network's DHCP server from installing routes altogether.
+More information about the vulnerability can be found at [NVD - CVE-2024-3661 (nist.gov)](https://nvd.nist.gov/vuln/detail/CVE-2024-3661).
## <a name="privacy"></a>Privacy
No. A gateway type can't be changed from policy-based to route-based, or from ro
Yes, traffic selectors can be defined via the *trafficSelectorPolicies* attribute on a connection via the [New-AzIpsecTrafficSelectorPolicy](/powershell/module/az.network/new-azipsectrafficselectorpolicy) PowerShell command. For the specified traffic selector to take effect, ensure the [Use Policy Based Traffic Selectors](vpn-gateway-connect-multiple-policybased-rm-ps.md#enablepolicybased) option is enabled.
-The custom configured traffic selectors will be proposed only when an Azure VPN gateway initiates the connection. A VPN gateway accepts any traffic selectors proposed by a remote gateway (on-premises VPN device). This behavior is consistent between all connection modes (Default, InitiatorOnly, and ResponderOnly).
+The custom configured traffic selectors are proposed only when an Azure VPN gateway initiates the connection. A VPN gateway accepts any traffic selectors proposed by a remote gateway (on-premises VPN device). This behavior is consistent between all connection modes (Default, InitiatorOnly, and ResponderOnly).
-### Do I need a 'GatewaySubnet'?
+### Do I need a GatewaySubnet?
Yes. The gateway subnet contains the IP addresses that the virtual network gateway services use. You need to create a gateway subnet for your virtual network in order to configure a virtual network gateway. All gateway subnets must be named 'GatewaySubnet' to work properly. Don't name your gateway subnet something else. And don't deploy VMs or anything else to the gateway subnet.
Yes, the Set Pre-Shared Key API and PowerShell cmdlet can be used to configure b
### Can I use other authentication options?
-We're limited to using pre-shared keys (PSK) for authentication.
+We're limited to using preshared keys (PSK) for authentication.
### How do I specify which traffic goes through the VPN gateway?
They're required for Azure infrastructure communication. They're protected (lock
A virtual network gateway is fundamentally a multi-homed device with one NIC tapping into the customer private network, and one NIC facing the public network. Azure infrastructure entities can't tap into customer private networks for compliance reasons, so they need to utilize public endpoints for infrastructure communication. The public endpoints are periodically scanned by Azure security audit.
-### <a name="vpn-basic"></a>Can I create a VPN gateway with the Basic gateway SKU in the portal?
+### <a name="vpn-basic"></a>Can I create a VPN gateway using the Basic gateway SKU in the portal?
No. The Basic SKU isn't available in the portal. You can create a Basic SKU VPN gateway using Azure CLI or PowerShell.
We support Windows Server 2012 Routing and Remote Access (RRAS) servers for site
Other software VPN solutions should work with our gateway as long as they conform to industry standard IPsec implementations. Contact the vendor of the software for configuration and support instructions.
-### Can I connect to a VPN gateway via point-to-site when located at a Site that has an active site-to-site connection?
+### Can I connect to a VPN gateway via point-to-site when located at a site that has an active site-to-site connection?
-Yes, but the Public IP address(es) of the point-to-site client must be different than the Public IP address(es) used by the site-to-site VPN device, or else the point-to-site connection won't work. point-to-site connections with IKEv2 can't be initiated from the same Public IP address(es) where a site-to-site VPN connection is configured on the same Azure VPN gateway.
+Yes, but the Public IP address(es) of the point-to-site client must be different than the Public IP address(es) used by the site-to-site VPN device, or else the point-to-site connection won't work. Point-to-site connections with IKEv2 can't be initiated from the same Public IP address(es) where a site-to-site VPN connection is configured on the same Azure VPN gateway.
-## <a name="P2S"></a>Point-to-site - Certificate authentication
+## <a name="P2S"></a>Point-to-site FAQ
-This section applies to the Resource Manager deployment model.
+
+## <a name="P2S-cert"></a>Point-to-site - certificate authentication
[!INCLUDE [P2S Azure cert](../../includes/vpn-gateway-faq-p2s-azurecert-include.md)]

## <a name="P2SRADIUS"></a>Point-to-site - RADIUS authentication
-This section applies to the Resource Manager deployment model.
+### Is RADIUS authentication supported on all Azure VPN Gateway SKUs?
+
+RADIUS authentication is supported for all SKUs except the Basic SKU.
+
+For legacy SKUs, RADIUS authentication is supported on Standard and High Performance SKUs.
+
+### Is RADIUS authentication supported for the classic deployment model?
+
+No. RADIUS authentication isn't supported for the classic deployment model.
+
+### What is the timeout period for RADIUS requests sent to the RADIUS server?
+
+RADIUS requests are set to time out after 30 seconds. User-defined timeout values aren't supported today.
+
+### Are third-party RADIUS servers supported?
+
+Yes, third-party RADIUS servers are supported.
+
+### What are the connectivity requirements to ensure that the Azure gateway is able to reach an on-premises RADIUS server?
+
+A site-to-site VPN connection to the on-premises site, with the proper routes configured, is required.
+
+### Can traffic to an on-premises RADIUS server (from the Azure VPN gateway) be routed over an ExpressRoute connection?
+
+No. It can only be routed over a site-to-site connection.
+
+### Is there a change in the number of SSTP connections supported with RADIUS authentication? What is the maximum number of SSTP and IKEv2 connections supported?
+
+There's no change in the maximum number of SSTP connections supported on a gateway with RADIUS authentication. It remains 128 for SSTP, but depends on the gateway SKU for IKEv2. For more information on the number of connections supported, see [About gateway SKUs](about-gateway-skus.md).
+
+### What is the difference between doing certificate authentication using a RADIUS server vs. using Azure native certificate authentication (by uploading a trusted certificate to Azure)?
+
+In RADIUS certificate authentication, the authentication request is forwarded to a RADIUS server that handles the actual certificate validation. This option is useful if you want to integrate with a certificate authentication infrastructure that you already have through RADIUS.
+
+When using Azure for certificate authentication, the Azure VPN gateway performs the validation of the certificate. You need to upload your certificate public key to the gateway. You can also specify a list of revoked certificates that shouldn't be allowed to connect.
+
+### Does RADIUS authentication support Network Policy Server (NPS) integration for multifactor authorization (MFA)?
+
+If your MFA is text-based (SMS, mobile app verification code, and so on) and requires the user to enter a code or text in the VPN client UI, the authentication won't succeed and isn't a supported scenario. See [Integrate Azure VPN gateway RADIUS authentication with NPS server for multifactor authentication](vpn-gateway-radius-mfa-nsp.md).
+
+### Does RADIUS authentication work with both IKEv2 and SSTP VPN?
+
+Yes, RADIUS authentication is supported for both IKEv2 and SSTP VPN.
+
+### Does RADIUS authentication work with the OpenVPN client?
+RADIUS authentication is supported for the OpenVPN protocol.
-## <a name="V2VMulti"></a>VNet-to-VNet and Multi-Site connections
+## <a name="V2VMulti"></a>VNet-to-VNet and multi-site connections
[!INCLUDE [vpn-gateway-vnet-vnet-faq-include](../../includes/vpn-gateway-faq-vnet-vnet-include.md)]
Yes. See the [BGP](#bgp) section for more information.
**Classic deployment model**<br> Transit traffic via Azure VPN gateway is possible using the classic deployment model, but relies on statically defined address spaces in the network configuration file. BGP isn't yet supported with Azure Virtual Networks and VPN gateways using the classic deployment model. Without BGP, manually defining transit address spaces is error prone and isn't recommended.
-### Does Azure generate the same IPsec/IKE pre-shared key for all my VPN connections for the same virtual network?
+### Does Azure generate the same IPsec/IKE preshared key for all my VPN connections for the same virtual network?
-No, Azure by default generates different pre-shared keys for different VPN connections. However, you can use the `Set VPN Gateway Key` REST API or PowerShell cmdlet to set the key value you prefer. The key MUST only contain printable ASCII characters except space, hyphen (-) or tilde (~).
+No, Azure by default generates different preshared keys for different VPN connections. However, you can use the `Set VPN Gateway Key` REST API or PowerShell cmdlet to set the key value you prefer. The key MUST only contain printable ASCII characters except space, hyphen (-), or tilde (~).
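The character rule above is easy to check before calling the API. A small illustrative helper (the function name and example keys are assumptions, not part of the API):

```Python
def is_valid_psk(key: str) -> bool:
    """Validate a candidate preshared key against the documented rule:
    printable ASCII only, excluding space, hyphen (-), and tilde (~)."""
    # Printable ASCII runs from '!' (0x21) through '~' (0x7E); drop '-' and '~'.
    allowed = {chr(c) for c in range(0x21, 0x7F)} - {"-", "~"}
    return bool(key) and all(ch in allowed for ch in key)

print(is_valid_psk("Abc123!strongKey"))    # True
print(is_valid_psk("bad-key~with space"))  # False: contains -, ~, and a space
```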
### Do I get more bandwidth with more site-to-site VPNs than for a single virtual network?
You can also connect to your virtual machine by private IP address from another
### If my virtual machine is in a virtual network with cross-premises connectivity, does all the traffic from my VM go through that connection?
-No. Only the traffic that has a destination IP that is contained in the virtual network Local Network IP address ranges that you specified will go through the virtual network gateway. Traffic has a destination IP located within the virtual network stays within the virtual network. Other traffic is sent through the load balancer to the public networks, or if forced tunneling is used, sent through the Azure VPN gateway.
+No. Only the traffic that has a destination IP that is contained in the virtual network Local Network IP address ranges that you specified goes through the virtual network gateway. Traffic that has a destination IP located within the virtual network stays within the virtual network. Other traffic is sent through the load balancer to the public networks, or if forced tunneling is used, sent through the Azure VPN gateway.
### How do I troubleshoot an RDP connection to a VM
vpn-gateway Work Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/work-remotely-support.md
Title: 'Remote work and Point-to-Site VPN gateways'
+ Title: 'Remote work and point-to-site VPN gateways'
-description: Learn how you can use VPN Gateway point-to-site connections in order to work remotely due to the COVID-19 pandemic.
+description: Learn how you can use VPN Gateway point-to-site connections in order to work remotely.
Previously updated : 03/12/2024 Last updated : 07/10/2024
-# Remote work using Azure VPN Gateway Point-to-site
+# Remote work using Azure VPN Gateway VPN connections
->[!NOTE]
->This article describes how you can leverage Azure VPN Gateway, Azure, Microsoft network, and the Azure partner ecosystem to work remotely and mitigate network issues that you are facing because of COVID-19 crisis.
->
+This article describes the options that are available to organizations to set up remote access for their users or to supplement their existing solutions with additional capacity. The Azure VPN Gateway point-to-site VPN solution is cloud-based and can be provisioned quickly to cater for the increased demand of users working from home. It can be scaled up easily and turned off just as easily and quickly when the increased capacity isn't needed anymore.
-This article describes the options that are available to organizations to set up remote access for their users or to supplement their existing solutions with additional capacity during the COVID-19 epidemic.
+## <a name="p2s"></a>About point-to-site VPN
-The Azure point-to-site solution is cloud-based and can be provisioned quickly to cater for the increased demand of users to work from home. It can scale up easily and turned off just as easily and quickly when the increased capacity isn't needed anymore.
-
-## <a name="p2s"></a>About Point-to-Site VPN
-
-A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets or on-premises data centers from a remote location, such as from home or a conference. This article describes how to enable users to work remotely based on various scenarios.
+A point-to-site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets or on-premises data centers from a remote location, such as from home or a conference. For more information about Azure point-to-site VPN, see [About VPN Gateway point-to-site VPN](point-to-site-about.md) and the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md).
The following table shows the client operating systems and the authentication options that are available to them. Select the authentication method based on the client OS that's already in use. For example, select OpenVPN with certificate-based authentication if you have a mixture of client operating systems that need to connect. Also note that point-to-site VPN is only supported on route-based VPN gateways.
-![Screenshot that shows client operating systems and available authentication options.](./media/working-remotely-support/os-table.png "OS")
## <a name="scenario1"></a>Scenario 1 - Users need access to resources in Azure only In this scenario, the remote users only need to access to resources that are in Azure.
-![Diagram that shows a point-to-site scenario for users that need access to resources in Azure only.](./media/working-remotely-support/scenario1.png "Scenario 1")
At a high level, the following steps are needed to enable users to connect to Azure resources securely (a Python sketch of step 3 follows the list):

1. Create a virtual network gateway (if one doesn't exist).
-2. Configure point-to-site VPN on the gateway.
+1. Configure point-to-site VPN on the gateway.
- * For certificate authentication, follow [this link](vpn-gateway-howto-point-to-site-resource-manager-portal.md#creategw).
- * For OpenVPN, follow [this link](vpn-gateway-howto-openvpn.md).
- * For Microsoft Entra authentication, follow [this link](openvpn-azure-ad-tenant.md).
- * For troubleshooting point-to-site connections, follow [this link](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
-3. Download and distribute the VPN client configuration.
-4. Distribute the certificates (if certificate authentication is selected) to the clients.
-5. Connect to Azure VPN.
+ * For certificate authentication, see [Configure point-to-site certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
+ * For Microsoft Entra ID authentication, see [Configure point-to-site Microsoft Entra ID authentication](point-to-site-entra-gateway.md).
+ * For troubleshooting point-to-site connections, see [Troubleshooting: Azure point-to-site connection problems](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).
+1. Download and distribute the VPN client configuration.
+1. Distribute the certificates (if certificate authentication is selected) to the clients.
+1. Connect to Azure VPN.
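As a sketch of step 3, the following snippet asks Azure to build the point-to-site client configuration package and prints its download URL. It assumes the `azure-identity` and `azure-mgmt-network` packages and uses hypothetical subscription, resource group, and gateway names; treat it as one illustration of the SDK call, not the only way to download the profile (the portal and PowerShell work too).

```Python
# pip install azure-identity azure-mgmt-network
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VpnClientParameters

# Hypothetical identifiers; substitute your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "example-rg"
GATEWAY_NAME = "example-vpn-gateway"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Build the point-to-site VPN client configuration package; the long-running
# operation returns a SAS URL where the generated ZIP can be downloaded.
poller = client.virtual_network_gateways.begin_generate_vpn_profile(
    RESOURCE_GROUP,
    GATEWAY_NAME,
    VpnClientParameters(authentication_method="EAPTLS"),
)
print("Download the VPN client profile from:", poller.result())
```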
## <a name="scenario2"></a>Scenario 2 - Users need access to resources in Azure and/or on-premises resources In this scenario, the remote users need to access to resources that are in Azure and in the on premises data center(s).
-![Diagram that shows a point-to-site scenario for users that need access to resources in Azure.](./media/working-remotely-support/scenario2.png "Scenario 2")
At a high level, the following steps are needed to enable users to connect to Azure resources securely (a sketch of the BGP-enabled site-to-site connection from steps 3 and 4 appears after the list):

1. Create a virtual network gateway (if one doesn't exist).
-2. Configure point-to-site VPN on the gateway (see [Scenario 1](#scenario1)).
-3. Configure a site-to-site tunnel on the Azure virtual network gateway with BGP enabled.
-4. Configure the on-premises device to connect to Azure virtual network gateway.
-5. Download the point-to-site profile from the Azure portal and distribute to clients
-
-To learn how to set up a site-to-site VPN tunnel, see [this link](./tutorial-site-to-site-portal.md).
-
-## <a name="faqcert"></a>FAQ for native Azure certificate authentication
+1. Configure point-to-site VPN on the gateway (see [Scenario 1](#scenario1)).
+1. Configure a site-to-site tunnel on the Azure virtual network gateway with BGP enabled.
+1. Configure the on-premises device to connect to Azure virtual network gateway.
+1. Download the point-to-site profile from the Azure portal and distribute it to clients.
-
-## <a name="faqradius"></a>FAQ for RADIUS authentication
-
+To learn how to set up a site-to-site VPN tunnel, see [Create a site-to-site VPN connection](./tutorial-site-to-site-portal.md).
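To sketch steps 3 and 4 of this scenario in code, the snippet below (again using `azure-mgmt-network`, with hypothetical names, ASN, and addresses) registers the on-premises device as a local network gateway with BGP settings, then creates a BGP-enabled IPsec connection from the existing virtual network gateway. It's an outline under those assumptions; the linked tutorial remains the authoritative walkthrough.

```Python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    BgpSettings,
    LocalNetworkGateway,
    VirtualNetworkGatewayConnection,
)

# Hypothetical identifiers; substitute your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "example-rg"
LOCATION = "eastus"

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Represent the on-premises VPN device as a local network gateway,
# including its BGP ASN and peering address.
lng = client.local_network_gateways.begin_create_or_update(
    RESOURCE_GROUP,
    "example-local-gateway",
    LocalNetworkGateway(
        location=LOCATION,
        gateway_ip_address="203.0.113.10",  # public IP of the on-premises device
        bgp_settings=BgpSettings(asn=65050, bgp_peering_address="10.2.0.254"),
    ),
).result()

vng = client.virtual_network_gateways.get(RESOURCE_GROUP, "example-vpn-gateway")

# Create the site-to-site IPsec tunnel with BGP enabled.
client.virtual_network_gateway_connections.begin_create_or_update(
    RESOURCE_GROUP,
    "example-s2s-connection",
    VirtualNetworkGatewayConnection(
        location=LOCATION,
        virtual_network_gateway1=vng,
        local_network_gateway2=lng,
        connection_type="IPsec",
        shared_key="<pre-shared-key>",
        enable_bgp=True,
    ),
).result()
```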
## Next Steps
-* [Configure a P2S connection - Microsoft Entra authentication](openvpn-azure-ad-tenant.md)
-
+* [Configure a P2S connection - Microsoft Entra ID authentication](point-to-site-entra-gateway.md)
+* [Configure a P2S connection - Certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md)
* [Configure a P2S connection - RADIUS authentication](point-to-site-how-to-radius-ps.md)
-* [Configure a P2S connection - Azure native certificate authentication](vpn-gateway-howto-point-to-site-rm-ps.md)
+* [About VPN Gateway point-to-site VPN](point-to-site-about.md)
+* [About point-to-site VPN routing](vpn-gateway-about-point-to-site-routing.md)
**"OpenVPN" is a trademark of OpenVPN Inc.**