Updates from: 10/27/2023 01:11:48
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Commitment Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/commitment-tier.md
For more information, see [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
2. Enter the applicable information to create your resource. Be sure to select the standard pricing tier.

   > [!NOTE]
- > If you intend to purchase a commitment tier for disconnected container usage, you will need to request separate access and select the **Commitment tier disconnected containers** pricing tier. For more information, [disconnected containers](./containers/disconnected-containers.md).
+ > If you intend to purchase a commitment tier for disconnected container usage, you will need to request separate access and select the **Commitment tier disconnected containers** pricing tier. For more information, see [disconnected containers](./containers/disconnected-containers.md).
:::image type="content" source="media/commitment-tier/create-resource.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/commitment-tier/create-resource.png":::
If you need a larger commitment plan than any of the ones offered, contact `csga
If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's autorenewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You can continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of each month to end a commitment plan and not be charged for the following month.
+## Purchase a commitment tier pricing plan for disconnected containers
+
+Commitment plans for disconnected containers have a calendar year commitment period. These plans are different from the web and connected container commitment plans. When you purchase a commitment plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan. However, you can purchase additional units at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
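+
+As a rough illustration of how the pro-rating works (hypothetical numbers, not actual prices): if an annual unit costs $7,300 and you add one more unit with 73 days left in the commitment year, the pro-rated charge is 73/365 × $7,300 = $1,460.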
+
+You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
+
+## Overage pricing for disconnected containers
+
+To use a disconnected container beyond the quota initially purchased with your disconnected container commitment plan, you can purchase additional quota by updating your commitment plan at any time.
+
+To purchase additional quota, go to your resource in the Azure portal and adjust the "unit count" of your disconnected container commitment plan using the slider. This adds monthly quota, and you're charged a pro-rated price based on the days remaining in the current billing cycle.
+ ## See also
+
+ * [Azure AI services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/).
ai-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md
it returns a JSON response similar to the following example:
} ```
-## Purchase a different commitment plan for disconnected containers
+## Purchase a commitment tier pricing plan for disconnected containers
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase additional unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
+Commitment plans for disconnected containers have a calendar year commitment period. These plans are different from the web and connected container commitment plans. When you purchase a commitment plan, you'll be charged the full price immediately. During the commitment period, you can't change your commitment plan. However, you can purchase additional units at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment to end a commitment plan.
-You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource.
+You can choose a different commitment plan in the **Commitment Tier pricing** settings of your resource. For more information about commitment tier pricing plans, see [purchase commitment tier pricing](../commitment-tier.md).
+
+## Overage pricing for disconnected containers
+
+To use a disconnected container beyond the quota initially purchased with your disconnected container commitment plan, you can purchase additional quota by updating your commitment plan at any time.
+
+To purchase additional quota, go to your resource in the Azure portal and adjust the "unit count" of your disconnected container commitment plan using the slider. This adds monthly quota, and you're charged a pro-rated price based on the days remaining in the current billing cycle.
## End a commitment plan
ai-services Disconnected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/disconnected.md
Run the container with an output mount and logging enabled. These settings enabl
## Next steps

* [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](../deploy-label-tool.md#deploy-with-azure-container-instances-aci)
-* [Change or end a commitment plan](../../../ai-services/containers/disconnected-containers.md#purchase-a-different-commitment-plan-for-disconnected-containers)
+* [Change or end a commitment plan](../../../ai-services/containers/disconnected-containers.md#purchase-a-commitment-tier-pricing-plan-for-disconnected-containers)
aks Dapr Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-workflow.md
Create an AKS cluster.
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --generate-ssh-keys ```
-[Make sure `kubectl` is installed and pointed to your AKS cluster.][kubectl] If you use [the Azure Cloud Shell][az-cloud-shell], `kubectl` is already installed.
+[Make sure `kubectl` is installed and pointed to your AKS cluster.][kubectl] If you use the Azure Cloud Shell, `kubectl` is already installed.
For more information, see the [Deploy an AKS cluster][cluster] tutorial.
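A quick way to confirm `kubectl` is installed and pointed at the new cluster (assuming the resource group and cluster names used above):

```azurecli-interactive
# Merge the cluster credentials into ~/.kube/config
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Confirm the active context and that the nodes are Ready
kubectl config current-context
kubectl get nodes
```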
aks Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md
Title: Quickstart - Deploy Azure applications to Azure Kubernetes Service clusters using Bicep extensibility Kubernetes provider
-description: Learn how to quickly create a Kubernetes cluster and deploy Azure applications in Azure Kubernetes Service (AKS) using Bicep extensibility Kubernetes provider.
+ Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using the Bicep extensibility Kubernetes provider'
+description: Learn how to quickly create a Kubernetes cluster using the Bicep extensibility Kubernetes provider and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 02/21/2023 Last updated : 10/23/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
-# Quickstart: Deploy Azure applications to Azure Kubernetes Service (AKS) clusters using Bicep extensibility Kubernetes provider (Preview)
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Bicep extensibility Kubernetes provider (Preview)
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you'll deploy a sample multi-container application with a web front-end and a Redis instance to an AKS cluster.
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* Deploy an AKS cluster using the Bicep extensibility Kubernetes provider (preview).
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
+ > [!IMPORTANT]
+ > The Bicep Kubernetes provider is currently in preview. You can enable the feature from the [Bicep configuration file](../../azure-resource-manager/bicep/bicep-config.md#enable-experimental-features) by adding:
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
> } > ```
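For reference, the complete setting in `bicepconfig.json` looks like the following minimal sketch, based on the Bicep configuration article linked above:

```json
{
  "experimentalFeaturesEnabled": {
    "extensibility": true
  }
}
```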
-## Prerequisites
-
+## Before you begin
-* To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az).
+* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
-* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see [Create an SSH key pair](#create-an-ssh-key-pair). If not, skip to [Review the Bicep file](#review-the-bicep-file).
-
-* The identity you use to create your cluster has the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
-* To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+* To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing the steps, you have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) version or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az).
+* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file).
+* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
### Create an SSH key pair
-To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location.
- 1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
+2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command.
-1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096:
+ ```azurecli-interactive
+ # Create an SSH key pair using Azure CLI
+ az sshkey create --name "mySSHKey" --resource-group "myResourceGroup"
- ```console
+ # Create an SSH key pair using ssh-keygen
ssh-keygen -t rsa -b 4096 ```
For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys].
## Review the Bicep file
-The Bicep file used to create an AKS cluster is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/aks/). For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site.
+The Bicep file used to create an AKS cluster is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/aks/). For more AKS samples, see [AKS quickstart templates][aks-quickstart-templates].
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.kubernetes/aks/main.bicep":::
Save a copy of the file as `main.bicep` to your local computer.
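Optionally, you can confirm the file compiles before deploying it (assuming the Bicep CLI is available through the Azure CLI):

```azurecli-interactive
# Compile the Bicep file to an ARM template as a syntax check
az bicep build --file main.bicep
```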
## Add the application definition
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
-In this quickstart, you use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
-* The sample Azure Vote Python applications
-* A Redis instance
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **Rabbit MQ**: Message queue for the order service.
-Two [Kubernetes Services][kubernetes-service] are also created:
-
-* An internal service for the Redis instance
-* An external service to access the Azure Vote application from the internet
-
-Use the following procedure to add the application definition:
+> [!NOTE]
+> We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
-1. Create a file named `azure-vote.yaml` in the same folder as `main.bicep` with the following YAML definition:
+1. Create a file named `aks-store-quickstart.yaml` in the same folder as `main.bicep` and copy in the following manifest:
```yaml apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-back
+ name: rabbitmq
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-back
+ app: rabbitmq
template: metadata: labels:
- app: azure-vote-back
+ app: rabbitmq
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
resources: requests:
- cpu: 100m
+ cpu: 10m
memory: 128Mi limits: cpu: 250m memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- - containerPort: 6379
- name: redis
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-back
+ name: order-service
spec:
+ type: ClusterIP
ports:
- - port: 6379
+ - name: http
+ port: 3000
+ targetPort: 3000
selector:
- app: azure-vote-back
+ app: order-service
apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-front
+ name: product-service
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-front
+ app: product-service
template: metadata: labels:
- app: azure-vote-front
+ app: product-service
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 1m
+ memory: 1Mi
limits:
- cpu: 250m
- memory: 256Mi
+ cpu: 1m
+ memory: 7Mi
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-front
+ name: store-front
spec:
- type: LoadBalancer
ports: - port: 80
+ targetPort: 8080
selector:
- app: azure-vote-front
+ app: store-front
+ type: LoadBalancer
``` For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
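    As an optional check before importing the manifest (assuming `kubectl` is installed locally), you can validate it with a client-side dry run:

    ```console
    # Parse and validate the manifest without creating anything on a cluster
    kubectl apply -f aks-store-quickstart.yaml --dry-run=client
    ```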
-1. Open `main.bicep` in Visual Studio Code.
-1. Press <kbd>Ctrl+Shift+P</kbd> to open **Command Palette**.
-1. Search for **bicep**, and then select **Bicep: Import Kubernetes Manifest**.
+2. Open `main.bicep` in Visual Studio Code.
+3. Press <kbd>Ctrl+Shift+P</kbd> to open **Command Palette**.
+4. Search for **bicep**, and then select **Bicep: Import Kubernetes Manifest**.
- :::image type="content" source="./media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/bicep-extensibility-kubernetes-provider-import-kubernetes-manifest.png" alt-text="Screenshot of Visual Studio Code import Kubernetes Manifest.":::
+ :::image type="content" source="./media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/bicep-extensibility-kubernetes-provider-import-kubernetes-manifest.png" alt-text="Screenshot of Visual Studio Code import Kubernetes Manifest." lightbox="./media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/bicep-extensibility-kubernetes-provider-import-kubernetes-manifest.png":::
-1. Select `azure-vote.yaml` from the prompt. This process creates an `azure-vote.bicep` file in the same folder.
-1. Open `azure-vote.bicep` and add the following line at the end of the file to output the load balancer public IP:
+5. Select `aks-store-quickstart.yaml` from the prompt. This process creates an `aks-store-quickstart.bicep` file in the same folder.
+6. Open `main.bicep` and add the following Bicep at the end of the file to reference the newly created `aks-store-quickstart.bicep` module:
```bicep
- output frontendIp string = coreService_azureVoteFront.status.loadBalancer.ingress[0].ip
- ```
-
-1. Before the `output` statement in `main.bicep`, add the following Bicep to reference the newly created `azure-vote.bicep` module:
-
- ```bicep
- module kubernetes './azure-vote.bicep' = {
+ module kubernetes './aks-store-quickstart.bicep' = {
name: 'buildbicep-deploy' params: { kubeConfig: aks.listClusterAdminCredential().kubeconfigs[0].value
Use the following procedure to add the application definition:
} ```
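    Assembled, the module reference looks like the following sketch (assuming the AKS resource in `main.bicep` uses the symbolic name `aks`, as in the quickstart template):

    ```bicep
    // Deploy the imported manifest as a Bicep module, passing the
    // cluster admin kubeconfig to the Kubernetes provider
    module kubernetes './aks-store-quickstart.bicep' = {
      name: 'buildbicep-deploy'
      params: {
        kubeConfig: aks.listClusterAdminCredential().kubeconfigs[0].value
      }
    }
    ```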
-1. At the bottom of `main.bicep`, add the following line to output the load balancer public IP:
-
- ```bicep
- output lbPublicIp string = kubernetes.outputs.frontendIp
- ```
-
-1. Save both `main.bicep` and `azure-vote.bicep`.
+7. Save both `main.bicep` and `aks-store-quickstart.bicep`.
## Deploy the Bicep file
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+### [Azure CLI](#tab/azure-cli)
- # [CLI](#tab/CLI)
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
- ```azurecli
+ ```azurecli-interactive
az group create --name myResourceGroup --location eastus
+ ```
+
+2. Deploy the Bicep file using the [`az deployment group create`][az-deployment-group-create] command.
+
+ ```azurecli-interactive
az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters clusterName=<cluster-name> dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>' ```
- # [PowerShell](#tab/PowerShell)
+### [Azure PowerShell](#tab/azure-powershell)
- ```azurepowershell
+1. Create an Azure resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
+
+ ```azurepowershell-interactive
New-AzResourceGroup -Name myResourceGroup -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -clusterName=<cluster-name> -dnsPrefix=<dns-prefix> -linuxAdminUsername=<linux-admin-username> -sshRSAPublicKey="<ssh-key>"
```
-
-
- Provide the following values in the commands:
+2. Deploy the Bicep file using the [`New-AzResourceGroupDeployment`][new-azresourcegroupdeployment] cmdlet.
- * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
- * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
- * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
- * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
+ ```azurepowershell-interactive
+ New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -clusterName <cluster-name> -dnsPrefix <dns-prefix> -linuxAdminUsername <linux-admin-username> -sshRSAPublicKey "<ssh-key>"
+ ```
- It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
+
-2. From the deployment output, look for the `outputs` section. For example:
+Provide the following values in the commands:
- ```json
- "outputs": {
- "controlPlaneFQDN": {
- "type": "String",
- "value": "myaks0201-d34ae860.hcp.eastus.azmk8s.io"
- },
- "lbPublicIp": {
- "type": "String",
- "value": "52.179.23.131"
- }
- },
- ```
+* **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
+* **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
+* **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
+* **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
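+
+For example, a filled-in Azure CLI deployment might look like the following (values are illustrative; substitute your own):
+
+```azurecli-interactive
+az deployment group create \
+    --resource-group myResourceGroup \
+    --template-file main.bicep \
+    --parameters clusterName=myAKSCluster dnsPrefix=myakscluster linuxAdminUsername=azureuser sshRSAPublicKey="$(cat ~/.ssh/id_rsa.pub)"
+```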
-3. Take note of the value of lbPublicIp.
+It takes a few minutes to create the AKS cluster. Wait for the cluster to successfully deploy before you move on to the next step.
## Validate the Bicep deployment
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. On the Azure portal menu or from the **Home** page, navigate to your AKS cluster.
+3. Under **Kubernetes resources**, select **Services and ingresses**.
+4. Find the **store-front** service and copy the value for **External IP**.
+5. Open a web browser to the external IP address of your service to see the Azure Store app in action.
+ :::image type="content" source="media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/aks-store-application.png":::
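+
+You can also retrieve the external IP from the command line (assuming `kubectl` is configured for the cluster):
+
+```azurecli-interactive
+kubectl get service store-front
+```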
-## Clean up resources
+## Delete the cluster
+
+If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
### [Azure CLI](#tab/azure-cli)
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [`az group delete`][az-group-delete] command to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
+ ```azurecli-interactive
+ az group delete --name myResourceGroup --yes --no-wait
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name myResourceGroup
+ ```
> [!NOTE]
-> In this quickstart, the AKS cluster was created with a system-assigned managed identity (the default identity option). This identity is managed by the platform and doesn't require removal.
+> The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart. The platform manages this identity so you don't need to manually remove it.
## Next steps In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial:
+To learn more about AKS and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"]
-> [Kubernetes on Azure tutorial: Prepare an application][aks-tutorial]
+> [AKS tutorial][aks-tutorial]
<!-- LINKS - external --> [azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
To learn more about AKS, and walk through a complete code to deployment example,
[kubernetes-service]: ../concepts-network.md#services [ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md [az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
+[az-deployment-group-create]: /cli/azure/group/deployment#az_deployment_group_create
+[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
+[new-azresourcegroupdeployment]: /powershell/module/az.resources/new-azresourcegroupdeployment
+[az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster by using Bicep
-description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS)
+ Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using Bicep'
+description: Learn how to quickly create a Kubernetes cluster using a Bicep file and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 11/01/2022 Last updated : 10/23/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
-* Deploy an AKS cluster using a Bicep file.
-* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+* Deploy an AKS cluster using Bicep.
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
-
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-## Prerequisites
+## Before you begin
+* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
+* [!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
### [Azure CLI](#tab/azure-cli) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-* This article requires version 2.20.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
-* This article requires an existing Azure resource group. If you need to create one, you can use the [`az group create`][az-group-create] command or the [`New-AzAksCluster`][new-az-aks-cluster] cmdlet.
+* This article requires Azure CLI version 2.0.64 or later. If you're using Azure Cloud Shell, the latest version is already installed.
+* This article requires an existing Azure resource group. If you need to create one, you can use the [`az group create`][az-group-create] command.
### [Azure PowerShell](#tab/azure-powershell)
-* If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount][connect-azaccount] cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. You'll also need Bicep CLI. For more information, see [Azure PowerShell](../../azure-resource-manager/bicep/install.md#azure-powershell). If using Azure Cloud Shell, the latest version is already installed.
+* If you're running PowerShell locally, install the Az PowerShell module. If you're using Azure Cloud Shell, the latest version is already installed.
+* You need the Bicep CLI. For more information, see [Azure PowerShell](../../azure-resource-manager/bicep/install.md#azure-powershell).
+* This article requires an existing Azure resource group. If you need to create one, you can use the `New-AzResourceGroup` cmdlet.
-* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the Bicep file](#review-the-bicep-file) section.
-
-* The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
-
-* To deploy a Bicep file, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file).
+* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
### Create an SSH key pair 1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser. 2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command.
- ```console
+ ```azurecli-interactive
# Create an SSH key pair using Azure CLI az sshkey create --name "mySSHKey" --resource-group "myResourceGroup"
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template
2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
- # [Azure CLI](#tab/azure-cli)
+ ### [Azure CLI](#tab/azure-cli)
- ```azurecli
+ ```azurecli-interactive
az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>' ```
- # [Azure PowerShell](#tab/azure-powershell)
+ ### [Azure PowerShell](#tab/azure-powershell)
- ```azurepowershell
+ ```azurepowershell-interactive
New-AzResourceGroup -Name myResourceGroup -Location eastus
New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -dnsPrefix <dns-prefix> -linuxAdminUsername <linux-admin-username> -sshRSAPublicKey "<ssh-key>" ```
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
### [Azure CLI](#tab/azure-cli)
-1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
+1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command.
- ```azurecli
+ ```azurecli-interactive
az aks install-cli ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
- ```console
+ ```azurecli-interactive
kubectl get nodes ```
- The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+ The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*.
```output NAME STATUS ROLES AGE VERSION
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
### [Azure PowerShell](#tab/azure-powershell)
-1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+1. Install `kubectl` locally using the [`Install-AzAksKubectl`][install-azakskubectl] cmdlet.
- ```azurepowershell
+ ```azurepowershell-interactive
Install-AzAksKubectl ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurepowershell-interactive Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ```
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
```azurepowershell-interactive kubectl get nodes ```
- The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+ The following example output shows the three nodes created in the previous steps. Make sure the node status is *Ready*.
- ```plaintext
+ ```output
NAME STATUS ROLES AGE VERSION aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6 aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
## Deploy the application
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
-
-In this quickstart, you'll use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-Two [Kubernetes Services][kubernetes-service] are also created:
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **Rabbit MQ**: Message queue for the order service.
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
+> [!NOTE]
+> We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
-1. Create a file named `azure-vote.yaml`.
-1. Copy in the following YAML definition:
+1. Create a file named `aks-store-quickstart.yaml` and copy in the following manifest:
```yaml apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-back
+ name: rabbitmq
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-back
+ app: rabbitmq
template: metadata: labels:
- app: azure-vote-back
+ app: rabbitmq
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
resources: requests:
- cpu: 100m
+ cpu: 10m
memory: 128Mi limits: cpu: 250m memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- - containerPort: 6379
- name: redis
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-back
+ name: order-service
spec:
+ type: ClusterIP
ports:
- - port: 6379
+ - name: http
+ port: 3000
+ targetPort: 3000
selector:
- app: azure-vote-back
+ app: order-service
apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-front
+ name: product-service
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-front
+ app: product-service
template: metadata: labels:
- app: azure-vote-front
+ app: product-service
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 1m
+ memory: 1Mi
limits:
- cpu: 250m
- memory: 256Mi
+ cpu: 1m
+ memory: 7Mi
+
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-front
+ name: store-front
spec:
- type: LoadBalancer
ports: - port: 80
+ targetPort: 8080
selector:
- app: azure-vote-front
+ app: store-front
+ type: LoadBalancer
``` For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
```console
- kubectl apply -f azure-vote.yaml
+ kubectl apply -f aks-store-quickstart.yaml
```
- The following example resembles output showing the successfully created deployments and
+ The following example output shows the deployments and services:
```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
``` ### Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding:
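+
+    ```console
+    kubectl get pods
+    ```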
-```console
-kubectl get service azure-vote-front --watch
-```
+2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+ ```console
+ kubectl get service store-front --watch
+ ```
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
+ ```
+
+3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+
+ The following example output shows a valid public IP address assigned to the service:
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
+ ```
-```output
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
+4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
+ :::image type="content" source="media/quick-kubernetes-deploy-bicep/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-bicep/aks-store-application.png":::
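+
+    You can also fetch the page from the command line to confirm it responds (using the example address above; substitute your service's external IP):
+
+    ```console
+    curl http://20.62.159.19
+    ```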
+## Delete the cluster
-## Clean up resources
+If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
### [Azure CLI](#tab/azure-cli)
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
+ ```azurecli-interactive
+ az group delete --name myResourceGroup --yes --no-wait
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name myResourceGroup
+ ```
-> [!NOTE]
-> In this quickstart, the AKS cluster was created with a system-assigned managed identity (the default identity option). This identity is managed by the platform and does not require removal.
+ > [!NOTE]
+ > The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart. The platform manages this identity so you don't need to manually remove it.
## Next steps
To learn more about AKS and walk through a complete code to deployment example,
> [AKS tutorial][aks-tutorial] <!-- LINKS - external -->
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
To learn more about AKS and walk through a complete code to deployment example,
[az-group-create]: /cli/azure/group#az_group_create [az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[install-azure-powershell]: /powershell/azure/install-az-ps
-[connect-azaccount]: /powershell/module/az.accounts/Connect-AzAccount
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: ../concepts-network.md#services
[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md [new-az-aks-cluster]: /powershell/module/az.aks/new-azakscluster
-[az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create
+[az-sshkey-create]: /cli/azure/sshkey#az_sshkey_create
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI'
-description: Learn how to create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using Azure CLI.
+description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using Azure CLI.
Previously updated : 05/04/2023 Last updated : 10/23/2023 #Customer intent: As a developer or cluster operator, I want to create an AKS cluster and deploy an application so I can see how to run and monitor applications using the managed Kubernetes service in Azure.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you: * Deploy an AKS cluster using the Azure CLI.
-* Run a sample multi-container application with a web front end and a Redis instance in the cluster.
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
+ ## Before you begin
+
+ * This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+ * You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+ * To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
+ * This article requires Azure CLI version 2.0.64 or later. If you're using Azure Cloud Shell, the latest version is already installed.
+ * Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].
+ * If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [`az account`][az-account] command.
+ * Verify you have the *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* providers registered on your subscription. These Azure resource providers are required to support [Container insights][azure-monitor-containers]. Check the registration status using the following commands:
- ```azurecli
+ ```azurecli-interactive
az provider show -n Microsoft.OperationsManagement -o table az provider show -n Microsoft.OperationalInsights -o table ``` If they're not registered, register them using the following commands:
- ```azurecli
+ ```azurecli-interactive
az provider register --namespace Microsoft.OperationsManagement az provider register --namespace Microsoft.OperationalInsights ```
The following example creates a resource group named *myResourceGroup* in the *e
az group create --name myResourceGroup --location eastus ```
- The following output example resembles successful creation of the resource group:
+ The following example output resembles successful creation of the resource group:
- ```json
+ ```output
{ "id": "/subscriptions/<guid>/resourceGroups/myResourceGroup", "location": "eastus",
The following example creates a resource group named *myResourceGroup* in the *e
The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
-* Create an AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-addons monitoring` parameter to enable [Azure Monitor Container insights][azure-monitor-containers] with managed identity authentication (Minimum Azure CLI version 2.49.0 or higher).
+* Create an AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-addons monitoring` and `--enable-msi-auth-for-monitoring` parameters to enable [Azure Monitor Container insights][azure-monitor-containers] with managed identity authentication (preview).
```azurecli-interactive
- az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --generate-ssh-keys
+ az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring --enable-msi-auth-for-monitoring --generate-ssh-keys
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
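If you want to verify the cluster state separately, one optional check (a sketch using the resource names from this quickstart) is to query the provisioning state with `az aks show`:

```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --query provisioningState -o tsv
```

A successful deployment returns `Succeeded`.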
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command.
- ```azurecli
+ ```azurecli-interactive
az aks install-cli ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-
- This command executes the following operations:
-
- * Downloads credentials and configures the Kubernetes CLI to use them.
- * Uses `~/.kube/config`, the default location for the [Kubernetes configuration file][kubeconfig-file]. Specify a different location for your Kubernetes configuration file using *--file* argument.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
kubectl get nodes ```
- The following output example shows the single node created in the previous steps. Make sure the node status is *Ready*.
+ The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*.
```output NAME STATUS ROLES AGE VERSION
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
## Deploy the application
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
-In this quickstart, you use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
-* The sample Azure Vote Python applications.
-* A Redis instance.
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **RabbitMQ**: Message queue for the order service.
-It also creates two [Kubernetes Services][kubernetes-service]:
-
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
+> [!NOTE]
+> We don't recommend running stateful containers, such as RabbitMQ, without persistent storage in production. RabbitMQ is used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
-1. Create a file named `azure-vote.yaml` and copy in the following manifest.
+1. Create a file named `aks-store-quickstart.yaml` and copy in the following manifest:
```yaml apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-back
+ name: rabbitmq
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-back
+ app: rabbitmq
template: metadata: labels:
- app: azure-vote-back
+ app: rabbitmq
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
resources: requests:
- cpu: 100m
+ cpu: 10m
memory: 128Mi limits: cpu: 250m memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+    ---
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- - containerPort: 6379
- name: redis
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-back
+ name: order-service
spec:
+ type: ClusterIP
ports:
- - port: 6379
+ - name: http
+ port: 3000
+ targetPort: 3000
selector:
- app: azure-vote-back
+ app: order-service
apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-front
+ name: product-service
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-front
+ app: product-service
template: metadata: labels:
- app: azure-vote-front
+ app: product-service
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 1m
+ memory: 1Mi
limits:
- cpu: 250m
- memory: 256Mi
+ cpu: 1m
+ memory: 7Mi
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-front
+ name: store-front
spec:
- type: LoadBalancer
ports: - port: 80
+ targetPort: 8080
selector:
- app: azure-vote-front
+ app: store-front
+ type: LoadBalancer
``` For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). 2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
- ```console
- kubectl apply -f azure-vote.yaml
+ ```azurecli-interactive
+ kubectl apply -f aks-store-quickstart.yaml
```
- The following example resembles output showing successfully created deployments and services.
+ The following example output shows the deployments and services:
```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
``` ## Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-1. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
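+    A minimal check looks like the following; the pod name suffixes in the output are placeholders and will differ in your cluster:
+
+    ```azurecli-interactive
+    kubectl get pods
+    ```
+
+    ```output
+    NAME                              READY   STATUS    RESTARTS   AGE
+    order-service-xxxxxxxxx-xxxxx     1/1     Running   0          2m
+    product-service-xxxxxxxxx-xxxxx   1/1     Running   0          2m
+    rabbitmq-xxxxxxxxx-xxxxx          1/1     Running   0          2m
+    store-front-xxxxxxxxx-xxxxx      1/1     Running   0          2m
+    ```
+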
+2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
```azurecli-interactive
- kubectl get service azure-vote-front --watch
+ kubectl get service store-front --watch
```
- The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
```output
- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
- azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
```
-2. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
The following example output shows a valid public IP address assigned to the service: ```output
- azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
```
-3. Open a web browser to the external IP address of your service to see the Azure Vote app in action.
+4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
- :::image type="content" source="media/quick-kubernetes-deploy-portal/azure-voting-application.png" alt-text="Screenshot of browsing to Azure Vote sample application.":::
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-portal/aks-store-application.png":::
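+    If you prefer the command line, you can also verify that the front end responds (a sketch; substitute the external IP address assigned to your own service):
+
+    ```azurecli-interactive
+    curl -I http://20.62.159.19
+    ```
+
+    A `200 OK` response indicates the store front is serving traffic.
+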
## Delete the cluster
If you don't plan on going through the following tutorials, clean up unnecessary
In this quickstart, you deployed a Kubernetes cluster and deployed a simple multi-container application to it.
-To learn more about AKS, and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"] > [AKS tutorial][aks-tutorial]
To learn more about AKS, and walk through a complete code-to-deployment example,
This quickstart is for introductory purposes. For guidance on creating full solutions with AKS for production, see [AKS solution guidance][aks-solution-guidance]. <!-- LINKS - external -->
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubeconfig-file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
<!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md
This quickstart is for introductory purposes. For guidance on creating full solu
[az-group-delete]: /cli/azure/group#az-group-delete [azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: ../concepts-network.md#services
[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
-[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
Title: 'Quickstart: Deploy an AKS cluster by using the Azure portal'
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal'
description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using the Azure portal. Previously updated : 11/01/2022 Last updated : 10/23/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run and monitor applications using the managed Kubernetes service in Azure.
-# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
* Deploy an AKS cluster using the Azure portal.
-* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
-
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
-## Prerequisites
+## Before you begin
-- If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).-- The identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* If you're unfamiliar with the Azure Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md).
+* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-portal.md).
+* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
> [!NOTE] > The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
* Select a **Region** for the AKS cluster, and leave the default value selected for **Kubernetes version**. - **Primary node pool**: * Leave the default values selected.
-
- :::image type="content" source="media/quick-kubernetes-deploy-portal/create-cluster-basics.png" alt-text="Screenshot of Create AKS cluster - provide basic information.":::
+
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/create-cluster-basics.png" alt-text="Screenshot of Create AKS cluster - provide basic information." lightbox="media/quick-kubernetes-deploy-portal/create-cluster-basics.png":::
> [!NOTE] > You can change the preset configuration when creating your cluster by selecting *Learn more and compare presets* and choosing a different option.
- > :::image type="content" source="media/quick-kubernetes-deploy-portal/cluster-preset-options.png" alt-text="Screenshot of Create AKS cluster - portal preset options.":::
+ > :::image type="content" source="media/quick-kubernetes-deploy-portal/cluster-preset-options.png" alt-text="Screenshot of Create AKS cluster - portal preset options." lightbox="media/quick-kubernetes-deploy-portal/cluster-preset-options.png":::
-1. Select **Next: Node pools** when complete.
+1. Select **Next: Node pools** when complete.
1. On the **Node pools** page, leave the default options and then select **Next: Access**. 1. On the **Access** page, configure the following options:
- - The default value for **Resource identity** is **System-assigned managed identity**. Managed identities provide an identity for applications to use when connecting to resources that support Microsoft Entra authentication. For more details about managed identities, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md)
+ - The default value for **Resource identity** is **System-assigned managed identity**. Managed identities provide an identity for applications to use when connecting to resources that support Microsoft Entra authentication. For more details about managed identities, see [What are managed identities for Azure resources?](../../active-directory/managed-identities-azure-resources/overview.md).
- The Kubernetes role-based access control (RBAC) option is the default value to provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster. 1. Select **Next: Networking** when complete.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
1. Open Cloud Shell using the `>_` button on the top of the Azure portal.
- :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-cloud-shell.png" alt-text="Screenshot of Open the Azure Cloud Shell in the portal option.":::
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-cloud-shell.png" alt-text="Screenshot of Open the Azure Cloud Shell in the portal option." lightbox="media/quick-kubernetes-deploy-portal/aks-cloud-shell.png":::
> [!NOTE] > To perform these operations in a local shell installation:
+ >
> 1. Verify Azure CLI or Azure PowerShell is installed. > 2. Connect to Azure via the `az login` or `Connect-AzAccount` command. ### [Azure CLI](#tab/azure-cli)
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
+2. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+ The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*.
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
+ ```
+ ### [Azure PowerShell](#tab/azure-powershell)
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following command downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurepowershell-interactive Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ```
-
-
-3. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes.
+2. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes.
- ```console
+ ```azurepowershell-interactive
kubectl get nodes ```
- Output shows the single node created in the previous steps. Make sure the node status is *Ready*:
+ The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*.
```output NAME STATUS ROLES AGE VERSION
- aks-agentpool-87331340-vmss000000 Ready agent 8m53s v1.25.6
- aks-agentpool-87331340-vmss000001 Ready agent 8m51s v1.25.6
- aks-agentpool-87331340-vmss000002 Ready agent 8m57s v1.25.6
+ aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
```
-## Deploy the application
+
-A Kubernetes manifest file defines a cluster's desired state, like which container images to run.
+## Deploy the application
-In this quickstart, you will use a manifest to create all objects needed to run the Azure Vote application. This manifest includes two Kubernetes deployments:
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A Kubernetes manifest file defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-Two Kubernetes Services are also created:
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **RabbitMQ**: Message queue for the order service.
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
+> [!NOTE]
+> We don't recommend running stateful containers, such as RabbitMQ, without persistent storage in production. RabbitMQ is used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
-1. In the Cloud Shell, open an editor and create a file named `azure-vote.yaml`.
-2. Paste in the following YAML definition:
+1. In the Cloud Shell, open an editor and create a file named `aks-store-quickstart.yaml`.
+2. Paste the following manifest into the editor:
```yaml apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-back
+ name: rabbitmq
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-back
+ app: rabbitmq
template: metadata: labels:
- app: azure-vote-back
+ app: rabbitmq
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
resources: requests:
- cpu: 100m
+ cpu: 10m
memory: 128Mi limits: cpu: 250m memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+    ---
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- - containerPort: 6379
- name: redis
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-back
+ name: order-service
spec:
+ type: ClusterIP
ports:
- - port: 6379
+ - name: http
+ port: 3000
+ targetPort: 3000
selector:
- app: azure-vote-back
+ app: order-service
apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-front
+ name: product-service
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-front
+ app: product-service
template: metadata: labels:
- app: azure-vote-front
+ app: product-service
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 1m
+ memory: 1Mi
limits:
- cpu: 250m
- memory: 256Mi
+ cpu: 1m
+ memory: 7Mi
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-front
+ name: store-front
spec:
- type: LoadBalancer
ports: - port: 80
+ targetPort: 8080
selector:
- app: azure-vote-front
+ app: store-front
+ type: LoadBalancer
``` For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
+3. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:
```console kubectl apply -f aks-store-quickstart.yaml ```
- Output shows the successfully created deployments and
+ The following example output shows the deployments and services:
```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
``` ## Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-To monitor progress, use the `kubectl get service` command with the `--watch` argument.
-
-```console
-kubectl get service azure-vote-front --watch
-```
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
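+
+    For example (a quick check; pod names will differ in your cluster):
+
+    ```azurecli-interactive
+    kubectl get pods
+    ```
+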
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
-
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+ ```azurecli-interactive
+ kubectl get service store-front --watch
+ ```
-```output
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
+ ```
+3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-## Delete cluster
+ The following example output shows a valid public IP address assigned to the service:
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Select the **Delete** button on the AKS cluster dashboard. You can also use the [az group delete][az-group-delete] command or the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
+ ```
-### [Azure CLI](#tab/azure-cli)
+4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
+ :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-portal/aks-store-application.png":::
-### [Azure PowerShell](#tab/azure-powershell)
+## Delete the cluster
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
+If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
-
+1. In the Azure portal, navigate to your AKS cluster resource group.
+2. Select **Delete resource group**.
+3. Enter the name of the resource group to delete, and then select **Delete** > **Delete**.
-> [!NOTE]
-> The AKS cluster was created with a system-assigned managed identity. This identity is managed by the platform and doesn't require removal.
+ > [!NOTE]
+ > The AKS cluster was created with a system-assigned managed identity. This identity is managed by the platform and doesn't require removal.
## Next steps
-In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
+In this quickstart, you deployed a Kubernetes cluster and deployed a simple multi-container application to it.
-To learn more about AKS by walking through a complete example, including building an application, deploying from Azure Container Registry, updating a running application, and scaling and upgrading your cluster, continue to the Kubernetes cluster tutorial.
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"] > [AKS tutorial][aks-tutorial]
To learn more about AKS by walking through a complete example, including buildin
[http-routing]: ../http-application-routing.md [preset-config]: ../quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal [sp-delete]: ../kubernetes-service-principal.md#additional-considerations
-[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an AKS cluster by using PowerShell'
+ Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure PowerShell'
description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 11/01/2022 Last updated : 10/23/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
-# Quickstart: Deploy an Azure Kubernetes Service cluster using PowerShell
+# Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure PowerShell
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
-* Deploy an AKS cluster using PowerShell.
-* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+* Deploy an AKS cluster using Azure PowerShell.
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
-
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
-## Prerequisites
+## Before you begin
-- If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
+* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-powershell.md).
-- The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). -- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
+* If you're running PowerShell locally, install the `Az PowerShell` module and connect to your Azure account using the [`Connect-AzAccount`](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
+* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
+[`Set-AzContext`](/powershell/module/az.accounts/set-azcontext) cmdlet.
```azurepowershell-interactive Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000 ```
+> [!NOTE]
+> If you plan to run the commands locally instead of in Azure Cloud Shell, make sure you run the commands with administrative privileges.
## Create a resource group
-An [Azure resource group](../../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you will be prompted to specify a location. This location is:
-
-* The storage location of your resource group metadata.
-* Where your resources will run in Azure if you don't specify another region during resource creation.
+An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation.
-The following example creates a resource group named *myResourceGroup* in the *eastus* region.
+The following example creates a resource group named *myResourceGroup* in the *eastus* location.
-Create a resource group using the [New-AzResourceGroup][new-azresourcegroup] cmdlet.
+* Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
-```azurepowershell-interactive
-New-AzResourceGroup -Name myResourceGroup -Location eastus
-```
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name myResourceGroup -Location eastus
+ ```
-The following output example resembles successful creation of the resource group:
+ The following example output resembles successful creation of the resource group:
-```plaintext
-ResourceGroupName : myResourceGroup
-Location : eastus
-ProvisioningState : Succeeded
-Tags :
-ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
-```
+ ```output
+ ResourceGroupName : myResourceGroup
+ Location : eastus
+ ProvisioningState : Succeeded
+ Tags :
+ ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup
+ ```
## Create AKS cluster
-Create an AKS cluster using the [New-AzAksCluster][new-azakscluster] cmdlet with the *-WorkspaceResourceId* parameter to enable [Azure Monitor container insights][azure-monitor-containers].
+The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
-1. Create an AKS cluster named **myAKSCluster** with one node.
+* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet with the `-WorkspaceResourceId` parameter to enable [Azure Monitor container insights][azure-monitor-containers].
```azurepowershell-interactive New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -GenerateSshKey -WorkspaceResourceId <WORKSPACE_RESOURCE_ID> ```
-After a few minutes, the command completes and returns information about the cluster.
+ After a few minutes, the command completes and returns information about the cluster.
-> [!NOTE]
-> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../faq.md#why-are-two-resource-groups-created-with-aks)
+ > [!NOTE]
+ > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../faq.md#why-are-two-resource-groups-created-with-aks)
## Connect to the cluster To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet:
+1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet.
- ```azurepowershell
+ ```azurepowershell-interactive
Install-AzAksCliTool ```
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurepowershell-interactive Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ```
-3. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
```azurepowershell-interactive kubectl get nodes ```
- The following output example shows the single node created in the previous steps. Make sure the node status is *Ready*:
+ The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*.
- ```plaintext
+ ```output
NAME STATUS ROLES AGE VERSION aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10 ``` ## Deploy the application
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
-
-In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
-* The sample Azure Vote Python applications.
-* A Redis instance.
-Two [Kubernetes Services][kubernetes-service] are also created:
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **RabbitMQ**: Message queue for the order service.
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
+> [!NOTE]
+> We don't recommend running stateful containers, such as RabbitMQ, without persistent storage in production. RabbitMQ is used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
-1. Create a file named `azure-vote.yaml`.
-
-1. Copy in the following YAML definition:
+1. Create a file named `aks-store-quickstart.yaml` and copy in the following manifest:
```yaml apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-back
+ name: rabbitmq
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-back
+ app: rabbitmq
template: metadata: labels:
- app: azure-vote-back
+ app: rabbitmq
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
resources: requests:
- cpu: 100m
+ cpu: 10m
memory: 128Mi limits: cpu: 250m memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+    ---
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- - containerPort: 6379
- name: redis
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-back
+ name: order-service
spec:
+ type: ClusterIP
ports:
- - port: 6379
+ - name: http
+ port: 3000
+ targetPort: 3000
selector:
- app: azure-vote-back
+ app: order-service
apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-front
+ name: product-service
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-front
+ app: product-service
template: metadata: labels:
- app: azure-vote-front
+ app: product-service
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 1m
+ memory: 1Mi
limits:
- cpu: 250m
- memory: 256Mi
+ cpu: 1m
+ memory: 7Mi
+    ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+    ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-front
+ name: store-front
spec:
- type: LoadBalancer
ports: - port: 80
+ targetPort: 8080
selector:
- app: azure-vote-front
+ app: store-front
+ type: LoadBalancer
``` For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
- ```azurepowershell-interactive
- kubectl apply -f azure-vote.yaml
+ ```console
+ kubectl apply -f aks-store-quickstart.yaml
```
- The following example resembles output showing the successfully created deployments and
-
- ```plaintext
- deployment.apps/azure-vote-back created
- service/azure-vote-back created
- deployment.apps/azure-vote-front created
- service/azure-vote-front created
+ The following example output shows the deployments and services:
+
+ ```output
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
``` ## Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
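+    For example, the following command lists the pods and their status (pod names will differ in your cluster):
+
+    ```azurecli-interactive
+    kubectl get pods
+    ```
+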
+2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+
+ ```azurecli-interactive
+ kubectl get service store-front --watch
+ ```
-```azurepowershell-interactive
-kubectl get service azure-vote-front --watch
-```
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
+ ```
-```plaintext
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
+3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+ The following example output shows a valid public IP address assigned to the service:
-```plaintext
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
+ ```
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
+4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
+ :::image type="content" source="media/quick-kubernetes-deploy-powershell/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-powershell/aks-store-application.png":::
## Delete the cluster
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
+* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-> [!NOTE]
-> The AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name myResourceGroup
+ ```
+
+ > [!NOTE]
+ > The AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart). This identity is managed by the platform and doesn't require removal.
## Next steps In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+To learn more about AKS and walk through a complete code-to-deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"] > [AKS tutorial][aks-tutorial]
To learn more about AKS, and walk through a complete code to deployment example,
[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md [kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
<!-- LINKS - internal -->
-[windows-container-powershell]: ../windows-container-powershell.md
[kubernetes-concepts]: ../concepts-clusters-workloads.md
-[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
[install-azure-powershell]: /powershell/azure/install-az-ps [new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup [new-azakscluster]: /powershell/module/az.aks/new-azakscluster [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md
-[kubernetes-service]: ../concepts-network.md#services
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
+[azure-resource-group]: ../../azure-resource-manager/management/overview.md
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
Title: Quickstart - Create an Azure Kubernetes Service (AKS) cluster
-description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS)
+ Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using an ARM template'
+description: Learn how to quickly create a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 11/01/2022 Last updated : 10/23/2023 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure. # Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template
-Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you will:
+Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
* Deploy an AKS cluster using an Azure Resource Manager template.
-* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
--
-This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
-[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
+## Before you begin
-## Prerequisites
+* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
+* [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
+* If your environment meets the prerequisites and you're familiar with ARM templates, select **Deploy to Azure** to open the template in the Azure portal.
+ [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
### [Azure CLI](#tab/azure-cli) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
-* This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+* This article requires Azure CLI version 2.0.64 or later. If using Azure Cloud Shell, the latest version is already installed.
### [Azure PowerShell](#tab/azure-powershell)
-* If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the [Connect-AzAccount][connect-azaccount] cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. If using Azure Cloud Shell, the latest version is already installed.
+* If you're running PowerShell locally, install the Az PowerShell module (a minimal install sketch follows this list). If using Azure Cloud Shell, the latest version is already installed.
-* To create an AKS cluster using a Resource Manager template, you provide an SSH public key. If you need this resource, see the following section; otherwise skip to the [Review the template](#review-the-template) section.
+* To create an AKS cluster using an ARM template, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the template](#review-the-template).
-* The identity you are using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
-* To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a virtual machine, you need Microsoft.Compute/virtualMachines/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+* To deploy an ARM template, you need write access on the resources you're deploying and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
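+
+If you're installing the Az PowerShell module locally, a minimal sketch looks like the following. `Install-Module` pulls the Az module from the PowerShell Gallery, and `Connect-AzAccount` signs you in to your Azure account:
+
+```azurepowershell-interactive
+# Install the Az PowerShell module from the PowerShell Gallery
+Install-Module -Name Az -Repository PSGallery -Force
+
+# Sign in to your Azure account
+Connect-AzAccount
+```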
### Create an SSH key pair To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location. 1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.
+2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command.
-1. Run the `ssh-keygen` command. The following example creates an SSH key pair using RSA encryption and a bit length of 4096:
+ ```azurecli-interactive
+ # Create an SSH key pair using Azure CLI
+ az sshkey create --name "mySSHKey" --resource-group "myResourceGroup"
- ```console
+ # Create an SSH key pair using ssh-keygen
ssh-keygen -t rsa -b 4096 ```
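+
+    To retrieve the public key you paste into the template later (assuming the default `ssh-keygen` output path noted above), run:
+
+    ```console
+    cat ~/.ssh/id_rsa.pub
+    ```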
The template used in this quickstart is from [Azure Quickstart Templates](https:
:::code language="json" source="~/quickstart-templates/quickstarts/microsoft.kubernetes/aks/azuredeploy.json":::
-The resource defined in the ARM template includes:
+The following resource is defined in the ARM template:
* [**Microsoft.ContainerService/managedClusters**](/azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-arm-template)
For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template
## Deploy the template
-1. Select the following button to sign in to Azure and open a template.
+1. Select **Deploy to Azure** to sign in and open a template.
[![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json)
-1. Select or enter the following values.
-
- For this quickstart, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*. Provide your own values for the following template parameters:
+2. On the **Basics** page, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*, and configure the following template parameters:
* **Subscription**: Select an Azure subscription.
- * **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then choose **OK**.
+ * **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then select **OK**.
* **Location**: Select a location, such as **East US**. * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*. * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*. * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
- * **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
+ * **SSH public key source**: Select **Use existing public key**.
+ * **Key pair name**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
- :::image type="content" source="./media/quick-kubernetes-deploy-rm-template/create-aks-cluster-using-template-portal.png" alt-text="Screenshot of Resource Manager template to create an Azure Kubernetes Service cluster in the portal.":::
-
-1. Select **Review + Create**.
+3. Select **Review + Create** > **Create**.
It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step.
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
### [Azure CLI](#tab/azure-cli)
-1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command:
+1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command.
- ```azurecli
+ ```azurecli-interactive
az aks install-cli ```
-1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
- ```console
+ ```azurecli-interactive
kubectl get nodes ```
- The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+ The following example output shows the three nodes created in the previous steps. Make sure the node status is *Ready*.
```output NAME STATUS ROLES AGE VERSION
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
### [Azure PowerShell](#tab/azure-powershell)
-1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet:
+1. Install `kubectl` locally using the [`Install-AzAksKubectl`][install-azakskubectl] cmdlet.
- ```azurepowershell
+ ```azurepowershell-interactive
Install-AzAksKubectl ```
-1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them.
+2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
```azurepowershell-interactive Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ```
-1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.
+3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
```azurepowershell-interactive kubectl get nodes ```
- The following output example shows the three nodes created in the previous steps. Make sure the node status is *Ready*:
+ The following example output shows the three nodes created in the previous steps. Make sure the node status is *Ready*.
- ```plaintext
+ ```output
NAME STATUS ROLES AGE VERSION aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6 aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
### Deploy the application
-A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
-In this quickstart, you will use a manifest to create all objects needed to run the [Azure Vote application][azure-vote-app]. This manifest includes two [Kubernetes deployments][kubernetes-deployment]:
-* The sample Azure Vote Python applications.
-* A Redis instance.
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **Rabbit MQ**: Message queue for order processing.
-Two [Kubernetes Services][kubernetes-service] are also created:
-
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
+> [!NOTE]
+> We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
-1. Create a file named `azure-vote.yaml`.
-
-1. Copy in the following YAML definition:
+1. Create a file named `aks-store-quickstart.yaml` and copy in the following manifest:
```yaml apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-back
+ name: rabbitmq
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-back
+ app: rabbitmq
template: metadata: labels:
- app: azure-vote-back
+ app: rabbitmq
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-back
- image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
env:
- - name: ALLOW_EMPTY_PASSWORD
- value: "yes"
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
resources: requests:
- cpu: 100m
+ cpu: 10m
memory: 128Mi limits: cpu: 250m memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+  ---
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+  ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+  ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
ports:
- - containerPort: 6379
- name: redis
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-back
+ name: order-service
spec:
+ type: ClusterIP
ports:
- - port: 6379
+ - name: http
+ port: 3000
+ targetPort: 3000
selector:
- app: azure-vote-back
+ app: order-service
apiVersion: apps/v1 kind: Deployment metadata:
- name: azure-vote-front
+ name: product-service
spec: replicas: 1 selector: matchLabels:
- app: azure-vote-front
+ app: product-service
template: metadata: labels:
- app: azure-vote-front
+ app: product-service
spec: nodeSelector: "kubernetes.io/os": linux containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
resources: requests:
- cpu: 100m
- memory: 128Mi
+ cpu: 1m
+ memory: 1Mi
limits:
- cpu: 250m
- memory: 256Mi
+ cpu: 1m
+ memory: 7Mi
+  ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+  ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
- env:
- - name: REDIS
- value: "azure-vote-back"
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
apiVersion: v1 kind: Service metadata:
- name: azure-vote-front
+ name: store-front
spec:
- type: LoadBalancer
ports: - port: 80
+ targetPort: 8080
selector:
- app: azure-vote-front
+ app: store-front
+ type: LoadBalancer
``` For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
```console
- kubectl apply -f azure-vote.yaml
+ kubectl apply -f aks-store-quickstart.yaml
```
- The following example resembles output showing the successfully created deployments and
+    The following example output shows the deployments and services:
```output
- deployment "azure-vote-back" created
- service "azure-vote-back" created
- deployment "azure-vote-front" created
- service "azure-vote-front" created
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
``` ### Test the application
-When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
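+
+    A quick check looks like the following; every pod should report a `Running` status:
+
+    ```console
+    kubectl get pods
+    ```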
-Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.
+2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
-```console
-kubectl get service azure-vote-front --watch
-```
+ ```console
+ kubectl get service store-front --watch
+ ```
-The **EXTERNAL-IP** output for the `azure-vote-front` service will initially show as *pending*.
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s
-```
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
+ ```
+
+3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+ The following example output shows a valid public IP address assigned to the service:
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
+ ```
-```output
-azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m
-```
+4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
-To see the Azure Vote app in action, open a web browser to the external IP address of your service.
+ :::image type="content" source="media/quick-kubernetes-deploy-rm-template/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-rm-template/aks-store-application.png":::
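+
+    If you prefer to smoke-test from the command line first (substituting the external IP address assigned to your service), a `curl` request to the front end should return the app's HTML:
+
+    ```console
+    curl http://<EXTERNAL-IP>
+    ```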
+## Delete the cluster
-## Clean up resources
+If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
### [Azure CLI](#tab/azure-cli)
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
-```azurecli-interactive
-az group delete --name myResourceGroup --yes --no-wait
-```
+ ```azurecli-interactive
+ az group delete --name myResourceGroup --yes --no-wait
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-To avoid Azure charges, if you don't plan on going through the tutorials that follow, clean up your unnecessary resources. Use the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet to remove the resource group, container service, and all related resources.
+* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-```azurepowershell-interactive
-Remove-AzResourceGroup -Name myResourceGroup
-```
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name myResourceGroup
+ ```
-> [!NOTE]
-> The AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and does not require removal.
+ > [!NOTE]
+ > The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart. The platform manages this identity so you don't need to manually remove it.
## Next steps In this quickstart, you deployed a Kubernetes cluster and then deployed a sample multi-container application to it.
-To learn more about AKS, and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
+To learn more about AKS and walk through a complete code to deployment example, continue to the Kubernetes cluster tutorial.
> [!div class="nextstepaction"] > [AKS tutorial][aks-tutorial] <!-- LINKS - external -->
-[azure-vote-app]: https://github.com/Azure-Samples/azure-voting-app-redis.git
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[azure-dev-spaces]: /previous-versions/azure/dev-spaces/
[aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service <!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md
-[aks-monitor]: ../../azure-monitor/containers/container-insights-onboard.md
[aks-tutorial]: ../tutorial-kubernetes-prepare-app.md
-[az-aks-browse]: /cli/azure/aks#az_aks_browse
-[az-aks-create]: /cli/azure/aks#az_aks_create
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli [install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
-[az-group-create]: /cli/azure/group#az_group_create
[az-group-delete]: /cli/azure/group#az_group_delete [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
-[azure-cli-install]: /cli/azure/install-azure-cli
-[install-azure-powershell]: /powershell/azure/install-az-ps
-[connect-azaccount]: /powershell/module/az.accounts/Connect-AzAccount
-[sp-delete]: ../kubernetes-service-principal.md#additional-considerations
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[kubernetes-service]: ../concepts-network.md#services
-[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md
-[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
+[ssh-keys]: ../../virtual-machines/linux/create-ssh-keys-detailed.md
aks Quick Kubernetes Deploy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md
Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster by using Terraform'
-description: In this article, you learn how to quickly create a Kubernetes cluster using Terraform and deploy an application in Azure Kubernetes Service (AKS)
+ Title: 'Quickstart: Create an Azure Kubernetes Service (AKS) cluster using Terraform'
+description: Learn how to quickly create a Kubernetes cluster using Terraform and deploy an application in Azure Kubernetes Service (AKS).
Previously updated : 06/13/2023 Last updated : 10/23/2023 content_well_notification: - AI-contribution #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
-# Quickstart: Create an Azure Kubernetes Service (AKS) cluster by using Terraform
+# Quickstart: Create an Azure Kubernetes Service (AKS) cluster using Terraform
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
-* Deploy an AKS cluster using Terraform. The sample code is fully encapsulated such that it automatically creates a service principal and SSH key pair (using the [AzAPI provider](/azure/developer/terraform/overview-azapi-provider)).
-* Run a sample multi-container application with a web front-end and a Redis instance in the cluster.
+* Deploy an AKS cluster using Terraform.
+* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
--
-In this article, you learn how to:
+> [!NOTE]
+> This sample application is just for demo purposes and doesn't represent all the best practices for Kubernetes applications.
-> [!div class="checklist"]
-> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
-> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
-> * Access the configuration of the AzureRM provider to get the Azure Object ID using [azurerm_client_config](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/client_config).
-> * Create a Kubernetes cluster using [azurerm_kubernetes_cluster](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster).
-> * Create an AzAPI resource [azapi_resource](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource).
-> * Create an AzAPI resource to generate an SSH key pair using [azapi_resource_action](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource_action).
-## Prerequisites
+## Before you begin
-- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)-- **Kubernetes command-line tool (kubectl):** [Download kubectl](https://kubernetes.io/releases/download/).
+* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
+* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).
+* [Install and configure Terraform](/azure/developer/terraform/quickstart-configure).
+* [Download kubectl](https://kubernetes.io/releases/download/).
+* Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet).
+* Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group).
+* Access the configuration of the AzureRM provider to get the Azure Object ID using [azurerm_client_config](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/client_config).
+* Create a Kubernetes cluster using [azurerm_kubernetes_cluster](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster).
+* Create an AzAPI resource [azapi_resource](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource).
+* Create an AzAPI resource to generate an SSH key pair using [azapi_resource_action](https://registry.terraform.io/providers/Azure/azapi/latest/docs/resources/azapi_resource_action).
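+
+As a minimal sketch (illustrative only; the quickstart's actual configuration files follow in the next section), the first two items in the preceding list look like this:
+
+```terraform
+# Generate a random, human-readable name for the resource group
+resource "random_pet" "rg_name" {
+  prefix = "rg"
+}
+
+# Create the resource group using the generated name
+resource "azurerm_resource_group" "rg" {
+  name     = random_pet.rg_name.id
+  location = "eastus"
+}
+```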
> [!NOTE] > The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
In this article, you learn how to:
> > See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
-1. Create a directory in which to test the sample Terraform code and make it the current directory.
+1. Create a directory you can use to test the sample Terraform code and make it your current directory.
1. Create a file named `providers.tf` and insert the following code:
In this article, you learn how to:
## Verify the results
-#### [Azure CLI](#tab/azure-cli)
-
-1. Get the Azure resource group name.
+1. Get the Azure resource group name using the following command.
```console resource_group_name=$(terraform output -raw resource_group_name) ```
-1. Run [az aks list](/cli/azure/aks#az-aks-list) to display the name of the new Kubernetes cluster.
+2. Display the name of your new Kubernetes cluster using the [`az aks list`](/cli/azure/aks#az-aks-list) command.
- ```azurecli
+ ```azurecli-interactive
az aks list \ --resource-group $resource_group_name \ --query "[].{\"K8s cluster name\":name}" \ --output table ```
-1. Get the Kubernetes configuration from the Terraform state and store it in a file that kubectl can read.
+3. Get the Kubernetes configuration from the Terraform state and store it in a file that `kubectl` can read using the following command.
```console echo "$(terraform output kube_config)" > ./azurek8s ```
-1. Verify the previous command didn't add an ASCII EOT character.
+4. Verify the previous command didn't add an ASCII EOT character using the following command.
```console cat ./azurek8s
In this article, you learn how to:
**Key points:**
- - If you see `<< EOT` at the beginning and `EOT` at the end, remove these characters from the file. Otherwise, you could receive the following error message: `error: error loading config file "./azurek8s": yaml: line 2: mapping values are not allowed in this context`
+ * If you see `<< EOT` at the beginning and `EOT` at the end, remove these characters from the file. Otherwise, you may receive the following error message: `error: error loading config file "./azurek8s": yaml: line 2: mapping values are not allowed in this context`
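+
+    For example, assuming `EOT` appears only on those two marker lines, you could strip them with `sed`:
+
+    ```console
+    sed '/EOT/d' ./azurek8s > ./azurek8s.tmp && mv ./azurek8s.tmp ./azurek8s
+    ```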
-1. Set an environment variable so that kubectl picks up the correct config.
+5. Set an environment variable so `kubectl` can pick up the correct config using the following command.
```console export KUBECONFIG=./azurek8s ```
-1. Verify the health of the cluster.
+6. Verify the health of the cluster using the `kubectl get nodes` command.
```console kubectl get nodes ```
- ![Screenshot showing how the kubectl tool allows you to verify the health of your Kubernetes cluster.](./media/quick-kubernetes-deploy-terraform/kubectl-get-nodes.png)
- **Key points:** -- When the AKS cluster was created, monitoring was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal. For more information on container health monitoring, see [Monitor Azure Kubernetes Service health](/azure/azure-monitor/insights/container-insights-overview).-- Several key values were output when you applied the Terraform execution plan. For example, the host address, AKS cluster user name, and AKS cluster password are output.
+* When you created the AKS cluster, monitoring was enabled to capture health metrics for both the cluster nodes and pods. These health metrics are available in the Azure portal. For more information on container health monitoring, see [Monitor Azure Kubernetes Service health](/azure/azure-monitor/insights/container-insights-overview).
+* Several key values were output when you applied the Terraform execution plan. For example, the host address, AKS cluster user name, and AKS cluster password are output.
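+
+You can print these values again at any time with the `terraform output` command (add `-raw <name>` for a single value, as shown earlier with `resource_group_name`):
+
+```console
+terraform output
+```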
## Deploy the application
-A [Kubernetes manifest file](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests) defines a cluster's desired state, such as which container images to run.
+To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
+
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **Rabbit MQ**: Message queue for order processing.
+
+> [!NOTE]
+> We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
+
+1. Create a file named `aks-store-quickstart.yaml` and copy in the following manifest:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: rabbitmq
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: rabbitmq
+ template:
+ metadata:
+ labels:
+ app: rabbitmq
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: rabbitmq
+ image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
+ ports:
+ - containerPort: 5672
+ name: rabbitmq-amqp
+ - containerPort: 15672
+ name: rabbitmq-http
+ env:
+ - name: RABBITMQ_DEFAULT_USER
+ value: "username"
+ - name: RABBITMQ_DEFAULT_PASS
+ value: "password"
+ resources:
+ requests:
+ cpu: 10m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: rabbitmq-enabled-plugins
+ mountPath: /etc/rabbitmq/enabled_plugins
+ subPath: enabled_plugins
+ volumes:
+ - name: rabbitmq-enabled-plugins
+ configMap:
+ name: rabbitmq-enabled-plugins
+ items:
+ - key: rabbitmq_enabled_plugins
+ path: enabled_plugins
+  ---
+ apiVersion: v1
+ data:
+ rabbitmq_enabled_plugins: |
+ [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
+ kind: ConfigMap
+ metadata:
+ name: rabbitmq-enabled-plugins
+  ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: rabbitmq
+ spec:
+ selector:
+ app: rabbitmq
+ ports:
+ - name: rabbitmq-amqp
+ port: 5672
+ targetPort: 5672
+ - name: rabbitmq-http
+ port: 15672
+ targetPort: 15672
+ type: ClusterIP
+  ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
+ ports:
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "rabbitmq"
+ - name: ORDER_QUEUE_PORT
+ value: "5672"
+ - name: ORDER_QUEUE_USERNAME
+ value: "username"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "password"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ initContainers:
+ - name: wait-for-rabbitmq
+ image: busybox
+ command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+  ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: order-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3000
+ targetPort: 3000
+ selector:
+ app: order-service
+  ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: product-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: product-service
+ template:
+ metadata:
+ labels:
+ app: product-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: product-service
+ image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
+ ports:
+ - containerPort: 3002
+ resources:
+ requests:
+ cpu: 1m
+ memory: 1Mi
+ limits:
+ cpu: 1m
+ memory: 7Mi
+  ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: product-service
+ spec:
+ type: ClusterIP
+ ports:
+ - name: http
+ port: 3002
+ targetPort: 3002
+ selector:
+ app: product-service
+  ---
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: store-front
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: store-front
+ template:
+ metadata:
+ labels:
+ app: store-front
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
+ ports:
+ - containerPort: 8080
+ name: store-front
+ env:
+ - name: VUE_APP_ORDER_SERVICE_URL
+ value: "http://order-service:3000/"
+ - name: VUE_APP_PRODUCT_SERVICE_URL
+ value: "http://product-service:3002/"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 200Mi
+ limits:
+ cpu: 1000m
+ memory: 512Mi
+  ---
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: store-front
+ spec:
+ ports:
+ - port: 80
+ targetPort: 8080
+ selector:
+ app: store-front
+ type: LoadBalancer
+ ```
+
+ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-In this quickstart, you use a manifest to create all the objects needed to run the [Azure Vote application](https://github.com/Azure-Samples/azure-voting-app-redis.git). This manifest includes two [Kubernetes deployments](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests):
+2. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest.
-* The sample Azure Vote Python applications.
-* A Redis instance.
+ ```console
+ kubectl apply -f aks-store-quickstart.yaml
+ ```
-Two [Kubernetes Services](/azure/aks/concepts-network#services) are created:
+    The following example output shows the deployments and services:
+
+ ```output
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
+ ```
-* An internal service for the Redis instance.
-* An external service to access the Azure Vote application from the internet.
+### Test the application
-1. Create a file named `azure-vote.yaml` and insert the following code:
+When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
- [!code-terraform[master](~/terraform_samples/quickstart/201-k8s-cluster-with-tf-and-aks/azure-vote.yaml)]
+1. Check the status of the deployed pods using the `kubectl get pods` command. Make sure all pods are `Running` before proceeding.
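+
+    For example, the following command lists the pods and their status:
+
+    ```console
+    kubectl get pods
+    ```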
- **Key points:**
+2. Check for a public IP address for the store-front application. Monitor progress using the `kubectl get service` command with the `--watch` argument.
- - For more information about YAML manifest files, see [Deployments and YAML manifests](/azure/aks/concepts-clusters-workloads#deployments-and-yaml-manifests).
+ ```azurecli-interactive
+ kubectl get service store-front --watch
+ ```
-1. Run [kubectl apply](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) to deploy the application.
+ The **EXTERNAL-IP** output for the `store-front` service initially shows as *pending*:
- ```console
- kubectl apply -f azure-vote.yaml
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m
```
-### Test the application
+3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-1. When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Run [kubectl get service](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) with the `--watch` argument to monitor progress.
+ The following example output shows a valid public IP address assigned to the service:
- ```console
- kubectl get service azure-vote-front --watch
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m
```
-1. The **EXTERNAL-IP** output for the `azure-vote-front` service initially shows as *pending*. Once the **EXTERNAL-IP** address displays an IP address, use `CTRL-C` to stop the `kubectl` watch process.
-
-1. To see the **Azure Vote** app in action, open a web browser to the external IP address of your service.
+4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
- :::image type="content" source="media/quick-kubernetes-deploy-terraform/azure-voting-application.png" alt-text="Screenshot of Azure Vote sample application.":::
+ :::image type="content" source="media/quick-kubernetes-deploy-terraform/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-terraform/aks-store-application.png":::
## Clean up resources
Two [Kubernetes Services](/azure/aks/concepts-network#services) are created:
### Delete service principal
-1. Get the service principal ID.
+1. Get the service principal ID using the following command.
- ```azurecli
+ ```azurecli-interactive
sp=$(terraform output -raw sp) ```
-
-1. Run [az ad sp delete](/cli/azure/ad/sp#az-ad-sp-delete) to delete the service principal.
- ```azurecli
+1. Delete the service principal using the [`az ad sp delete`](/cli/azure/ad/sp#az-ad-sp-delete) command.
+
+ ```azurecli-interactive
az ad sp delete --id $sp ```
-
+ ## Troubleshoot Terraform on Azure
-[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot).
## Next steps > [!div class="nextstepaction"]
-> [Learn more about using AKS](/azure/aks)
+> [Learn more about using AKS](/azure/aks)
+
+<!-- LINKS - internal -->
+[kubernetes-concepts]: ../concepts-clusters-workloads.md
+[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
<!-- LINKS - Internal -->
-[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
+[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md
Title: Kubernetes on Azure tutorial - Deploy an application
-description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using a custom image stored in Azure Container Registry.
+ Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS)
+description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry.
Previously updated : 01/04/2023 Last updated : 10/23/2023 #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
-# Tutorial: Run applications in Azure Kubernetes Service (AKS)
+# Tutorial - Deploy an application to Azure Kubernetes Service (AKS)
-Kubernetes provides a distributed platform for containerized applications. You build and deploy your own applications and services into a Kubernetes cluster and let the cluster manage the availability and connectivity. In this tutorial, part four of seven, you deploy a sample application into a Kubernetes cluster. You learn how to:
+Kubernetes provides a distributed platform for containerized applications. You build and deploy your own applications and services into a Kubernetes cluster and let the cluster manage the availability and connectivity.
+
+In this tutorial, part four of seven, you deploy a sample application into a Kubernetes cluster. You learn how to:
> [!div class="checklist"] >
Kubernetes provides a distributed platform for containerized applications. You b
> * Run an application in Kubernetes. > * Test the application.
-In later tutorials, you'll scale out and update your application.
-
-This quickstart assumes you have a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
- > [!TIP]
-> AKS clusters can use GitOps for configuration management. GitOp enables declarations of your cluster's state, which are pushed to source control, to be applied to the cluster automatically. To learn how to use GitOps to deploy an application with an AKS cluster, see the [prerequisites for Azure Kubernetes Service clusters][gitops-flux-tutorial-aks] in the [GitOps with Flux v2][gitops-flux-tutorial] tutorial.
+>
+> With AKS, you can use the following approaches for configuration management:
+>
+> * **GitOps**: Enables declarations of your cluster's state to automatically apply to the cluster. To learn how to use GitOps to deploy an application with an AKS cluster, see the [prerequisites for Azure Kubernetes Service clusters][gitops-flux-tutorial-aks] in the [GitOps with Flux v2][gitops-flux-tutorial] tutorial.
+>
+> * **DevOps**: Enables you to build, test, and deploy with continuous integration (CI) and continuous delivery (CD). To see examples of how to use DevOps to deploy an application with an AKS cluster, see [Build and deploy to AKS with Azure Pipelines](./devops-pipeline.md) or [GitHub Actions for deploying to Kubernetes](./kubernetes-action.md).
## Before you begin
-In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster.
-
-To complete this tutorial, you need the pre-created `azure-vote-all-in-one-redis.yaml` Kubernetes manifest file. This file download was included with the application source code in a previous tutorial. Verify that you've cloned the repo and that you've changed directories into the cloned repo. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, and created a Kubernetes cluster. To complete this tutorial, you need the pre-created `aks-store-quickstart.yaml` Kubernetes manifest file. This file download was included with the application source code in a previous tutorial. Make sure you cloned the repo and changed directories into the cloned repo. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
### [Azure CLI](#tab/azure-cli)
-This tutorial requires that you're running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### [Azure PowerShell](#tab/azure-powershell)
-This tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
## Update the manifest file
-In these tutorials, an Azure Container Registry (ACR) instance stores the container image for the sample application. To deploy the application, you must update the image name in the Kubernetes manifest file to include the ACR login server name.
+In these tutorials, your Azure Container Registry (ACR) instance stores the container images for the sample application. To deploy the application, you must update the image names in the Kubernetes manifest file to include your ACR login server name.
### [Azure CLI](#tab/azure-cli)
-Get the ACR login server name using the [az acr list][az-acr-list] command.
+1. Get your login server address using the [`az acr list`][az-acr-list] command and query for your login server.
-```azurecli
-az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
-```
+ ```azurecli-interactive
+ az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
+ ```
-### [Azure PowerShell](#tab/azure-powershell)
+2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`:
-Get the ACR login server name using the [Get-AzContainerRegistry][get-azcontainerregistry] cmdlet.
+ ```azurecli-interactive
+ vi aks-store-quickstart.yaml
+ ```
-```azurepowershell
-(Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer
-```
+3. Update the `image` property for the containers by replacing *ghcr.io/azure-samples* with your ACR login server name.
-
+ ```yaml
+ containers:
+ ...
+ - name: order-service
+ image: <acrName>.azurecr.io/aks-store-demo/order-service:latest
+ ...
+ - name: product-service
+ image: <acrName>.azurecr.io/aks-store-demo/product-service:latest
+ ...
+ - name: store-front
+ image: <acrName>.azurecr.io/aks-store-demo/store-front:latest
+ ...
+ ```
+
+4. Save and close the file. In `vi`, use `:wq`.
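+
+    Alternatively, you can make all three replacements in one pass with `sed` (a sketch, assuming GNU `sed` and the same `<acrName>` placeholder as above):
+
+    ```azurecli-interactive
+    sed -i 's|ghcr.io/azure-samples|<acrName>.azurecr.io|g' aks-store-quickstart.yaml
+    ```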
-The sample manifest file from the git repo you cloned in the first tutorial uses the images from Microsoft Container Registry (*mcr.microsoft.com*). Make sure you're in the cloned *azure-voting-app-redis* directory, and then open the manifest file with a text editor, such as `vi`:
+### [Azure PowerShell](#tab/azure-powershell)
-```console
-vi azure-vote-all-in-one-redis.yaml
-```
+1. Get your login server address using the [`Get-AzContainerRegistry`][get-azcontainerregistry] cmdlet and query for your login server. Make sure you replace `<acrName>` with the name of your ACR instance.
-Replace *mcr.microsoft.com* with your ACR login server name. You can find the image name on line 60 of the manifest file. The following example shows the default image name:
+ ```azurepowershell-interactive
+ (Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer
+ ```
-```yaml
-containers:
-- name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
-```
+2. Make sure you're in the cloned *aks-store-demo* directory, and then open the manifest file with a text editor, such as `vi`:
-Provide your own ACR login server name so your manifest file looks similar to the following example:
+ ```azurepowershell-interactive
+ vi aks-store-quickstart.yaml
+ ```
-```yaml
-containers:
-- name: azure-vote-front
- image: <acrName>.azurecr.io/azure-vote-front:v1
-```
+3. Update the `image` property for the containers by replacing *ghcr.io/azure-samples* with your ACR login server name.
-Save and close the file. In `vi`, use `:wq`.
+ ```yaml
+ containers:
+ ...
+ - name: order-service
+ image: <acrName>.azurecr.io/aks-store-demo/order-service:latest
+ ...
+ - name: product-service
+ image: <acrName>.azurecr.io/aks-store-demo/product-service:latest
+ ...
+ - name: store-front
+ image: <acrName>.azurecr.io/aks-store-demo/store-front:latest
+ ...
+ ```
-## Deploy the application
+4. Save and close the file. In `vi`, use `:wq`.
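+
+    Alternatively, you can make all three replacements in one pass (a sketch, using the same `<acrName>` placeholder as above):
+
+    ```azurepowershell-interactive
+    (Get-Content aks-store-quickstart.yaml) -replace 'ghcr.io/azure-samples', '<acrName>.azurecr.io' | Set-Content aks-store-quickstart.yaml
+    ```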
-To deploy your application, use the [`kubectl apply`][kubectl-apply] command, specifying the sample manifest file. This command parses the manifest file and creates the defined Kubernetes objects.
+
+## Deploy the application
-```console
-kubectl apply -f azure-vote-all-in-one-redis.yaml
-```
+* Deploy the application using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects.
-The following example output shows the resources successfully created in the AKS cluster:
+ ```console
+ kubectl apply -f aks-store-quickstart.yaml
+ ```
-```console
-$ kubectl apply -f azure-vote-all-in-one-redis.yaml
+ The following example output shows the resources successfully created in the AKS cluster:
-deployment "azure-vote-back" created
-service "azure-vote-back" created
-deployment "azure-vote-front" created
-service "azure-vote-front" created
-```
+ ```output
+ deployment.apps/rabbitmq created
+ service/rabbitmq created
+ deployment.apps/order-service created
+ service/order-service created
+ deployment.apps/product-service created
+ service/product-service created
+ deployment.apps/store-front created
+ service/store-front created
+ ```
## Test the application When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-To monitor progress, use the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
-```console
-kubectl get service azure-vote-front --watch
-```
+ ```console
+ kubectl get service store-front --watch
+ ```
-Initially the *EXTERNAL-IP* for the *azure-vote-front* service shows as *pending*.
+ Initially, the `EXTERNAL-IP` for the *store-front* service shows as *pending*.
-```output
-azure-vote-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s
-```
+ ```output
+ store-front LoadBalancer 10.0.34.242 <pending> 80:30676/TCP 5s
+ ```
-When the *EXTERNAL-IP* address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service:
+2. When the `EXTERNAL-IP` address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
-```output
-azure-vote-front LoadBalancer 10.0.34.242 52.179.23.131 80:30676/TCP 67s
-```
+ The following example output shows a valid public IP address assigned to the service:
-To see the application in action, open a web browser to the external IP address of your service.
+ ```output
+ store-front LoadBalancer 10.0.34.242 52.179.23.131 80:30676/TCP 67s
+ ```
+3. View the application in action by opening a web browser to the external IP address of your service.
If the application doesn't load, it might be an authorization problem with your image registry. To view the status of your containers, use the `kubectl get pods` command. If you can't pull the container images, see [Authenticate with Azure Container Registry from Azure Kubernetes Service](cluster-container-registry-integration.md).

## Next steps
-In this tutorial, you deployed a sample Azure vote application to a Kubernetes cluster in AKS. You learned how to:
+In this tutorial, you deployed a sample Azure application to a Kubernetes cluster in AKS. You learned how to:
> [!div class="checklist"] >
In this tutorial, you deployed a sample Azure vote application to a Kubernetes c
> * Run an application in Kubernetes.
> * Test the application.
-In the next tutorial, you'll learn how to scale a Kubernetes application and the underlying Kubernetes infrastructure.
+In the next tutorial, you learn how to use PaaS services for stateful workloads in Kubernetes.
> [!div class="nextstepaction"]
-> [Scale Kubernetes application and infrastructure][aks-tutorial-scale]
+> [Use PaaS services for stateful workloads in AKS][aks-tutorial-paas]
<!-- LINKS - external -->
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get

<!-- LINKS - internal -->
[aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
-[aks-tutorial-scale]: ./tutorial-kubernetes-scale.md
[az-acr-list]: /cli/azure/acr
[azure-cli-install]: /cli/azure/install-azure-cli
-[kubernetes-concepts]: concepts-clusters-workloads.md
-[kubernetes-service]: concepts-network.md#services
[azure-powershell-install]: /powershell/azure/install-az-ps
[get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry
[gitops-flux-tutorial]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md?toc=/azure/aks/toc.json
[gitops-flux-tutorial-aks]: ../azure-arc/kubernetes/tutorial-use-gitops-flux2.md?toc=/azure/aks/toc.json#for-azure-kubernetes-service-clusters
+[aks-tutorial-paas]: ./tutorial-kubernetes-paas-services.md
aks Tutorial Kubernetes Deploy Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-cluster.md
Title: Kubernetes on Azure tutorial - Deploy a cluster
-description: In this Azure Kubernetes Service (AKS) tutorial, you create an AKS cluster and use kubectl to connect to the Kubernetes master node.
+ Title: Kubernetes on Azure tutorial - Deploy an Azure Kubernetes Service (AKS) cluster
+description: In this Azure Kubernetes Service (AKS) tutorial, you create an AKS cluster and use kubectl to connect to the Kubernetes main node.
Previously updated : 12/01/2022
Last updated : 10/23/2023

#Customer intent: As a developer or IT pro, I want to learn how to create an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
-# Tutorial: Deploy an Azure Kubernetes Service (AKS) cluster
+# Tutorial - Deploy an Azure Kubernetes Service (AKS) cluster
+
+Kubernetes provides a distributed platform for containerized applications. With Azure Kubernetes Service (AKS), you can quickly create a production ready Kubernetes cluster.
-Kubernetes provides a distributed platform for containerized applications. With AKS, you can quickly create a production ready Kubernetes cluster. In this tutorial, part three of seven, you deploy a Kubernetes cluster in AKS. You learn how to:
+In this tutorial, part three of seven, you deploy a Kubernetes cluster in AKS. You learn how to:
> [!div class="checklist"]
-> * Deploy a Kubernetes AKS cluster that can authenticate to an Azure Container Registry (ACR).
+> * Deploy an AKS cluster that can authenticate to an Azure Container Registry (ACR).
> * Install the Kubernetes CLI, `kubectl`. > * Configure `kubectl` to connect to your AKS cluster.
-In later tutorials, you'll deploy the Azure Vote application to your AKS cluster and scale and update your application.
- ## Before you begin
-In previous tutorials, you created a container image and uploaded it to an ACR instance. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you created a container image and uploaded it to an ACR instance. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
* If you're using Azure CLI, this tutorial requires that you're running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8
To learn more about AKS and Kubernetes RBAC, see [Control access to cluster resources using Kubernetes RBAC and Microsoft Entra identities in AKS][aks-k8s-rbac].

### [Azure CLI](#tab/azure-cli)
-Create an AKS cluster using [`az aks create`][az aks create]. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. The AKS cluster will also be created in the *eastus* region.
+This tutorial requires Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
-For more information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
++
+## Create an AKS cluster
-To allow an AKS cluster to interact with other Azure resources, a cluster identity is automatically created. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you're required to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+AKS clusters can use [Kubernetes role-based access control (Kubernetes RBAC)][k8s-rbac], which allows you to define access to resources based on roles assigned to users. Permissions are combined when users are assigned multiple roles. Permissions can be scoped to either a single namespace or across the whole cluster. For more information, see [Control access to cluster resources using Kubernetes RBAC and Azure Active Directory identities in AKS][aks-k8s-rbac].
-```azurecli
-az aks create \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --node-count 2 \
- --generate-ssh-keys \
- --attach-acr <acrName>
-```
+For information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
> [!NOTE]
-> If you've already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter.
+> To ensure your cluster operates reliably, you should run at least two nodes.
-### [Azure PowerShell](#tab/azure-powershell)
+### [Azure CLI](#tab/azure-cli)
-Create an AKS cluster using [`New-AzAksCluster`][new-azakscluster]. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region. The AKS cluster will also be created in the *eastus* region.
+To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
-For more information about AKS resource limits and region availability, see [Quotas, virtual machine size restrictions, and region availability in AKS][quotas-skus-regions].
+* Create an AKS cluster using the [`az aks create`][az aks create] command. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
-To allow an AKS cluster to interact with other Azure resources, a cluster identity is automatically created. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you're required to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+ ```azurecli-interactive
+ az aks create \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --node-count 2 \
+ --generate-ssh-keys \
+ --attach-acr <acrName>
+ ```
-```azurepowershell
-New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
-```
+ > [!NOTE]
+ > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `--generate-ssh-keys` parameter.
-> [!NOTE]
-> If you've already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter.
+### [Azure PowerShell](#tab/azure-powershell)
+
+To allow an AKS cluster to interact with other Azure resources, the Azure platform automatically creates a cluster identity. In this example, the cluster identity is [granted the right to pull images][container-registry-integration] from the ACR instance you created in the previous tutorial. To execute the command successfully, you need to have an **Owner** or **Azure account administrator** role in your Azure subscription.
+
+* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* in the resource group named *myResourceGroup*. This resource group was created in the [previous tutorial][aks-tutorial-prepare-acr] in the *eastus* region.
+
+ ```azurepowershell-interactive
+ New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 2 -GenerateSshKey -AcrNameToAttach <acrName>
+ ```
+
+ > [!NOTE]
+ > If you already generated SSH keys, you may encounter an error similar to `linuxProfile.ssh.publicKeys.keyData is invalid`. To proceed, retry the command without the `-GenerateSshKey` parameter.
To avoid needing an **Owner** or **Azure account administrator** role, you can a
After a few minutes, the deployment completes and returns JSON-formatted information about the AKS deployment.
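Optionally, you can confirm that the cluster provisioned successfully before continuing. The following sketch uses the Azure CLI and the resource names from this tutorial; it should print `Succeeded`:

```console
# Sketch: query the cluster's provisioning state (expects "Succeeded").
az aks show --resource-group myResourceGroup --name myAKSCluster --query provisioningState -o tsv
```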
-> [!NOTE]
-> To ensure your cluster operates reliably, you should run at least two nodes.
- ## Install the Kubernetes CLI
-Use the Kubernetes CLI, [`kubectl`][kubectl], to connect to the Kubernetes cluster from your local computer.
+You use the Kubernetes CLI, [`kubectl`][kubectl], to connect to your Kubernetes cluster. If you use the Azure Cloud Shell, `kubectl` is already installed. If you're running the commands locally, you can use the Azure CLI or Azure PowerShell to install `kubectl`.
### [Azure CLI](#tab/azure-cli)
-If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [`az aks install-cli`][az aks install-cli] command.
+* Install `kubectl` locally using the [`az aks install-cli`][az aks install-cli] command.
-```azurecli
-az aks install-cli
-```
+ ```azurecli-interactive
+ az aks install-cli
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-If you use the Azure Cloud Shell, `kubectl` is already installed. You can also install it locally using the [`Install-AzAksKubectl`][install-azakskubectl] cmdlet.
+* Install `kubectl` locally using the [`Install-AzAksCliTool`][install-azaksclitool] cmdlet.
-```azurepowershell
-Install-AzAksKubectl
-```
+ ```azurepowershell-interactive
+ Install-AzAksCliTool
+ ```
Install-AzAksKubectl
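Whichever method you use, you can verify the installation without contacting a cluster, for example:

```console
# Sketch: confirm kubectl is on your PATH and print its client version.
kubectl version --client
```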
### [Azure CLI](#tab/azure-cli)
-To configure `kubectl` to connect to your Kubernetes cluster, use the [`az aks get-credentials`][az aks get-credentials] command. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az aks get-credentials] command. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*.
-```azurecli
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
-### [Azure PowerShell](#tab/azure-powershell)
+2. Verify connection to your cluster using the [`kubectl get nodes`][kubectl-get] command, which returns a list of cluster nodes.
-To configure `kubectl` to connect to your Kubernetes cluster, use the [`Import-AzAksCredential`][import-azakscredential] cmdlet. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*.
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
-```azurepowershell
-Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
-```
+ The following example output shows a list of the cluster nodes:
-
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
+ aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
-To verify connection to your cluster, run [`kubectl get nodes`][kubectl-get] to return a list of cluster nodes.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. The following example gets credentials for the AKS cluster named *myAKSCluster* in *myResourceGroup*.
-```azurecli-interactive
-kubectl get nodes
-```
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
-The following example output shows the list of cluster nodes.
+2. Verify connection to your cluster using the [`kubectl get nodes`][kubectl-get] command, which returns a list of cluster nodes.
-```
-$ kubectl get nodes
+ ```azurepowershell-interactive
+ kubectl get nodes
+ ```
-NAME STATUS ROLES AGE VERSION
-aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
-aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
-```
+ The following example output shows a list of the cluster nodes:
+
+ ```output
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-19366578-vmss000002 Ready agent 47h v1.25.6
+ aks-nodepool1-19366578-vmss000003 Ready agent 47h v1.25.6
+ ```
++

## Next steps
In this tutorial, you deployed a Kubernetes cluster in AKS and configured `kubec
> * Install the Kubernetes CLI, `kubectl`.
> * Configure `kubectl` to connect to your AKS cluster.
-In the next tutorial, you'll learn how to deploy an application to your cluster.
+In the next tutorial, you learn how to deploy an application to your cluster.
> [!div class="nextstepaction"] > [Deploy an application in AKS][aks-tutorial-deploy-app]
In the next tutorial, you'll learn how to deploy an application to your cluster.
[aks-tutorial-deploy-app]: ./tutorial-kubernetes-deploy-application.md
[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md
[aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
-[az ad sp create-for-rbac]: /cli/azure/ad/sp#az_ad_sp_create_for_rbac
-[az acr show]: /cli/azure/acr#az_acr_show
-[az role assignment create]: /cli/azure/role/assignment#az_role_assignment_create
[az aks create]: /cli/azure/aks#az_aks_create
[az aks install-cli]: /cli/azure/aks#az_aks_install_cli
[az aks get-credentials]: /cli/azure/aks#az_aks_get_credentials
In the next tutorial, you'll learn how to deploy an application to your cluster.
[quotas-skus-regions]: quotas-skus-regions.md
[azure-powershell-install]: /powershell/azure/install-az-ps
[new-azakscluster]: /powershell/module/az.aks/new-azakscluster
-[install-azakskubectl]: /powershell/module/az.aks/install-azaksclitool
+[install-azaksclitool]: /powershell/module/az.aks/install-azaksclitool
[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
-[aks-k8s-rbac]: azure-ad-rbac.md
-[preset-config]: /quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal
-[azure-managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md
-[az-container-insights]: ../azure-monitor/containers/container-insights-overview.md
+[aks-k8s-rbac]: azure-ad-rbac.md
aks Tutorial Kubernetes Paas Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-paas-services.md
+
+ Title: Kubernetes on Azure tutorial - Use PaaS services with an Azure Kubernetes Service (AKS) cluster
+description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to use the Azure Service Bus service with your AKS cluster.
Last updated : 10/23/2023
+#Customer intent: As a developer, I want to learn how to use PaaS services with an Azure Kubernetes Service (AKS) cluster so that I can deploy and manage my applications.
++
+# Tutorial - Use PaaS services with an Azure Kubernetes Service (AKS) cluster
+
+With Kubernetes, you can use PaaS services, such as [Azure Service Bus][azure-service-bus], to develop and run your applications.
+
+In this tutorial, part five of seven, you create an Azure Service Bus namespace and queue to test your application. You learn how to:
+
+> [!div class="checklist"]
+>
+> * Create an Azure Service Bus namespace and queue.
+> * Update the Kubernetes manifest file to use the Azure Service Bus queue.
+> * Test the updated application by placing an order.
+
+## Before you begin
+
+In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, created a Kubernetes cluster, and deployed an application. To complete this tutorial, you need the pre-created `aks-store-quickstart.yaml` Kubernetes manifest file, which you downloaded with the application source code in a previous tutorial. Make sure you cloned the repo and changed into the cloned directory. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
+
+### [Azure CLI](#tab/azure-cli)
+
+This tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+++
+## Create environment variables
+
+### [Azure CLI](#tab/azure-cli)
+
+* Create the following environment variables to use for the commands in this tutorial:
+
+ ```azurecli-interactive
+ LOC_NAME=eastus
+ RAND=$RANDOM
+ RG_NAME=myResourceGroup
+ AKS_NAME=myAKSCluster
+ SB_NS=sb-store-demo-$RAND
+ ```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+* Create the following environment variables to use for the commands in this tutorial:
+
+ ```azurepowershell-interactive
+ $LOC_NAME="eastus"
+ $rand=New-Object System.Random
+ $RAND=$rand.Next()
+ $RG_NAME="myResourceGroup"
+ $AKS_NAME="myAKSCluster"
+ $SB_NS="sb-store-demo-$RAND"
+ ```
+++
+## Create Azure Service Bus namespace and queue
+
+In previous tutorials, you used a RabbitMQ container to store orders submitted by the `order-service`. In this tutorial, you use an Azure Service Bus namespace to provide a scoping container for the Service Bus resources within the application. You also use an Azure Service Bus queue to send and receive messages between the application components. For more information on Azure Service Bus, see [Create an Azure Service Bus namespace and queue](../service-bus-messaging/service-bus-quickstart-cli.md).
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Create an Azure Service Bus namespace using the [`az servicebus namespace create`][az-servicebus-namespace-create] command.
+
+ ```azurecli-interactive
+ az servicebus namespace create -n $SB_NS -g $RG_NAME -l $LOC_NAME
+ ```
+
+2. Create an Azure Service Bus queue using the [`az servicebus queue create`][az-servicebus-queue-create] command.
+
+ ```azurecli-interactive
+ az servicebus queue create -n orders -g $RG_NAME --namespace-name $SB_NS
+ ```
+
+3. Create an Azure Service Bus authorization rule using the [`az servicebus queue authorization-rule create`][az-servicebus-queue-authorization-rule-create] command.
+
+ ```azurecli-interactive
+ az servicebus queue authorization-rule create \
+ --name sender \
+ --namespace-name $SB_NS \
+ --resource-group $RG_NAME \
+ --queue-name orders \
+ --rights Send
+ ```
+
+4. Get the Azure Service Bus credentials for later use using the [`az servicebus namespace show`][az-servicebus-namespace-show] and [`az servicebus queue authorization-rule keys list`][az-servicebus-queue-authorization-rule-keys-list] commands.
+
+ ```azurecli-interactive
+ az servicebus namespace show --name $SB_NS --resource-group $RG_NAME --query name -o tsv
+ az servicebus queue authorization-rule keys list --namespace-name $SB_NS --resource-group $RG_NAME --queue-name orders --name sender --query primaryKey -o tsv
+ ```
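   The fully qualified hostname needed later in the manifest is the namespace name plus the `.servicebus.windows.net` suffix. As a convenience, the following sketch (assuming the variables defined earlier in this tutorial) captures both values into shell variables:

   ```console
   # Sketch: capture the hostname and sender key for the manifest edit later.
   SB_HOSTNAME="$SB_NS.servicebus.windows.net"
   SB_KEY=$(az servicebus queue authorization-rule keys list \
     --namespace-name $SB_NS \
     --resource-group $RG_NAME \
     --queue-name orders \
     --name sender \
     --query primaryKey -o tsv)
   echo $SB_HOSTNAME
   ```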
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Create an Azure Service Bus namespace using the [`New-AzServiceBusNamespace`][new-az-service-bus-namespace] cmdlet.
+
+ ```azurepowershell-interactive
+ New-AzServiceBusNamespace -Name $SB_NS -ResourceGroupName $RG_NAME -Location $LOC_NAME
+ ```
+
+2. Create an Azure Service Bus queue using the [`New-AzServiceBusQueue`][new-az-service-bus-queue] cmdlet.
+
+ ```azurepowershell-interactive
+ New-AzServiceBusQueue -Name orders -ResourceGroupName $RG_NAME -NamespaceName $SB_NS
+ ```
+
+3. Create an Azure Service Bus authorization rule using the [`New-AzServiceBusAuthorizationRule`][new-az-service-bus-authorization-rule] cmdlet.
+
+ ```azurepowershell-interactive
+ New-AzServiceBusAuthorizationRule `
+ -Name sender `
+ -NamespaceName $SB_NS `
+ -ResourceGroupName $RG_NAME `
+ -QueueName orders `
+ -Rights Send
+ ```
+
+4. Get the Azure Service Bus credentials for later use using the [`Get-AzServiceBusNamespace`][get-az-service-bus-namespace] and [`Get-AzServiceBusKey`][get-az-service-bus-key] cmdlets.
+
+ ```azurepowershell-interactive
+ (Get-AzServiceBusNamespace -Name $SB_NS -ResourceGroupName $RG_NAME).Name
+ (Get-AzServiceBusKey -NamespaceName $SB_NS -ResourceGroupName $RG_NAME -Name sender -QueueName orders).PrimaryKey
+ ```
+++
+## Update Kubernetes manifest file
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Configure `kubectl` to connect to your cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
+
+2. Open the `aks-store-quickstart.yaml` file in a text editor.
+3. Remove the existing `rabbitmq` Deployment, ConfigMap, and Service sections and replace the existing `order-service` Deployment section with the following content:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: <REPLACE_WITH_YOUR_ACR_NAME>.azurecr.io/aks-store-demo/order-service:latest
+ ports:
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "<REPLACE_WITH_YOUR_SB_NS_HOSTNAME>" # Example: sb-store-demo-123456.servicebus.windows.net
+ - name: ORDER_QUEUE_PORT
+ value: "5671"
+ - name: ORDER_QUEUE_TRANSPORT
+ value: "tls"
+ - name: ORDER_QUEUE_USERNAME
+ value: "sender"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "<REPLACE_WITH_YOUR_SB_SENDER_PASSWORD>"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ ```
+
+ > [!NOTE]
+ > Directly adding sensitive information, such as API keys, to your Kubernetes manifest files isn't secure and may accidentally get committed to code repositories. We added it here for simplicity. For production workloads, use [Managed Identity](./use-managed-identity.md) to authenticate with Azure Service Bus or store your secrets in [Azure Key Vault](./csi-secrets-store-driver.md).
+
+4. Save and close the updated `aks-store-quickstart.yaml` file.
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Configure `kubectl` to connect to your cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet.
+
+ ```azurepowershell-interactive
+ Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
+ ```
+
+2. Open the `aks-store-quickstart.yaml` file in a text editor.
+3. Remove the existing `rabbitmq` Deployment, ConfigMap, and Service sections and replace the existing `order-service` Deployment section with the following content:
+
+ ```YAML
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: order-service
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: order-service
+ template:
+ metadata:
+ labels:
+ app: order-service
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - name: order-service
+ image: <REPLACE_WITH_YOUR_ACR_NAME>.azurecr.io/aks-store-demo/order-service:latest
+ ports:
+ - containerPort: 3000
+ env:
+ - name: ORDER_QUEUE_HOSTNAME
+ value: "<REPLACE_WITH_YOUR_SB_NS_HOSTNAME>" # Example: sb-store-demo-123456.servicebus.windows.net
+ - name: ORDER_QUEUE_PORT
+ value: "5671"
+ - name: ORDER_QUEUE_TRANSPORT
+ value: "tls"
+ - name: ORDER_QUEUE_USERNAME
+ value: "sender"
+ - name: ORDER_QUEUE_PASSWORD
+ value: "<REPLACE_WITH_YOUR_SB_SENDER_PASSWORD>"
+ - name: ORDER_QUEUE_NAME
+ value: "orders"
+ - name: FASTIFY_ADDRESS
+ value: "0.0.0.0"
+ resources:
+ requests:
+ cpu: 1m
+ memory: 50Mi
+ limits:
+ cpu: 75m
+ memory: 128Mi
+ ```
+
+ > [!NOTE]
+ > Directly adding sensitive information, such as API keys, to your Kubernetes manifest files isn't secure and may accidentally get committed to code repositories. We added it here for simplicity. For production workloads, use [Managed Identity](./use-managed-identity.md) to authenticate with Azure Service Bus or store your secrets in [Azure Key Vault](./csi-secrets-store-driver.md).
+
+4. Save and close the updated `aks-store-quickstart.yaml` file.
+++
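Before applying the changes, you can optionally validate the edited manifest client-side. A dry run such as the following sketch surfaces YAML or schema errors without modifying the cluster:

```console
# Sketch: validate the edited manifest without creating any resources.
kubectl apply --dry-run=client -f aks-store-quickstart.yaml
```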
+## Deploy the updated application
+
+* Deploy the updated application using the `kubectl apply` command.
+
+ ```console
+ kubectl apply -f aks-store-quickstart.yaml
+ ```
+
+ The following example output shows the successfully updated resources:
+
+ ```output
+ deployment.apps/order-service configured
+ service/order-service unchanged
+ deployment.apps/product-service unchanged
+ service/product-service unchanged
+ deployment.apps/store-front configured
+ service/store-front unchanged
+ ```
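   You can wait for the updated pods to finish rolling out before testing, for example:

   ```console
   # Sketch: block until the order-service rollout completes or fails.
   kubectl rollout status deployment/order-service
   ```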
+
+## Test the application
+
+### Place a sample order
+
+1. Get the external IP address of the `store-front` service using the `kubectl get service` command.
+
+ ```console
+ kubectl get service store-front
+ ```
+
+2. Navigate to the external IP address of the `store-front` service in your browser.
+3. Place an order by choosing a product and selecting **Add to cart**.
+4. Select **Cart** to view your order, and then select **Checkout**.
+
+### View the order in the Azure Service Bus queue
+
+1. Navigate to the Azure portal and open the Azure Service Bus namespace you created earlier.
+2. Under **Entities**, select **Queues**, and then select the **orders** queue.
+3. In the **orders** queue, select **Service Bus Explorer**.
+4. Select **Peek from start** to view the order you submitted.
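As an alternative to the portal steps above, the queue's active message count also reflects the submitted order. The following sketch uses the Azure CLI and assumes the queue metadata exposes a `countDetails` property:

```console
# Sketch: show how many active messages are waiting in the orders queue.
az servicebus queue show \
  --name orders \
  --namespace-name $SB_NS \
  --resource-group $RG_NAME \
  --query countDetails.activeMessageCount
```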
+
+## Next steps
+
+In this tutorial, you used Azure Service Bus to update and test the sample application. You learned how to:
+
+> [!div class="checklist"]
+>
+> * Create an Azure Service Bus namespace and queue.
+> * Update the Kubernetes manifest file to use the Azure Service Bus queue.
+> * Test the updated application by placing an order.
+
+In the next tutorial, you learn how to scale an application in AKS.
+
+> [!div class="nextstepaction"]
+> [Scale applications in AKS][aks-tutorial-scale]
+
+<!-- LINKS - external -->
+
+<!-- LINKS - internal -->
+[aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+[aks-tutorial-scale]: ./tutorial-kubernetes-scale.md
+[azure-service-bus]: ../service-bus-messaging/service-bus-messaging-overview.md
+[az-servicebus-namespace-create]: /cli/azure/servicebus/namespace#az_servicebus_namespace_create
+[az-servicebus-queue-create]: /cli/azure/servicebus/queue#az_servicebus_queue_create
+[az-servicebus-queue-authorization-rule-create]: /cli/azure/servicebus/queue/authorization-rule#az_servicebus_queue_authorization_rule_create
+[az-servicebus-namespace-show]: /cli/azure/servicebus/namespace#az_servicebus_namespace_show
+[az-servicebus-queue-authorization-rule-keys-list]: /cli/azure/servicebus/queue/authorization-rule/keys#az_servicebus_queue_authorization_rule_keys_list
+[new-az-service-bus-namespace]: /powershell/module/az.servicebus/new-azservicebusnamespace
+[new-az-service-bus-queue]: /powershell/module/az.servicebus/new-azservicebusqueue
+[new-az-service-bus-authorization-rule]: /powershell/module/az.servicebus/new-azservicebusauthorizationrule
+[get-az-service-bus-namespace]: /powershell/module/az.servicebus/get-azservicebusnamespace
+[get-az-service-bus-key]: /powershell/module/az.servicebus/get-azservicebuskey
+[import-azakscredential]: /powershell/module/az.aks/import-azakscredential
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
Title: Kubernetes on Azure tutorial - Create a container registry
-description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload a sample application container image.
+ Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and build images
+description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images.
Previously updated : 02/27/2023
Last updated : 10/23/2023

#Customer intent: As a developer, I want to learn how to create and use a container registry so that I can deploy my own applications to Azure Kubernetes Service.
-# Tutorial: Deploy and use Azure Container Registry (ACR)
+# Tutorial - Create an Azure Container Registry (ACR) and build images
-Azure Container Registry (ACR) is a private registry for container images. A private container registry allows you to securely build and deploy your applications and custom code. In this tutorial, part two of seven, you deploy an ACR instance and push a container image to it. You learn how to:
+Azure Container Registry (ACR) is a private registry for container images. A private container registry allows you to securely build and deploy your applications and custom code.
+
+In this tutorial, part two of seven, you deploy an ACR instance and push a container image to it. You learn how to:
> [!div class="checklist"] >
-> * Create an ACR instance
-> * Tag a container image for ACR
-> * Upload the image to ACR
-> * View images in your registry
-
-In later tutorials, you integrate your ACR instance with a Kubernetes cluster in AKS, and deploy an application from the image.
+> * Create an ACR instance.
+> * Use [ACR Tasks][acr-tasks] to build and push container images to ACR.
+> * View images in your registry.
## Before you begin
-In the [previous tutorial][aks-tutorial-prepare-app], you created a container image for a simple Azure Voting application. If you haven't created the Azure Voting app image, return to [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
+In the [previous tutorial][aks-tutorial-prepare-app], you used Docker to create a container image for a simple Azure Store Front application. If you haven't created the Azure Store Front app image, return to [Tutorial 1 - Prepare an application for AKS][aks-tutorial-prepare-app].
### [Azure CLI](#tab/azure-cli)
-This tutorial requires that you're running the Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This tutorial requires Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### [Azure PowerShell](#tab/azure-powershell)
-This tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
## Create an Azure Container Registry
-Before creating an ACR, you need a resource group. An Azure resource group is a logical container into which you deploy and manage Azure resources.
+Before creating an ACR instance, you need a resource group. An Azure resource group is a logical container into which you deploy and manage Azure resources.
### [Azure CLI](#tab/azure-cli)
-1. Create a resource group with the [`az group create`][az-group-create] command.
+1. Create a resource group using the [`az group create`][az-group-create] command.
-```azurecli
-az group create --name myResourceGroup --location eastus
-```
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location eastus
+ ```
-2. Create an ACR instance with the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the rest of this tutorial, `<acrName>` is used as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+2. Create an ACR instance using the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses `<acrName>` as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
-```azurecli
-az acr create --resource-group myResourceGroup --name <acrName> --sku Basic
-```
+ ```azurecli-interactive
+ az acr create --resource-group myResourceGroup --name <acrName> --sku Basic
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-1. Create a resource group with the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
+1. Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
-```azurepowershell
-New-AzResourceGroup -Name myResourceGroup -Location eastus
-```
+ ```azurepowershell-interactive
+ New-AzResourceGroup -Name myResourceGroup -Location eastus
+ ```
-2. Create an ACR instance with the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure, and contain 5-50 alphanumeric characters. In the rest of this tutorial, `<acrName>` is used as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses `<acrName>` as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
-```azurepowershell
-New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrname> -Sku Basic
-```
+ ```azurepowershell-interactive
+ New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName> -Location eastus -Sku Basic
+ ```
-## Log in to the container registry
-
-### [Azure CLI](#tab/azure-cli)
-
-Log in to your ACR using the [`az acr login`][az-acr-login] command and provide the unique name given to the container registry in the previous step.
-
-```azurecli
-az acr login --name <acrName>
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Log in to your ACR using the [`Connect-AzContainerRegistry`][connect-azcontainerregistry] cmdlet and provide the unique name given to the container registry in the previous step.
-
-```azurepowershell
-Connect-AzContainerRegistry -Name <acrName>
-```
--
+## Build and push container images to registry
-The command returns a *Login Succeeded* message once completed.
+* Build and push the images to your ACR using the [`az acr build`][az-acr-build] command.
-## Tag a container image
-
-To see a list of your current local images, use the [`docker images`][docker-images] command.
-
-```console
-docker images
-```
-
-The following example output shows a list of the current local Docker images:
-
-```output
-REPOSITORY TAG IMAGE ID CREATED SIZE
-mcr.microsoft.com/azuredocs/azure-vote-front v1 84b41c268ad9 7 minutes ago 944MB
-mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 days ago 103MB
-```
-
-To use the *azure-vote-front* container image with ACR, you need to tag the image with the login server address of your registry. The tag is used for routing when pushing container images to an image registry.
-
-### [Azure CLI](#tab/azure-cli)
-
-To get the login server address, use the [`az acr list`][az-acr-list] command and query for the *loginServer*.
-
-```azurecli
-az acr list --resource-group myResourceGroup --query "[].{acrLoginServer:loginServer}" --output table
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
+ > [!NOTE]
+ > In the following example, we don't build the `rabbitmq` image. This image is available from the Docker Hub public repository and doesn't need to be built or pushed to your ACR instance.
-To get the login server address, use the [`Get-AzContainerRegistry`][get-azcontainerregistry] cmdlet and query for the *loginServer*.
-
-```azurepowershell
-(Get-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName>).LoginServer
-```
---
-Then, tag your local *azure-vote-front* image with the *acrLoginServer* address of the container registry. To indicate the image version, add *:v1* to the end of the image name:
-
-```console
-docker tag mcr.microsoft.com/azuredocs/azure-vote-front:v1 <acrLoginServer>/azure-vote-front:v1
-```
-
-To verify the tags are applied, run [`docker images`][docker-images] again.
-
-```console
-docker images
-```
-
-The following example output shows an image tagged with the ACR instance address and a version number:
-
-```console
-REPOSITORY TAG IMAGE ID CREATED SIZE
-mcr.microsoft.com/azuredocs/azure-vote-front v1 84b41c268ad9 16 minutes ago 944MB
-mycontainerregistry.azurecr.io/azure-vote-front v1 84b41c268ad9 16 minutes ago 944MB
-mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 days ago 103MB
-```
-
-## Push images to registry
-
-Push the *azure-vote-front* image to your ACR instance using the [`docker push`][docker-push] command. Make sure to provide your own *acrLoginServer* address for the image name.
-
-```console
-docker push <acrLoginServer>/azure-vote-front:v1
-```
-
-It may take a few minutes to complete the image push to ACR.
+ ```azurecli-interactive
+ az acr build --registry <acrName> --image aks-store-demo/product-service:latest ./src/product-service/
+ az acr build --registry <acrName> --image aks-store-demo/order-service:latest ./src/order-service/
+ az acr build --registry <acrName> --image aks-store-demo/store-front:latest ./src/store-front/
+ ```
## List images in registry

### [Azure CLI](#tab/azure-cli)
-To return a list of images that have been pushed to your ACR instance, use the [`az acr repository list`][az-acr-repository-list] command, providing your own `<acrName>`.
-
-```azurecli
-az acr repository list --name <acrName> --output table
-```
-
-The following example output lists the *azure-vote-front* image as available in the registry:
-
-```output
-Result
--
-azure-vote-front
-```
-
-To see the tags for a specific image, use the [`az acr repository show-tags`][az-acr-repository-show-tags] command.
+* View the images in your ACR instance using the [`az acr repository list`][az-acr-repository-list] command.
-```azurecli
-az acr repository show-tags --name <acrName> --repository azure-vote-front --output table
-```
+ ```azurecli-interactive
+ az acr repository list --name <acrName> --output table
+ ```
-The following example output shows the *v1* image tagged in a previous step:
+ The following example output lists the available images in your registry:
-```output
-Result
-v1
-```
+ ```output
+ Result
+ -
+ aks-store-demo/product-service
+ aks-store-demo/order-service
+ aks-store-demo/store-front
+ ```
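   To drill into the tags pushed for one of these repositories, you can use the `az acr repository show-tags` command, for example:

   ```console
   # Sketch: list the tags for the product-service repository (expects "latest").
   az acr repository show-tags --name <acrName> --repository aks-store-demo/product-service --output table
   ```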
### [Azure PowerShell](#tab/azure-powershell)
-To return a list of images that have been pushed to your ACR instance, use the [`Get-AzContainerRegistryManifest`][get-azcontainerregistrymanifest] cmdlet, providing your own `<acrName>`.
+* View the images in your ACR instance using the [`Get-AzContainerRegistryRepository`][get-azcontainerregistryrepository] cmdlet.
-```azurepowershell
-Get-AzContainerRegistryManifest -RegistryName <acrName> -RepositoryName azure-vote-front
-```
+ ```azurepowershell-interactive
+ Get-AzContainerRegistryRepository -RegistryName <acrName>
+ ```
-The following example output lists the *azure-vote-front* image as available in the registry:
+ The following example output lists the available images in your registry:
-```output
-Registry ImageName ManifestsAttributes
-
-<acrName> azure-vote-front {Microsoft.Azure.Commands.ContainerRegistry.Models.PSManifestAttributeBase}
-```
-
-To see the tags for a specific image, use the [`Get-AzContainerRegistryTag`][get-azcontainerregistrytag] cmdlet as follows:
-
-```azurepowershell
-Get-AzContainerRegistryTag -RegistryName <acrName> -RepositoryName azure-vote-front
-```
-
-The following example output shows the *v1* image tagged in a previous step:
-
-```output
-Registry ImageName Tags
-
-<acrName> azure-vote-front {v1}
-```
+ ```output
+ aks-store-demo/product-service
+ aks-store-demo/order-service
+ aks-store-demo/store-front
+ ```
## Next steps
-In this tutorial, you created an ACR and pushed an image to use in an AKS cluster. You learned how to:
+In this tutorial, you created an ACR and pushed images to it to use in an AKS cluster. You learned how to:
> [!div class="checklist"] >
-> * Create an ACR instance
-> * Tag a container image for ACR
-> * Upload the image to ACR
-> * View images in your registry
+> * Create an ACR instance.
+> * Use [ACR Tasks][acr-tasks] to build and push container images to ACR.
+> * View images in your registry.
-In the next tutorial, you'll learn how to deploy a Kubernetes cluster in Azure.
+In the next tutorial, you learn how to deploy a Kubernetes cluster in Azure.
> [!div class="nextstepaction"] > [Deploy Kubernetes cluster][aks-tutorial-deploy-cluster]
-<!-- LINKS - external -->
-[docker-images]: https://docs.docker.com/engine/reference/commandline/images/
-[docker-push]: https://docs.docker.com/engine/reference/commandline/push/
<!-- LINKS - internal -->
[az-acr-create]: /cli/azure/acr#az_acr_create
-[az-acr-list]: /cli/azure/acr#az_acr_list
-[az-acr-login]: /cli/azure/acr#az_acr_login
[az-acr-repository-list]: /cli/azure/acr/repository#az_acr_repository_list
-[az-acr-repository-show-tags]: /cli/azure/acr/repository#az_acr_repository_show_tags
[az-group-create]: /cli/azure/group#az_group_create
[azure-cli-install]: /cli/azure/install-azure-cli
[aks-tutorial-deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
In the next tutorial, you'll learn how to deploy a Kubernetes cluster in Azure.
[azure-powershell-install]: /powershell/azure/install-az-ps
[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
[new-azcontainerregistry]: /powershell/module/az.containerregistry/new-azcontainerregistry
-[connect-azcontainerregistry]: /powershell/module/az.containerregistry/connect-azcontainerregistry
-[get-azcontainerregistry]: /powershell/module/az.containerregistry/get-azcontainerregistry
-[get-azcontainerregistrymanifest]: /powershell/module/az.containerregistry/get-azcontainerregistrymanifest
-[get-azcontainerregistrytag]: /powershell/module/az.containerregistry/get-azcontainerregistrytag
+[get-azcontainerregistryrepository]: /powershell/module/az.containerregistry/get-azcontainerregistryrepository
+[acr-tasks]: ../container-registry/container-registry-tasks-overview.md
+[az-acr-build]: /cli/azure/acr#az_acr_build
aks Tutorial Kubernetes Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-app.md
Title: Kubernetes on Azure tutorial - Prepare an application
+ Title: Kubernetes on Azure tutorial - Prepare an application for Azure Kubernetes Service (AKS)
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to prepare and build a multi-container app with Docker Compose that you can then deploy to AKS.
Previously updated : 12/06/2022
Last updated : 10/23/2023

#Customer intent: As a developer, I want to learn how to build a container-based application so that I can deploy the app to Azure Kubernetes Service.
-# Tutorial: Prepare an application for Azure Kubernetes Service (AKS)
+# Tutorial - Prepare an application for Azure Kubernetes Service (AKS)
In this tutorial, part one of seven, you prepare a multi-container application to use in Kubernetes. You use existing development tools like Docker Compose to locally build and test the application. You learn how to: > [!div class="checklist"] >
-> * Clone a sample application source from GitHub
-> * Create a container image from the sample application source
-> * Test the multi-container application in a local Docker environment
+> * Clone a sample application source from GitHub.
+> * Create a container image from the sample application source.
+> * Test the multi-container application in a local Docker environment.
Once completed, the following application runs in your local development environment:

In later tutorials, you upload the container image to an Azure Container Registry (ACR), and then deploy it into an AKS cluster.
This tutorial assumes a basic understanding of core Docker concepts such as cont
To complete this tutorial, you need a local Docker development environment running Linux containers. Docker provides packages that configure Docker on a [Mac][docker-for-mac], [Windows][docker-for-windows], or [Linux][docker-for-linux] system.

> [!NOTE]
-> Azure Cloud Shell does not include the Docker components required to complete every step in these tutorials. Therefore, we recommend using a full Docker development environment.
+> Azure Cloud Shell doesn't include the Docker components required to complete every step in these tutorials. Therefore, we recommend using a full Docker development environment.
## Get application code
-The [sample application][sample-application] used in this tutorial is a basic voting app consisting of a front-end web component and a back-end Redis instance. The web component is packaged into a custom container image. The Redis instance uses an unmodified image from Docker Hub.
-
-Use [git][] to clone the sample application to your development environment.
-
-```console
-git clone https://github.com/Azure-Samples/azure-voting-app-redis.git
+The [sample application][sample-application] used in this tutorial is a basic store front app including the following Kubernetes deployments and services:
++
+* **Store front**: Web application for customers to view products and place orders.
+* **Product service**: Shows product information.
+* **Order service**: Places orders.
+* **Rabbit MQ**: Message queue for submitted orders.
+
+1. Use [git][] to clone the sample application to your development environment.
+
+ ```console
+ git clone https://github.com/Azure-Samples/aks-store-demo.git
+ ```
+
+2. Change into the cloned directory.
+
+ ```console
+ cd aks-store-demo
+ ```
+
+## Review Docker Compose file
+
+The sample application you create in this tutorial uses the [*docker-compose-quickstart* YAML file](https://github.com/Azure-Samples/aks-store-demo/blob/main/docker-compose-quickstart.yml) in the [repository](https://github.com/Azure-Samples/aks-store-demo/tree/main) you cloned in the previous step.
+
+```yaml
+version: "3.7"
+
+services:
+ rabbitmq:
+ image: rabbitmq:3.11.17-management-alpine
+ container_name: 'rabbitmq'
+ restart: always
+ environment:
+ - "RABBITMQ_DEFAULT_USER=username"
+ - "RABBITMQ_DEFAULT_PASS=password"
+ ports:
+ - 15672:15672
+ - 5672:5672
+ healthcheck:
+ test: ["CMD", "rabbitmqctl", "status"]
+ interval: 30s
+ timeout: 10s
+ retries: 5
+ volumes:
+ - ./rabbitmq_enabled_plugins:/etc/rabbitmq/enabled_plugins
+ networks:
+ - backend_services
+ orderservice:
+ build: src/order-service
+ container_name: 'orderservice'
+ restart: always
+ ports:
+ - 3000:3000
+ healthcheck:
+ test: ["CMD", "wget", "-O", "", "-q", "http://orderservice:3000/health"]
+ interval: 30s
+ timeout: 10s
+ retries: 5
+ environment:
+ - ORDER_QUEUE_HOSTNAME=rabbitmq
+ - ORDER_QUEUE_PORT=5672
+ - ORDER_QUEUE_USERNAME=username
+ - ORDER_QUEUE_PASSWORD=password
+ - ORDER_QUEUE_NAME=orders
+ - ORDER_QUEUE_RECONNECT_LIMIT=3
+ networks:
+ - backend_services
+ depends_on:
+ rabbitmq:
+ condition: service_healthy
+ productservice:
+ build: src/product-service
+ container_name: 'productservice'
+ restart: always
+ ports:
+ - 3002:3002
+ healthcheck:
+ test: ["CMD", "wget", "-O", "", "-q", "http://productservice:3002/health"]
+ interval: 30s
+ timeout: 10s
+ retries: 5
+ networks:
+ - backend_services
+ storefront:
+ build: src/store-front
+ container_name: 'storefront'
+ restart: always
+ ports:
+ - 8080:8080
+ healthcheck:
+ test: ["CMD", "wget", "-O", "", "-q", "http://storefront:80/health"]
+ interval: 30s
+ timeout: 10s
+ retries: 5
+ environment:
+ - VUE_APP_PRODUCT_SERVICE_URL=http://productservice:3002/
+ - VUE_APP_ORDER_SERVICE_URL=http://orderservice:3000/
+ networks:
+ - backend_services
+ depends_on:
+ - productservice
+ - orderservice
+networks:
+ backend_services:
+ driver: bridge
```
-Change into the cloned directory.
+## Create container images and run application
-```console
-cd azure-voting-app-redis
-```
+You can use [Docker Compose][docker-compose] to automate building container images and the deployment of multi-container applications.
-The directory contains the application source code, a pre-created Docker compose file, and a Kubernetes manifest file. These files are used throughout the tutorial set. The contents and structure of the directory are as follows:
-
-```output
-azure-voting-app-redis
-│   azure-vote-all-in-one-redis.yaml
-│   docker-compose.yaml
-│   LICENSE
-│   README.md
-│
-├───azure-vote
-│   │   app_init.supervisord.conf
-│   │   Dockerfile
-│   │   Dockerfile-for-app-service
-│   │   sshd_config
-│   │
-│   └───azure-vote
-│       │   config_file.cfg
-│       │   main.py
-│       │
-│       ├───static
-│       │       default.css
-│       │
-│       └───templates
-│               index.html
-│
-└───jenkins-tutorial
-        config-jenkins.sh
-        deploy-jenkins-vm.sh
-```
+1. Create the container images, download the RabbitMQ image, and start the application using the `docker compose` command.
-## Create container images
+ ```console
+ docker compose -f docker-compose-quickstart.yml up -d
+ ```
-[Docker Compose][docker-compose] can be used to automate building container images and the deployment of multi-container applications.
+2. View the created images using the [`docker images`][docker-images] command.
-The following command uses the sample `docker-compose.yaml` file to create the container image, download the Redis image, and start the application.
+ ```console
+ docker images
+ ```
-```console
-docker compose up -d
-```
+ The following condensed example output shows the created images:
-When completed, use the [`docker images`][docker-images] command to see the created images. Two images are downloaded or created. The *azure-vote-front* image contains the front-end application. The *redis* image is used to start a Redis instance.
+ ```output
+ REPOSITORY TAG IMAGE ID
+ aks-store-demo-productservice latest 2b66a7e91eca
+ aks-store-demo-orderservice latest 54ad5de546f9
+ aks-store-demo-storefront latest d9e3ac46a225
+ rabbitmq 3.11.17-management-alpine 79a570297657
+ ...
+ ```
-```
-$ docker images
-REPOSITORY TAG IMAGE ID CREATED SIZE
-mcr.microsoft.com/oss/bitnami/redis 6.0.8 3a54a920bb6c 2 years ago 103MB
-mcr.microsoft.com/azuredocs/azure-vote-front v1 4d4d08c25677 5 years ago 935MB
-```
+3. View the running containers using the [`docker ps`][docker-ps] command.
-Run the [`docker ps`][docker-ps] command to see the running containers.
+ ```console
+ docker ps
+ ```
-```
-$ docker ps
+ The following condensed example output shows four running containers:
-CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
-d10e5244f237 mcr.microsoft.com/azuredocs/azure-vote-front:v1 "/entrypoint.sh /sta…" 3 minutes ago Up 3 minutes 443/tcp, 0.0.0.0:8080->80/tcp azure-vote-front
-21574cb38c1f mcr.microsoft.com/oss/bitnami/redis:6.0.8 "/opt/bitnami/script…" 3 minutes ago Up 3 minutes 0.0.0.0:6379->6379/tcp azure-vote-back
-```
+ ```output
+ CONTAINER ID IMAGE
+ 21574cb38c1f aks-store-demo-productservice
+ c30a5ed8d86a aks-store-demo-orderservice
+ d10e5244f237 aks-store-demo-storefront
+ 94e00b50b86a rabbitmq:3.11.17-management-alpine
+ ```
## Test application locally

To see your running application, navigate to `http://localhost:8080` in a local web browser. The sample application loads, as shown in the following example:
-## Clean up resources
+On this page, you can view products, add them to your cart, and then place an order.
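+If you prefer the command line, you can also probe the endpoint directly. A minimal sketch using `curl`, assuming the compose file maps the store front to local port 8080 as above (a `200` status code means the app is serving traffic):
+
+```console
+# Print only the HTTP status code returned by the store front
+curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080
+```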
-Now that the application's functionality has been validated, the running containers can be stopped and removed. ***Do not delete the container images*** - in the next tutorial, you'll upload the *azure-vote-front* image to an ACR instance.
+## Clean up resources
-To stop and remove the container instances and resources, use the [`docker-compose down`][docker-compose-down] command.
+Since you validated the application's functionality, you can stop and remove the running containers. ***Do not delete the container images*** - you use them in the next tutorial.
-```console
-docker compose down
-```
+* Stop and remove the container instances and resources using the [`docker-compose down`][docker-compose-down] command.
-When the local application has been removed, you have a Docker image that contains the Azure Vote application, *azure-vote-front*, to use in the next tutorial.
+ ```console
+ docker compose down
+ ```
## Next steps In this tutorial, you created a sample application, created container images for the application, and then tested the application. You learned how to: > [!div class="checklist"]
+> * Clone a sample application source from GitHub.
+> * Create a container image from the sample application source.
+> * Test the multi-container application in a local Docker environment.
-> * Clone a sample application source from GitHub
-> * Create a container image from the sample application source
-> * Test the multi-container application in a local Docker environment
-
-In the next tutorial, you'll learn how to store container images in an ACR.
+In the next tutorial, you learn how to store container images in an ACR.
> [!div class="nextstepaction"] > [Push images to Azure Container Registry][aks-tutorial-prepare-acr]
[docker-ps]: https://docs.docker.com/engine/reference/commandline/ps/ [docker-compose-down]: https://docs.docker.com/compose/reference/down [git]: https://git-scm.com/downloads
-[sample-application]: https://github.com/Azure-Samples/azure-voting-app-redis
+[sample-application]: https://github.com/Azure-Samples/aks-store-demo
<!-- LINKS - internal -->
-[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md
+[aks-tutorial-prepare-acr]: ./tutorial-kubernetes-prepare-acr.md
aks Tutorial Kubernetes Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md
Title: Kubernetes on Azure tutorial - Scale application
+ Title: Kubernetes on Azure tutorial - Scale applications in Azure Kubernetes Service (AKS)
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to scale nodes and pods and implement horizontal pod autoscaling. Previously updated : 05/03/2023 Last updated : 10/23/2023 #Customer intent: As a developer or IT pro, I want to learn how to scale my applications in an Azure Kubernetes Service (AKS) cluster so I can provide high availability or respond to customer demand and application load.
-# Tutorial: Scale applications in Azure Kubernetes Service (AKS)
+# Tutorial - Scale applications in Azure Kubernetes Service (AKS)
-If you followed the previous tutorials, you have a working Kubernetes cluster and you deployed the sample Azure Voting app. In this tutorial, part five of seven, you scale out the pods in the app and try pod autoscaling. You also learn how to scale the number of Azure VM nodes to change the cluster's capacity for hosting workloads. You learn how to:
+If you followed the previous tutorials, you have a working Kubernetes cluster and Azure Store Front app.
+
+In this tutorial, part six of seven, you scale out the pods in the app, try pod autoscaling, and scale the number of Azure VM nodes to change the cluster's capacity for hosting workloads. You learn how to:
> [!div class="checklist"] > > * Scale the Kubernetes nodes. > * Manually scale Kubernetes pods that run your application.
-> * Configure autoscaling pods that run the app front-end.
-
-In the upcoming tutorials, you update the Azure Vote application to a new version.
+> * Configure autoscaling pods that run the app front end.
## Before you begin
-In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, created an AKS cluster, and deployed the application to the AKS cluster.
-
-If you haven't completed these steps and would like to follow along with this tutorial, start with the first tutorial, [Prepare an application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, created an AKS cluster, deployed an application, and used Azure Service Bus to redeploy an updated application. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
### [Azure CLI](#tab/azure-cli)
-This tutorial requires Azure CLI version 2.0.53 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+This tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
### [Azure PowerShell](#tab/azure-powershell)
This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-Install
## Manually scale pods
-When you deployed the Azure Vote front end and Redis instance in the previous tutorials, a single replica was created.
-
-1. See the number and state of pods in your cluster using the [`kubectl get`][kubectl-get] command.
+1. View the pods in your cluster using the [`kubectl get`][kubectl-get] command.
```console kubectl get pods ```
- The following example output shows one front-end pod and one back-end pod:
+ The following example output shows the pods running the Azure Store Front app:
```output
- NAME READY STATUS RESTARTS AGE
- azure-vote-back-2549686872-4d2r5 1/1 Running 0 31m
- azure-vote-front-848767080-tf34m 1/1 Running 0 31m
+ NAME READY STATUS RESTARTS AGE
+ order-service-848767080-tf34m 1/1 Running 0 31m
+ product-service-4019737227-2q2qz 1/1 Running 0 31m
+ store-front-2606967446-2q2qz 1/1 Running 0 31m
```
-2. Manually change the number of pods in the *azure-vote-front* deployment using the [`kubectl scale`][kubectl-scale] command. The following example command increases the number of front-end pods to five:
+2. Manually change the number of pods in the *store-front* deployment using the [`kubectl scale`][kubectl-scale] command.
```console
- kubectl scale --replicas=5 deployment/azure-vote-front
+ kubectl scale --replicas=5 deployment.apps/store-front
``` 3. Verify the additional pods were created using the [`kubectl get pods`][kubectl-get] command. ```console kubectl get pods
- READY STATUS RESTARTS AGE
- azure-vote-back-2606967446-nmpcf 1/1 Running 0 15m
- azure-vote-front-3309479140-2hfh0 1/1 Running 0 3m
- azure-vote-front-3309479140-bzt05 1/1 Running 0 3m
- azure-vote-front-3309479140-fvcvm 1/1 Running 0 3m
- azure-vote-front-3309479140-hrbf2 1/1 Running 0 15m
- azure-vote-front-3309479140-qphz8 1/1 Running 0 3m
```
-## Autoscale pods
-
-### [Azure CLI](#tab/azure-cli)
+ The following example output shows the additional pods running the Azure Store Front app:
-Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] is automatically deployed into AKS clusters with versions 1.10 and higher and provides resource utilization to Kubernetes.
-
-* Check the version of your AKS cluster using the [`az aks show`][az-aks-show] command.
-
- ```azurecli
- az aks show --resource-group myResourceGroup --name myAKSCluster --query kubernetesVersion --output table
- ```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the number of pods in a deployment depending on CPU utilization or other select metrics. The [Metrics Server][metrics-server] is automatically deployed into AKS clusters with versions 1.10 and higher and provides resource utilization to Kubernetes.
-
-* Check the version of your AKS cluster using the [`Get-AzAksCluster`][get-azakscluster] cmdlet.
-
- ```azurepowershell
- (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).KubernetesVersion
+ ```output
+ READY STATUS RESTARTS AGE
+ store-front-2606967446-2q2qzc 1/1 Running 0 15m
+ store-front-3309479140-2hfh0 1/1 Running 0 3m
+ store-front-3309479140-bzt05 1/1 Running 0 3m
+ store-front-3309479140-fvcvm 1/1 Running 0 3m
+ store-front-3309479140-hrbf2 1/1 Running 0 15m
+ store-front-3309479140-qphz8 1/1 Running 0 3m
``` --
-> [!NOTE]
-> If your AKS cluster is on a version lower than *1.10*, the Metrics Server isn't automatically installed. Metrics Server installation manifests are available as a `components.yaml` asset on Metrics Server releases, which means you can install them via a URL. To learn more about these YAML definitions, see the [Deployment][metrics-server-github] section of the readme.
->
-> **Example installation**:
->
-> ```console
-> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
-> ```
+## Autoscale pods
-To use the autoscaler, all containers and pods must have defined CPU requests and limits. In the `azure-vote-front` deployment, the *front-end* container requests 0.25 CPU with a limit of 0.5 CPU.
+To use the horizontal pod autoscaler, all containers and pods must have defined CPU requests and limits. In the `aks-store-quickstart` deployment, the *front-end* container requests 1m CPU with a limit of 1000m CPU.
These resource requests and limits are defined for each container, as shown in the following condensed example YAML: ```yaml
+...
containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
+ - name: store-front
+ image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
ports:
- - containerPort: 80
+ - containerPort: 8080
+ name: store-front
+...
resources: requests:
- cpu: 250m
+ cpu: 1m
+...
limits:
- cpu: 500m
+ cpu: 1000m
+...
```
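+Before relying on the autoscaler, it's worth confirming that resource metrics are flowing. A minimal sketch using the standard `kubectl top` command, which depends on the Metrics Server that AKS deploys by default:
+
+```console
+# Show current CPU and memory usage per pod in the current namespace
+kubectl top pods
+```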
-### Autoscale pods using `kubectl autoscale`
-
-* Autoscale pods using the [`kubectl autoscale`][kubectl-autoscale] command. The following command autoscales the number of pods in the *azure-vote-front* deployment with the following conditions: if average CPU utilization across all pods exceeds 50% of the requested usage, the autoscaler increases the pods up to a maximum of 10 instances and a minimum of three instances for the deployment:
-
- ```console
- kubectl autoscale deployment azure-vote-front --cpu-percent=50 --min=3 --max=10
- ```
- ### Autoscale pods using a manifest file
-1. Create a manifest file to define the autoscaler behavior and resource limits, as shown in the following example manifest file `azure-vote-hpa.yaml`:
-
- > [!NOTE]
- > If you're using `apiVersion: autoscaling/v2`, you can introduce more metrics when autoscaling, including custom metrics. For more information, see [Autoscale multiple metrics and custom metrics using `v2` of the `HorizontalPodAutoscaler`](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics).
+1. Create a manifest file to define the autoscaler behavior and resource limits, as shown in the following condensed example manifest file `aks-store-quickstart-hpa.yaml`:
```yaml apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata:
- name: azure-vote-back-hpa
+ name: store-front-hpa
spec: maxReplicas: 10 # define max replica count minReplicas: 3 # define min replica count scaleTargetRef: apiVersion: apps/v1 kind: Deployment
- name: azure-vote-back
- targetCPUUtilizationPercentage: 50 # target CPU utilization
-
-
-
- apiVersion: autoscaling/v1
- kind: HorizontalPodAutoscaler
- metadata:
- name: azure-vote-front-hpa
- spec:
- maxReplicas: 10 # define max replica count
- minReplicas: 3 # define min replica count
- scaleTargetRef:
- apiVersion: apps/v1
- kind: Deployment
- name: azure-vote-front
+ name: store-front
targetCPUUtilizationPercentage: 50 # target CPU utilization ``` 2. Apply the autoscaler manifest file using the `kubectl apply` command. ```console
- kubectl apply -f azure-vote-hpa.yaml
+ kubectl apply -f aks-store-quickstart-hpa.yaml
``` 3. Check the status of the autoscaler using the `kubectl get hpa` command. ```console kubectl get hpa
- # Example output
- NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
- azure-vote-front Deployment/azure-vote-front 0% / 50% 3 10 3 2m
```
- After a few minutes, with minimal load on the Azure Vote app, the number of pod replicas decreases to three. You can use `kubectl get pods` again to see the unneeded pods being removed.
+ After a few minutes, with minimal load on the Azure Store Front app, the number of pod replicas decreases to three. You can use `kubectl get pods` again to see the unneeded pods being removed. To drive scaling in the other direction, see the load-generation sketch after this list.
+
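+To watch the autoscaler scale up, you can generate artificial load against the front end. The following is a minimal sketch, assuming the `store-front` service is reachable in-cluster on port 80; adjust the service name and port to match your deployment:
+
+```console
+# Run a throwaway busybox pod that requests the store-front service in a loop;
+# the pod is removed automatically when you press Ctrl-C because of --rm
+kubectl run load-generator --rm -it --image=busybox:1.36 --restart=Never -- \
+  /bin/sh -c "while true; do wget -q -O- http://store-front > /dev/null; done"
+
+# In a second terminal, watch the replica count react
+kubectl get hpa store-front-hpa --watch
+```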
+> [!NOTE]
+> You can enable the Kubernetes-based Event-Driven Autoscaler (KEDA) AKS add-on to your cluster to drive scaling based on the number of events needing to be processed. For more information, see [Enable simplified application autoscaling with the Kubernetes Event-Driven Autoscaling (KEDA) add-on (Preview)][keda-addon].
## Manually scale AKS nodes
The following example increases the number of nodes to three in the Kubernetes c
* Scale your cluster nodes using the [`az aks scale`][az-aks-scale] command.
- ```azurecli
+ ```azurecli-interactive
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3 ```
The following example increases the number of nodes to three in the Kubernetes c
### [Azure PowerShell](#tab/azure-powershell)
-* Scale your cluster nodes using the [`Get-AzAksCluster`][get-azakscluster] and [`Set-AzAksCluster`][set-azakscluster] commands.
+* Scale your cluster nodes using the [`Get-AzAksCluster`][get-azakscluster] and [`Set-AzAksCluster`][set-azakscluster] cmdlets.
- ```azurepowershell
+ ```azurepowershell-interactive
Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster | Set-AzAksCluster -NodeCount 3 ```
The following example increases the number of nodes to three in the Kubernetes c
+You can also autoscale the nodes in your cluster. For more information, see [Use the cluster autoscaler with node pools](./cluster-autoscaler.md#use-the-cluster-autoscaler-with-node-pools).
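+As an illustrative sketch (the node counts below are example values, not recommendations), enabling the cluster autoscaler on an existing cluster looks like this:
+
+```console
+# Let AKS scale the node pool between 1 and 3 nodes based on pending pods
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --enable-cluster-autoscaler \
+  --min-count 1 \
+  --max-count 3
+```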
+ ## Next steps In this tutorial, you used different scaling features in your Kubernetes cluster. You learned how to:
In this tutorial, you used different scaling features in your Kubernetes cluster
> * Configure autoscaling pods that run the app front end. > * Manually scale the Kubernetes nodes.
-In the next tutorial, you learn how to update applications in Kubernetes.
+In the next tutorial, you learn how to upgrade Kubernetes in your AKS cluster.
> [!div class="nextstepaction"]
-> [Update an application in Kubernetes][aks-tutorial-update-app]
+> [Upgrade Kubernetes in Azure Kubernetes Service][aks-tutorial-upgrade-kubernetes]
<!-- LINKS - external -->
-[kubectl-autoscale]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#autoscale
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-scale]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#scale
-[kubernetes-hpa]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
-[kubernetes-hpa-walkthrough]: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
-[metrics-server-github]: https://github.com/kubernetes-sigs/metrics-server/blob/master/README.md#deployment
-[metrics-server]: https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/#metrics-server
<!-- LINKS - internal --> [aks-tutorial-prepare-app]: ./tutorial-kubernetes-prepare-app.md
-[aks-tutorial-update-app]: ./tutorial-kubernetes-app-update.md
[az-aks-scale]: /cli/azure/aks#az_aks_scale [azure-cli-install]: /cli/azure/install-azure-cli
-[az-aks-show]: /cli/azure/aks#az_aks_show
[azure-powershell-install]: /powershell/azure/install-az-ps [get-azakscluster]: /powershell/module/az.aks/get-azakscluster [set-azakscluster]: /powershell/module/az.aks/set-azakscluster
+[aks-tutorial-upgrade-kubernetes]: ./tutorial-kubernetes-upgrade-cluster.md
+[keda-addon]: ./keda-about.md
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade a cluster
+ Title: Kubernetes on Azure tutorial - Upgrade an Azure Kubernetes Service (AKS) cluster
description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 05/04/2023 Last updated : 10/23/2023 #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features.
-# Tutorial: Upgrade Kubernetes in Azure Kubernetes Service (AKS)
+# Tutorial - Upgrade an Azure Kubernetes Service (AKS) cluster
As part of the application and cluster lifecycle, you might want to upgrade to the latest available version of Kubernetes. You can upgrade your Azure Kubernetes Service (AKS) cluster using the Azure CLI, Azure PowerShell, or the Azure portal.
-In this tutorial, part seven of seven, you learn how to:
+In this tutorial, part seven of seven, you upgrade an AKS cluster. You learn how to:
> [!div class="checklist"] >
In this tutorial, part seven of seven, you learn how to:
## Before you begin
-In previous tutorials, you packaged an application into a container image and uploaded the container image to Azure Container Registry (ACR). You also created an AKS cluster and deployed an application to it. If you haven't completed these steps and want to follow along with this tutorial, start with [Tutorial 1: Prepare an application for AKS][aks-tutorial-prepare-app].
+In previous tutorials, you packaged an application into a container image and uploaded the container image to Azure Container Registry (ACR). You also created an AKS cluster and deployed an application to it. If you haven't completed these steps and want to follow along, start with [Tutorial 1 - Prepare application for AKS][aks-tutorial-prepare-app].
-* If you're using Azure CLI, this tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
-* If you're using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+If using Azure CLI, this tutorial requires Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+
+If using Azure PowerShell, this tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
## Get available cluster versions
In previous tutorials, you packaged an application into a container image and up
* Before you upgrade, check which Kubernetes releases are available for your cluster using the [`az aks get-upgrades`][az-aks-get-upgrades] command.
- ```azurecli
+ ```azurecli-interactive
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
- The following example output shows the current version as *1.18.10* and lists the available versions under *upgrades*.
+ The following example output shows the current version as *1.26.6* and lists the available versions under `upgrades`:
```output { "agentPoolProfiles": null, "controlPlaneProfile": {
- "kubernetesVersion": "1.18.10",
+ "kubernetesVersion": "1.26.6",
... "upgrades": [ { "isPreview": null,
- "kubernetesVersion": "1.19.1"
+ "kubernetesVersion": "1.27.1"
}, { "isPreview": null,
- "kubernetesVersion": "1.19.3"
+ "kubernetesVersion": "1.27.3"
} ] },
In previous tutorials, you packaged an application into a container image and up
1. Before you upgrade, check which Kubernetes releases are available for your cluster and the region where your cluster resides using the [`Get-AzAksCluster`][get-azakscluster] cmdlet.
- ```azurepowershell
+ ```azurepowershell-interactive
Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster | Select-Object -Property Name, KubernetesVersion, Location ```
- The following example output shows the current version as *1.19.9* and the location as *eastus*.
+ The following example output shows the current version as *1.26.6* and the location as *eastus*:
```output Name KubernetesVersion Location - -- --
- myAKSCluster 1.19.9 eastus
+ myAKSCluster 1.26.6 eastus
``` 2. Check which Kubernetes upgrade releases are available in the region where your cluster resides using the [`Get-AzAksVersion`][get-azaksversion] cmdlet.
- ```azurepowershell
- Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion -gt 1.19.9
+ ```azurepowershell-interactive
+ Get-AzAksVersion -Location eastus | Where-Object OrchestratorVersion
```
- The following example output shows the available versions under *OrchestratorVersion*.
+ The following example output shows the available versions under `OrchestratorVersion`:
```output Default IsPreview OrchestratorType OrchestratorVersion - - -
- Kubernetes 1.20.2
- Kubernetes 1.20.5
+ Kubernetes 1.27.1
+ Kubernetes 1.27.3
``` ### [Azure portal](#tab/azure-portal)
-Check which Kubernetes releases are available for your cluster using the following steps:
- 1. Sign in to the [Azure portal](https://portal.azure.com). 2. Navigate to your AKS cluster. 3. Under **Settings**, select **Cluster configuration**.
-4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+4. In **Kubernetes version**, select **Upgrade version**. This redirects you to a new page.
5. In **Kubernetes version**, select the version to check for available upgrades. If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
-## Upgrade a cluster
+## Upgrade an AKS cluster
AKS nodes are carefully cordoned and drained to minimize any potential disruptions to running applications. During this process, AKS performs the following steps:
AKS nodes are carefully cordoned and drained to minimize any potential disruptio
[!INCLUDE [alias minor version callout](./includes/aliasminorversion/alias-minor-version-upgrade.md)]
-### [Azure CLI](#tab/azure-cli)
+You can either [manually upgrade your cluster](#manually-upgrade-cluster) or [configure automatic cluster upgrades](#configure-automatic-cluster-upgrades). **We recommend you configure automatic cluster upgrades to ensure your cluster is always running the latest version of Kubernetes**.
+
+### Manually upgrade cluster
+
+#### [Azure CLI](#tab/azure-cli)
* Upgrade your cluster using the [`az aks upgrade`][az-aks-upgrade] command.
- ```azurecli
+ ```azurecli-interactive
az aks upgrade \ --resource-group myResourceGroup \ --name myAKSCluster \
AKS nodes are carefully cordoned and drained to minimize any potential disruptio
> [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, you must first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
- The following example output shows the result of upgrading to *1.19.1*. Notice the *kubernetesVersion* now shows *1.19.1*.
+ The following example output shows the result of upgrading to *1.27.3*. Notice the `kubernetesVersion` now shows *1.27.3*:
```output {
AKS nodes are carefully cordoned and drained to minimize any potential disruptio
"enableRbac": false, "fqdn": "myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io", "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
- "kubernetesVersion": "1.19.1",
+ "kubernetesVersion": "1.27.3",
"location": "eastus", "name": "myAKSCluster", "type": "Microsoft.ContainerService/ManagedClusters" } ```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [Azure PowerShell](#tab/azure-powershell)
* Upgrade your cluster using the [`Set-AzAksCluster`][set-azakscluster] cmdlet.
- ```azurepowershell
+ ```azurepowershell-interactive
Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION> ``` > [!NOTE] > You can only upgrade one minor version at a time. For example, you can upgrade from *1.14.x* to *1.15.x*, but you can't upgrade from *1.14.x* to *1.16.x* directly. To upgrade from *1.14.x* to *1.16.x*, first upgrade from *1.14.x* to *1.15.x*, then perform another upgrade from *1.15.x* to *1.16.x*.
- The following example output shows the result of upgrading to *1.19.9*. Notice the *kubernetesVersion* now shows *1.20.2*.
+ The following example output shows the result of upgrading to *1.27.3*. Notice the `KubernetesVersion` now shows *1.27.3*:
```output ProvisioningState : Succeeded MaxAgentPools : 100
- KubernetesVersion : 1.20.2
+ KubernetesVersion : 1.27.3
PrivateFQDN : AgentPoolProfiles : {default} Name : myAKSCluster
AKS nodes are carefully cordoned and drained to minimize any potential disruptio
Tags : {} ```
-### [Azure portal](#tab/azure-portal)
-
-Upgrade your cluster using the following steps:
+#### [Azure portal](#tab/azure-portal)
1. In the Azure portal, navigate to your AKS cluster. 2. Under **Settings**, select **Cluster configuration**.
-3. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+3. In **Kubernetes version**, select **Upgrade version**. This redirects you to a new page.
4. In **Kubernetes version**, select your desired version and then select **Save**. It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+### Configure automatic cluster upgrades
+
+#### [Azure CLI](#tab/azure-cli)
+
+* Set an auto-upgrade channel on your cluster using the [`az aks update`][az-aks-update] command with the `--auto-upgrade-channel` parameter set to `patch`.
+
+ ```azurecli-interactive
+ az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel patch
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+* Set an auto-upgrade channel on your cluster using the [`Set-AzAksCluster`][set-azakscluster] cmdlet with the `-AutoUpgradeChannel` parameter set to `Patch`.
+
+ ```azurepowershell-interactive
+ Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -AutoUpgradeChannel Patch
+ ```
+
+#### [Azure portal](#tab/azure-portal)
+
+1. In the Azure portal, navigate to your AKS cluster.
+2. Under **Settings**, select **Cluster configuration**.
+3. In **Kubernetes version**, select **Upgrade version**.
+4. For **Automatic upgrade**, select **Enabled with patch (recommended)** > **Save**.
+++
+For more information, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-auto-upgrade].
+
+#### Upgrade AKS node images
+
+AKS regularly provides new node images. Linux node images are updated weekly, and Windows node images are updated monthly. We recommend upgrading your node images frequently to use the latest AKS features and security updates. For more information, see [Upgrade node images in Azure Kubernetes Service (AKS)][node-image-upgrade]. To configure automatic node image upgrades, see [Automatically upgrade Azure Kubernetes Service (AKS) cluster node operating system images][auto-upgrade-node-image].
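+As a minimal sketch, a node-image-only upgrade reuses the `az aks upgrade` command shown earlier with the `--node-image-only` flag, leaving the Kubernetes version unchanged:
+
+```console
+# Upgrade only the node images; the control plane version is not touched
+az aks upgrade \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --node-image-only
+```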
+ ## View the upgrade events > [!NOTE]
It takes a few minutes to upgrade the cluster, depending on how many nodes you h
* View the upgrade events in the default namespaces using the `kubectl get events` command.
- ```azurecli-interactive
+ ```console
kubectl get events ```
- The following example output shows some of the above events listed during an upgrade.
+ The following example output shows some of the above events listed during an upgrade:
```output ...
It takes a few minutes to upgrade the cluster, depending on how many nodes you h
... ``` -- ## Validate an upgrade ### [Azure CLI](#tab/azure-cli) * Confirm the upgrade was successful using the [`az aks show`][az-aks-show] command.
- ```azurecli
+ ```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --output table ```
- The following example output shows the AKS cluster runs *KubernetesVersion 1.19.1*:
+ The following example output shows the AKS cluster runs *KubernetesVersion 1.27.3*:
```output Name Location ResourceGroup KubernetesVersion CurrentKubernetesVersion ProvisioningState Fqdn - - - -
- myAKSCluster eastus myResourceGroup 1.19.1 1.19.1 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
+ myAKSCluster eastus myResourceGroup 1.27.3 1.27.3 Succeeded myaksclust-myresourcegroup-19da35-bd54a4be.hcp.eastus.azmk8s.io
``` ### [Azure PowerShell](#tab/azure-powershell) * Confirm the upgrade was successful using the [`Get-AzAksCluster`][get-azakscluster] cmdlet.
- ```azurepowershell
+ ```azurepowershell-interactive
Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster | Select-Object -Property Name, Location, KubernetesVersion, ProvisioningState ```
- The following example output shows the AKS cluster runs *KubernetesVersion 1.20.2*:
+ The following example output shows the AKS cluster runs *KubernetesVersion 1.27.3*:
```output Name Location KubernetesVersion ProvisioningState - -- -- --
- myAKSCluster eastus 1.20.2 Succeeded
+ myAKSCluster eastus 1.27.3 Succeeded
``` ### [Azure portal](#tab/azure-portal)
-Confirm the upgrade was successful using the following steps:
- 1. In the Azure portal, navigate to your AKS cluster. 2. On the **Overview** page, select the **Kubernetes version** and ensure it's the latest version you installed in the previous step.
Confirm the upgrade was successful using the following steps:
## Delete the cluster
-As this tutorial is the last part of the series, you might want to delete your AKS cluster. The Kubernetes nodes run on Azure virtual machines and continue incurring charges even if you don't use the cluster.
+As this tutorial is the last part of the series, you might want to delete your AKS cluster to avoid incurring Azure charges.
### [Azure CLI](#tab/azure-cli)
As this tutorial is the last part of the series, you might want to delete your A
### [Azure portal](#tab/azure-portal)
-Delete your cluster using the following steps:
- 1. In the Azure portal, navigate to your AKS cluster. 2. On the **Overview** page, select **Delete**.
-3. A popup will appear that asks you to confirm the deletion of the cluster. Select **Yes**.
+3. On the popup that asks you to confirm the deletion of the cluster, select **Yes**.
For more information on AKS, see the [AKS overview][aks-intro]. For guidance on
[get-azaksversion]: /powershell/module/az.aks/get-azaksversion [set-azakscluster]: /powershell/module/az.aks/set-azakscluster [remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
+[aks-auto-upgrade]: ./auto-upgrade-cluster.md
+[auto-upgrade-node-image]: ./auto-upgrade-node-image.md
+[node-image-upgrade]: ./node-image-upgrade.md
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
As a temporary extension, we have introduced a subscription-level [Azure Feature
> [!NOTE] > We suggest using this feature control (AFEC) provision only as interim mitigation until you assign the correct permission. You must prioritize fixing the permissions for all the applicable Users (and Service Principals) and then unregister this AFEC flag to reintroduce the permission verification on the Virtual Network resource. It is recommended not to permanently depend on this AFEC method, as it will be removed in the future.
+## Azure Virtual Network Manager
+
+Azure Virtual Network Manager is a management service that allows you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define network groups to identify and logically segment your virtual networks. After that, you can determine the connectivity and security configurations you want and apply them across all the selected virtual networks in network groups at once. Azure Virtual Network Manager's security admin rule configuration allows you to define security policies at scale and apply them to multiple virtual networks at once.
+
+> [!NOTE]
+> Security admin rules of Azure Virtual Network Manager apply only to Application Gateway subnets in which every Application Gateway has ["Network Isolation"](Application-gateway-private-deployment.md) enabled. Subnets containing any Application Gateway without ["Network Isolation"](Application-gateway-private-deployment.md) enabled don't receive security admin rules.
++ ## Network security groups You can use Network security groups (NSGs) for your Application Gateway's subnet, but you should note some key points and restrictions.
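+One such restriction to plan for: the Application Gateway v2 SKU requires inbound traffic on ports 65200-65535 from the **GatewayManager** service tag. A minimal sketch of the corresponding NSG rule (resource group, NSG, and rule names are illustrative):
+
+```console
+# Allow the Application Gateway control plane to reach v2 instances in the subnet
+az network nsg rule create \
+  --resource-group myResourceGroup \
+  --nsg-name myAppGwSubnetNsg \
+  --name AllowGatewayManager \
+  --priority 100 \
+  --direction Inbound \
+  --access Allow \
+  --protocol Tcp \
+  --source-address-prefixes GatewayManager \
+  --destination-port-ranges 65200-65535
+```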
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
Title: "Azure Arc-enabled Kubernetes validation" Previously updated : 07/21/2023 Last updated : 10/26/2023 description: "Describes Arc validation program for Kubernetes distributions"
The following providers and their corresponding Kubernetes distributions have su
| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) |TKGs 2.2; upstream K8s 1.25.7+vmware.3<br>TKGm 2.3; upstream K8s v1.26.5+vmware.2<br>TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br>TKGm 2.1.0; upstream K8s v1.24.9+vmware.1| | Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes)|[1.24](https://ubuntu.com/kubernetes/docs/1.24/components), [1.28](https://ubuntu.com/kubernetes/docs/1.28/components) | | SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
+| SUSE Rancher | [K3s](https://rancher.com/products/k3s/) | [v1.27.4+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.27.4%2Bk3s1), [v1.26.7+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.26.7%2Bk3s1), [v1.25.12+k3s1](https://github.com/k3s-io/k3s/releases/tag/v1.25.12%2Bk3s1) |
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 | | Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution |[Kublr 1.26.0](https://docs.kublr.com/releasenotes/1.26/release-1.26.0/); Upstream K8s Versions: 1.21.3, 1.22.10, 1.22.17, 1.23.17, 1.24.13, 1.25.6, 1.26.4 | | Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Title: Azure Arc overview description: Learn about what Azure Arc is and how it helps customers enable management and governance of their hybrid resources with other Azure services and features. Previously updated : 05/04/2023 Last updated : 10/24/2023
Currently, Azure Arc allows you to manage the following resource types hosted ou
* [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL (preview) services are currently available. * [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure.
-* Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) and enable VM self-service through role-based access.
+* Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) and enable VM self-service through role-based access.
## Key features and benefits
Some of the key scenarios that Azure Arc supports are:
* Create [custom locations](./kubernetes/custom-locations.md) on top of your [Azure Arc-enabled Kubernetes](./kubernetes/overview.md) clusters, using them as target locations for deploying Azure services instances. Deploy your Azure service cluster extensions for [Azure Arc-enabled data services](./dat).
-* Perform virtual machine lifecycle and management operations for [VMware vSphere](./vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines) environments.
+* Perform virtual machine lifecycle and management operations for [VMware vSphere](./vmware-vsphere/overview.md) and [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) environments.
* A unified experience viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or Azure REST API.
For information, see the [Azure pricing page](https://azure.microsoft.com/pricin
* Learn about [Azure Arc-enabled Kubernetes](./kubernetes/overview.md). * Learn about [Azure Arc-enabled data services](https://azure.microsoft.com/services/azure-arc/hybrid-data-services/). * Learn about [Azure Arc-enabled SQL Server](/sql/sql-server/azure-arc/overview).
-* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md) and [Azure Arc-enabled Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn about [Azure Arc-enabled VMware vSphere](vmware-vsphere/overview.md).
+* Learn about [Azure Arc-enabled VM Management on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview).
* Learn about [Azure Arc-enabled System Center Virtual Machine Manager](system-center-virtual-machine-manager/overview.md). * Experience Azure Arc by exploring the [Azure Arc Jumpstart](https://aka.ms/AzureArcJumpstart). * Learn about best practices and design patterns through the [Azure Arc Landing Zone Accelerators](https://aka.ms/ArcLZAcceleratorReady).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 02/15/2023 Last updated : 10/26/2023
Azure Arc resource bridge (preview) can host other Azure services or solutions r
* Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports three * Azure Arc-enabled VMware
- * Azure Arc-enabled Azure Stack HCI
+ * Azure Arc VM management on Azure Stack HCI
* Azure Arc-enabled System Center Virtual Machine Manager (SCVMM)
-* Custom locations: A deployment target where you can create Azure resources. It maps to different resource for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Arc-enabled Azure Stack HCI, it maps to an HCI cluster instance.
+* Custom locations: A deployment target where you can create Azure resources. It maps to different resource for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Azure Arc VM management on Azure Stack HCI, it maps to an HCI cluster instance.
Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM.
You can connect an SCVMM management server to Azure by deploying Azure Arc resou
* Add, remove, and update network interfaces * Add, remove, and update disks and update VM size (CPU cores and memory)
+## Example scenarios
+
+The following are just two examples of the many scenarios that can be enabled by using Arc resource bridge in a hybrid environment.
+
+### Apply Azure Policy and other Azure services to on-premises VMware VMs
+
+A customer deploys Arc Resource Bridge onto their on-premises VMware environment. They sign into the Azure portal and select the VMware VMs that they'd like to connect to Azure. Now they can manage these on-premises VMware VMs in Azure Resource Manager (ARM) as Arc-enabled machines, alongside their native Azure machines, achieving a single pane of glass to view their resources in a VMware/Azure hybrid environment. This includes deploying Azure services, such as Defender for Cloud and Azure Policy, to keep updated on the security and compliance posture of their on-premises VMware VMs in Azure.
++
+### Create physical HCI VMs on-premises from Azure
+
+A customer has multiple datacenter locations in Canada and New York. They install an Arc resource bridge in each datacenter and connect their Azure Stack HCI VMs to Azure in each location. They can then sign into Azure portal and see all their Arc-enabled VMs from the two physical locations together in one central cloud location. From the portal, the customer can choose to create a new VM; that VM is also created on-premises at the selected datacenter, allowing the customer to manage VMs in different physical locations centrally through Azure.
++
+## Version and region support
+ ### Supported regions
-In order to use Arc resource bridge in a region, Arc resource bridge and the arc-enabled feature for a private cloud must be supported in the region. For example, to use Arc resource bridge with Azure Stack HCI in East US, Arc resource bridge and the Arc VM management feature for Azure Stack HCI must be supported in East US. Please check with the private cloud product for their feature region availability - it is typically in their [deployment guide](deploy-cli.md#az-arcappliance-createconfig) for Arc resource bridge. There are instances where Arc Resource Bridge may be available in a region where the private cloud feature is not yet available.
+In order to use Arc resource bridge in a region, Arc resource bridge and the Arc-enabled feature for a private cloud must be supported in the region. For example, to use Arc resource bridge with Azure Stack HCI in East US, Arc resource bridge and the Arc VM management feature for Azure Stack HCI must be supported in East US. To confirm feature availability across regions for each private cloud provider, review their deployment guide and other documentation. There could be instances where Arc resource bridge is available in a region where the private cloud feature is not yet available.
Arc resource bridge supports the following Azure regions:
Arc resource bridge supports the following Azure regions:
* North Europe * UK South * UK West- * Sweden Central * Canada Central * Australia East
The following private cloud environments and their versions are officially suppo
* Azure Stack HCI * SCVMM - ### Supported versions
-Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.10, then the typical n-3 supported versions are:
+When the Arc-enabled private cloud announces General Availability, the minimum supported version of Arc resource bridge will be 1.0.15.
-- Current version: 1.0.10
-- n-1 version: 1.0.9
-- n-2 version: 1.0.8
-- n-3 version: 1.0.7
+Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.18, then the typical n-3 supported versions are:
-There may be instances where supported versions are not sequential. For example, version 1.0.11 is released and later found to contain a bug. A hot fix is released in version 1.0.12 and version 1.0.11 is removed. In this scenario, n-3 supported versions become 1.0.12, 1.0.10, 1.0.9, 1.0.8.
+* Current version: 1.0.18
+* n-1 version: 1.0.17
+* n-2 version: 1.0.16
+* n-3 version: 1.0.15
-Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays may occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. To learn more about upgrade options, visit [Upgrade Arc resource bridge](upgrade.md).
+There may be instances where supported versions are not sequential. For example, version 1.0.18 is released and later found to contain a bug; a hot fix is released in version 1.0.19 and version 1.0.18 is removed. In this scenario, n-3 supported versions become 1.0.19, 1.0.17, 1.0.16, 1.0.15.
+
+Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month. Delays may occur that could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions (starting with 1.0.15), then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub. To learn more about upgrade options, visit [Upgrade Arc resource bridge](upgrade.md).
## Next steps * Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). * Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
-* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
-----
+* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc Agent Overview Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/agent-overview-scvmm.md
+
+ Title: Overview of Azure Connected Machine agent to manage Windows and Linux machines
+description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments.
Last updated : 10/20/2023++++
+ms.
+++
+# Overview of Azure Connected Machine agent to manage Windows and Linux machines
+
+The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
+
+## Agent components
++
+The Azure Connected Machine agent package contains several logical components bundled together:
+
+* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
+
+* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance.
+
+ Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine:
+
+ * An Azure Policy assignment that targets disconnected machines is unaffected.
+ * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied.
+ * Assignments are deleted after 14 days, and aren't reassigned to the machine after the 14-day period.
+
+* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the following path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`.
+
+>[!NOTE]
+> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
+
+## Agent resources
+
+The following information describes the directories and user accounts used by the Azure Connected Machine agent.
+
+### Windows agent installation details
+
+The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent).
+Installing the Connected Machine agent for Windows applies the following system-wide configuration changes:
+
+* The installation process creates the following folders during setup.
+
+ | Directory | Description |
+ |--|-|
+ | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.|
+ | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
+ | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.|
+ | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+ | %SYSTEMDRIVE%\packages | Extension package executables |
+
+* Installing the agent creates the following Windows services on the target machine.
+
+ | Service name | Display name | Process name | Description |
+ |--|--|--|-|
+ | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens |
+ | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. |
+ | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. |
+
+* Agent installation creates the following virtual service account.
+
+ | Virtual Account | Description |
+ ||-|
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
+
+* Agent installation creates the following local security group.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity |
+
+* Agent installation creates the following environmental variables
+
+ | Name | Default value | Description |
+ ||||
+ | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Local endpoint the agent exposes for requesting Microsoft Entra managed identity tokens. |
+ | IMDS_ENDPOINT | `http://localhost:40342` | Local endpoint for the Hybrid Instance Metadata Service (IMDS). |
+
+* There are several log files available for troubleshooting, described in the following table.
+
+ | Log | Description |
+ |--|-|
+ | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. |
+ | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. |
+ | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. |
+ | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
+ | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. |
+
+* The process creates the local security group **Hybrid agent extension applications**.
+
+* After uninstalling the agent, the following artifacts remain.
+
+ * %ProgramData%\AzureConnectedMachineAgent\Log
+ * %ProgramData%\AzureConnectedMachineAgent
+ * %ProgramData%\GuestConfig
+ * %SystemDrive%\packages
+
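+After installing the Windows agent, you can confirm its state with the bundled `azcmagent` CLI; the same command works on Linux. A minimal sketch:
+
+```console
+# Display agent version, connection status, and the Azure resource ID once connected
+azcmagent show
+```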
+### Linux agent installation details
+
+The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent.
+
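+As a sketch of that flow (proxy and other optional flags omitted; see the script's help output for the full set):
+
+```console
+# Download and run the installation script bundle
+wget https://aka.ms/azcmagent -O ~/install_linux_azcmagent.sh
+bash ~/install_linux_azcmagent.sh
+```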
+You don't need to reinstall, upgrade, or remove the Connected Machine agent after a server restart.
+
+Installing the Connected Machine agent for Linux applies the following system-wide configuration changes.
+
+* Setup creates the following installation folders.
+
+ | Directory | Description |
+ |--|-|
+ | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. |
+ | /opt/GC_Ext/ | Extension service executables. |
+ | /opt/GC_Service/ | Guest configuration (policy) service executables. |
+ | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* Installing the agent creates the following daemons.
+
+ | Service name | Display name | Process name | Description |
+ |--|--|--|-|
+ | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. |
+ | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. |
+
+* There are several log files available for troubleshooting, described in the following table.
+
+ | Log | Description |
+ |--|-|
+ | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. |
+ | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. |
+ | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. |
+ | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
+ | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. |
+
+* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`.
+
+ | Name | Default value | Description |
+ |||-|
+ | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Local endpoint the agent exposes for requesting Microsoft Entra managed identity tokens. |
+ | IMDS_ENDPOINT | `http://localhost:40342` | Local endpoint for the Hybrid Instance Metadata Service (IMDS). |
+
+* After uninstalling the agent, the following artifacts remain:
+
+ * /var/opt/azcmagent
+ * /var/lib/GuestConfig
+
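+A minimal sketch of requesting a managed identity token from the local identity endpoint, based on the documented challenge flow. The resource URI is only an example, and the caller needs local permission (for example, root) to read the challenge key file:
+
+```bash
+# Hedged sketch: acquire a Microsoft Entra managed identity token from the
+# Arc agent's local identity endpoint (values from the table above).
+endpoint="${IDENTITY_ENDPOINT:-http://localhost:40342/metadata/identity/oauth2/token}"
+resource="https%3A%2F%2Fmanagement.azure.com"   # URL-encoded example resource
+
+# The first, unauthenticated call returns 401 with a Www-Authenticate header
+# that names a local key file.
+keyfile=$(curl -s -D - -o /dev/null -H "Metadata: true" \
+  "${endpoint}?api-version=2020-06-01&resource=${resource}" \
+  | grep -i "www-authenticate" | cut -d'=' -f2 | tr -d '[:space:]')
+
+# Read the key (requires local permissions) and retry with it.
+key=$(sudo cat "${keyfile}")
+curl -s -H "Metadata: true" -H "Authorization: Basic ${key}" \
+  "${endpoint}?api-version=2020-06-01&resource=${resource}"
+```
+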
+## Agent resource governance
+
+The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
+
+* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.
+* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply:
+
+ | Extension type | Operating system | CPU limit |
+ | -- | - | |
+ | AzureMonitorLinuxAgent | Linux | 60% |
+ | AzureMonitorWindowsAgent | Windows | 100% |
+ | AzureSecurityLinuxAgent | Linux | 30% |
+ | LinuxOsUpdateExtension | Linux | 60% |
+ | MDE.Linux | Linux | 60% |
+ | MicrosoftDnsAgent | Windows | 100% |
+ | MicrosoftMonitoringAgent | Windows | 60% |
+ | OmsAgentForLinux | Linux | 60% |
+
+During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources:
+
+| | Windows | Linux |
+| | - | -- |
+| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% |
+| **Memory usage** | 57 MB | 42 MB |
+
+The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. Actual agent performance and resource consumption will vary based on the hardware and software configuration of your servers.
+
+## Instance metadata
+
+Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers. Specifically:
+
+* Operating system name, type, and version
+* Computer name
+* Computer manufacturer and model
+* Computer fully qualified domain name (FQDN)
+* Domain name (if joined to an Active Directory domain)
+* Active Directory and DNS fully qualified domain name (FQDN)
+* UUID (BIOS ID)
+* Connected Machine agent heartbeat
+* Connected Machine agent version
+* Public key for managed identity
+* Policy compliance status and details (if using guest configuration policies)
+* SQL Server installed (Boolean value)
+* Cluster resource ID (for Azure Stack HCI nodes)
+* Hardware manufacturer
+* Hardware model
+* CPU family, socket, physical core and logical core counts
+* Total physical memory
+* Serial number
+* SMBIOS asset tag
+* Cloud provider
+
+The agent requests the following metadata information from Azure:
+
+* Resource location (region)
+* Virtual machine ID
+* Tags
+* Microsoft Entra managed identity certificate
+* Guest configuration policy assignments
+* Extension requests - install, update, and delete.
+
+> [!NOTE]
+> Azure Arc-enabled servers doesn't store or process customer data outside the region in which the customer deploys the service instance.
+
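+As a quick check of the metadata and connection state the agent reports, the agent's own CLI can be used (a sketch; `azcmagent` is added to the PATH during installation):
+
+```bash
+# Show the agent's connection status, resource details, and version.
+azcmagent show
+
+# Bundle the local log files into an archive for troubleshooting.
+azcmagent logs
+```
+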
+## Next steps
+
+- [Connect your SCVMM server to Azure Arc](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc).
+- [Install Arc agent at scale for your SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale).
+- [Install Arc agent using a script for SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script).
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
+
+ Title: Azure Arc agent
+description: Learn about Azure Arc agent
+ Last updated : 10/23/2023
+# Azure Arc agent
+
+The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
+
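+As a hedged example of onboarding a machine once the agent is installed (all names and IDs below are placeholders), an interactive connection might look like this:
+
+```bash
+# Connect this machine to Azure Arc; without supplied credentials, the
+# command prompts for an interactive device sign-in.
+azcmagent connect \
+  --resource-group "myResourceGroup" \
+  --tenant-id "00000000-0000-0000-0000-000000000000" \
+  --location "eastus" \
+  --subscription-id "11111111-1111-1111-1111-111111111111"
+```
+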
+## Agent components
+
+The Azure Connected Machine agent package contains several logical components bundled together:
+
+* The Hybrid Instance Metadata service (HIMDS) manages the connection to Azure and the connected machine's Azure identity.
+
+* The guest configuration agent provides functionality such as assessing whether the machine complies with required policies and enforcing compliance.
+
+ Note the following behavior with Azure Policy [guest configuration](../../governance/machine-configuration/overview.md) for a disconnected machine:
+
+ * An Azure Policy assignment that targets disconnected machines is unaffected.
+ * Guest assignment is stored locally for 14 days. Within the 14-day period, if the Connected Machine agent reconnects to the service, policy assignments are reapplied.
+ * Assignments are deleted after 14 days and aren't reassigned to the machine after the 14-day period.
+
+* The Extension agent manages VM extensions, including install, uninstall, and upgrade. Azure downloads extensions and copies them to the `%SystemDrive%\%ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\downloads` folder on Windows, and to `/opt/GC_Ext/downloads` on Linux. On Windows, the extension installs to the path `%SystemDrive%\Packages\Plugins\<extension>`, and on Linux the extension installs to `/var/lib/waagent/<extension>`.
+
+>[!NOTE]
+> The [Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-overview.md) (AMA) is a separate agent that collects monitoring data, and it does not replace the Connected Machine agent; the AMA only replaces the Log Analytics agent, Diagnostics extension, and Telegraf agent for both Windows and Linux machines.
+
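+As a hedged sketch of driving the extension manager from Azure (it assumes the `connectedmachine` Azure CLI extension is installed, and all names are placeholders), installing an extension on an Arc-enabled server might look like this:
+
+```azurecli
+az connectedmachine extension create \
+  --machine-name "myArcServer" \
+  --resource-group "myResourceGroup" \
+  --name "AzureMonitorLinuxAgent" \
+  --publisher "Microsoft.Azure.Monitor" \
+  --type "AzureMonitorLinuxAgent" \
+  --location "eastus"
+```
+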
+## Agent resources
+
+The following information describes the directories and user accounts used by the Azure Connected Machine agent.
+
+### Windows agent installation details
+
+The Windows agent is distributed as a Windows Installer package (MSI). Download the Windows agent from the [Microsoft Download Center](https://aka.ms/AzureConnectedMachineAgent).
+Installing the Connected Machine agent for Windows applies the following system-wide configuration changes:
+
+* The installation process creates the following folders during setup.
+
+ | Directory | Description |
+ |--|-|
+ | %ProgramFiles%\AzureConnectedMachineAgent | azcmagent CLI and instance metadata service executables.|
+ | %ProgramFiles%\AzureConnectedMachineAgent\ExtensionService\GC | Extension service executables.|
+ | %ProgramFiles%\AzureConnectedMachineAgent\GCArcService\GC | Guest configuration (policy) service executables.|
+ | %ProgramData%\AzureConnectedMachineAgent | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ | %ProgramData%\GuestConfig | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+ | %SYSTEMDRIVE%\packages | Extension package executables. |
+
+* Installing the agent creates the following Windows services on the target machine.
+
+ | Service name | Display name | Process name | Description |
+ |--|--|--|-|
 | himds | Azure Hybrid Instance Metadata Service | himds | Synchronizes metadata with Azure and hosts a local REST API for extensions and applications to access the metadata and request Microsoft Entra managed identity tokens. |
+ | GCArcService | Guest configuration Arc Service | gc_service | Audits and enforces Azure guest configuration policies on the machine. |
+ | ExtensionService | Guest configuration Extension Service | gc_service | Installs, updates, and manages extensions on the machine. |
+
+* Agent installation creates the following virtual service account.
+
+ | Virtual Account | Description |
+ ||-|
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the *Log on as a service* right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you might need to adjust your Group Policy Object to grant the right to **NT SERVICE\\himds** or **NT SERVICE\\ALL SERVICES** to allow the agent to function.
+
+* Agent installation creates the following local security group.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Microsoft Entra tokens for the system-assigned managed identity |
+
+* Agent installation creates the following environment variables.
+
 | Name | Default value | Description |
 ||||
 | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Local endpoint used to request Microsoft Entra managed identity tokens. |
 | IMDS_ENDPOINT | `http://localhost:40342` | Local endpoint for the Hybrid Instance Metadata Service. |
+
+* There are several log files available for troubleshooting, described in the following table.
+
+ | Log | Description |
+ |--|-|
+ | %ProgramData%\AzureConnectedMachineAgent\Log\himds.log | Records details of the heartbeat and identity agent component. |
+ | %ProgramData%\AzureConnectedMachineAgent\Log\azcmagent.log | Contains the output of the azcmagent tool commands. |
+ | %ProgramData%\GuestConfig\arc_policy_logs\gc_agent.log | Records details about the guest configuration (policy) agent component. |
+ | %ProgramData%\GuestConfig\ext_mgr_logs\gc_ext.log | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
+ | %ProgramData%\GuestConfig\extension_logs | Directory containing logs for individual extensions. |
+
+* After uninstalling the agent, the following artifacts remain:
+
+ * %ProgramData%\AzureConnectedMachineAgent\Log
+ * %ProgramData%\AzureConnectedMachineAgent
+ * %ProgramData%\GuestConfig
+ * %SystemDrive%\packages
+
+### Linux agent installation details
+
+The Connected Machine agent for Linux is provided in the preferred package format for the distribution (`.rpm` or `.deb`) and is hosted in the Microsoft [package repository](https://packages.microsoft.com/). The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent.
+
+Installing, upgrading, and removing the Connected Machine agent doesn't require a server restart.
+
+Installing the Connected Machine agent for Linux applies the following system-wide configuration changes.
+
+* Setup creates the following installation folders.
+
+ | Directory | Description |
+ |--|-|
+ | /opt/azcmagent/ | azcmagent CLI and instance metadata service executables. |
+ | /opt/GC_Ext/ | Extension service executables. |
+ | /opt/GC_Service/ | Guest configuration (policy) service executables. |
+ | /var/opt/azcmagent/ | Configuration, log and identity token files for azcmagent CLI and instance metadata service.|
+ | /var/lib/GuestConfig/ | Extension package downloads, guest configuration (policy) definition downloads, and logs for the extension and guest configuration services.|
+
+* Installing the agent creates the following daemons. A short verification sketch follows this list.
+
+ | Service name | Display name | Process name | Description |
+ |--|--|--|-|
+ | himdsd.service | Azure Connected Machine Agent Service | himds | This service implements the Hybrid Instance Metadata service (IMDS) to manage the connection to Azure and the connected machine's Azure identity.|
+ | gcad.service | GC Arc Service | gc_linux_service | Audits and enforces Azure guest configuration policies on the machine. |
+ | extd.service | Extension Service | gc_linux_service | Installs, updates, and manages extensions on the machine. |
+
+* There are several log files available for troubleshooting, described in the following table.
+
+ | Log | Description |
+ |--|-|
+ | /var/opt/azcmagent/log/himds.log | Records details of the heartbeat and identity agent component. |
+ | /var/opt/azcmagent/log/azcmagent.log | Contains the output of the azcmagent tool commands. |
+ | /var/lib/GuestConfig/arc_policy_logs | Records details about the guest configuration (policy) agent component. |
+ | /var/lib/GuestConfig/ext_mgr_logs | Records details about extension manager activity (extension install, uninstall, and upgrade events). |
+ | /var/lib/GuestConfig/extension_logs | Directory containing logs for individual extensions. |
+
+* Agent installation creates the following environment variables, set in `/lib/systemd/system.conf.d/azcmagent.conf`.
+
 | Name | Default value | Description |
 |||-|
 | IDENTITY_ENDPOINT | `http://localhost:40342/metadata/identity/oauth2/token` | Local endpoint used to request Microsoft Entra managed identity tokens. |
 | IMDS_ENDPOINT | `http://localhost:40342` | Local endpoint for the Hybrid Instance Metadata Service. |
+
+* After uninstalling the agent, the following artifacts remain:
+
+ * /var/opt/azcmagent
+ * /var/lib/GuestConfig
+
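+A quick way to verify the installation (a sketch that assumes a systemd-based distribution) is to check the daemons listed earlier and watch the agent log:
+
+```bash
+# Confirm the three agent daemons are active.
+systemctl status himdsd.service gcad.service extd.service --no-pager
+
+# Watch the heartbeat/identity agent log for connection activity.
+tail -f /var/opt/azcmagent/log/himds.log
+```
+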
+## Agent resource governance
+
+The Azure Connected Machine agent is designed to manage agent and system resource consumption. The agent approaches resource governance under the following conditions:
+
+* The Guest Configuration agent can use up to 5% of the CPU to evaluate policies.
+* The Extension Service agent can use up to 5% of the CPU to install, upgrade, run, and delete extensions. Some extensions might apply more restrictive CPU limits once installed. The following exceptions apply:
+
+ | Extension type | Operating system | CPU limit |
+ | -- | - | |
+ | AzureMonitorLinuxAgent | Linux | 60% |
+ | AzureMonitorWindowsAgent | Windows | 100% |
+ | AzureSecurityLinuxAgent | Linux | 30% |
+ | LinuxOsUpdateExtension | Linux | 60% |
+ | MDE.Linux | Linux | 60% |
+ | MicrosoftDnsAgent | Windows | 100% |
+ | MicrosoftMonitoringAgent | Windows | 60% |
 | OmsAgentForLinux | Linux | 60% |
+
+During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources:
+
+| | Windows | Linux |
+| | - | -- |
+| **CPU usage (normalized to 1 core)** | 0.07% | 0.02% |
+| **Memory usage** | 57 MB | 42 MB |
+
+The performance data above was gathered in April 2023 on virtual machines running Windows Server 2022 and Ubuntu 20.04. The actual agent performance and resource consumption vary based on the hardware and software configuration of your servers.
+
+## Instance metadata
+
+Metadata information about a connected machine is collected after the Connected Machine agent registers with Azure Arc-enabled servers, specifically:
+
+* Operating system name, type, and version
+* Computer name
+* Computer manufacturer and model
+* Computer fully qualified domain name (FQDN)
+* Domain name (if joined to an Active Directory domain)
+* Active Directory and DNS fully qualified domain name (FQDN)
+* UUID (BIOS ID)
+* Connected Machine agent heartbeat
+* Connected Machine agent version
+* Public key for managed identity
+* Policy compliance status and details (if using guest configuration policies)
+* SQL Server installed (Boolean value)
+* Cluster resource ID (for Azure Stack HCI nodes)
+* Hardware manufacturer
+* Hardware model
+* CPU family, socket, physical core and logical core counts
+* Total physical memory
+* Serial number
+* SMBIOS asset tag
+* Cloud provider
+* Amazon Web Services (AWS) metadata, when running in AWS:
+ * Account ID
+ * Instance ID
+ * Region
+* Google Cloud Platform (GCP) metadata, when running in GCP:
+ * Instance ID
+ * Image
+ * Machine type
+ * Project ID
+ * Project number
+ * Service accounts
+ * Zone
+
+The agent requests the following metadata information from Azure:
+
+* Resource location (region)
+* Virtual machine ID
+* Tags
+* Microsoft Entra managed identity certificate
+* Guest configuration policy assignments
+* Extension requests - install, update, and delete.
+
+> [!NOTE]
+> Azure Arc-enabled servers doesn't store or process customer data outside the region in which the customer deploys the service instance.
+
+## Next steps
+
+- [Connect VMware vCenter Server to Azure Arc](quick-start-connect-vcenter-to-arc-using-script.md).
+- [Install Arc agent at scale for your VMware VMs](enable-guest-management-at-scale.md).
azure-functions Python Scale Performance Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/python-scale-performance-reference.md
I/O-bound apps may also benefit from increasing the number of worker processes b
The `FUNCTIONS_WORKER_PROCESS_COUNT` applies to each host that Azure Functions creates when scaling out your application to meet demand.
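For example, a hedged sketch of raising the worker process count on an existing function app with the Azure CLI (the app and resource group names are placeholders):
```azurecli
az functionapp config appsettings set \
  --name MyFunctionApp \
  --resource-group MyResourceGroup \
  --settings FUNCTIONS_WORKER_PROCESS_COUNT=4
```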
-> [!NOTE]
-> Multiple Python workers are not supported by the Python v2 programming model at this time. This means that enabling intelligent concurrency and setting `FUNCTIONS_WORKER_PROCESS_COUNT` greater than 1 is not supported for functions developed using the v2 model.
- #### Set up max workers within a language worker process As mentioned in the async [section](#understanding-async-in-python-worker), the Python language worker treats functions and [coroutines](https://docs.python.org/3/library/asyncio-task.html#coroutines) differently. A coroutine is run within the same event loop that the language worker runs on. On the other hand, a function invocation is run within a [ThreadPoolExecutor](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor), which is maintained by the language worker as a thread.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
This article describes [Cost optimization](/azure/architecture/framework/cost/)
## Containers
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
The following table shows the installation methods available for enabling VM ins
| [Azure portal](vminsights-enable-portal.md) | Enable individual machines with the Azure portal. | | [Azure Policy](vminsights-enable-policy.md) | Create policy to automatically enable when a supported machine is created. | | [Azure Resource Manager templates](../vm/vminsights-enable-resource-manager.md) | Enable multiple machines by using any of the supported methods to deploy a Resource Manager template, such as the Azure CLI and PowerShell. |
-| [PowerShell](vminsights-enable-powershell.md) | Use a PowerShell script to enable multiple machines. Log Analytics agent only. |
-| [Manual install](vminsights-enable-hybrid.md) | Virtual machines or physical computers on-premises with other cloud environments. Log Analytics agent only. |
+| [PowerShell](vminsights-enable-powershell.md) | Use a PowerShell script to enable multiple machines. Currently supported only for the Log Analytics agent. |
+| [Manual install](vminsights-enable-hybrid.md) | Virtual machines or physical computers on-premises or in other cloud environments. |
## Supported Azure Arc machines
azure-monitor Vminsights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-log-query.md
Records in these tables are generated from data reported by the Dependency Agent
The following fields and conventions apply to both VMConnection and VMBoundPort: - Computer: Fully-qualified domain name of reporting machine -- AgentId: The unique identifier for a machine with the Log Analytics agent
+- AgentId: The unique identifier for a machine running Azure Monitor Agent or the Log Analytics agent
- Machine: Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It's of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId - Process: Name of the Azure Resource Manager resource for the process exposed by ServiceMap. It's of the form *p-{hex string}*. Process is unique within a machine scope and to generate a unique process ID across machines, combine Machine and Process fields. - ProcessName: Executable name of the reporting process.
Records with a type of *VMComputer* have inventory data for servers with the Dep
|SourceSystem | *Insights* | |TimeGenerated | Timestamp of the record (UTC) | |Computer | The computer FQDN |
-|AgentId | The unique ID of the Log Analytics agent |
+|AgentId | The unique identifier for a machine running Azure Monitor Agent or the Log Analytics agent |
|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It's of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. | |DisplayName | Display name | |FullDisplayName | Full display name |
Records with a type of *VMProcess* have inventory data for TCP-connected process
|SourceSystem | *Insights* | |TimeGenerated | Timestamp of the record (UTC) | |Computer | The computer FQDN |
-|AgentId | The unique ID of the Log Analytics agent |
+|AgentId | The unique identifier for a machine running Azure Monitor Agent or the Log Analytics agent |
|Machine | Name of the Azure Resource Manager resource for the machine exposed by ServiceMap. It's of the form *m-{GUID}*, where *GUID* is the same GUID as AgentId. | |Process | The unique identifier of the Service Map process. It's in the form of *p-{GUID}*. |ExecutableName | The name of the process executable |
azure-monitor Vminsights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-troubleshoot.md
This article provides troubleshooting information to help you with problems you
When you onboard an Azure virtual machine from the Azure portal, the following steps occur: - A default Log Analytics workspace is created if that option was selected.-- The Log Analytics agent is installed on Azure virtual machines by using a VM extension if the agent is already installed.
+- Azure Monitor Agent is installed on Azure virtual machines by using a VM extension if it isn't already installed.
- The Dependency agent is installed on Azure virtual machines by using an extension if it's required. During the onboarding process, each of these steps is verified and a notification status appears in the portal. Configuration of the workspace and the agent installation typically takes 5 to 10 minutes. It takes another 5 to 10 minutes for data to become available to view in the portal.
azure-resource-manager Virtual Machines Move Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/virtual-machines-move-limitations.md
Title: Move Azure VMs to new subscription or resource group
+ Title: Special cases to move Azure VMs to new subscription or resource group
description: Use Azure Resource Manager to move virtual machines to a new resource group or subscription. Previously updated : 03/31/2022 Last updated : 10/25/2023
-# Move virtual machines to resource group or subscription
+# Handling special cases when moving virtual machines to resource group or subscription
-This article describes how to move a virtual machine to a new resource group or Azure subscription.
+This article describes special cases that require extra steps when moving a virtual machine to a new resource group or Azure subscription. If your virtual machine doesn't match any of these scenarios, you can move the virtual machine with the standard steps described in [Move resources to a new resource group or subscription](../move-resource-group-and-subscription.md).
If you want to move a virtual machine to a new region, see [Tutorial: Move Azure VMs across regions](../../../resource-mover/tutorial-move-region-virtual-machines.md).
Disable-AzVMDiskEncryption -ResourceGroupName demoRG -VMName myVm1 -VolumeType a
## Virtual machines with Marketplace plans
-Virtual machines created from Marketplace resources with plans attached can't be moved across subscriptions. To work around this limitation, you can de-provision the virtual machine in the current subscription, and deploy it again in the new subscription. The following steps help you recreate the virtual machine in the new subscription. However, they might not work for all scenarios. If the plan is no longer available in the Marketplace, these steps won't work.
+Virtual machines created from Marketplace resources with plans attached can't be moved across subscriptions. To work around this limitation, you can deprovision the virtual machine in the current subscription, and deploy it again in the new subscription. The following steps help you recreate the virtual machine in the new subscription. However, they might not work for all scenarios. If the plan is no longer available in the Marketplace, these steps won't work.
1. Get information about the plan.
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
content_well_notification:
- AI-contribution
-# Move resources to a new resource group or subscription
+# Move Azure resources to a new resource group or subscription
This article shows you how to move Azure resources to either another Azure subscription or another resource group under the same subscription. You can use the Azure portal, Azure PowerShell, Azure CLI, or the REST API to move resources.
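As a hedged illustration with the Azure CLI (resource names are placeholders), moving a single resource to another resource group might look like this:
```azurecli
# Look up the resource ID, then move the resource to the destination group.
vmid=$(az resource show --resource-group OldRG --name myVM \
  --resource-type "Microsoft.Compute/virtualMachines" --query id --output tsv)
az resource move --destination-group NewRG --ids $vmid
```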
azure-web-pubsub Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md
# Geo-replication (Preview) in Azure Web PubSub
-## What is geo-replication feature?
Mission critical apps often need to have a robust failover system and serve users closer to where they are. Before the release of the geo-replication feature, developers needed to deploy multiple Web PubSub resources and write custom code to orchestrate communication across resources. Now, with quick configuration through Azure portal, you can easily enable this feature. ## Benefits of using geo-replication
With the geo-replication feature, Contoso can now establish a replica in Canada
![Diagram of using one Azure Web PubSub instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/web-pubsub-replica.png "Replica Example") ## How to enable geo-replication in a Web PubSub resource
-To create a replica in an Azure region, go to your Web PubSub resource and find the **Replicas** blade on the Azure portal and click **Add** to create a replica. It will be automatically enabled upon creation.
+# [Portal](#tab/Portal)
+To create a replica in an Azure region, go to your Web PubSub resource in the Azure portal, select the **Replicas** blade, and then select **Add** to create a replica.
![Screenshot of creating replica for Azure Web PubSub on Portal.](./media/howto-enable-geo-replication/web-pubsub-replica-create.png "Replica create")
After creation, you would be able to view/edit your replica on the portal by cli
![Screenshot of overview blade of Azure Web PubSub replica resource. ](./media/howto-enable-geo-replication/web-pubsub-replica-overview.svg "Replica Overview")
-> [!NOTE]
-> * Geo-replication is a feature available in premium tier.
-> * A replica is considered a separate resource when it comes to billing. See [Pricing and resource unit](#pricing-and-resource-unit) for more details.
-
+# [Bicep](#tab/Bicep)
+
+Use Visual Studio Code or your favorite editor to create a file with the following content and name it main.bicep:
+
+```bicep
+@description('The name for your Web PubSub service')
+param primaryName string = 'contoso'
+
+@description('The region in which to create your Web Pubsub service')
+param primaryLocation string = 'eastus'
+
+@description('Unit count of your Web PubSub service')
+param primaryCapacity int = 1
+
+resource primary 'Microsoft.SignalRService/webpubsub@2023-08-01-preview' = {
+ name: primaryName
+ location: primaryLocation
+ sku: {
+ capacity: primaryCapacity
+ name: 'Premium_P1'
+ }
+ properties: {
+ }
+}
+
+@description('The name for your Web PubSub replica')
+param replicaName string = 'contoso-westus'
+
+@description('The region in which to create the Web PubSub replica')
+param replicaLocation string = 'westus'
+
+@description('Unit count of the Web PubSub replica')
+param replicaCapacity int = 1
+
+@description('Whether to enable region endpoint for the replica')
+param regionEndpointEnabled string = 'Enabled'
+
+resource replica 'Microsoft.SignalRService/webpubsub/replicas@2023-08-01-preview' = {
+ parent: primary
+ name: replicaName
+ location: replicaLocation
+ sku: {
+ capacity: replicaCapacity
+ name: 'Premium_P1'
+ }
+ properties: {
+ regionEndpointEnabled: regionEndpointEnabled
+ }
+}
+```
+
+Deploy the Bicep file using the Azure CLI:
+ ```azurecli
+ az group create --name MyResourceGroup --location eastus
+ az deployment group create --resource-group MyResourceGroup --template-file main.bicep
+ ```
+
+# [CLI](#tab/CLI)
+Update the **webpubsub** extension to the latest version, and then run:
+ ```azurecli
+ az webpubsub replica create --sku Premium_P1 -l eastus --replica-name MyReplica --name MyWebPubSub -g MyResourceGroup
+ ```
## Pricing and resource unit Each replica has its **own** `unit` and `autoscale settings`.
To delete a replica in the Azure portal:
1. Navigate to your Web PubSub resource, and select **Replicas** blade. Click the replica you want to delete. 2. Click Delete button on the replica overview blade.
+To delete a replica using the Azure CLI:
+ ```azurecli
+ az webpubsub replica delete --replica-name MyReplica --name MyWebPubSub -g MyResourceGroup
+ ```
+ ## Understand how the geo-replication feature works ![Diagram of the arch of Azure Web PubSub replica. ](./media/howto-enable-geo-replication/web-pubsub-replica-arch.png "Replica Arch")
Before deleting a replication, consider disabling its endpoint first. Over time,
This feature is also useful for troubleshooting regional issues. > [!NOTE]
-> * Due to the DNS cache, it may take several minutes for the DNS update to take effect.
+> * Due to the DNS cache, it might take several minutes for the DNS update to take effect.
> * Existing connections remain unaffected until they disconnect. ## Impact on performance after enabling geo-replication feature
For more performance evaluation, refer to [Performance](concept-performance.md).
## Breaking issues * **Using replica and event handler together**
- If you use the Web PubSub event handler with Web PubSub C# server SDK or an Azure Function that utilizes the Web PubSub extension, you may encounter issues with the abuse protection once replicas are enabled. To address this, you can either **disable the abuse protection** or **upgrade to the latest SDK/extension versions**.
+ If you use the Web PubSub event handler with Web PubSub C# server SDK or an Azure Function that utilizes the Web PubSub extension, you might encounter issues with the abuse protection once replicas are enabled. To address this, you can either **disable the abuse protection** or **upgrade to the latest SDK/extension versions**.
For a detailed explanation and potential solutions, please refer to this [issue](https://github.com/Azure/azure-webpubsub/issues/598).
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
Previously updated : 10/13/2023 Last updated : 10/26/2023+ # About Bastion configuration settings
The Bastion Developer SKU is a new, lower-cost, lightweight SKU. This SKU is ide
The Developer SKU has different requirements and limitations than the other SKU tiers. See [Deploy Bastion automatically - Developer SKU](quickstart-developer-sku.md) for more information and deployment steps. + ### Specify SKU | Method | SKU Value | Links |
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
description: Learn how to deploy Bastion using the Developer SKU.
Previously updated : 10/16/2023 Last updated : 10/26/2023 -+ # Quickstart: Deploy Bastion using the Developer SKU (Preview)
In this quickstart, you'll learn how to deploy Azure Bastion using the Developer
> [!IMPORTANT] > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] + ## About the Developer SKU The Bastion Developer SKU is a new [lower-cost](https://azure.microsoft.com/pricing/details/azure-bastion/), lightweight SKU. This SKU is ideal for Dev/Test users who want to securely connect to their VMs if they don't need additional features or scaling. With the Developer SKU, you can connect to one Azure VM at a time directly through the virtual machine connect page.
You can use the following example values when creating this configuration as an
| | | | Virtual machine| TestVM | | Resource group | TestRG1 |
-| Region | East US |
+| Region | West US |
| Virtual network | VNet1 | | Address space | 10.1.0.0/16 | | Subnets | FrontEnd: 10.1.0.0/24 |
batch Batch Pool No Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-no-public-ip-address.md
To restrict access to these nodes and reduce the discoverability of these nodes
## Prerequisites -- **Authentication**. To use a pool without public IP addresses inside a [virtual network](./batch-virtual-network.md), the Batch client API must use Microsoft Entra authentication. Azure Batch support for Microsoft Entra ID is documented in [Authenticate Batch service solutions with Active Directory](batch-aad-auth.md). If you aren't creating your pool within a virtual network, either Microsoft Entra authentication or key-based authentication can be used.
+- **Authentication**. To use a pool without public IP addresses inside a [virtual network](./batch-virtual-network.md), the Batch client API must use Microsoft Entra authentication. Azure Batch support for Microsoft Entra ID is documented in [Authenticate Azure Batch services with Microsoft Entra ID](batch-aad-auth.md). If you aren't creating your pool within a virtual network, either Microsoft Entra authentication or key-based authentication can be used.
- **An Azure VNet**. If you're creating your pool in a [virtual network](batch-virtual-network.md), follow these requirements and configurations. To prepare a VNet with one or more subnets in advance, you can use the Azure portal, Azure PowerShell, the Azure CLI, or other methods.
batch Managed Identity Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/managed-identity-pools.md
[Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md) eliminate complicated identity and credential management by providing an identity for the Azure resource in Microsoft Entra ID
-(Microsoft Entra ID). This identity is used to obtain Microsoft Entra tokens to authenticate with target
+(formerly Azure Active Directory). This identity is used to obtain Microsoft Entra tokens to authenticate with target
resources in Azure. This topic explains how to enable user-assigned managed identities on Batch pools and how to use managed identities within the nodes.
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
During the public preview of Azure Chaos Studio, there are a few limitations and
- Windows Server 2019, Windows Server 2016, Windows Server 2012, and Windows Server 2012 R2 - Red Hat Enterprise Linux 8.2, SUSE Enterprise Linux 15 SP2, CentOS 8.2, Debian 10 Buster (with unzip installation required), Oracle Linux 7.8, Ubuntu Server 16.04 LTS, and Ubuntu Server 18.04 LTS - **Hardened Linux untested** - The Chaos Studio agent isn't tested against custom Linux distributions or hardened Linux distributions (for example, FIPS or SELinux).-- **Supported browsers** The Chaos Studio portal experience has only been tested on the following browsers:
+- **Supported browsers** - The Chaos Studio portal experience has only been tested on the following browsers:
* **Windows:** Microsoft Edge, Google Chrome, and Firefox * **MacOS:** Safari, Google Chrome, and Firefox
+- **Terraform** - At present, Chaos Studio doesn't support Terraform.
+- **Built-in roles** - Chaos Studio doesn't currently have its own built-in roles. To grant permission to run a chaos experiment, assign either an [Azure built-in role](chaos-studio-fault-providers.md) or a custom role to the experiment's identity.
## Known issues When you pick target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected.
container-instances Container Instances Best Practices And Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-best-practices-and-considerations.md
Learn how to deploy a multi-container container group with an Azure Resource Man
> [!div class="nextstepaction"] > [Deploy a container group][resource-manager template]+
+<!-- LINKS - Internal -->
+[resource-manager template]: container-instances-multi-container-group.md
cosmos-db Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/support.md
Azure Cosmos DB supports the following database commands on API for Cassandra ac
| `COMPACT STORAGE` | N/A (PaaS service) | | `CREATE AGGREGATE` | No | | `CREATE CUSTOM INDEX (SASI)` | No |
-| `CREATE INDEX` | Yes (including [named indexes](secondary-indexing.md), and cluster key index is currently in [private preview](https://devblogs.microsoft.com/cosmosdb/now-in-private-preview-cluster-key-index-support-for-azure-cosmos-db-cassandra-api/) but full FROZEN collection is not supported) |
+| `CREATE INDEX` | Yes (including [named indexes](secondary-indexing.md) but full FROZEN collection is not supported) |
| `CREATE FUNCTION` | No | | `CREATE KEYSPACE` (replication settings ignored) | Yes | | `CREATE MATERIALIZED VIEW` | Yes |
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Azure Cosmos DB is a fully managed NoSQL database for modern app development. Az
## APIs in Azure Cosmos DB
-Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, PostgreSQL Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other databases technologies, without the overhead of management, and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
+Azure Cosmos DB offers multiple database APIs, which include NoSQL, MongoDB, PostgreSQL, Cassandra, Gremlin, and Table. By using these APIs, you can model real world data using documents, key-value, graph, and column-family data models. These APIs allow your applications to treat Azure Cosmos DB as if it were various other databases technologies, without the overhead of management, and scaling approaches. Azure Cosmos DB helps you to use the ecosystems, tools, and skills you already have for data modeling and querying with its various APIs.
All the APIs offer automatic scaling of storage and throughput, flexibility, and performance guarantees. There's no one best API, and you may choose any one of the APIs to build your application. This article will help you choose an API based on your workload and team requirements.
cosmos-db Migrate Hbase To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/migrate-hbase-to-cosmos-db.md
The key differences between the data structure of Azure Cosmos DB and HBase are
1000 column=Personal:Phone, timestamp=1611408732385, value=1-425-000-0001 ```
-* In Azure Cosmos DB for NoSQL, the JSON object represents the data format. The partition key resides in a field in the document and sets which field is the partition key for the collection. Azure Cosmos DB does not have the concept of timestamp used for column family or version. As highlighted previously, it has change feed support through which one can track/record changes performed on a container. The following is an example of a document.
+* In Azure Cosmos DB for NoSQL, the JSON object represents the data format. The partition key resides in a field in the document and sets which field is the partition key for the collection. Azure Cosmos DB doesn't have the concept of timestamp used for column family or version. As highlighted previously, it has change feed support through which one can track/record changes performed on a container. The following is an example of a document.
```json {
Azure Cosmos DB is a PaaS offering from Microsoft and underlying infrastructure
To estimate RUs consumed by your workload, consider the following [factors](../request-units.md#request-unit-considerations):
-There is a [capacity calculator](estimate-ru-with-capacity-planner.md) available to assist with sizing exercise for RUs.
+There's a [capacity calculator](estimate-ru-with-capacity-planner.md) available to assist with the sizing exercise for RUs.
You can also use [autoscaling provisioning throughput](../provision-throughput-autoscale.md) in Azure Cosmos DB to automatically and instantly scale your database or container throughput (RU/sec). Throughput is scaled based on usage without impacting workload availability, latency, throughput, or performance.
HBase sorts data according to RowKey. The data is then partitioned into regions
**Azure Cosmos DB** Azure Cosmos DB uses [partitioning](../partitioning-overview.md) to scale individual containers in the database. Partitioning divides the items in a container into specific subsets called "logical partitions". Logical partitions are formed based on the value of the "partition key" associated with each item in the container. All items in a logical partition have the same partition key value. Each logical partition can hold up to 20 GB of data.
-Physical partitions each contain a replica of your data and an instance of the Azure Cosmos DB database engine. This structure makes your data durable and highly available and throughput is divided equally amongst the local physical partitions. Physical partitions are automatically created and configured, and it's not possible to control their size, location, or which logical partitions they contain. Logical partitions are not split between physical partitions.
+Physical partitions each contain a replica of your data and an instance of the Azure Cosmos DB database engine. This structure makes your data durable and highly available and throughput is divided equally amongst the local physical partitions. Physical partitions are automatically created and configured, and it's not possible to control their size, location, or which logical partitions they contain. Logical partitions aren't split between physical partitions.
-As with HBase RowKey, partition key design is important for Azure Cosmos DB. HBase's Row Key works by sorting data and storing continuous data, and Azure Cosmos DB's Partition Key is a different mechanism because it hash-distributes data. Assuming your application using HBase is optimized for data access patterns to HBase, using the same RowKey for the partition Key will not give good performance results. Given that it's sorted data on HBase, the [Azure Cosmos DB composite index](../index-policy.md#composite-indexes) may be useful. It is required if you want to use the ORDER BY clause in more than one field. You can also improve the performance of many equal and range queries by defining a composite index.
+As with HBase RowKey, partition key design is important for Azure Cosmos DB. HBase's Row Key works by sorting data and storing continuous data, and Azure Cosmos DB's Partition Key is a different mechanism because it hash-distributes data. Assuming your application using HBase is optimized for data access patterns to HBase, using the same RowKey for the partition Key won't give good performance results. Given that it's sorted data on HBase, the [Azure Cosmos DB composite index](../index-policy.md#composite-indexes) may be useful. It's required if you want to use the ORDER BY clause in more than one field. You can also improve the performance of many equal and range queries by defining a composite index.
### Availability
As with HBase RowKey, partition key design is important for Azure Cosmos DB. HBa
HBase consists of Master; Region Server; and ZooKeeper. High availability in a single cluster can be achieved by making each component redundant. When configuring geo-redundancy, one can deploy HBase clusters across different physical data centers and use replication to keep multiple clusters in-sync. **Azure Cosmos DB**
-Azure Cosmos DB does not require any configuration such as cluster component redundancy. It provides a comprehensive SLA for high availability, consistency, and latency. Please see [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for more detail.
+Azure Cosmos DB doesn't require any configuration such as cluster component redundancy. It provides a comprehensive SLA for high availability, consistency, and latency. See [SLA for Azure Cosmos DB](https://azure.microsoft.com/support/legal/sla/cosmos-db/v1_3/) for more detail.
### Data reliability
Example of downstream dependencies could be applications that read data from HBa
* The RPO and RTO for HBase deployment on-premises.
-### Offline Vs online migration
+### Offline and online migration
For successful data migration, it is important to understand the characteristics of the business that uses the database and decide how to do it. Select offline migration if you can completely shut down the system, perform data migration, and restart the system at the destination. Also, if your database is always busy and you can't afford a long outage, consider migrating online. > [!NOTE] > This document covers only offline migration.
-When performing offline data migration, it depends on the version of HBase you are currently running and the tools available. See the [Data Migration](#migrate-your-data) section for more details.
+The approach for offline data migration depends on the version of HBase you're currently running and the tools available. See the [Data Migration](#migrate-your-data) section for more details.
### Performance considerations
master coprocessors: []
You can get useful sizing information such as the size of heap memory, the number of regions, the number of requests as the status of the cluster, and the size of the data in compressed/uncompressed as the status of the table.
-If you are using Apache Phoenix on HBase cluster, you need to collect data from Phoenix as well.
+If you're using Apache Phoenix on HBase cluster, you need to collect data from Phoenix as well.
* Migration target table * Table schemas
There are various methods to migrate data offline, but here we will introduce ho
| | -- | - | | Azure Data Factory | HBase < 2 | Easy to set up. Suitable for large datasets. Doesn't support HBase 2 or later. | | Apache Spark | All versions | Supports all versions of HBase. Suitable for large datasets. Spark setup required. |
-| Custom tool with Azure Cosmos DB bulk executor library | All versions | Most flexible to create custom data migration tools using libraries. Requires more effort to set up. |
+| Custom tool with Azure Cosmos DB bulk executor library | All versions | Most flexible to create custom data migration tools using libraries. Requires more effort to set up. |
The following flowchart uses some conditions to reach the available data migration methods. :::image type="content" source="./media/migrate-hbase-to-cosmos-db/flowchart-hbase-migration-tools.png" alt-text="Flowchart for options to migrate data to Azure Cosmos DB.":::
These HBase's sample codes are based on those described in [HBase's official doc
The code for Azure Cosmos DB presented here is based on the [Azure Cosmos DB for NoSQL: Java SDK v4 examples](samples-java.md) documentation. You can access the full code example from the documentation.
-The mappings for code migration are shown here, but the HBase RowKeys and Azure Cosmos DB Partition Keys used in these examples are not always well designed. Design according to the actual data model of the migration source.
+The mappings for code migration are shown here, but the HBase RowKeys and Azure Cosmos DB Partition Keys used in these examples aren't always well designed. Design according to the actual data model of the migration source.
### Establish connection
UPSERT INTO FamilyTable (id, lastName) VALUES (1, 'Witherspoon');
**Azure Cosmos DB**
-Azure Cosmos DB provides you type safety via data model. We use data model named 'Family'.
+Azure Cosmos DB provides type safety via the data model. We use a data model named 'Family'.
```java public class Family {
ON DUPLICATE KEY UPDATE id = "1", lastName = "Witherspoon";
**Azure Cosmos DB**
-In Azure Cosmos DB, updates are treated as Upsert operations. That is, if the document does not exist, it will be inserted.
+In Azure Cosmos DB, updates are treated as Upsert operations. That is, if the document doesn't exist, it will be inserted.
```java // Replace existing document with new modified document (contingent on modification).
HBase clusters may be used with HBase workloads and MapReduce, Hive, Spark, and
### Server-side programming
-HBase offers several server-side programming features. If you are using these features, you will also need to migrate their processing.
+HBase offers several server-side programming features. If you're using these features, you will also need to migrate their processing.
**HBase**
HBase offers several server-side programming features. If you are using these fe
* Observer hooks specific operations and events. This is a function for adding arbitrary processing. This is a feature similar to RDBMS triggers. * Endpoint
- * Endpoint is a feature for extending HBase RPC. It is a function similar to an RDBMS stored procedure.
+ * Endpoint is a feature for extending HBase RPC. It's a function similar to an RDBMS stored procedure.
**Azure Cosmos DB**
Server-side programming mappings
## Security
-Data security is a shared responsibility of the customer and the database provider. For on-premises solutions, customers have to provide everything from endpoint protection to physical hardware security, which is not an easy task. If you choose a PaaS cloud database provider such as Azure Cosmos DB, customer involvement will be reduced. For Microsoft's security shared responsibility model, see [Shared Responsibilities for Cloud Computing](https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91). Azure Cosmos DB runs on the Azure platform, so it can be enhanced in a different way than HBase. Azure Cosmos DB does not require any extra components to be installed for security. We recommend that you consider migrating your database system security implementation using the following checklist:
+Data security is a shared responsibility of the customer and the database provider. For on-premises solutions, customers have to provide everything from endpoint protection to physical hardware security, which is not an easy task. If you choose a PaaS cloud database provider such as Azure Cosmos DB, customer involvement will be reduced. Azure Cosmos DB runs on the Azure platform, so it can be enhanced in a different way than HBase. Azure Cosmos DB doesn't require any extra components to be installed for security. We recommend that you consider migrating your database system security implementation using the following checklist:
| **Security control** | **HBase** | **Azure Cosmos DB** | | -- | -- | - |
Data security is a shared responsibility of the customer and the database provid
| Ability to replicate data globally for regional failures | Make a database replica in a remote data center using HBase's replication. | Azure Cosmos DB performs configuration-free global distribution and allows you to replicate data to data centers around the world in Azure with the select of a button. In terms of security, global replication ensures that your data is protected from local failures. | | Ability to fail over from one data center to another | You need to implement failover yourself. | If you're replicating data to multiple data centers and the region's data center goes offline, Azure Cosmos DB automatically rolls over the operation. | | Local data replication within a data center | The HDFS mechanism allows you to have multiple replicas across nodes within a single file system. | Azure Cosmos DB automatically replicates data to maintain high availability, even within a single data center. You can choose the consistency level yourself. |
-| Automatic data backups | There is no automatic backup function. You need to implement data backup yourself. | Azure Cosmos DB is backed up regularly and stored in the geo redundant storage. |
-| Protect and isolate sensitive data | For example, if you are using Apache Ranger, you can use Ranger policy to apply the policy to the table. | You can separate personal and other sensitive data into specific containers and read / write, or limit read-only access to specific users. |
+| Automatic data backups | There's no automatic backup function. You need to implement data backup yourself. | Azure Cosmos DB is backed up regularly and stored in the geo redundant storage. |
+| Protect and isolate sensitive data | For example, if you're using Apache Ranger, you can use Ranger policy to apply the policy to the table. | You can separate personal and other sensitive data into specific containers and read / write, or limit read-only access to specific users. |
| Monitoring for attacks | It needs to be implemented using third party products. | By using [audit logging and activity logs](../monitor.md), you can monitor your account for normal and abnormal activity. | | Responding to attacks | It needs to be implemented using third party products. | When you contact Azure support and report a potential attack, a five-step incident response process begins. | | Ability to geo-fence data to adhere to data governance restrictions | You need to check the restrictions of each country/region and implement it yourself. | Guarantees data governance for sovereign regions (Germany, China, US Gov, etc.). |
Also, see [Azure Cosmos DB metrics and log types](../monitor-reference.md) that
There are several ways to get a backup of HBase. For example, Snapshot, Export, CopyTable, Offline backup of HDFS data, and other custom backups.
-Azure Cosmos DB automatically backs up data at periodic intervals, which does not affect the performance or availability of database operations. Backups are stored in Azure storage and can be used to recover data if needed. There are two types of Azure Cosmos DB backups:
+Azure Cosmos DB automatically backs up data at periodic intervals, which doesn't affect the performance or availability of database operations. Backups are stored in Azure storage and can be used to recover data if needed. There are two types of Azure Cosmos DB backups:
* [Periodic backup](../periodic-backup-restore-introduction.md)
cosmos-db Sdk Java Spring Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-spring-data-v2.md
You can use Spring Data Azure Cosmos DB in your applications hosted in [Azure Sp
|**API documentation** | [Spring Data Azure Cosmos DB reference documentation]() | |**Contribute to the SDK** | [Spring Data Azure Cosmos DB repo on GitHub](https://github.com/microsoft/spring-data-cosmosdb) | |**Spring Boot Starter**| [Azure Cosmos DB Spring Boot Starter client library for Java](https://github.com/MicrosoftDocs/azure-dev-docs/blob/master/articles/jav) |
-|**Spring TODO app sample with Azure Cosmos DB**| [End-to-end Java Experience in App Service Linux (Part 2)](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2) |
|**Developer's guide** | [Spring Data Azure Cosmos DB developer's guide](/azure/developer/java/spring-framework/how-to-guides-spring-data-cosmosdb) | |**Using Starter** | [How to use Spring Boot Starter with the Azure Cosmos DB for NoSQL](/azure/developer/jav) |
-|**Sample with Azure App Service** | [How to use Spring and Azure Cosmos DB with App Service on Linux](/azure/developer/java/spring-framework/configure-spring-app-with-cosmos-db-on-app-service-linux) <br> [TODO app sample](https://github.com/Azure-Samples/e2e-java-experience-in-app-service-linux-part-2.git) |
+|**Sample with Azure App Service** | [How to use Spring and Azure Cosmos DB with App Service on Linux](/azure/developer/java/spring-framework/configure-spring-app-with-cosmos-db-on-app-service-linux) |
## Release history
cosmos-db Sdk Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-observability.md
Distributed tracing is available in the following SDKs:
## Trace attributes
-Azure Cosmos DB traces follow the [OpenTelemetry database specification](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md) and also provide several custom attributes. You may see different attributes depending on the operation of your request, and these attributes are core attributes for all requests.
+Azure Cosmos DB traces follow the [OpenTelemetry database specification](https://github.com/open-telemetry/opentelemetry-specification) and also provide several custom attributes. You might see different attributes depending on the operation of your request; the following attributes are core attributes for all requests.
|Attribute |Type |Description |
|---|---|---|
Azure Cosmos DB traces follow the [OpenTelemetry database specification](https:/
| `db.cosmosdb.regions_contacted` | string | List of regions contacted in the Azure Cosmos DB account. |
| `user_agent.original` | string | Full user-agent string generated by the Azure Cosmos DB SDK. |
-For more information, see [Azure Cosmos DB custom attributes](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md#microsoft-azure-cosmos-db-attributes).
- ### Gather diagnostics
If you've configured logs in your trace provider, you can automatically get [diagnostics](./troubleshoot-dotnet-sdk.md#capture-diagnostics) for Azure Cosmos DB requests that failed or had high latency. These logs can help you diagnose failed and slow requests without requiring any custom code to capture them.
cosmos-db Social Media Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/social-media-apps.md
But what can you learn? A few easy examples include [sentiment analysis](https:/
Now that I got you hooked, you'll probably think you need a PhD in math to extract these patterns and information out of simple databases and files, but you'd be wrong.
-[Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/), part of the [Cortana Intelligence Suite](https://social.technet.microsoft.com/wiki/contents/articles/36688.introduction-to-cortana-intelligence-suite.aspx), is a fully managed cloud service that lets you create workflows using algorithms in a simple drag-and-drop interface, code your own algorithms in [R](https://en.wikipedia.org/wiki/R_\(programming_language\)), or use some of the already-built and ready to use APIs such as: [Text Analytics](https://gallery.cortanaanalytics.com/MachineLearningAPI/Text-Analytics-2), Content Moderator, or [Recommendations](https://gallery.azure.ai/Solution/Recommendations-Solution).
+[Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) is a fully managed cloud service that lets you create workflows using algorithms in a simple drag-and-drop interface, code your own algorithms in [R](https://en.wikipedia.org/wiki/R_\(programming_language\)), or use some of the prebuilt, ready-to-use APIs such as [Text Analytics](https://gallery.cortanaanalytics.com/MachineLearningAPI/Text-Analytics-2), Content Moderator, or [Recommendations](https://gallery.azure.ai/Solution/Recommendations-Solution).
To achieve any of these Machine Learning scenarios, you can use [Azure Data Lake](https://azure.microsoft.com/services/data-lake-store/) to ingest the information from different sources. You can also use [U-SQL](https://azure.microsoft.com/documentation/videos/data-lake-u-sql-query-execution/) to process the information and generate an output that can be processed by Azure Machine Learning.
cost-management-billing Capabilities Unit Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/finops/capabilities-unit-costs.md
description: This article helps you understand the measuring unit costs capabili
keywords: Previously updated : 06/23/2023 Last updated : 10/25/2023
Identify what a single unit is for your business, like a sale transaction for
Measuring unit costs provides insights into profitability and allows organizations to make data-driven business decisions regarding cloud investments. Unit economics is what ties the cloud to measurable business value.
+The ultimate goal of unit economics, as a derivative of activity-based costing methodology, is to factor in the whole picture of your business's cost. This article focuses on how you can factor your Microsoft Cloud costs into those efforts. As your FinOps practice matures, also consider the manual processes and costs outside of the cloud that you might need to include to calculate the most accurate cost per unit.
+ ## Before you begin

Before you can effectively measure unit costs, you need to familiarize yourself with [how you're charged for the services you use](https://azure.microsoft.com/pricing#product-pricing). Understanding the factors that contribute to costs helps you break down the usage and costs and map them to individual units. Cost-contributing factors include compute, storage, networking, and data transfer. How your service usage aligns with the various pricing models (for example, pay-as-you-go, reservations, and Azure Hybrid Benefit) also impacts your costs.
Measuring unit costs isn't a simple task. Unit economics requires a deep underst
- If you don't have telemetry in place, consider setting up [Application Insights](../../azure-monitor/app/app-insights-overview.md), which is an extension of Azure Monitor.
- Use [Azure Monitor metrics](../../azure-monitor/essentials/data-platform-metrics.md) to pull resource utilization data.
 - If you don't have telemetry, see what metrics are available in Azure Monitor that can map your application usage to the costs. You need anything that can break down the usage of your resources to give you an idea of what percentage of the billed usage was from one unit vs. another.
- - If you don't see the data you need in metrics, also check [logs and traces in Azure Monitor](../../azure-monitor/overview.md#data-platform). It may not be a direct correlation to usage but might be able to give you some indication of usage.
+ - If you don't see the data you need in metrics, also check [logs and traces in Azure Monitor](../../azure-monitor/overview.md#data-platform). It might not be a direct correlation to usage but might be able to give you some indication of usage.
- Use service-specific APIs to get detailed usage telemetry.
 - Every service uses Azure Monitor for a core set of logs and metrics. Some services also provide more detailed monitoring and utilization APIs to get more details than are available in Azure Monitor. Explore [Azure service documentation](../../index.yml) to find the right API for the services you use.
- Using the data you've collected, quantify the percentage of usage coming from each unit, as in the sketch below.
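For instance, here's a minimal sketch of pulling utilization data with the Azure CLI; the resource ID, metric name, and time grain are illustrative assumptions, not values from this article:

```azurecli
# Pull hourly average CPU utilization for one VM; you can then apportion billed
# usage across units based on ratios derived from metrics like this one.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM" \
  --metric "Percentage CPU" \
  --interval PT1H \
  --aggregation Average
```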
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
Previously updated : 03/16/2023 Last updated : 10/26/2023
If you're using Git integration with your data factory and have a CI/CD pipeline
>[!WARNING]
>If you do not use the latest versions of PowerShell and the Data Factory module, you may run into deserialization errors while running the commands.
-- **Integration runtimes and sharing**. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name and type of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.
+- **Integration runtimes and sharing**. Integration runtimes don't change often and are similar across all stages in your CI/CD. So Data Factory expects you to have the same name, type, and subtype of integration runtime across all stages of CI/CD. If you want to share integration runtimes across all stages, consider using a ternary factory just to contain the shared integration runtimes. You can use this shared factory in all of your environments as a linked integration runtime type.
>[!Note]
>The integration runtime sharing is only available for self-hosted integration runtimes. Azure-SSIS integration runtimes don't support sharing.
data-factory Self Hosted Integration Runtime Auto Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-auto-update.md
Title: Self-hosted integration runtime auto-update and expire notification
-description: Learn about self-hosted integration runtime auto-update and expire notification
+description: Learn about self-hosted integration runtime auto-update and expire notification.
Last updated 10/20/2023
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article will describe how to let self-hosted integration runtime auto-update to the latest version and how ADF manages the versions of self-hosted integration runtime.
+This article describes how to let self-hosted integration runtime auto-update to the latest version and how Azure Data Factory (ADF) manages the versions of self-hosted integration runtime.
## How to check your self-hosted integration runtime version
-You can check the version either in your self-hosted integration runtime client or in Azure data factory portal:
+You can check the version either in your self-hosted integration runtime client or in the ADF portal:
:::image type="content" source="./media/self-hosted-integration-runtime-auto-update/self-hosted-integration-runtime-version-client.png" alt-text="Screenshot that shows the version in self-hosted integration runtime client.":::

:::image type="content" source="./media/self-hosted-integration-runtime-auto-update/self-hosted-integration-runtime-version-portal.png" alt-text="Screenshot that shows the version in Azure data factory portal.":::

## Self-hosted Integration Runtime Auto-update
-Generally, when you install a self-hosted integration runtime in your local machine or an Azure VM, you have two options to manage the version of self-hosted integration runtime: auto-update or maintain manually. Typically, ADF releases two new versions of self-hosted integration runtime every month which includes new feature release, bug fix or enhancement. So we recommend users to update to newer version in order to get the latest feature and enhancement.
+Generally, when you install a self-hosted integration runtime on your local machine or an Azure VM, you have two options to manage the version of the self-hosted integration runtime: auto-update or manual maintenance. Typically, ADF releases two new versions of the self-hosted integration runtime every month, which include new features, bug fixes, and enhancements. So we recommend updating to the latest version.
-The most convenient way is to enable auto-update when you create or edit self-hosted integration runtime. The self-hosted integration runtime will be automatically update to newer version. You can also schedule the update at the most suitable time slot as you wish.
+The most convenient way is to enable auto-update when you create or edit a self-hosted integration runtime. The self-hosted integration runtime is then automatically updated to a newer version. You can also schedule the update for the time slot that suits you best.
:::image type="content" source="media/create-self-hosted-integration-runtime/shir-auto-update.png" alt-text="Enable auto-update":::
You can use this [PowerShell command](/powershell/module/az.datafactory/get-azda
> If you have multiple self-hosted integration runtime nodes, there is no downtime during auto-update. The auto-update happens in one node first while others are working on tasks. When the first node finishes the update, it will take over the remaining tasks while the other nodes are updating. If you only have one self-hosted integration runtime, then it has some downtime during the auto-update.

## Auto-update version vs latest version
-To ensure the stability of self-hosted integration runtime, although we release two versions, we will only push one version every month. So sometimes you will find that the auto-update version is the previous version of the actual latest version. If you want to get the latest version, you can go to [download center](https://www.microsoft.com/download/details.aspx?id=39717) and do so manually. Additionally, **auto-update** to a new version is managed internally. You cannot change it.
+To ensure the stability of the self-hosted integration runtime, although we release two versions, we'll only push one version every month. So sometimes the auto-update version is one version behind the actual latest version. If you want to get the latest version, you can go to the [download center](https://www.microsoft.com/download/details.aspx?id=39717) and do so manually. Additionally, **auto-update** to a new version is managed internally. You can't change it.
-The self-hosted integration runtime **Auto update** page in ADF portal shows the newer version if current version is old. When your self-hosted integration runtime is online, this version is auto-update version and will automatically update your self-hosted integration runtime in the scheduled time. But if your self-hosted integration runtime is offline, the page only shows the latest version.
+The self-hosted integration runtime **Auto update** page in the ADF portal shows the newer version if the current version is old. When your self-hosted integration runtime is online, this version is the auto-update version, and it automatically updates your self-hosted integration runtime at the scheduled time. But if your self-hosted integration runtime is offline, the page only shows the latest version.
-If you have multiple nodes, and for some reasons that some of them are not auto-updated successfully. Then these nodes roll back to the version which was the same across all nodes before auto-update.
+If you have multiple nodes and, for some reason, some of them aren't auto-updated successfully, those nodes roll back to the version that was the same across all nodes before the auto-update.
## Self-hosted Integration Runtime Expire Notification
-If you want to manually control which version of self-hosted integration runtime, you can disable the setting of auto-update and install it manually. Each version of self-hosted integration runtime will be expired in one year. The expiring message is shown in ADF portal and self-hosted integration runtime client **90 days** before expiration.
+If you want to manually control which version of the self-hosted integration runtime you run, you can disable the auto-update setting and install it manually. Each version of the self-hosted integration runtime expires in one year. The expiring message is shown in the ADF portal and the self-hosted integration runtime client **90 days** before expiration.
## Next steps

- Review [integration runtime concepts in Azure Data Factory](./concepts-integration-runtime.md).
- Learn how to [create a self-hosted integration runtime in the Azure portal](./create-self-hosted-integration-runtime.md).
databox-online Azure Stack Edge Gpu 2309 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2309-release-notes.md
Previously updated : 10/09/2023 Last updated : 10/26/2023
This article applies to the **Azure Stack Edge 2309** release, which maps to sof
## Supported update paths
-To apply the 2309 update, your device must be running version 2203 or later.
+To apply the 2309 update, your device must be running version 2303 or later.
- If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.*
+ - You can update to 2303 from 2207 or later, and then install 2309.
You can update to the latest version using the following update paths:
The 2309 release has the following new features and enhancements:
| No. | Feature | Issue | Workaround/comments |
|---|---|---|---|
|**1.**|AKS Update |The AKS Kubernetes update might fail if one of the AKS VMs isn't running. This issue might be seen in the 2-node cluster. |If the AKS update has failed, [connect to the PowerShell interface of the device](azure-stack-edge-gpu-connect-powershell-interface.md). Check the state of the Kubernetes VMs by running the `Get-VM` cmdlet. If a VM is off, run the `Start-VM` cmdlet to restart the VM. Once the Kubernetes VM is running, reapply the update. |
+|**2.**|Wi-Fi |Starting this release, Wi-Fi functionality for Azure Stack Edge Mini R has been deprecated. | |
## Known issues from previous releases
dedicated-hsm Tutorial Deploy Hsm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/tutorial-deploy-hsm-cli.md
After you configure your network, use these Azure CLI commands to provision your
There are some other commands that might be useful. Use the [az dedicated-hsm update](/cli/azure/dedicated-hsm#az-dedicated-hsm-update) command to update an HSM:

```azurecli
-az dedicated-hsm update --resource-group myRG –name hsm1
+az dedicated-hsm update --resource-group myRG --name hsm1
```

To delete an HSM, use the [az dedicated-hsm delete](/cli/azure/dedicated-hsm#az-dedicated-hsm-delete) command:

```azurecli
-az dedicated-hsm delete --resource-group myRG –name hsm1
+az dedicated-hsm delete --resource-group myRG --name hsm1
```

## Verifying the Deployment
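To sanity-check the result, one option is to read the HSM back with the CLI; this sketch reuses the resource names from the examples above:

```azurecli
# Confirm the HSM resource exists and inspect its properties.
az dedicated-hsm show --resource-group myRG --name hsm1

# Or list every dedicated HSM in the resource group at a glance.
az dedicated-hsm list --resource-group myRG --output table
```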
A design with two HSMs in a primary region addressing availability at the rack l
* [Physical Security](physical-security.md)
* [Networking](networking.md)
* [Supportability](supportability.md)
-* [Monitoring](monitoring.md)
+* [Monitoring](monitoring.md)
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
Last updated 09/27/2023
+ # Security recommendations - a reference guide

This article lists the recommendations you might see in Microsoft Defender for Cloud. The recommendations
impact on your secure score.
|Recommendation|Description & related policy|Severity|
|-|-|-|
|(Preview) Microsoft Defender for APIs should be enabled|Enable the Defender for APIs plan to discover and protect API resources against attacks and security misconfigurations. [Learn more](defender-for-apis-deploy.md)|High|
-(Preview) Azure API Management APIs should be onboarded to Defender for APIs. | Onboarding APIs to Defender for APIs requires compute and memory utilization on the Azure API Management service. Monitor performance of your Azure API Management service while onboarding APIs, and scale out your Azure API Management resources as needed.|High|
-(Preview) API endpoints that are unused should be disabled and removed from the Azure API Management service|As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused, and should be removed from the Azure API Management service. Keeping unused API endpoints might pose a security risk. These might be APIs that should have been deprecated from the Azure API Management service, but have accidentally been left active. Such APIs typically do not receive the most up-to-date security coverage.|Low|
-(Preview) API endpoints in Azure API Management should be authenticated|API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. For APIs published in Azure API Management, this recommendation assesses the execution of authentication via the Subscription Keys, JWT, and Client Certificate configured within Azure API Management. If none of these authentication mechanisms are executed during the API call, the API will receive this recommendation.|High
+|(Preview) Azure API Management APIs should be onboarded to Defender for APIs. | Onboarding APIs to Defender for APIs requires compute and memory utilization on the Azure API Management service. Monitor performance of your Azure API Management service while onboarding APIs, and scale out your Azure API Management resources as needed.|High|
+|(Preview) API endpoints that are unused should be disabled and removed from the Azure API Management service|As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused, and should be removed from the Azure API Management service. Keeping unused API endpoints might pose a security risk. These might be APIs that should have been deprecated from the Azure API Management service, but have accidentally been left active. Such APIs typically do not receive the most up-to-date security coverage.|Low|
+|(Preview) API endpoints in Azure API Management should be authenticated|API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. For APIs published in Azure API Management, this recommendation assesses authentication through verifying the presence of Azure API Management subscription keys for APIs or products where subscription is required, and the execution of policies for validating [JWT](/azure/api-management/validate-jwt-policy), [client certificates](/azure/api-management/validate-client-certificate-policy), and [Microsoft Entra](/azure/api-management/validate-azure-ad-token-policy) tokens. If none of these authentication mechanisms are executed during the API call, the API will receive this recommendation.|High|
## API management recommendations
To learn more about recommendations, see the following:
- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
- [Review your security recommendations](review-security-recommendations.md)+
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 10/09/2023 Last updated : 10/26/2023 # Important upcoming changes to Microsoft Defender for Cloud
Last updated 10/09/2023
> [!IMPORTANT] > The information on this page relates to pre-release products or features, which might be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.
-<!-- Please don't adjust this next line without getting approval from the Defender for Cloud documentation team. It is necessary for proper RSS functionality. -->
+<!-- Please don't adjust this next line without getting approval from the Defender for Cloud documentation team. It is necessary for proper RSS functionality. -->
On this page, you can learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows.
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change |
|--|--|--|
+| [Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management](#changes-to-how-microsoft-defender-for-clouds-costs-are-presented-in-microsoft-cost-management) | October 25, 2023 | November 2023 |
| [Four alerts are set to be deprecated](#four-alerts-are-set-to-be-deprecated) | October 23, 2023 | November 23, 2023 |
| [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | | June 2023|
| [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | | August 2023 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management
+
+**Announcement date: October 26, 2023**
+
+**Estimated date for change: November 2023**
+
+In November, there will be a change to how Microsoft Defender for Cloud's costs are presented in Cost Management and in subscription invoices.
+
+Costs will be presented for each protected resource instead of as an aggregation of all resources on the subscription.
+
+If a resource has a tag applied (tags are often used by organizations to perform financial chargeback processes), the tag will be added to the appropriate billing lines.
+ ## Four alerts are set to be deprecated

**Announcement date: October 23, 2023**
education-hub About Azure For Students https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/about-azure-for-students.md
# What is Azure for Students?
-[Azure for Students](https://azure.microsoft.com/free/students/) is a program offered by Microsoft Azure to provide students with access to various Azure cloud services and resources. Joining Azure for Students grant you $100 in Azure credits with no credit card required. This credit will be valid for 1 year and you can renew each year you're a student to get an addition $100. There is also a set of free services you can deploy when you join. Select versions of Azure Virtual Machines, Azure SQL Databases, Azure Blob Storage and more are all free with your subscription.
+[Azure for Students](https://azure.microsoft.com/free/students/) is a program offered by Microsoft Azure to provide students at Higher Education Institutions with access to various Azure cloud services and resources. Joining Azure for Students will grant you $100 in Azure credits with no credit card required. This credit is valid for one year, and you can renew each year you're an active student to get an additional $100. You also receive access to a set of free services you can deploy once you join. Select versions of Azure Virtual Machines, Azure SQL Databases, Azure Blob Storage, and more are all free with your subscription.
## Prerequisites
-To access Azure for Students, you need to be a verified higher-ed student. Verification of student status may be required.
+Azure for Students is available only to students who meet the following requirements:
+* You must affirm that you're age 18 or older and attend an accredited, degree-granting, two-year or four-year educational institution where you're a full-time student.
+* You must verify your academic status through your institution's email address.
+* See the Azure for Students Offer for detailed terms of use.
+* This offer isn't available for use in a massive open online course (MOOC) or in other professional trainings from for-profit organizations.
+This offer is limited to one Azure for Students subscription per eligible customer. It's nontransferable and can't be combined with any other offer, unless otherwise permitted by Microsoft.
## Next steps
+You need to verify yourself as an active student to become eligible for the offer. This verification takes place as part of the Azure for Students signup flow.
+
- [Access the Education Hub](access-education-hub.md)
- [Support options](educator-service-desk.md)
event-grid Monitor Mqtt Delivery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-mqtt-delivery-reference.md
Title: Azure Event Grid - Monitor data reference (MQTT)
-description: This article provides reference documentation for metrics and diagnostic logs for Azure Event Grid's MQTT of events.
+ Title: Azure Event Grid's MQTT broker feature - Monitor data reference
+description: This article provides reference documentation for metrics and diagnostic logs for Azure Event Grid's MQTT broker feature.
Last updated 05/23/2023
-# Monitor data reference for Azure Event Grid's MQTT delivery (Preview)
-This article provides a reference of log and metric data collected to analyze the performance and availability of Azure Event Grid's MQTT delivery.
+# Monitor data reference for Azure Event Grid's MQTT broker feature (Preview)
+This article provides a reference of log and metric data collected to analyze the performance and availability of MQTT broker.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
This article provides a reference of log and metric data collected to analyze th
| MQTT.SuccessfulPublishedMessages | MQTT: Successful Published Messages | Count | Total | The number of MQTT messages that were published successfully into the namespace. | Protocol, QoS |
| MQTT.FailedPublishedMessages | MQTT: Failed Published Messages | Count | Total | The number of MQTT messages that failed to be published into the namespace. | Protocol, QoS, Error |
| MQTT.SuccessfulDeliveredMessages | MQTT: Successful Delivered Messages | Count | Total | The number of messages delivered by the namespace, regardless of the acknowledgments from MQTT clients. There are no failures for this operation. | Protocol, QoS |
-| MQTT.Throughput | MQTT: Throughput | Count | Total | The total bytes published to or delivered by the namespace. | Direction |
-| MQTT.SuccessfulSubscriptionOperations | MQTT: Successful Subscription Operations | Count | Total | The number of successful subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within your subscription request that gets accepted by Event Grid. | OperationType, Protocol |
-| MQTT.FailedSubscriptionOperations | MQTT: Failed Subscription Operations | Count | Total | The number of failed subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within your subscription request that gets rejected by Event Grid. | OperationType, Protocol, Error |
+| MQTT.Throughput | MQTT: Throughput | Count | Total | The total bytes published to or delivered by the namespace. This metric includes all the MQTT packets that your MQTT clients send to the MQTT broker, regardless of their success. | Direction |
+| MQTT.SuccessfulSubscriptionOperations | MQTT: Successful Subscription Operations | Count | Total | The number of successful subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within your subscription request that gets accepted by MQTT broker. | OperationType, Protocol |
+| MQTT.FailedSubscriptionOperations | MQTT: Failed Subscription Operations | Count | Total | The number of failed subscription operations (Subscribe, Unsubscribe). This metric is incremented for every topic filter within your subscription request that gets rejected by MQTT broker. | OperationType, Protocol, Error |
| Mqtt.SuccessfulRoutedMessages | MQTT: Successful Routed Messages | Count | Total | The number of MQTT messages that were routed successfully from the namespace. | |
| Mqtt.FailedRoutedMessages | MQTT: Failed Routed Messages | Count | Total | The number of MQTT messages that failed to be routed from the namespace. | Error |
-| MQTT.Connections | MQTT: Active Connections | Count | Total | The number of active connections in the namespace. The value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time may not be reflected in the metric. | Protocol |
-| Mqtt.DroppedSessions | MQTT: Dropped Sessions | Count | Total | The number of dropped sessions in the namespace. The value for this metric is a point-in-time value. Sessions that were dropped immediately after that point-in-time may not be reflected in the metric. | DropReason |
+| MQTT.Connections | MQTT: Active Connections | Count | Total | The number of active connections in the namespace. The value for this metric is a point-in-time value. Connections that were active immediately after that point-in-time might not be reflected in the metric. | Protocol |
+| Mqtt.DroppedSessions | MQTT: Dropped Sessions | Count | Total | The number of dropped sessions in the namespace. The value for this metric is a point-in-time value. Sessions that were dropped immediately after that point-in-time might not be reflected in the metric. | DropReason |
This article provides a reference of log and metric data collected to analyze th
| Dimension | Values |
|---|---|
-| OperationType | The type of the operation. The available values include: <br><br>- Publish: PUBLISH requests sent from MQTT clients to Event Grid. <br>- Deliver: PUBLISH requests sent from Event Grid to MQTT clients. <br>- Subscribe: SUBSCRIBE requests by MQTT clients. <br>- Unsubscribe: UNSUBSCRIBE requests by MQTT clients. <br>- Connect: CONNECT requests by MQTT clients. |
+| OperationType | The type of the operation. The available values include: <br><br>- Publish: PUBLISH requests sent from MQTT clients to MQTT broker. <br>- Deliver: PUBLISH requests sent from MQTT broker to MQTT clients. <br>- Subscribe: SUBSCRIBE requests by MQTT clients. <br>- Unsubscribe: UNSUBSCRIBE requests by MQTT clients. <br>- Connect: CONNECT requests by MQTT clients. |
| Protocol | The protocol used in the operation. The available values include: <br><br>- MQTT3: MQTT v3.1.1 <br>- MQTT5: MQTT v5 <br>- MQTT3-WS: MQTT v3.1.1 over WebSocket <br>- MQTT5-WS: MQTT v5 over WebSocket |
| Result | Result of the operation. The available values include: <br><br>- Success <br>- ClientError <br>- ServiceError |
-| Error | Error occurred during the operation. The available values include: <br><br>-QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure <br>- AuthenticationError: a failure because of any authentication reasons. <br>- AuthorizationError: a failure because of any authorization reasons.<br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason.|
+| Error | Error occurred during the operation. The available values include: <br><br>- QuotaExceeded: the client exceeded one or more of the throttling limits that resulted in a failure. <br>- AuthenticationError: a failure because of any authentication reasons. For failed MQTT routed messages, this error indicates that the EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. This error doesn't apply to namespace topics since they don't need a permission to route MQTT messages. In that case, MQTT broker drops the MQTT message that was meant to be routed. <br>- AuthorizationError: a failure because of any authorization reasons. <br>- ClientError: the client sent a bad request or used one of the unsupported features that resulted in a failure. <br>- TopicNotFoundError: the custom topic that is configured to receive all the MQTT routed messages was deleted. This error doesn't apply to namespace topics since they can't be deleted if they're used as the destination for MQTT routed messages. In that case, MQTT broker drops the MQTT message that was meant to be routed. <br>- TooManyRequests: the number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. In that case, Event Grid retries to route the MQTT message. <br>- ServiceError: a failure because of an unexpected server error or for a server's operational reason. In that case for MQTT message routing, Event Grid retries to route the MQTT message. |
| QoS | Quality of service level. The available values are: 0, 1. |
-| Direction | The direction of the operation. The available values are: <br><br>- Inbound: inbound throughput to Event Grid. <br>- Outbound: outbound throughput from Event Grid. |
+| Direction | The direction of the operation. The available values are: <br><br>- Inbound: inbound throughput to MQTT broker. <br>- Outbound: outbound throughput from MQTT broker. |
| DropReason | The reason a session was dropped. The available values include: <br><br>- SessionExpiry: a persistent session has expired. <br>- TransientSession: a non-persistent session has expired. <br>- SessionOverflow: a client didn't connect during the lifespan of the session to receive queued QOS1 messages until the queue reached its maximum limit. <br>- AuthorizationError: a session drop because of any authorization reasons. |

## Next steps
event-grid Mqtt Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-access-control.md
Title: 'Access control for MQTT clients'
-description: 'Describes the main concepts for access control for MQTT clients in Azure Event Grid.'
+description: 'Describes the main concepts for access control for MQTT clients in Azure Event Grid's MQTT broker feature.'
Last updated 05/23/2023
# Access control for MQTT clients
-Access control enables you to manage the authorization of clients to publish or subscribe to topics, using a role-based access control model. Given the enormous scale of IoT environments, assigning permission for each client to each topic is incredibly tedious. Event GridΓÇÖs flexible access control tackles this scale challenge through grouping clients and topics into client groups and topic spaces.
+Access control enables you to manage the authorization of clients to publish or subscribe to topics, using a role-based access control model. Given the enormous scale of IoT environments, assigning permission for each client to each topic is incredibly tedious. Azure Event Grid's MQTT broker feature tackles this scale challenge through grouping clients and topics into client groups and topic spaces.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
Granular access control allows you to control the authorization of each client w
Even though a client group can have access to a certain topic space with all its topic templates, variables within topic templates enable you to control the authorization of each client within that client group to publish or subscribe to its own topic. For example, suppose the client group "machines" includes two clients: "machine1" and "machine2". By using variables, you can allow machine1 to publish its telemetry only on the MQTT topic "machines/machine1/telemetry" and machine2 to publish messages only on the MQTT topic "machines/machine2/telemetry".
-The variables represent either client authentication names or client attributes. During communication with Event Grid, each client would replace the variable in the MQTT topic with a substituted value. For example, the variable ${client.authenticationName} would be replaced with the authentication name of each client: machine1, machine2, etc. Event Grid would allow access only to the clients that have a substituted value that matches either their authentication name or the value of the specified attribute.
+The variables represent either client authentication names or client attributes. During communication with MQTT broker, each client would replace the variable in the MQTT topic with a substituted value. For example, the variable ${client.authenticationName} would be replaced with the authentication name of each client: machine1, machine2, etc. MQTT broker would allow access only to the clients that have a substituted value that matches either their authentication name or the value of the specified attribute.
For example, consider the following configuration:
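As a minimal sketch of such a configuration (assuming the `az eventgrid namespace topic-space` command group from the Event Grid CLI and illustrative resource and topic space names; the actual configuration referenced here might differ):

```azurecli
# Create a topic space whose template scopes each client to its own telemetry topic.
# Resource group, namespace, and topic space names are illustrative placeholders.
az eventgrid namespace topic-space create \
  --resource-group myRG \
  --namespace-name myNamespace \
  --name machine-telemetry \
  --topic-templates 'machines/${client.authenticationName}/telemetry'
```

With this template, a client whose authentication name is machine1 can publish or subscribe only under machines/machine1/telemetry.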
event-grid Mqtt Automotive Connectivity And Data Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-automotive-connectivity-and-data-solution.md
The *vehicle to cloud* dataflow is used to process telemetry data from the vehic
1. **Provisioning** information for vehicles and devices.
1. Initial vehicle **data collection** configuration based on market and business considerations.
1. Storage of initial **user consent** settings based on vehicle options and user acceptance.
-1. The vehicle publishes telemetry and events messages through an MQTT client with defined topics to the **Event Grid** *MQTT Broker* in the *vehicle messaging services*.
+1. The vehicle publishes telemetry and event messages through an MQTT client with defined topics to the **Azure Event Grid's MQTT broker feature** in the *vehicle messaging services*.
1. The **Event Grid** routes messages to different subscribers based on the topic and message attributes.
    1. Low priority messages that don't require immediate processing (for example, analytics messages) are routed directly to storage using an Event Hubs instance for buffering.
    1. High priority messages that require immediate processing (for example, status changes that must be visualized in a user-facing application) are routed to an Azure Function using an Event Hubs instance for buffering.
This dataflow covers the process to register and provision vehicles and devices
1. The **Factory System** commissions the vehicle device to the desired construction state. This may include firmware & software initial installation and configuration. As part of this process, the factory system will obtain and write the device *certificate*, created from the **Public Key Infrastructure** provider.
1. The **Factory System** registers the vehicle & device using the *Vehicle & Device Provisioning API*.
-1. The factory system triggers the **device provisioning client** to connect to the *device registration* and provision the device. The device retrieves connection information to the *MQTT Broker*.
-1. The *device registration* application creates the device identity in **Event Grid**.
-1. The factory system triggers the device to establish a connection to the **Event Grid** *MQTT Data Broker* for the first time.
+1. The factory system triggers the **device provisioning client** to connect to the *device registration* and provision the device. The device retrieves connection information to the *MQTT broker*.
+1. The *device registration* application creates the device identity with MQTT broker.
+1. The factory system triggers the device to establish a connection to the *MQTT broker* for the first time.
1. The MQTT broker authenticates the device using the *CA Root Certificate* and extracts the client information.
-1. The *MQTT broker* manages authorization for allowed topics using the **Event Grid** local registry.
+1. The *MQTT broker* manages authorization for allowed topics using the local registry.
1. In case of part replacement, the OEM **Dealer System** can trigger the registration of a new device.

> [!NOTE]
Each *vehicle messaging scale unit* supports a defined vehicle population (for e
1. The **application scale unit** subscribes applications to messages of interest. The common service handles subscription to the **vehicle messaging scale unit** components.
1. The vehicle uses the **device management service** to discover its assignment to a vehicle messaging scale unit.
1. If necessary, the vehicle is provisioned using the [Vehicle and device Provisioning](#vehicle-and-device-provisioning) workflow.
-1. The vehicle publishes a message to the **Event Grid** *MQTT broker*.
+1. The vehicle publishes a message to the *MQTT broker*.
1. **Event Grid** routes the message using the subscription information.
    1. For messages that don't require processing and claims check, they're routed to an ingress hub on the corresponding application scale unit.
    1. Messages that require processing are routed to the [D2C processing logic](#vehicle-to-cloud-messages) for decoding and authorization (user consent).
1. Applications consume events from their **app ingress** event hubs instance.
1. Applications publish messages for the vehicle.
- 1. Messages that don't require more processing are published to the **Event Grid** *MQTT Broker*.
+ 1. Messages that don't require more processing are published to the *MQTT broker*.
    1. Messages that require more processing, workflow control, and authorization are routed to the relevant [C2D Processing Logic](#cloud-to-vehicle-messages) over an Event Hubs instance.

### Components
event-grid Mqtt Client Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-authentication.md
Title: 'Azure Event Grid Namespace MQTT client authentication'
-description: 'Describes how MQTT clients are authenticated and mTLS connection is established when a client connects to MQTT service.'
+description: 'Describes how MQTT clients are authenticated and mTLS connection is established when a client connects to Azure Event Grid's MQTT broker feature.'
Last updated 05/23/2023
You can use one of the following fields to provide client authentication name in
## High level flow of how mutual transport layer security (mTLS) connection is established
-To establish a secure connection for MQTT support in Event Grid, you can use either MQTTS over port 8883 or MQTT over web sockets on port 443. It's important to note that only secure connections are supported. The following steps are to establish secure connection prior to the client authentication.
+To establish a secure connection with MQTT broker, you can use either MQTTS over port 8883 or MQTT over web sockets on port 443. It's important to note that only secure connections are supported. The following steps establish a secure connection prior to client authentication.
-1. The client initiates the handshake with Event Grid MQTT service. It sends a hello packet with supported TLS version, cipher suites.
+1. The client initiates the handshake with MQTT broker. It sends a hello packet with the supported TLS version and cipher suites.
2. Service presents its certificate to the client.
 - Service presents either a P-384 EC certificate or an RSA 2048 certificate depending on the ciphers in the client hello packet.
 - Service certificates are signed by a public certificate authority.
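To observe this certificate exchange yourself, one option is the standard OpenSSL client; this is only an illustrative check, and the namespace hostname placeholder below is an assumption:

```
# Open a TLS connection to the MQTTS port and print the certificate chain the service presents.
# Replace <namespace-hostname> with your Event Grid namespace's MQTT hostname.
openssl s_client -connect <namespace-hostname>:8883 -showcerts
```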
event-grid Mqtt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md
Title: 'Overview of MQTT Support in Azure Event Grid (preview)'
-description: 'Describes the main concepts for the MQTT Support in Azure Event Grid.'
+ Title: 'Overview of Azure Event Grid's MQTT broker feature (preview)'
+description: 'Describes the main concepts for Azure Event Grid's MQTT broker feature.'
Last updated 05/23/2023
-# Overview of the MQTT Support in Azure Event Grid (Preview)
+# Overview of Azure Event Grid's MQTT broker feature (Preview)
-Azure Event Grid enables your MQTT clients to communicate with each other and with Azure services, to support your Internet of Things (IoT) solutions.
+Azure Event Grid enables your MQTT clients to communicate with each other and with Azure services, to support your Internet of Things (IoT) solutions.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
-Event Grid's MQTT support enables you to accomplish the following scenarios:
+Azure Event Grid's MQTT broker feature enables you to accomplish the following scenarios:
- Ingest telemetry using a many-to-one messaging pattern. This pattern enables the application to offload the burden of managing the high number of connections with devices to Event Grid.
- Control your MQTT clients using the request-response (one-to-one) messaging pattern. This pattern enables any client to communicate with any other client without restrictions, regardless of the clients' roles.
Event GridΓÇÖs MQTT support enables you to accomplish the following scenarios:
You can find code samples that demonstrate these scenarios in [this repository.](https://github.com/Azure-Samples/MqttApplicationSamples)
-The MQTT support in Event Grid is ideal for the implementation of automotive and mobility scenarios, among others. See [the reference architecture](mqtt-automotive-connectivity-and-data-solution.md) to learn how to build secure and scalable solutions for connecting millions of vehicles to the cloud, using Azure's messaging and data analytics services.
+The MQTT broker is ideal for the implementation of automotive and mobility scenarios, among others. See [the reference architecture](mqtt-automotive-connectivity-and-data-solution.md) to learn how to build secure and scalable solutions for connecting millions of vehicles to the cloud, using Azure's messaging and data analytics services.
:::image type="content" source="media/overview/mqtt-messaging-high-res.png" alt-text="High-level diagram of Event Grid that shows bidirectional MQTT communication with publisher and subscriber clients." border="false":::

## Key concepts
-The following are a list of key concepts involved in MQTT messaging on Event Grid.
+The following is a list of key concepts involved in Azure Event Grid's MQTT broker feature.
### MQTT
-MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It has become the go-to communication standard for IoT scenarios due to efficiency, scalability, and reliability. Event Grid enables clients to publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets protocols. The following list shows some of the feature highlights of Event Grid's MQTT support:
+MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It has become the go-to communication standard for IoT scenarios due to efficiency, scalability, and reliability. MQTT broker enables clients to publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets protocols. The following list shows some of the feature highlights of MQTT broker:
- MQTT v5 features:
 - **User properties** allow you to add custom key-value pairs in the message header to provide more context about the message. For example, include the purpose or origin of the message so the receiver can handle the message efficiently.
 - **Request-response pattern** enables your clients to take advantage of the standard request-response asynchronous pattern, specifying the response topic and correlation ID in the request for the client to respond without prior configuration.
- - **Message expiry interval** allows you to declare to Event Grid when to disregard a message that is no longer relevant or valid. For example, disregard stale commands or alerts.
+ - **Message expiry interval** allows you to declare to MQTT broker when to disregard a message that is no longer relevant or valid. For example, disregard stale commands or alerts.
 - **Topic aliases** help your clients reduce the size of the topic field, making the data transfer less expensive.
 - **Maximum message size** allows your clients to control the maximum message size that they can handle from the server.
 - **Receive Maximum** allows your clients to control the message rate depending on their capabilities such as processing speed or storage capabilities.
 - **Clean start and session expiry** enable your clients to optimize the reliability and security of the session by preserving the client's subscription information and messages for a configurable time interval.
 - **Negative acknowledgments** allow your clients to efficiently react to different error codes.
 - **Server-sent disconnect packets** allow your clients to efficiently handle disconnects.
-- Event Grid is adding more MQTT v5 features in the future to align more with the MQTT specifications. The following items detail the current differences in Event Grid's MQTT support from the MQTT specifications: Will message, Retain flag, Message ordering and QoS 2 aren't supported.
+- MQTT broker is adding more MQTT v5 features in the future to align more with the MQTT specifications. The following items detail the current differences between features supported by MQTT broker and the MQTT v5 specifications: Will message, Retain flag, Message ordering and QoS 2 aren't supported.
- MQTT v3.1.1 features:
 - **Persistent sessions** ensure reliability by preserving the client's subscription information and messages when a client disconnects.
 - **QoS 0 and 1** provide your clients with control over the efficiency and reliability of the communication.
-- Event Grid is adding more MQTT v3.1.1 features in the future to align more with the MQTT specifications. The following items detail the current differences in Event Grid's MQTT support from the MQTT v3.1.1 specification: Will message, Retain flag, Message ordering and QoS 2 aren't supported.
+- MQTT broker is adding more MQTT v3.1.1 features in the future to align more with the MQTT specifications. The following items detail the current differences between features supported by MQTT broker and the MQTT v3.1.1 specification: Will message, Retain flag, Message ordering and QoS 2 aren't supported.
-[Learn more about Event Grid's MQTT support and current limitations.](mqtt-support.md)
+[Learn more about the MQTT broker and current limitations.](mqtt-support.md)
### Publish-Subscribe messaging model
The publish-subscribe messaging model provides scalable and asynchronous communication to clients. It enables clients to offload the burden of handling a high number of connections and messages to the service. Through the publish-subscribe messaging model, your clients can communicate efficiently using one-to-many, many-to-one, and one-to-one messaging patterns.
- The one-to-many messaging pattern enables clients to publish only one message that the service replicates for every interested client.
-- The many-to-one messaging pattern enables clients to offload the burden of managing the high number of connections to Event Grid.
+- The many-to-one messaging pattern enables clients to offload the burden of managing the high number of connections to MQTT broker.
- The one-to-one messaging pattern enables any client to communicate with any other client without restrictions, regardless of the clients' roles.

### Namespace
-Event Grid Namespace is a management container for the resources supporting the MQTT broker functionality, along with the resources supporting the [pull delivery functionality](pull-delivery-overview.md). Your MQTT client can connect to Event Grid and publish/subscribe to messages, while Event Grid authenticates your clients, authorizes publish/subscribe requests, and forwards messages to interested clients. Learn more about [the namespace concept.](mqtt-event-grid-namespace-terminology.md)
+Event Grid Namespace is a management container for the resources supporting the MQTT broker functionality, along with the resources supporting the [pull delivery functionality](pull-delivery-overview.md). Your MQTT client can connect to MQTT broker and publish/subscribe to messages, while MQTT broker authenticates your clients, authorizes publish/subscribe requests, and forwards messages to interested clients. Learn more about [the namespace concept.](mqtt-event-grid-namespace-terminology.md)
### Clients
IoT applications are software designed to interact with and process data from Io
### Client authentication
-Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to Event Grid, it needs to authenticate with Event Grid based on credentials stored in the identity registry. Event Grid supports X.509 certificate authentication that is the industry standard in IoT scenarios.[Learn more about MQTT client authentication.](mqtt-client-authentication.md)
+Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication, the industry authentication standard for IoT devices, and [Microsoft Entra](mqtt-client-azure-ad-token-and-rbac.md), Azure's authentication standard for applications. [Learn more about MQTT client authentication.](mqtt-client-authentication.md)
### Access control
Client Life Cycle events allow applications to react to events about the client
## Next steps
-Use the following articles to learn more about the MQTT support in Event Grid and its main concepts.
+Use the following articles to learn more about the MQTT broker and its main concepts.
### Quick Start
event-grid Mqtt Publish And Subscribe Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-cli.md
Title: 'Quickstart: Publish and subscribe on an MQTT topic using CLI'
-description: 'Quickstart guide to use Azure Event Grid MQTT and Azure CLI to publish and subscribe MQTT messages on a topic'
+description: 'Quickstart guide to use Azure Event Grid's MQTT broker feature and Azure CLI to publish and subscribe MQTT messages on a topic'
Last updated 05/23/2023
# Quickstart: Publish and subscribe to MQTT messages on Event Grid Namespace with Azure CLI (Preview)
-Azure Event Grid supports messaging using the MQTT protocol. Clients (both devices and cloud applications) can publish and subscribe MQTT messages over flexible hierarchical topics for scenarios such as high scale broadcast, and command & control.
+Azure Event Grid's MQTT broker feature supports messaging using the MQTT protocol. Clients (both devices and cloud applications) can publish and subscribe MQTT messages over flexible hierarchical topics for scenarios such as high-scale broadcast and command & control.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
event-grid Mqtt Publish And Subscribe Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md
Title: 'Quickstart: Publish and subscribe on an MQTT topic using portal'
-description: 'Quickstart guide to use Azure Event Grid MQTT and Azure portal to publish and subscribe MQTT messages on a topic.'
+description: 'Quickstart guide to use Azure Event Grid's MQTT broker feature and Azure portal to publish and subscribe MQTT messages on a topic.'
Last updated 05/23/2023
In this article, you use the Azure portal to do the following tasks:
-1. Create an Event Grid namespace with MQTT feature
+1. Create an Event Grid namespace and enable MQTT
2. Create sub resources such as clients, client groups, and topic spaces
3. Grant clients access to publish and subscribe to topic spaces
4. Publish and receive messages between clients
After a successful installation of Step, you should open a command prompt in you
> [!NOTE] > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see [Create a Namespace](create-view-manage-namespaces.md).
-1. After the deployment succeeds, select **Go to resource** to navigate to the Event Grid Namespace Overview page for your namespace.
+1. After the deployment succeeds, select **Go to resource** to navigate to the Event Grid Namespace Overview page for your namespace.
1. In the Overview page, you see that **MQTT** is in the **Disabled** state. To enable MQTT, select the **Disabled** link; it redirects you to the **Configuration** page.
1. On the **Configuration** page, select the **Enable MQTT** option, and then select **Apply** to apply the settings.
After a successful installation of Step, you should open a command prompt in you
1. The rest of the settings can be left at their predefined default values. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-1.png" alt-text="Screenshot showing client 1 configuration part 1 on MQTTX app." lightbox="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-1.png":::
-1. Select **Connect** to connect the client to the Event Grid MQTT service.
+1. Select **Connect** to connect the client to the MQTT broker.
1. Repeat the preceding steps to connect the second client, **client2**, with its corresponding authentication information as shown. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client2-configuration-1.png" alt-text="Screenshot showing client 2 configuration part 1 on MQTTX app." lightbox="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client2-configuration-1.png":::
event-grid Mqtt Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-routing.md
az resource create --resource-type Microsoft.EventGrid/namespaces --id /subscrip
For enrichments configuration instructions, go to [Enrichment CLI configuration](mqtt-routing-enrichment.md#azure-cli-configuration). +
+## MQTT message routing behavior
+While routing MQTT messages to namespace topics or custom topics, Event Grid provides durable delivery as it tries to deliver each message **at least once** immediately. If there's a failure, Event Grid either retries delivery or drops the message that was meant to be routed. Event Grid doesn't guarantee ordered delivery, so subscribers might receive messages out of order.
+
+The following table describes the behavior of MQTT message routing based on different errors.
+
+| Error| Error description | Behavior |
+| --| --|--|
+| TopicNotFoundError | The custom topic that is configured to receive all the MQTT routed messages was deleted. This error doesn't apply to namespace topics since they can't be deleted if they're used as the destination for MQTT routed messages. | Event Grid drops the MQTT message that was meant to be routed.|
+| AuthenticationError | The EventGrid Data Sender role for the custom topic configured as the destination for MQTT routed messages was deleted. This error doesn't apply to namespace topics since they don't require permission to route MQTT messages. | Event Grid drops the MQTT message that was meant to be routed.|
+| TooManyRequests | The number of MQTT routed messages per second exceeds the limit of the destination (namespace topic or custom topic) for MQTT routed messages. | Event Grid retries routing the MQTT message.|
+| ServiceError | An unexpected server error caused by a server-side operational issue. | Event Grid retries routing the MQTT message.|
+
+During retries, Event Grid uses an exponential backoff retry policy for MQTT message routing. Event Grid retries delivery on the following schedule on a best-effort basis:
+
+- 10 seconds
+- 30 seconds
+- 1 minute
+- 5 minutes
+- 10 minutes
+- 30 minutes
+- 1 hour
+- 3 hours
+- 6 hours
+- Every 12 hours
+
+If redelivery of a queued MQTT message succeeds, Event Grid attempts to remove the message from the retry queue on a best-effort basis, but duplicates might still be received.
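
Because routed messages are delivered at least once and duplicates can survive redelivery, consumers benefit from idempotent handling. Here's a minimal Python sketch, assuming the routed event carries a stable CloudEvents `id` attribute and a `process` function defined elsewhere; a production consumer would use a bounded store (for example, a cache with a TTL) instead of an unbounded set:

```python
# Hypothetical consumer-side deduplication for at-least-once delivery.
seen_ids: set[str] = set()

def handle_routed_event(event: dict) -> None:
    event_id = event["id"]  # CloudEvents 'id' attribute (assumed present)
    if event_id in seen_ids:
        return  # duplicate redelivery; ignore
    seen_ids.add(event_id)
    process(event)  # business logic, assumed defined elsewhere
```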
+ ## Next steps: Use the following articles to learn more about routing:
event-grid Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-support.md
Title: 'MQTT Features Support in Azure Event Grid'
-description: 'Describes the MQTT feature support in Azure Event Grid.'
+ Title: 'MQTT Features Supported by Azure Event Grid's MQTT broker feature'
+description: 'Describes the MQTT features supported by Azure Event Grid's MQTT broker feature.'
Last updated 05/23/2023
-# MQTT features support in Azure Event Grid
-MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It's efficient, scalable, and reliable, which made it the gold standard for communication in IoT scenarios. Event Grid supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. Event Grid also supports cross MQTT version (MQTT 3.1.1 and MQTT 5) communication.
+# MQTT features supported by Azure Event Grid's MQTT broker feature
+MQTT is a publish-subscribe messaging transport protocol that was designed for constrained environments. It's efficient, scalable, and reliable, which made it the gold standard for communication in IoT scenarios. MQTT broker supports clients that publish and subscribe to messages over MQTT v3.1.1, MQTT v3.1.1 over WebSockets, MQTT v5, and MQTT v5 over WebSockets. MQTT broker also supports cross MQTT version (MQTT 3.1.1 and MQTT 5) communication.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
MQTT v5 has introduced many improvements over MQTT v3.1.1 to deliver a more seam
Your MQTT clients *must* connect over TLS 1.2 or TLS 1.3. Attempts to skip this step fail with a connection error.
-While connecting to Event Grid, use the following ports during communication over MQTT:
+While connecting to MQTT broker, use the following ports during communication over MQTT:
- MQTT v3.1.1 and MQTT v5 on TCP port 8883 - MQTT v3.1.1 over WebSocket and MQTTv5 over WebSocket on TCP port 443.
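
As a sketch of these connection requirements, the following uses the paho-mqtt 1.x Python client (an assumption; any MQTT library works) with a placeholder hostname and X.509 client certificate, forcing TLS 1.2 or later on port 8883:

```python
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="client1-session1", protocol=mqtt.MQTTv311)
client.username_pw_set(username="client1")  # client authentication name

# Require TLS 1.2+ and present the client's X.509 certificate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.load_default_certs()
ctx.load_cert_chain(certfile="client1-cert.pem", keyfile="client1-key.pem")
client.tls_set_context(ctx)

client.connect("<namespace-mqtt-hostname>", port=8883)
client.loop_start()
```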
Learn more about [Client authentication](mqtt-client-authentication.md)
### Multi-session support
-Multi-session support enables your application MQTT clients to have more scalable and reliable implementation by connecting to Event Grid with multiple active sessions at the same time.
+Multi-session support enables your application MQTT clients to have a more scalable and reliable implementation by connecting to MQTT broker with multiple active sessions at the same time.
#### Namespace configuration
Before using this feature, you need to configure the namespace to allow multiple
#### Connection flow: The CONNECT packets for each session should include the following properties:-- Provide the Username property in the CONNECT packet to signify your client authentication name
+- Provide the Username property in the CONNECT packet to signify your client authentication name.
- Provide the ClientID property in the CONNECT packet to signify the session name, such that there are one or more values for the ClientID for each Username.
-For example, the following combinations of Username and ClientIds in the CONNECT packet enable the client "Mgmt-application" to connect to Event Grid over three independent sessions:
+For example, the following combinations of Username and ClientIds in the CONNECT packet enable the client "Mgmt-application" to connect to MQTT broker over three independent sessions:
- First Session: - Username: Mgmt-application
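
As a sketch of this connection flow (paho-mqtt 1.x assumed; the session names below are illustrative since the full list is abbreviated here), each session uses the same Username but a distinct ClientID:

```python
import paho.mqtt.client as mqtt

def open_session(session_name: str) -> mqtt.Client:
    session = mqtt.Client(client_id=session_name, protocol=mqtt.MQTTv311)
    session.username_pw_set(username="Mgmt-application")  # same authentication name
    session.tls_set()  # certificate configuration omitted for brevity
    session.connect("<namespace-mqtt-hostname>", port=8883)
    return session

# Three independent sessions for the same client identity.
sessions = [open_session(name) for name in
            ("Mgmt-application1", "Mgmt-application2", "Mgmt-application3")]
```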
For more information, see [How to establish multiple sessions for a single clien
#### Handling sessions: -- If a client tries to take over another client's active session by presenting its session name, its connection request is rejected with an unauthorized error. For example, if Client B tries to connect to session 123 that is assigned at that time for client A, Client B's connection request is rejected.-- If a client resource is deleted without ending its session, other clients can't use its session name until the session expires. For example, If client B creates a session with session name 123 then client B deleted, client A can't connect to session 123 until it expires.-
+- If a client tries to take over another client's active session by presenting its session name with a different authentication name, its connection request is rejected with an unauthorized error. For example, if Client B tries to connect to session 123 that is assigned at that time for client A, Client B's connection request is rejected. That being said, if the same client tries to reconnect with the same session name and the same authentication name, it's able to take over its existing session.
+- If a client resource is deleted without ending its session, other clients can't use its session name until the session expires. For example, if client B creates a session with session name 123 and then client B gets deleted, client A can't connect to session 123 until it expires.
+- The limit for the number of sessions per client applies to online and offline sessions at any point in time. For example, consider a namespace where the maximum client sessions per authentication name is set to 1. If client A connects with a persistent session 123 and then gets disconnected, client A won't be able to connect with a new session 456, since its session 123 is still active even if it's offline. Accordingly, we recommend that the same client always reconnects with the same static session names as opposed to generating a new session name with every reconnect.
## MQTT features
-Event Grid supports the following MQTT features:
+Azure Event Grid's MQTT broker feature supports the following MQTT features:
### Quality of service (QoS)
-Event Grid supports QoS 0 and 1, which define the guarantee of message delivery on PUBLISH and SUBSCRIBE packets between clients and Event Grid. QoS 0 guarantees at-most-once delivery; messages with QoS 0 aren't acknowledged by the subscriber nor get retransmitted by the publisher. QoS 1 guarantees at-least-once delivery; messages are acknowledged by the subscriber and get retransmitted by the publisher if they didn't get acknowledged. QoS enables your clients to control the efficiency and reliability of the communication.
+MQTT broker supports QoS 0 and 1, which define the guarantee of message delivery on PUBLISH and SUBSCRIBE packets between clients and MQTT broker. QoS 0 guarantees at-most-once delivery; messages with QoS 0 aren't acknowledged by the subscriber nor get retransmitted by the publisher. QoS 1 guarantees at-least-once delivery; messages are acknowledged by the subscriber and get retransmitted by the publisher if they didn't get acknowledged. QoS enables your clients to control the efficiency and reliability of the communication.
### Persistent sessions
-Event Grid supports persistent sessions for MQTT v3.1.1 such that Event Grid preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session through setting the cleanSession flag in the CONNECT packet to false.
+MQTT broker supports persistent sessions for MQTT v3.1.1 such that MQTT broker preserves information about a client's session in case of disconnections to ensure reliability of the communication. This information includes the client's subscriptions and missed/unacknowledged QoS 1 messages. Clients can configure a persistent session through setting the cleanSession flag in the CONNECT packet to false.
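
A minimal sketch of requesting a persistent session from an MQTT v3.1.1 client (paho-mqtt 1.x assumed; the host and names are placeholders):

```python
import paho.mqtt.client as mqtt

# cleanSession=False asks the broker to keep this session's subscriptions
# and undelivered QoS 1 messages across disconnects.
client = mqtt.Client(client_id="sensor1-session", clean_session=False,
                     protocol=mqtt.MQTTv311)
client.username_pw_set(username="sensor1")
client.tls_set()
client.connect("<namespace-mqtt-hostname>", port=8883)
client.subscribe("machines/machine1/command", qos=1)  # survives reconnects
```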
#### Clean start and session expiry
-MQTT v5 has introduced the clean start and session expiry features as an improvement over MQTT v3.1.1 in handling session persistence. Clean Start is a feature that allows a client to start a new session with Event Grid, discarding any previous session data. Session Expiry allows a client to inform Event Grid when an inactive session is considered expired and automatically removed. In the CONNECT packet, a client can set Clean Start flag to true and/or short session expiry interval for security reasons or to avoid any potential data conflicts that may have occurred during the previous session. A client can also set a clean start to false and/or long session expiry interval to ensure the reliability and efficiency of persistent sessions.
+MQTT v5 has introduced the clean start and session expiry features as an improvement over MQTT v3.1.1 in handling session persistence. Clean Start is a feature that allows a client to start a new session with MQTT broker, discarding any previous session data. Session Expiry allows a client to inform MQTT broker when an inactive session is considered expired and automatically removed. In the CONNECT packet, a client can set Clean Start flag to true and/or short session expiry interval for security reasons or to avoid any potential data conflicts that might have occurred during the previous session. A client can also set a clean start to false and/or long session expiry interval to ensure the reliability and efficiency of persistent sessions.
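
A minimal MQTT v5 sketch of these two settings, using paho-mqtt's Properties API (an assumption; host and names are placeholders as before):

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="sensor1-session", protocol=mqtt.MQTTv5)
client.username_pw_set(username="sensor1")
client.tls_set()

props = Properties(PacketTypes.CONNECT)
props.SessionExpiryInterval = 3600  # keep the session for 1 hour after disconnect

# clean_start=False resumes the existing session instead of discarding it.
client.connect("<namespace-mqtt-hostname>", port=8883,
               clean_start=False, properties=props)
```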
#### Maximum session expiry interval configuration You can configure the maximum session expiry interval allowed for all your clients connecting to the Event Grid namespace. For MQTT v3.1.1 clients, the configured limit is applied as the default session expiry interval for all persistent sessions. For MQTT v5 clients, the configured limit is applied as the maximum value for the Session Expiry Interval property in the CONNECT packet; any value that exceeds the limit will be adjusted. The default value for this namespace property is 1 hour and can be extended up to 8 hours. Use the following steps to configure the maximum session expiry interval in the Azure portal:
You can configure the maximum session expiry interval allowed for all your clien
:::image type="content" source="media/mqtt-support/mqtt-maximum-session-expiry-configuration.png" alt-text="screenshot for the maximum session expiry interval configuration." border="false"::: #### Session overflow
-Event Grid maintains a queue of messages for each active MQTT session that isn't connected, until the client connects with Event Grid again to receive the messages in the queue. If a client doesn't connect to receive the queued QOS1 messages, the session queue starts accumulating the messages until it reaches its limit: 100 messages or 1 MB. Once the queue reaches its limit during the lifespan of the session, the session is terminated.
+MQTT broker maintains a queue of messages for each active MQTT session that isn't connected, until the client connects with MQTT broker again to receive the messages in the queue. If a client doesn't connect to receive the queued QoS 1 messages, the session queue starts accumulating the messages until it reaches its limit: 100 messages or 1 MB. Once the queue reaches its limit during the lifespan of the session, the session is terminated.
### User properties
-Event Grid supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information."
+MQTT broker supports user properties on MQTT v5 PUBLISH packets that allow you to add custom key-value pairs in the message header to provide more context about the message. The use cases for user properties are versatile. You can use this feature to include the purpose or origin of the message so the receiver can handle the message without parsing the payload, saving computing resources. For example, a message with a user property indicating its purpose as a "warning" could trigger different handling logic than one with the purpose of "information."
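
A sketch of attaching user properties to a PUBLISH packet (paho-mqtt assumed; the topic and key names are illustrative, continuing from a connected MQTT v5 `client`):

```python
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

props = Properties(PacketTypes.PUBLISH)
props.UserProperty = [("purpose", "warning"), ("origin", "sensor1")]

# The receiver can branch on the 'purpose' property without parsing the payload.
client.publish("machines/machine1/alert", payload="overheating",
               qos=1, properties=props)
```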
### Request-response pattern MQTTv5 introduced fields in the MQTT PUBLISH packet header that provide context for the response message in the request-response pattern. These fields include a response topic and a correlation ID that the responder can use in the response without prior configuration. The response information enables more efficient communication for the standard request-response pattern that is used in command-and-control scenarios.
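
A sketch of both sides of the pattern (paho-mqtt assumed; `requester` and `responder` are connected MQTT v5 clients, topics illustrative):

```python
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

# Requester: publish a command with a response topic and a correlation ID.
req_props = Properties(PacketTypes.PUBLISH)
req_props.ResponseTopic = "vehicles/vehicle1/response"
req_props.CorrelationData = b"cmd-42"
requester.publish("vehicles/vehicle1/command/unlock", payload="unlock",
                  qos=1, properties=req_props)

# Responder: reply on the supplied topic, echoing the correlation data
# (assumes the incoming request set both properties).
def on_message(client, userdata, msg):
    resp_props = Properties(PacketTypes.PUBLISH)
    resp_props.CorrelationData = msg.properties.CorrelationData
    client.publish(msg.properties.ResponseTopic, payload="unlocked",
                   qos=1, properties=resp_props)

responder.on_message = on_message
```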
MQTTv5 introduced fields in the MQTT PUBLISH packet header that provide context
:::image type="content" source="media/mqtt-support/mqtt-request-response-high-res.png" alt-text="Diagram of the request-response pattern example." border="false"::: ### Message expiry interval:
-In MQTT v5, message expiry interval allows messages to have a configurable lifespan. The message expiry interval is defined as the time interval between the time a message is published to Event Grid and the time when the Event Grid needs to discard the message if it hasn't been delivered. This feature is useful in scenarios where messages are only valid for a certain amount of time, such as time-sensitive commands, real-time data streaming, or security alerts. By setting a message expiry interval, Event Grid can automatically remove outdated messages, ensuring that only relevant information is available to subscribers. If a message's expiry interval is set to zero, it means the message should never expire.
+In MQTT v5, message expiry interval allows messages to have a configurable lifespan. The message expiry interval is defined as the time interval between the time a message is published to MQTT broker and the time when the MQTT broker needs to discard the message if it hasn't been delivered. This feature is useful in scenarios where messages are only valid for a certain amount of time, such as time-sensitive commands, real-time data streaming, or security alerts. By setting a message expiry interval, MQTT broker can automatically remove outdated messages, ensuring that only relevant information is available to subscribers. If a message's expiry interval is set to zero, it means the message should never expire.
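
A sketch of a time-sensitive command that the broker can discard after 30 seconds (paho-mqtt and a connected MQTT v5 `client` assumed):

```python
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

props = Properties(PacketTypes.PUBLISH)
props.MessageExpiryInterval = 30  # seconds; per the description above, 0 means never expire

client.publish("machines/machine1/command/stop", payload="stop",
               qos=1, properties=props)
```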
### Topic aliases:
-In MQTT v5, topic aliases allow a client to use a shorter alias in place of the full topic name in the published message. Event Grid maintains a mapping between the topic alias and the actual topic name. This feature can save network bandwidth and reduce the size of the message header, particularly for topics with long names. It's useful in scenarios where the same topic is repeatedly published in multiple messages, such as in sensor networks. Event Grid supports up to 10 topic aliases. A client can use a Topic Alias field in the PUBLISH packet to replace the full topic name with the corresponding alias.
+In MQTT v5, topic aliases allow a client to use a shorter alias in place of the full topic name in the published message. MQTT broker maintains a mapping between the topic alias and the actual topic name. This feature can save network bandwidth and reduce the size of the message header, particularly for topics with long names. It's useful in scenarios where the same topic is repeatedly published in multiple messages, such as in sensor networks. MQTT broker supports up to 10 topic aliases. A client can use a Topic Alias field in the PUBLISH packet to replace the full topic name with the corresponding alias.
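
A sketch of binding an alias to a long topic name (paho-mqtt and a connected MQTT v5 `client` assumed; the topic is illustrative):

```python
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

props = Properties(PacketTypes.PUBLISH)
props.TopicAlias = 1  # bind alias 1 to the full topic name below

client.publish("factories/factory1/lines/line4/machines/machine1/temp",
               payload="71", qos=1, properties=props)
# Once the alias is established, subsequent PUBLISH packets can carry an
# empty topic name with TopicAlias = 1 instead of the full topic name.
```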
:::image type="content" source="media/mqtt-support/mqtt-topic-alias-high-res.png" alt-text="Diagram of the topic alias example." border="false"::: ### Flow control
-In MQTT v5, flow control refers to the mechanism for managing the rate and size of messages that a client can handle. Flow control can be configured by setting the Maximum Packet Size and Receive Maximum parameters in the CONNECT packet. The Receive Maximum parameter allows the client to limit the number of messages sent by the broker to the number of messages that the client is able to handle. The Maximum Packet Size parameter defines the maximum size of packets that the client can receive. Event Grid has a message size limit of 512 KiB. This feature ensures reliability and stability of the communication for constrained devices with limited processing speed or storage capabilities.
+In MQTT v5, flow control refers to the mechanism for managing the rate and size of messages that a client can handle. Flow control can be configured by setting the Maximum Packet Size and Receive Maximum parameters in the CONNECT packet. The Receive Maximum parameter allows the client to limit the number of messages sent by the broker to the number of messages that the client is able to handle. The Maximum Packet Size parameter defines the maximum size of packets that the client can receive. MQTT broker has a message size limit of 512 KiB. This feature ensures reliability and stability of the communication for constrained devices with limited processing speed or storage capabilities.
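
A sketch of setting both flow-control parameters on CONNECT for a constrained device (paho-mqtt assumed; the values are illustrative):

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="sensor1-session", protocol=mqtt.MQTTv5)
client.username_pw_set(username="sensor1")
client.tls_set()

props = Properties(PacketTypes.CONNECT)
props.ReceiveMaximum = 5         # at most 5 unacknowledged QoS 1 messages in flight
props.MaximumPacketSize = 65536  # reject packets larger than 64 KiB

client.connect("<namespace-mqtt-hostname>", port=8883, properties=props)
```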
### Negative acknowledgments and server-initiated disconnect packet
-For MQTT v5, Event Grid is able to send negative acknowledgments (NACKs) and server-initiated disconnect packets that provide the client with more information about failures for message delivery or connection. These features help the client diagnose the reason behind a failure and take appropriate mitigating actions. Event Grid uses the reason codes that are defined in the [MQTT v5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html)
+For MQTT v5, MQTT broker is able to send negative acknowledgments (NACKs) and server-initiated disconnect packets that provide the client with more information about failures for message delivery or connection. These features help the client diagnose the reason behind a failure and take appropriate mitigating actions. MQTT broker uses the reason codes that are defined in the [MQTT v5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html).
## Current limitations
-Event Grid is adding more MQTT v5 and MQTT v3.1.1 features in the future to align more with the MQTT specifications. The following list details the current differences in Event Grid's MQTT support from the MQTT specifications:
+MQTT broker is adding more MQTT v5 and MQTT v3.1.1 features in the future to align more with the MQTT specifications. The following list details the current differences between features supported by the MQTT broker and the MQTT specifications:
### MQTTv5 current limitations
Learn more about MQTT:
- [MQTT v3.1.1 Specification](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) - [MQTT v5 Specification](https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html)
-Learn more about Event Grid's MQTT support:
+Learn more about MQTT broker:
- [Client authentication](mqtt-client-authentication.md) - [How to establish multiple sessions for a single client](mqtt-establishing-multiple-sessions-per-client.md)
event-grid Mqtt Topic Spaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-topic-spaces.md
Last updated 05/23/2023
-# Topic Spaces in Azure Event Grid
+# Topic Spaces in Azure Event Grid's MQTT broker feature
A topic space represents multiple topics through a set of topic templates. Topic templates are an extension of MQTT filters that support variables, along with the MQTT wildcards. Each topic space represents the MQTT topics that the same set of clients need to use to communicate. [!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
-Topic spaces are used to simplify access control management by enabling you to scope publish or subscribe access for a client group, to a group of topics at once instead of managing access for each individual topic. To publish or subscribe to any MQTT topic, you need to:
+Topic spaces are used to simplify access control management by enabling you to grant publish or subscribe access to a group of topics at once instead of managing access for each individual topic. To publish or subscribe to any MQTT topic, you need to:
1. Create a **client** resource for each client that needs to communicate over MQTT. 2. Create a **client group** that includes the clients that need access to publish or subscribe on the same MQTT topic.
Topic spaces are used to simplify access control management by enabling you to s
An [MQTT topic filter](http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) is an MQTT topic that can include wildcards for one or more of its segments, allowing it to match multiple MQTT topics. It's used to simplify subscription requests as one topic filter can match multiple topics.
-Event Grid supports all the MQTT wildcards defined by the [MQTT specification](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) as follows:
+MQTT broker supports all the MQTT wildcards defined by the [MQTT specification](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html) as follows:
- +: which matches a single segment. - For example, topic filter: "machines/+/alert" matches the following topics:
Event Grid supports all the MQTT wildcards defined by the [MQTT specification](h
- machines/humidity - machines/temp/alert etc.
-For more information about wildcards, see [Topic Wildcards in the MQTT spc](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html).
+For more information about wildcards, see [Topic Wildcards in the MQTT spec](https://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html).
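
To make the wildcard semantics concrete, here's a simplified Python matcher; it ignores broker-specific rules such as the special treatment of topics starting with `$`:

```python
def topic_matches(topic_filter: str, topic: str) -> bool:
    """Return True if 'topic' matches an MQTT filter with + and # wildcards."""
    filter_segments = topic_filter.split("/")
    topic_segments = topic.split("/")
    for i, segment in enumerate(filter_segments):
        if segment == "#":                 # multi-level wildcard: matches the rest
            return True
        if i >= len(topic_segments):
            return False
        if segment not in ("+", topic_segments[i]):
            return False                   # '+' matches exactly one segment
    return len(filter_segments) == len(topic_segments)

assert topic_matches("machines/+/alert", "machines/machine1/alert")
assert topic_matches("machines/#", "machines/temp/alert")
assert not topic_matches("machines/+/alert", "machines/machine1/temp")
```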
## Topic templates
Topic Spaces can group up to 10 topic templates. Topic templates support MQTT wi
**Note:**
+- Topics that start with $ are reserved for internal use.
- A variable can represent a portion of a segment or an entire segment but can't cover more than one segment. For example, a topic template that includes "machines/${client.authenticationName|.factory1}/temp" matches topics "machines/machine1.factory1/temp", "machines/machine2.factory1/temp", etc. - Topic templates use the special characters \$ and |, which need to be escaped differently based on the shell being used. For example, \$ can be escaped with ${dollar}, as in vehicles/${dollar}telemetry/#. If you're using PowerShell, you can escape these special characters as shown in the following examples:
event-grid Mqtt Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-troubleshoot-errors.md
Title: Azure Event Grid namespace MQTT functionality - Troubleshooting guide
-description: This article provides guidance on how to troubleshoot MQTT functionality related issues.
+ Title: Azure Event Grid's MQTT broker feature - Troubleshooting guide
+description: This article provides guidance on how to troubleshoot MQTT broker related issues.
Last updated 05/23/2023
-# Guide to troubleshoot issues with Event Grid namespace MQTT functionality
+# Guide to troubleshoot issues with Azure Event GridΓÇÖs MQTT broker feature
-This guide provides you with information on what you can do to troubleshoot things before you submit a support ticket.
+This guide provides you with information on what you can do to troubleshoot things before you submit a support ticket.
[!INCLUDE [mqtt-preview-note](./includes/mqtt-preview-note.md)]
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Azure Event Grid is used at different stages of data pipelines to achieve a dive
**MQTT messaging (preview)**. IoT devices and applications can communicate with each other over MQTT. Event Grid can also be used to route MQTT messages to Azure services or custom endpoints for further data analysis, visualization, or storage. This integration with Azure services enables you to build data pipelines that start with data ingestion from your IoT devices.
-**Data distribution using push and pull delivery (preview) modes**. At any point in a data pipeline, HTTP applications can consume messages using push or pull APIs. The source of the data may include MQTT clients' data, but also includes the following data sources that send their events over HTTP:
+**Data distribution using push and pull delivery (preview) modes**. At any point in a data pipeline, HTTP applications can consume messages using push or pull APIs. The source of the data might include MQTT clients' data, but also includes the following data sources that send their events over HTTP:
- Azure services - Your custom applications
Event Grid offers a rich mixture of features. These features include:
- **Publish-subscribe messaging model** - communicate efficiently using one-to-many, many-to-one, and one-to-one messaging patterns. - **[Built-in cloud integration](mqtt-routing.md)** - route your MQTT messages to Azure services or custom webhooks for further processing. - **Flexible and fine-grained [access control model](mqtt-access-control.md)** - group clients and topic to simplify access control management, and use the variable support in topic templates for a fine-grained access control.-- **X.509 certificate [authentication](mqtt-client-authentication.md)** - authenticate your devices the standard mechanism for device authentication in the IoT industry.
+- **X.509 certificate [authentication](mqtt-client-authentication.md)** - authenticate your devices using the IoT industry's standard mechanism for authentication.
+- **[AAD authentication](mqtt-client-azure-ad-token-and-rbac.md)** - authenticate your applications using Azure's standard mechanism for authentication.
- **TLS 1.2 and TLS 1.3 support** - secure your client communication using robust encryption protocols. - **Multi-session support** - connect your applications with multiple active sessions to ensure reliability and scalability. - **MQTT over WebSockets** - enable connectivity for clients in firewall-restricted environments.
Your own service or application publishes events to Event Grid that subscriber a
#### Receive events from partner (SaaS providers) :::image type="content" source="media/overview/receive-saas-providers.png" alt-text="Diagram that shows an external partner application publishing event to Event Grid using HTTP. Event Grid sends those events to webhooks or Azure services." lightbox="media/overview/receive-saas-providers-high-res.png" border="false":::
-A multi-tenant SaaS provider or platform can publish their events to Event Grid through a feature called [Partner Events](partner-events-overview.md). You can [subscribe to those events](subscribe-to-partner-events.md) and automate tasks, for example. Events from the following partners are currently available:
+A multitenant SaaS provider or platform can publish their events to Event Grid through a feature called [Partner Events](partner-events-overview.md). You can [subscribe to those events](subscribe-to-partner-events.md) and automate tasks, for example. Events from the following partners are currently available:
- [Auth0](auth0-overview.md)-- [Microsoft Graph API](subscribe-to-graph-api-events.md). Through Microsoft Graph API you can get events from [Microsoft Entra ID](azure-active-directory-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), Conversations, security alerts, and Universal Print.
+- [Microsoft Graph API](subscribe-to-graph-api-events.md). Through Microsoft Graph API you can get events from [Azure AD](azure-active-directory-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), Conversations, security alerts, and Universal Print.
- [Tribal Group](subscribe-to-tribal-group-events.md) - [SAP](subscribe-to-sap-events.md)
You can configure **private links** to connect to Azure Event Grid to **publish
## How much does Event Grid cost?
-Azure Event Grid offers two tiers and uses a pay-per-use pricing model. For details on pricing, see [Azure Event Grid pricing](https://azure.microsoft.com/pricing/details/event-grid/). To learn more about the capabilities for each tier, see [Choose the right Event Grid tier](choose-right-tier.md).
+Azure Event Grid uses a pay-per-event pricing model. You only pay for what you use. For the push-style delivery that is generally available, the first 100,000 operations per month are free. Examples of operations include event publication, event delivery, delivery attempts, event filter evaluations that refer to event data properties (sometimes referred to as Advanced Filters), and events sent to a dead letter location. For details, see the [pricing page](https://azure.microsoft.com/pricing/details/event-grid/).
+
+Event Grid operations involving Namespaces and its resources, including MQTT and pull HTTP delivery operations, are in public preview and are available at no charge today.
## Next steps
Azure Event Grid offers two tiers and uses a pay-per-use pricing model. For deta
### Data distribution using pull or push delivery -- [Pull delivery overview](pull-delivery-overview.md)-- [Push delivery overview](push-delivery-overview.md)
+- [Pull delivery overview](pull-delivery-overview.md).
+- [Push delivery overview](push-delivery-overview.md).
- [Concepts](concepts.md)-- Quickstart: [Publish and subscribe to app events using namespace topics](publish-events-using-namespace-topics.md)
+- Quickstart: [Publish and subscribe to app events using namespace topics](publish-events-using-namespace-topics.md).
governance Exemption Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/exemption-structure.md
The Azure Policy exemptions feature is used to _exempt_ a resource hierarchy or an individual resource from evaluation of initiatives or definitions. Resources that are _exempt_ count toward overall compliance, but can't be evaluated or have a temporary waiver. For more information,
-see [Understand scope in Azure Policy](./scope.md). Azure Policy exemptions only work with
-[Resource Manager modes](./definition-structure.md#resource-manager-modes) and don't work with
+see [Understand applicability in Azure Policy](./policy-applicability.md). Azure Policy exemptions work with
+[Resource Manager modes](./definition-structure.md#resource-manager-modes), Microsoft.Kubernetes.Data, Microsoft.KeyVault.Data, and Microsoft.Network.Data, and don't work with the other
[Resource Provider modes](./definition-structure.md#resource-provider-modes).
-> [!NOTE]
-> By design, Azure Policy exempts all resources under the `Microsoft.Resources` resource provider (RP) from
-policy evaluation with the exception of subscriptions and resource groups, which can be evaluated.
- You use JavaScript Object Notation (JSON) to create a policy exemption. The policy exemption contains elements for: - [display name](#display-name-and-description)
You use JavaScript Object Notation (JSON) to create a policy exemption. The poli
- [resource selectors](#resource-selectors-preview) - [assignment scope validation](#assignment-scope-validation-preview)
-> [!NOTE]
-> A policy exemption is created as a child object on the resource hierarchy or the individual
-> resource granted the exemption, so the target isn't included in the exemption definition.
-> If the parent resource to which the exemption applies is removed, then the exemption
-> is removed as well.
+
+A policy exemption is created as a child object on the resource hierarchy or the individual resource granted the exemption. Exemptions cannot be created at the Resource Provider mode component level.
+If the parent resource to which the exemption applies is removed, then the exemption is removed as well.
For example, the following JSON shows a policy exemption in the **waiver** category of a resource to an initiative assignment named `resourceShouldBeCompliantInit`. The resource is _exempt_ from only two of the policy definitions in the initiative, the `customOrgPolicy` custom policy definition
-(reference `requiredTags`) and the **Allowed locations** built-in policy definition (ID:
-`e56962a6-4747-49cd-b67b-bf8b01975c4c`, reference `allowedLocations`):
+(`policyDefinitionReferenceId`: `requiredTags`) and the **Allowed locations** built-in policy definition (`policyDefinitionReferenceId`: `allowedLocations`):
```json {
two of the policy definitions in the initiative, the `customOrgPolicy` custom po
} ```
-Snippet of the related initiative with the matching `policyDefinitionReferenceIds` used by the
-policy exemption:
-
-```json
-"policyDefinitions": [
- {
- "policyDefinitionId": "/subscriptions/{mySubscriptionID}/providers/Microsoft.Authorization/policyDefinitions/customOrgPolicy",
- "policyDefinitionReferenceId": "requiredTags",
- "parameters": {
- "reqTags": {
- "value": "[parameters('init_reqTags')]"
- }
- }
- },
- {
- "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/e56962a6-4747-49cd-b67b-bf8b01975c4c",
- "policyDefinitionReferenceId": "allowedLocations",
- "parameters": {
- "listOfAllowedLocations": {
- "value": "[parameters('init_listOfAllowedLocations')]"
- }
- }
- }
-]
-```
- ## Display name and description You use **displayName** and **description** to identify the policy exemption and provide context for
its use with the specific resource. **displayName** has a maximum length of _128
## Metadata The **metadata** property allows creating any child property needed for storing relevant
-information. In the example above, properties **requestedBy**, **approvedBy**, **approvedOn**, and
+information. In the example, properties **requestedBy**, **approvedBy**, **approvedOn**, and
**ticketRef** contain customer values to provide information on who requested the exemption, who approved it and when, and an internal tracking ticket for the request. These **metadata** properties are examples, but they aren't required and **metadata** isn't limited to these child properties.
format `yyyy-MM-ddTHH:mm:ss.fffffffZ`.
## Resource selectors (preview)
-Exemptions support an optional property `resourceSelectors`. This property works the same way in exemptions as it does in assignments, allowing for gradual rollout or rollback of an _exemption_ to certain subsets of resources in a controlled manner based on resource type, resource location, or whether the resource has a location. More details about how to use resource selectors can be found in the [assignment structure](assignment-structure.md#resource-selectors-preview). Below is an example exemption JSON which leverages resource selectors. In this example, only resources in `westcentralus` will be exempt from the policy assignment:
+Exemptions support an optional property `resourceSelectors`. This property works the same way in exemptions as it does in assignments, allowing for gradual rollout or rollback of an _exemption_ to certain subsets of resources in a controlled manner based on resource type, resource location, or whether the resource has a location. More details about how to use resource selectors can be found in the [assignment structure](assignment-structure.md#resource-selectors-preview). Here is an example exemption JSON, which uses resource selectors. In this example, only resources in `westcentralus` will be exempt from the policy assignment:
```json {
Exemptions support an optional property `resourceSelectors`. This property works
} ```
-Regions can be added or removed from the `resourceLocation` list in the example above. Resource selectors allow for greater flexibility of where and how exemptions can be created and managed.
+Regions can be added or removed from the `resourceLocation` list in the example. Resource selectors allow for greater flexibility of where and how exemptions can be created and managed.
## Assignment scope validation (preview)
requiring the `Microsoft.Authorization/policyExemptions/write` operation on the
or individual resource, the creator of an exemption must have the `exempt/Action` verb on the target assignment.
+## Exemption creation and management
+
+Exemptions are recommended for time-bound or specific scenarios where a resource or resource hierarchy should still be tracked and would otherwise be evaluated, but there's a specific reason it shouldn't be assessed for compliance. For example, suppose an environment has the built-in definition `Storage accounts should disable public network access` (ID: `b2982f36-99f2-4db5-8eff-283140c09693`) assigned with _effect_ set to _audit_. Upon compliance assessment, resource "StorageAcc1" is non-compliant, but StorageAcc1 must have public network access enabled for business purposes. At that time, a request should be submitted to create an exemption resource that targets StorageAcc1. Once the exemption is created, StorageAcc1 is shown as _exempt_ in compliance review.
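
As a sketch of automating that request, the following calls the policy exemptions REST API directly (the azure-identity and requests packages are assumed; the scope, assignment ID, and API version are placeholders to adapt to your environment):

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Placeholder scope of the resource to exempt and the target assignment.
scope = ("/subscriptions/<subscription-id>/resourceGroups/<rg>"
         "/providers/Microsoft.Storage/storageAccounts/storageacc1")
url = (f"https://management.azure.com{scope}"
       "/providers/Microsoft.Authorization/policyExemptions/storageAcc1Waiver"
       "?api-version=2022-07-01-preview")

body = {
    "properties": {
        "policyAssignmentId": "/subscriptions/<subscription-id>/providers"
                              "/Microsoft.Authorization/policyAssignments/<assignment-name>",
        "exemptionCategory": "Waiver",
        "expiresOn": "2024-06-30T00:00:00Z",
        "displayName": "Business-approved public network access",
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```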
+
+Regularly revisit your exemptions to ensure that all eligible items are appropriately exempted, and promptly remove any that no longer qualify for exemption. At that time, expired exemption resources can be deleted as well.
++ ## Next steps
+- Use [Azure Resource Graph queries on exemptions](../samples/resource-graph-samples.md#azure-policy-exemptions).
+- Learn about [the difference between exclusions and exemptions](./scope.md#scope-comparison).
- Study the [Microsoft.Authorization policyExemptions resource type](/azure/templates/microsoft.authorization/policyexemptions?tabs=json).-- Learn about the [policy definition structure](./definition-structure.md).-- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).-- Review what a management group is with
- [Organize your resources with Azure management groups](../../management-groups/overview.md).
governance Policy Applicability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-applicability.md
Condition(s) in the `if` block of the policy rule are evaluated for applicabilit
> [!NOTE] > Applicability is different from compliance, and the logic used to determine each is different. If a resource is **applicable** that means it is relevant to the policy. If a resource is **compliant** that means it adheres to the policy. Sometimes only certain conditions from the policy rule impact applicability, while all conditions of the policy rule impact compliance state.
-## Applicability logic for Resource Manager modes
+## Resource Manager modes
-### Append, Audit, Manual, Modify and Deny policy effects
+### -IfNotExists policy effects
+
+The applicability of `AuditIfNotExists` and `DeployIfNotExists` policies is based on the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable.
+
+### All other policy effects
Azure Policy evaluates only `type`, `name`, and `kind` conditions in the policy rule `if` expression and treats other conditions as true (or false when negated). If the final evaluation result is true, the policy is applicable. Otherwise, it's not applicable.
Following are special cases to the previously described applicability logic:
|When the `if` conditions consist of `type`, `name`, and other conditions |Both `type` and `name` conditions are considered when deciding applicability | |When any conditions (including deployment parameters) include a `location` condition |Won't be applicable to subscriptions |
-### AuditIfNotExists and DeployIfNotExists policy effects
-
-The applicability of `AuditIfNotExists` and `DeployIfNotExists` policies is based off the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable.
-## Applicability logic for resource provider modes
+## Resource provider modes
### Microsoft.Kubernetes.Data The applicability of `Microsoft.Kubernetes.Data` policies is based on the entire `if` condition of the policy rule. When the `if` evaluates to false, the policy isn't applicable.
-### Microsoft.KeyVault.Data, Microsoft.ManagedHSM.Data, Microsoft.DataFactory.Data
+### Microsoft.KeyVault.Data, Microsoft.ManagedHSM.Data, and Microsoft.DataFactory.Data
Policies with mode `Microsoft.KeyVault.Data` are applicable if the `type` condition of the policy rule evaluates to true. The `type` refers to component type.
Azure Data Factory component type:
Policies with mode `Microsoft.Network.Data` are applicable if the `type` and `name` conditions of the policy rule evaluate to true. The `type` refers to component type: - Microsoft.Network/virtualNetworks
+## Not Applicable Resources
+
+There could be situations in which resources are applicable to an assignment based on conditions or scope, but they shouldn't be applicable due to business reasons. At that time, it would be best to apply [exclusions](./assignment-structure.md#excluded-scopes) or [exemptions](./exemption-structure.md). To learn more about when to use either, review the [scope comparison](./scope.md#scope-comparison).
+
+> [!NOTE]
+> By design, Azure Policy doesn't evaluate resources under the `Microsoft.Resources` resource provider (RP),
+except for subscriptions and resource groups.
+ ## Next steps
+- Learn how to [mark resources as not applicable](./assignment-structure.md#excluded-scopes).
+- Learn more about [applicability limitations](https://github.com/azure/azure-policy#known-issues).
- Learn how to [Get compliance data of Azure resources](../how-to/get-compliance-data.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). - Review the [update in policy compliance for resource type policies](https://azure.microsoft.com/updates/general-availability-update-in-policy-compliance-for-resource-type-policies/).
governance Determine Non Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/determine-non-compliance.md
Title: Determine causes of non-compliance
-description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance quickly and easily.
Previously updated : 06/09/2022
+description: When a resource is non-compliant, there are many possible reasons. Discover what caused the non-compliance with the policy.
Last updated : 10/26/2023 + # Determine causes of non-compliance When an Azure resource is determined to be non-compliant to a policy rule, it's helpful to understand which portion of the rule the resource isn't compliant with. It's also useful to
-understand what change altered a previously compliant resource to make it non-compliant. There are
+understand which change altered a previously compliant resource to make it non-compliant. There are
two ways to find this information: - [Compliance details](#compliance-details)-- [Change history (Preview)](#change-history)
+- [Change history (Preview)](#change-history-preview)
## Compliance details When a resource is non-compliant, the compliance details for that resource are available from the **Policy compliance** page. The compliance details pane includes the following information: -- Resource details such as name, type, location, and resource ID-- Compliance state and timestamp of the last evaluation for the current policy assignment-- A list of _reasons_ for the resource non-compliance
+- Resource details such as name, type, location, and resource ID.
+- Compliance state and timestamp of the last evaluation for the current policy assignment.
+- A list of reasons for the resource non-compliance.
> [!IMPORTANT] > As the compliance details for a _Non-compliant_ resource shows the current value of properties on > that resource, the user must have **read** operation to the **type** of resource. For example, if
-> the _Non-compliant_ resource is **Microsoft.Compute/virtualMachines** then the user must have the
-> **Microsoft.Compute/virtualMachines/read** operation. If the user doesn't have the needed
+> the _Non-compliant_ resource is `Microsoft.Compute/virtualMachines` then the user must have the
+> `Microsoft.Compute/virtualMachines/read` operation. If the user doesn't have the needed
> operation, an access error is displayed. To view the compliance details, follow these steps:
To view the compliance details, follow these steps:
1. On the **Overview** or **Compliance** page, select a policy in a **compliance state** that is _Non-compliant_.
-1. Under the **Resource compliance** tab of the **Policy compliance** page, select and hold (or
- right-click) or select the ellipsis of a resource in a **compliance state** that is
+1. From the **Resource compliance** tab of the **Policy compliance** page, select and hold (or
+ right-click) or select the ellipsis of a resource in a **compliance state** that's
_Non-compliant_. Then select **View compliance details**.
- :::image type="content" source="../media/determine-non-compliance/view-compliance-details.png" alt-text="Screenshot of the 'View compliance details' link on the Resource compliance tab." border="false":::
+ :::image type="content" source="../media/determine-non-compliance/view-compliance-details.png" alt-text="Screenshot of the View compliance details link on the Resource compliance tab." :::
1. The **Compliance details** pane displays information from the latest evaluation of the resource
- to the current policy assignment. In this example, the field **Microsoft.Sql/servers/version** is
+ to the current policy assignment. In this example, the field `Microsoft.Sql/servers/version` is
found to be _12.0_ while the policy definition expected _14.0_. If the resource is non-compliant for multiple reasons, each is listed on this pane.
- :::image type="content" source="../media/determine-non-compliance/compliance-details-pane.png" alt-text="Screenshot of the Compliance details pane and reasons for non-compliance that current value is twelve and target value is fourteen." border="false":::
+ :::image type="content" source="../media/determine-non-compliance/compliance-details-pane.png" alt-text="Screenshot of the Compliance details pane and reasons for non-compliance that current value is 12 and target value is 14." :::
- For an **auditIfNotExists** or **deployIfNotExists** policy definition, the details include the
+ For an `auditIfNotExists` or `deployIfNotExists` policy definition, the details include the
**details.type** property and any optional properties. For a list, see [auditIfNotExists properties](../concepts/effects.md#auditifnotexists-properties) and [deployIfNotExists properties](../concepts/effects.md#deployifnotexists-properties). **Last evaluated resource** is a related resource from the **details** section of the definition.
- Example partial **deployIfNotExists** definition:
+ Example partial `deployIfNotExists` definition:
```json {
- "if": {
- "field": "type",
- "equals": "[parameters('resourceType')]"
- },
- "then": {
- "effect": "DeployIfNotExists",
- "details": {
- "type": "Microsoft.Insights/metricAlerts",
- "existenceCondition": {
- "field": "name",
- "equals": "[concat(parameters('alertNamePrefix'), '-', resourcegroup().name, '-', field('name'))]"
- },
- "existenceScope": "subscription",
- "deployment": {
- ...
- }
- }
+ "if": {
+ "field": "type",
+ "equals": "[parameters('resourceType')]"
+ },
+ "then": {
+ "effect": "deployIfNotExists",
+ "details": {
+ "type": "Microsoft.Insights/metricAlerts",
+ "existenceCondition": {
+ "field": "name",
+ "equals": "[concat(parameters('alertNamePrefix'), '-', resourcegroup().name, '-', field('name'))]"
+ },
+ "existenceScope": "subscription",
+ "deployment": {
+ ...
+ }
}
+ }
} ```
- :::image type="content" source="../media/determine-non-compliance/compliance-details-pane-existence.png" alt-text="Screenshot of Compliance details pane for ifNotExists including evaluated resource count." border="false":::
+ :::image type="content" source="../media/determine-non-compliance/compliance-details-pane-existence.png" alt-text="Screenshot of Compliance details pane for ifNotExists including evaluated resource count." :::
> [!NOTE] > To protect data, when a property value is a _secret_ the current value displays asterisks. These details explain why a resource is currently non-compliant, but don't show when the change was made to the resource that caused it to become non-compliant. For that information, see [Change
-history (Preview)](#change-history) below.
+history (Preview)](#change-history-preview).
### Compliance reasons
responsible [condition](../concepts/definition-structure.md#conditions) in the p
| Current value must not be like the target value. | notLike or **not** like | | Current value must not be case-sensitive match the target value. | notMatch or **not** match | | Current value must not be case-insensitive match the target value. | notMatchInsensitively or **not** matchInsensitively |
-| No related resources match the effect details in the policy definition. | A resource of the type defined in **then.details.type** and related to the resource defined in the **if** portion of the policy rule doesn't exist. |
+| No related resources match the effect details in the policy definition. | A resource of the type defined in `then.details.type` and related to the resource defined in the **if** portion of the policy rule doesn't exist. |
#### Azure Policy Resource Provider mode compliance reasons
its corresponding explanation:
| Compliance reason code | Error message and explanation | | -- | |
-| NonModifiablePolicyAlias | NonModifiableAliasConflict: The alias '{alias}' is not modifiable in requests using API version '{apiVersion}'. This error happens when a request using an API version where the alias does not support the 'modify' effect or only supports the 'modify' effect with a different token type. |
-| AppendPoliciesNotApplicable | AppendPoliciesUnableToAppend: The aliases: '{ aliases }' are not modifiable in requests using API version: '{ apiVersion }'. This can happen in requests using API versions for which the aliases do not support the 'modify' effect, or support the 'modify' effect with a different token type. |
-| ConflictingAppendPolicies | ConflictingAppendPolicies: Found conflicting policy assignments that modify the '{notApplicableFields}' field. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policy assignments. |
-| AppendPoliciesFieldsExist | AppendPoliciesFieldsExistWithDifferentValues: Policy assignments attempted to append fields which already exist in the request with different values. Fields: '{existingFields}'. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policies. |
-| AppendPoliciesUndefinedFields | AppendPoliciesUndefinedFields: Found policy definition that refers to an undefined field property for API version '{apiVersion}'. Fields: '{nonExistingFields}'. Policy identifiers: '{policy}'. Please contact the subscription administrator to update the policies. |
-| MissingRegistrationForType | MissingRegistrationForResourceType: The subscription is not registered for the resource type '{ResourceType}'. Please check that the resource type exists and that the resource type is registered. |
+| NonModifiablePolicyAlias | NonModifiableAliasConflict: The alias '{alias}' isn't modifiable in requests using API version '{apiVersion}'. This error happens when a request using an API version where the alias doesn't support the 'modify' effect or only supports the 'modify' effect with a different token type. |
+| AppendPoliciesNotApplicable | AppendPoliciesUnableToAppend: The aliases: '{ aliases }' aren't modifiable in requests using API version: '{ apiVersion }'. This can happen in requests using API versions for which the aliases don't support the 'modify' effect, or support the 'modify' effect with a different token type. |
+| ConflictingAppendPolicies | ConflictingAppendPolicies: Found conflicting policy assignments that modify the '{notApplicableFields}' field. Policy identifiers: '{policy}'. Contact the subscription administrator to update the policy assignments. |
+| AppendPoliciesFieldsExist | AppendPoliciesFieldsExistWithDifferentValues: Policy assignments attempted to append fields which already exist in the request with different values. Fields: '{existingFields}'. Policy identifiers: '{policy}'. Contact the subscription administrator to update the policies. |
+| AppendPoliciesUndefinedFields | AppendPoliciesUndefinedFields: Found policy definition that refers to an undefined field property for API version '{apiVersion}'. Fields: '{nonExistingFields}'. Policy identifiers: '{policy}'. Contact the subscription administrator to update the policies. |
+| MissingRegistrationForType | MissingRegistrationForResourceType: The subscription isn't registered for the resource type '{ResourceType}'. Check that the resource type exists and that the resource type is registered. |
| AmbiguousPolicyEvaluationPaths | The request content has one or more ambiguous paths: '{0}' required by policies: '{1}'. |
-| InvalidResourceNameWildcardPosition | The policy assignment '{0}' associated with the policy definition '{1}' could not be evaluated. The resource name '{2}' within an ifNotExists condition contains the wildcard '?' character in an invalid position. Wildcards can only be located at the end of the name in a segment by themselves (ex. TopLevelResourceName/?). Please either fix the policy or remove the policy assignment to unblock. |
-| TooManyResourceNameSegments | The policy assignment '{0}' associated with the policy definition '{1}' could not be evaluated. The resource name '{2}' within an ifNotExists condition contains too many name segments. The number of name segments must be equal to or less than the number of type segments (excluding the resource provider namespace). Please either fix the policy definition or remove the policy assignment to unblock. |
-| InvalidPolicyFieldPath | The field path '{0}' within the policy definition is invalid. Field paths must contain no empty segments. They may contain only alphanumeric characters with the exception of the '.' character for splitting segments and the '[*]' character sequence to access array properties. |
+| InvalidResourceNameWildcardPosition | The policy assignment '{0}' associated with the policy definition '{1}' couldn't be evaluated. The resource name '{2}' within an ifNotExists condition contains the wildcard '?' character in an invalid position. Wildcards can only be located at the end of the name in a segment by themselves (ex. TopLevelResourceName/?). Either fix the policy or remove the policy assignment to unblock. |
+| TooManyResourceNameSegments | The policy assignment '{0}' associated with the policy definition '{1}' couldn't be evaluated. The resource name '{2}' within an ifNotExists condition contains too many name segments. The number of name segments must be equal to or less than the number of type segments (excluding the resource provider namespace). Either fix the policy definition or remove the policy assignment to unblock. |
+| InvalidPolicyFieldPath | The field path '{0}' within the policy definition is invalid. Field paths must contain no empty segments. They might contain only alphanumeric characters with the exception of the '.' character for splitting segments and the '[*]' character sequence to access array properties. |
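As an illustration of a field path that satisfies these rules, the following condition uses dot-separated segments plus the `[*]` accessor for array properties. The alias and value are examples only, not a recommended policy:

```json
{
  "field": "Microsoft.Storage/storageAccounts/networkAcls.ipRules[*].value",
  "notEquals": "0.0.0.0/0"
}
```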
#### AKS Resource Provider mode compliance reasons
in the policy definition:
| Constraint/TemplateUpdateFailed | The Constraint/Template failed to update for a policy definition with a Constraint/Template that matches an existing Constraint/Template on cluster by resource metadata name. |
| Constraint/TemplateInstallFailed | The Constraint/Template failed to build and was unable to be installed on cluster for either create or update operation. |
| ConstraintTemplateConflicts | The Template has a conflict with one or more policy definitions using the same Template name with different source. |
-| ConstraintStatusStale | There is an existing 'Audit' status, but Gatekeeper has not performed an audit within the last hour. |
-| ConstraintNotProcessed | There is no status and Gatekeeper has not performed an audit within the last hour. |
-| InvalidConstraint/Template | API Server has rejected the resource due to a bad YAML. This reason can also be caused by a parameter type mismatch (example: string provided for an integer) |
+| ConstraintStatusStale | There's an existing 'Audit' status, but Gatekeeper hasn't performed an audit within the last hour. |
+| ConstraintNotProcessed | There's no status and Gatekeeper hasn't performed an audit within the last hour. |
+| InvalidConstraint/Template | The resource was rejected because of one of the following reasons: invalid constraint template Rego content, invalid YAML, or a parameter type mismatch between constraint and constraint template (providing a string value when an integer was expected). |
> [!NOTE]
> For existing policy assignments and constraint templates already on the cluster, if that
in the policy definition:
For assignments with a [Resource Provider mode](../concepts/definition-structure.md#resource-provider-modes), select the
-_Non-compliant_ resource to open a deeper view. Under the **Component Compliance** tab is additional
-information specific to the Resource Provider mode on the assigned policy showing the
+_Non-compliant_ resource to open a deeper view. The **Component Compliance** tab shows more information specific to the Resource Provider mode on the assigned policy with the
_Non-compliant_ **Component** and **Component ID**.

## Compliance details for guest configuration

For policy definitions in the _Guest Configuration_ category, there could be multiple
-settings evaluated inside the virtual machine and you'll need to view per-setting details. For
+settings evaluated inside the virtual machine and you need to view per-setting details. For
example, if you're auditing for a list of security settings and only one of them has status
-_Non-compliant_, you'll need to know which specific settings are out of compliance and why.
+_Non-compliant_, you need to know which specific settings are out of compliance and why.
You also might not have access to sign in to the virtual machine directly, but you need to report on why the virtual machine is _Non-compliant_.

### Azure portal
-Begin by following the same steps in the section above for viewing policy compliance details.
+Begin by following the same steps in the [Compliance details](#compliance-details) section to view policy compliance details.
In the Compliance details pane view, select the link **Last evaluated resource**. The **Guest Assignment** page displays all available compliance details. Each row in the view represents an evaluation that was performed inside the machine. In the **Reason** column, a phrase
is shown describing why the Guest Assignment is _Non-compliant_. For example, if
password policies, the **Reason** column would display text including the current value for each setting.

### View configuration assignment details at scale

The guest configuration feature can be used outside of Azure Policy assignments. For example,
-[Azure AutoManage](../../../automanage/index.yml)
+[Azure Automanage](../../../automanage/index.yml)
creates guest configuration assignments, or you might [assign configurations when you deploy machines](../../machine-configuration/how-to-create-assignment.md). To view all guest configuration assignments across your tenant, open the **Guest Assignments** page in the Azure portal. To view detailed compliance
-information, select each assignment using the link in the column "Name".
+information, select each assignment using the link in the column **Name**.
-## <a name="change-history"></a>Change history (Preview)
+## Change history (Preview)
As part of a new **public preview**, the last 14 days of change history are available for all Azure resources that support [complete mode
detection is triggered when the Azure Resource Manager properties are added, rem
1. On the **Overview** or **Compliance** page, select a policy in any **compliance state**.
-1. Under the **Resource compliance** tab of the **Policy compliance** page, select a resource.
+1. From the **Resource compliance** tab of the **Policy compliance** page, select a resource.
1. Select the **Change History (preview)** tab on the **Resource Compliance** page. A list of detected changes, if any exist, is displayed.
- :::image type="content" source="../media/determine-non-compliance/change-history-tab.png" alt-text="Screenshot of the Change History tab and detected change times on Resource Compliance page." border="false":::
+ :::image type="content" source="../media/determine-non-compliance/change-history-tab.png" alt-text="Screenshot of the Change History tab and detected change times on Resource Compliance page." :::
1. Select one of the detected changes. The _visual diff_ for the resource is presented on the **Change history** page.
- :::image type="content" source="../media/determine-non-compliance/change-history-visual-diff.png" alt-text="Screenshot of the Change History Visual Diff of the before and after state of properties on the Change history page." border="false":::
+ :::image type="content" source="../media/determine-non-compliance/change-history-visual-diff.png" alt-text="Screenshot of the Change History Visual Diff of the before and after state of properties on the Change history page." :::
-The _visual diff_ aides in identifying changes to a resource. The changes detected may not be
+The _visual diff_ aids in identifying changes to a resource. The changes detected might not be
related to the current compliance state of the resource. Change history data is provided by [Azure Resource Graph](../../resource-graph/overview.md). To
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
In this documentation, you review each feature in detail.
> [!NOTE]
> Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience,
-> and Azure Policy's [Change history](../policy/how-to/determine-non-compliance.md#change-history)
+> and Azure Policy's [Change history](../policy/how-to/determine-non-compliance.md#change-history-preview)
> _visual diff_. It's designed to help customers manage large-scale environments.

[!INCLUDE [azure-lighthouse-supported-service](../../../includes/azure-lighthouse-supported-service.md)]
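Because the change data lives in Resource Graph, it can also be queried directly. A minimal sketch of a request body for the Resource Graph query REST operation, assuming the `resourcechanges` table and a placeholder subscription ID:

```json
{
  "subscriptions": [ "00000000-0000-0000-0000-000000000000" ],
  "query": "resourcechanges | project properties.targetResourceId, properties.changeType, properties.changeAttributes.timestamp | limit 5"
}
```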
Azure CLI and Azure PowerShell use subscriptions that the user has access to. Wh
API, the subscription list is provided by the user. If the user has access to any of the subscriptions in the list, the query results are returned for the subscriptions the user has access to. This behavior is the same as when calling [Resource Groups - List](/rest/api/resources/resourcegroups/list)
-because you get resource groups that you can access, without any indication that the result may be
+because you get resource groups that you can access, without any indication that the result might be
partial. If there are no subscriptions in the subscription list that the user has appropriate rights to, the response is a _403_ (Forbidden).
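To illustrate, here's a sketch of a query request body with an explicit subscription list. The IDs are placeholders; results come back only for the subscriptions the caller can actually read, with no indication that the rest were skipped:

```json
{
  "subscriptions": [
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222"
  ],
  "query": "Resources | summarize count() by type"
}
```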
hdinsight-aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/overview.md
Last updated 08/29/2023
[!INCLUDE [feature-in-preview](includes/feature-in-preview.md)]
-HDInsight on AKS is a modern, reliable, secure, and fully managed Platform as a Service (PaaS) that runs on Azure Kubernetes Service (AKS). HDInsight on AKS allows you to deploy popular Open-Source Analytics workloads like Apache Spark, Apache Flink, and Trino without the overhead of managing and monitoring containers.
+HDInsight on AKS is a modern, reliable, secure, and fully managed Platform as a Service (PaaS) that runs on Azure Kubernetes Service (AKS). HDInsight on AKS allows you to deploy popular Open-Source Analytics workloads like Apache Spark™, Apache Flink®️, and Trino without the overhead of managing and monitoring containers.
+ You can build end-to-end, petabyte-scale Big Data applications spanning streaming through Apache Flink, data engineering and machine learning using Apache Spark, and Trino's powerful query engine. All these capabilities, combined with HDInsight on AKS's strong developer focus, enable enterprises and digital natives with deep technical expertise to build and operate applications that are the right fit for their needs. HDInsight on AKS allows developers to access all the rich configurations provided by open-source software and the extensibility to seamlessly include other ecosystem offerings. This offering empowers developers to test and tune their applications to extract the best performance at optimal cost.
HDInsight on AKS integrates with the entire Azure ecosystem, shortening implemen
HDInsight on AKS introduces the concept of cluster pools and clusters, which allow you to realize the complete value of data lakehouse. Cluster pools allow you to use multiple compute workloads on a single data lake, thereby removing the overhead of network management and resource planning.
-* **Cluster pools** are a logical grouping of clusters that help build robust interoperability across multiple cluster types and allow enterprises to have the clusters in the same virtual network. Cluster pools provide rapid and cost-effective access to all the cluster types created on-demand and at scale.
-<br>One cluster pool corresponds to one cluster in AKS infrastructure.
-* **Clusters** are individual compute workloads, such as Apache Spark, Apache Flink, and Trino, that can be created rapidly in few minutes with preset configurations.
+* **Cluster pools** are a logical grouping of clusters that help build robust interoperability across multiple cluster types and allow enterprises to have the clusters in the same virtual network. Cluster pools provide rapid and cost-effective access to all the cluster types created on-demand and at scale. One cluster pool corresponds to one cluster in AKS infrastructure.
+* **Clusters** are individual compute workloads, such as Apache Spark, Apache Flink, and Trino, that can be created rapidly in a few minutes with preset configurations.
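As a rough sketch of how this hierarchy surfaces in an ARM template, a cluster pool is a top-level resource and clusters are its children. The resource type is `Microsoft.HDInsight/clusterpools`; the API version, names, and omitted properties below are illustrative assumptions rather than a complete, deployable template:

```json
{
  "type": "Microsoft.HDInsight/clusterpools",
  "apiVersion": "2023-06-01-preview",
  "name": "demo-pool",
  "location": "westus3"
}
```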
You can create the pool with a single cluster or a combination of cluster types, which are based on the need and can custom configure the following options:
The latest version of HDInsight is orchestrated using AKS, which enables the pla
HDInsight on AKS can connect seamlessly with HDInsight. You can reap the benefits of using the cluster types you need in a hybrid model. Interoperate with cluster types of HDInsight using the same storage and metastore across both offerings.
+[HDInsight](/azure/hdinsight/) offers Apache Kafka®, Apache HBase®, and other analytics workloads in a Platform as a Service (PaaS) form factor.
+ :::image type="content" source="./media/overview/connectivity-diagram.png" alt-text="Diagram showing connectivity concepts.":::

**The following scenarios are supported:**
-* [Flink connecting to HBase](./flink/use-flink-to-sink-kafka-message-into-hbase.md)
-* [Flink connecting to Kafka](./flink/join-stream-kafka-table-filesystem.md)
-* Spark connecting to HBase
-* Spark connecting to Kafka
+* [Apache Flink connecting to Apache HBase](./flink/use-flink-to-sink-kafka-message-into-hbase.md)
+* [Apache Flink connecting to Apache Kafka](./flink/join-stream-kafka-table-filesystem.md)
+* Apache Spark connecting to Apache HBase
+* Apache Spark connecting to Apache Kafka
## Security architecture
For more information, see [HDInsight on AKS security](./concept-security.md).
* West US 2
* West US 3
* East US
+
+> [!NOTE]
+> - The Trino brand and trademarks are owned and managed by the [Trino Software Foundation](https://trino.io/foundation.html). No endorsement by The Trino Software Foundation is implied by the use of these marks.
+> - Apache Spark, Spark and the Spark logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+> - Apache, Apache Kafka, Kafka and the Kafka logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+> - Apache, Apache Flink, Flink and the Flink logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+> - Apache HBase, HBase and the HBase logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+> - Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the [Apache Software Foundation](https://www.apache.org/) in the United States and/or other countries. No endorsement by The Apache Software Foundation is implied by the use of these marks.
++
hdinsight-aks Trademarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trademarks.md
+
+ Title: Trademarks
+description: The Trademark and Brand Guidelines detail how you can help us protect Microsoft's brand assets.
+ Last updated : 10/26/2023
+# Trademarks
+
+Product names, logos, and other material used on these Azure HDInsight on AKS Learn pages are registered trademarks of various entities, including, but not limited to, the following trademark owners and names:
+
+- The [Trino Software Foundation](https://trino.io/foundation.html) owns and manages the Trino brand and trademarks. The use of these marks does not imply endorsement by The Trino Software Foundation.
+- Apache Spark, Spark and the Spark logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+- Apache, Apache Kafka, Kafka and the Kafka logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+- Apache, Apache Flink, Flink and the Flink logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+- Apache HBase, HBase and the HBase logo are trademarks of the [Apache Software Foundation](https://www.apache.org/) (ASF).
+- Apache®, Apache Spark™, Apache HBase®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. The use of these marks does not imply endorsement by The Apache Software Foundation.
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 7/28/2023 Last updated : 10/26/2023

# Archived release notes
Last updated 7/28/2023
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. If you would like to subscribe to release notes, watch releases on [this GitHub repository](https://github.com/Azure/HDInsight/releases).
+## Release date: September 7, 2023
+
+This release applies to HDInsight 4.x and 5.x. The HDInsight release will be available to all regions over several days. This release applies to image number **2308221128**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+For workload specific versions, see
+
+* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md)
+* [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
+
+> [!IMPORTANT]
+> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image **2308221128**. Customers are advised to plan accordingly.
+
+| CVE | Severity| CVE Title| Remark |
+| - | - | - | - |
+| [CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156) | Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |Included on 2308221128 image |
+| [CVE-2023-36419](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36419) | Important | Azure HDInsight Apache Oozie Workflow Scheduler Elevation of Privilege Vulnerability | Apply [Script action](https://hdiconfigactions2.blob.core.windows.net/msrc-script/script_action.sh) on your clusters |
+
+## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
+
+* The maximum length of cluster names will be changed from 59 to 45 characters to improve the security posture of clusters. This change will be implemented by September 30, 2023.
+* Cluster permissions for secure storage
+ * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
+* In-line quota update.
+ * Request quota increases directly from the My Quota page, which will be a direct API call that is faster. If the API call fails, customers need to create a new support request for the quota increase.
+* HDInsight Cluster Creation with Custom VNets.
+ * To improve the overall security posture of HDInsight clusters, users creating HDInsight clusters in custom virtual networks need the `Microsoft.Network/virtualNetworks/subnets/join/action` permission to perform create operations. Customers need to plan accordingly, as this change becomes a mandatory check to avoid cluster creation failures before September 30, 2023.
+* Basic and Standard A-series VMs Retirement.
+ * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
+* Non-ESP ABFS clusters [Cluster Permissions for World Readable]
+ * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change improves the cluster security posture. Customers need to plan for the updates before September 30, 2023.
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight)
+
+You're welcome to add more proposals, ideas, and other topics here and vote for them - [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight).
+
+ > [!NOTE]
 > We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open-source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
## Release date: July 25, 2023

This release applies to HDInsight 4.x and 5.x. The HDInsight release will be available to all regions over several days. This release applies to image number **2307201242**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
-HDInsight uses safe deployment practices, which involve gradual region deployment. it may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
**OS versions**
For workload specific versions, see
> [!IMPORTANT] > This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on August 8, 2023. The action is to update to the latest image **2307201242**. Customers are advised to plan accordingly.
-|CVE | Severity| CVE Title|
-|-|-|-|
-|[CVE-2023-35393](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35393)| Important|Azure Apache Hive Spoofing Vulnerability|
-|[CVE-2023-35394](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35394)| Important|Azure HDInsight Jupyter Notebook Spoofing Vulnerability|
-|[CVE-2023-36877](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36877)| Important|Azure Apache Oozie Spoofing Vulnerability|
-|[CVE-2023-36881](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36881)| Important|Azure Apache Ambari Spoofing Vulnerability|
-|[CVE-2023-38188](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38188)| Important|Azure Apache Hadoop Spoofing Vulnerability|
+| CVE | Severity| CVE Title|
+| - | - | - |
+| [CVE-2023-35393](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35393) | Important|Azure Apache Hive Spoofing Vulnerability |
+| [CVE-2023-35394](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35394) | Important|Azure HDInsight Jupyter Notebook Spoofing Vulnerability |
+| [CVE-2023-36877](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36877) | Important|Azure Apache Oozie Spoofing Vulnerability |
+| [CVE-2023-36881](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36881) | Important|Azure Apache Ambari Spoofing Vulnerability |
+| [CVE-2023-38188](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38188) | Important|Azure Apache Hadoop Spoofing Vulnerability |
-## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
+## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
* The maximum length of cluster names will be changed from 59 to 45 characters to improve the security posture of clusters. Customers need to plan for the updates before September 30, 2023.
* Cluster permissions for secure storage
You're welcome to add more proposals and ideas and other topics here and vote
This release applies to HDInsight 4.x and 5.x. The HDInsight release is available to all regions over several days. This release applies to image number **2304280205**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
-HDInsight uses safe deployment practices, which involve gradual region deployment. it may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
**OS versions**
For workload specific versions, see
This release applies to HDInsight 4.0, 5.0, and 5.1. The HDInsight release is available to all regions over several days. This release applies to image number **2302250400**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
-HDInsight uses safe deployment practices, which involve gradual region deployment. it may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
**OS versions**
End of support for Azure HDInsight clusters on Spark 2.4 February 10, 2024. For
This release applies to HDInsight 4.0 and 5.0. The HDInsight release is made available to all regions over several days.
-HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
**OS versions**
For more information on how to check Ubuntu version of cluster, see [here](https
This release applies to HDInsight 4.0.  HDInsight release is made available to all regions over several days.
-HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
![Icon_showing_new_features](media/hdinsight-release-notes/icon-for-new-feature.png)
HDInsight uses safe deployment practices, which involve gradual region deploymen
**1. Attach external disks in HDI Hadoop/Spark clusters**
-HDInsight cluster comes with predefined disk space based on SKU. This space may not be sufficient in large job scenarios.
+An HDInsight cluster comes with predefined disk space based on the SKU. This space might not be sufficient in large job scenarios.
This new feature allows you to add more disks to the cluster, which are used as the node manager's local directories. Add the number of disks to worker nodes during Hive and Spark cluster creation; the selected disks become part of the node manager's local directories.
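Here's a sketch of how the added disks might be expressed on a worker node role in the cluster's ARM payload — assuming the `dataDisksGroups` property carries this setting; the names and counts are illustrative:

```json
{
  "name": "workernode",
  "targetInstanceCount": 4,
  "dataDisksGroups": [
    { "disksPerNode": 2 }
  ]
}
```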
HDInsight is compatible with Apache HIVE 3.1.2. Due to a bug in this release, th
This release applies to HDInsight 4.0. The HDInsight release is made available to all regions over several days.
-HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
![Icon_showing_new_features](media/hdinsight-release-notes/icon-for-new-feature.png)
HDInsight uses safe deployment practices, which involve gradual region deploymen
**1. Attach external disks in HDI Hadoop/Spark clusters**
-HDInsight cluster comes with predefined disk space based on SKU. This space may not be sufficient in large job scenarios.
+An HDInsight cluster comes with predefined disk space based on the SKU. This space might not be sufficient in large job scenarios.
This new feature allows you to add more disks to the cluster, which will be used as the node manager's local directories. Add the number of disks to worker nodes during Hive and Spark cluster creation; the selected disks become part of the node manager's local directories.
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| LLAP external client - Need to reduce LlapBaseInputFormat#getSplits() footprint | [HIVE-22221](https://issues.apache.org/jira/browse/HIVE-22221) |
| Column name with reserved keyword is unescaped when query including join on table with mask column is rewritten (Zoltan Matyus via Zoltan Haindrich) | [HIVE-22208](https://issues.apache.org/jira/browse/HIVE-22208) |
| Prevent LLAP shutdown on `AMReporter` related RuntimeException | [HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113) |
-| LLAP status service driver may get stuck with wrong Yarn app ID|[HIVE-21866](https://issues.apache.org/jira/browse/HIVE-21866)|
+| LLAP status service driver might get stuck with wrong Yarn app ID|[HIVE-21866](https://issues.apache.org/jira/browse/HIVE-21866)|
| OperationManager.queryIdOperation doesn't properly clean up multiple queryIds | [HIVE-22275](https://issues.apache.org/jira/browse/HIVE-22275) |
| Bringing a node manager down blocks restart of LLAP service | [HIVE-22219](https://issues.apache.org/jira/browse/HIVE-22219) |
| StackOverflowError when drop lots of partitions | [HIVE-15956](https://issues.apache.org/jira/browse/HIVE-15956) |
The new Azure monitor integration experience will be Preview in East US and West
HDInsight 3.6 version is deprecated effective Oct 01, 2022.

### Behavior changes

#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users might see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable load-based autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
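For reference, a schedule-based configuration pins the worker count to fixed values during time windows instead of reacting to load. Here's a sketch of an `autoscale` block on a cluster role, assuming the recurrence schema used in HDInsight ARM templates — the days, time, and counts are illustrative:

```json
{
  "autoscale": {
    "recurrence": {
      "timeZone": "Pacific Standard Time",
      "schedule": [
        {
          "days": [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ],
          "timeAndCapacity": {
            "time": "09:00",
            "minInstanceCount": 10,
            "maxInstanceCount": 10
          }
        }
      ]
    }
  }
}
```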
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users might see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You
HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN format, and the numbers in the host name aren't guaranteed to be in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).

#### Move to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. The service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. The service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
## Release date: 03/24/2021
HDInsight added [Kafka 2.4.1](http://kafka.apache.org/24/documentation.html) sup
HDInsight added `Eav4`-series support in this release.

#### Moving to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
### Deprecation

No deprecation in this release.
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users might see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
HDInsight added Dav4-series support in this release. Learn more about [Dav4-seri
Kafka REST Proxy enables you to interact with your Kafka cluster via a REST API over HTTPS. Kafka REST Proxy is generally available starting from this release. Learn more about [Kafka REST Proxy here](./kafk).

#### Moving to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
### Deprecation

#### Disabled VM sizes
Starting from this release, customers can use Azure Key Vault version-less encryp
HDInsight previously didn't support customizing Zookeeper node size for Spark, Hadoop, and ML Services cluster types. It defaults to A2_v2/A2 virtual machine sizes, which are provided free of charge. From this release, you can select a Zookeeper virtual machine size that is most appropriate for your scenario. Zookeeper nodes with a virtual machine size other than A2_v2/A2 will be charged. A2_v2 and A2 virtual machines are still provided free of charge.

#### Moving to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
### Deprecation

#### Deprecation of HDInsight 3.6 ML Services cluster
HDInsight Identity Broker (HIB) that enables OAuth authentication for ESP cluste
For more information, see [HIB documentation](./domain-joined/identity-broker.md).

#### Moving to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
### Deprecation

#### Deprecation of HDInsight 3.6 ML Services cluster
This release applies for both HDInsight 3.6 and HDInsight 4.0. HDInsight release
HDInsight now supports creating clusters with no public IP and private link access to the clusters in preview. Customers can use the new advanced networking settings to create a fully isolated cluster with no public IP and use their own private endpoints to access the cluster.

#### Moving to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
### Deprecation

#### Deprecation of HDInsight 3.6 ML Services cluster
Autoscale for the Interactive Query cluster type is now Generally Available (GA) for
HDInsight now supports Premium ADLS Gen2 as the primary storage account for HDInsight HBase 3.6 and 4.0 clusters. Together with [Accelerated Writes](./hbase/apache-hbase-accelerated-writes.md), you can get better performance for your HBase clusters.

#### Kafka partition distribution on Azure fault domains
-A fault domain is a logical grouping of underlying hardware in an Azure data center. Each fault domain shares a common power source and network switch. Before HDInsight Kafka may store all partition replicas in the same fault domain. Starting from this release, HDInsight now supports automatically distribution of Kafka partitions based on Azure fault domains.
+A fault domain is a logical grouping of underlying hardware in an Azure data center. Each fault domain shares a common power source and network switch. Previously, HDInsight Kafka might store all partition replicas in the same fault domain. Starting from this release, HDInsight supports automatic distribution of Kafka partitions based on Azure fault domains.
#### Encryption in transit

Customers can enable encryption in transit between cluster nodes using IPSec encryption with platform-managed keys. This option can be enabled at cluster creation time. See more details about [how to enable encryption in transit](./domain-joined/encryption-in-transit.md).
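Here's a sketch of the cluster property that enables this at creation time — assuming the `encryptionInTransitProperties` block of the HDInsight ARM payload; see the linked article for the full template:

```json
{
  "properties": {
    "encryptionInTransitProperties": {
      "isEncryptionInTransitEnabled": true
    }
  }
}
```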
Customers can enable encryption in transit between cluster nodes using IPSec enc
When you enable encryption at host, data stored on the VM host is encrypted at rest and flows encrypted to the storage service. From this release, you can **Enable encryption at host on temp data disk** when creating the cluster. Encryption at host is only supported on [certain VM SKUs in limited regions](../virtual-machines/disks-enable-host-based-encryption-portal.md). HDInsight supports the [following node configuration and SKUs](./hdinsight-supported-node-configuration.md). See more details about [how to enable encryption at host](./disk-encryption.md#encryption-at-host-using-platform-managed-keys).

#### Moving to Azure virtual machine scale sets
-HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
+HDInsight now uses Azure virtual machines to provision the cluster. Starting from this release, the service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process might take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
### Deprecation

No deprecation for this release.
No component version change for this release. You can find the current component
### Known issues
-An issue has been fixed in the Azure portal, where users were experiencing an error when they were creating an Azure HDInsight cluster using an SSH authentication type of public key. When users clicked **Review + Create**, they would receive the error "Must not contain any three consecutive characters from SSH username." This issue has been fixed, but it may require that you refresh your browser cache by hitting CTRL + F5 to load the corrected view. The workaround to this issue was to create a cluster with an ARM template.
+An issue has been fixed in the Azure portal where users experienced an error when creating an Azure HDInsight cluster with the public key SSH authentication type. When users clicked **Review + Create**, they would receive the error "Must not contain any three consecutive characters from SSH username." The issue has been fixed, but it might require that you refresh your browser cache by pressing CTRL + F5 to load the corrected view. The workaround for this issue was to create a cluster with an ARM template.
## Release date: 07/13/2020
This release provides Hadoop Common 2.7.3 and the following Apache patches:
- [HDFS-7922](https://issues.apache.org/jira/browse/HDFS-7922): ShortCircuitCache\#close isn't releasing ScheduledThreadPoolExecutors.

-- [HDFS-8496](https://issues.apache.org/jira/browse/HDFS-8496): Calling stopWriter() with FSDatasetImpl lock held may block other threads (cmccabe).
+- [HDFS-8496](https://issues.apache.org/jira/browse/HDFS-8496): Calling stopWriter() with FSDatasetImpl lock held might block other threads (cmccabe).
- [HDFS-10267](https://issues.apache.org/jira/browse/HDFS-10267): Extra "synchronized" on FsDatasetImpl\#recoverAppend and FsDatasetImpl\#recoverClose.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-15615](https://issues.apache.org/jira/browse/HBASE-15615): Wrong sleep time when `RegionServerCallable` need retry.

-- [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135): PeerClusterZnode under rs of removed peer may never be deleted.
+- [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135): PeerClusterZnode under rs of removed peer might never be deleted.
- [HBASE-16570](https://issues.apache.org/jira/browse/HBASE-16570): Compute region locality in parallel at startup.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18551*](https://issues.apache.org/jira/browse/HIVE-18551): Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace.

-- [*HIVE-18587*](https://issues.apache.org/jira/browse/HIVE-18587): insert DML event may attempt to calculate a checksum on directories.
+- [*HIVE-18587*](https://issues.apache.org/jira/browse/HIVE-18587): insert DML event might attempt to calculate a checksum on directories.
- [*HIVE-18613*](https://issues.apache.org/jira/browse/HIVE-18613): Extend JsonSerDe to support BINARY type.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18577*](https://issues.apache.org/jira/browse/HIVE-18577): SemanticAnalyzer.validate has some pointless metastore calls.

-- [*HIVE-18587*](https://issues.apache.org/jira/browse/HIVE-18587): insert DML event may attempt to calculate a checksum on directories.
+- [*HIVE-18587*](https://issues.apache.org/jira/browse/HIVE-18587): insert DML event might attempt to calculate a checksum on directories.
- [*HIVE-18597*](https://issues.apache.org/jira/browse/HIVE-18597): LLAP: Always package the `log4j2` API jar for `org.apache.log4j`.
In HDP-2.5.x and 2.6.x, we removed the "commons-httpclient" library from Mahout
- Previously compiled Mahout jobs will need to be recompiled in the HDP-2.5 or 2.6 environment.

-- There's a small possibility that some Mahout jobs may encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, you may consider whether to manually install the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment.
+- There's a small possibility that some Mahout jobs might encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, you might consider whether to manually install the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment.
-- There's an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there's no way to resolve this issue except revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
+- There's an even smaller possibility that some Mahout jobs might encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there's no way to resolve this issue except revert to the HDP-2.4.2 version of Mahout, which might have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
#### Oozie
This release provides Ranger 0.7.0 and the following Apache patches:
- [RANGER-1982](https://issues.apache.org/jira/browse/RANGER-1982): Error Improvement for Analytics Metric of Ranger Admin and Ranger KMS.

-- [RANGER-1984](https://issues.apache.org/jira/browse/RANGER-1984): HBase audit log records may not show all tags associated with accessed column.
+- [RANGER-1984](https://issues.apache.org/jira/browse/RANGER-1984): HBase audit log records might not show all tags associated with accessed column.
- [RANGER-1988](https://issues.apache.org/jira/browse/RANGER-1988): Fix insecure randomness.
This section covers all Common Vulnerabilities and Exposures (CVE) that are addr
### Fixed issues for support
-Fixed issues represent selected issues that were previously logged via Hortonworks Support, but are now addressed in the current release. These issues may have been reported in previous versions within the Known Issues section; meaning they were reported by customers or identified by Hortonworks Quality Engineering team.
+Fixed issues represent selected issues that were previously logged via Hortonworks Support, but are now addressed in the current release. These issues might have been reported in previous versions within the Known Issues section, meaning they were reported by customers or identified by the Hortonworks Quality Engineering team.
**Incorrect Results**
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-97864 | [HIVE-18833](https://issues.apache.org/jira/browse/HIVE-18833) | Auto Merge fails when "insert into directory as orcfile" |
| BUG-97889 | [RANGER-2008](https://issues.apache.org/jira/browse/RANGER-2008) | Policy evaluation is failing for multiline policy conditions. |
| BUG-98655 | [RANGER-2066](https://issues.apache.org/jira/browse/RANGER-2066) | HBase column family access is authorized by a tagged column in the column family |
-| BUG-99883 | [HIVE-19073](https://issues.apache.org/jira/browse/HIVE-19073), [HIVE-19145](https://issues.apache.org/jira/browse/HIVE-19145) | StatsOptimizer may mangle constant columns |
+| BUG-99883 | [HIVE-19073](https://issues.apache.org/jira/browse/HIVE-19073), [HIVE-19145](https://issues.apache.org/jira/browse/HIVE-19145) | StatsOptimizer might mangle constant columns |
**Other**
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-95201 | [HDFS-13060](https://issues.apache.org/jira/browse/HDFS-13060) | Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver |
| BUG-95284 | [HBASE-19395](https://issues.apache.org/jira/browse/HBASE-19395) | \[branch-1\] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE |
| BUG-95301 | [HIVE-18517](https://issues.apache.org/jira/browse/HIVE-18517) | Vectorization: Fix VectorMapOperator to accept VRBs and check vectorized flag correctly to support LLAP Caching |
-| BUG-95542 | [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135) | PeerClusterZnode under rs of removed peer may never be deleted |
+| BUG-95542 | [HBASE-16135](https://issues.apache.org/jira/browse/HBASE-16135) | PeerClusterZnode under rs of removed peer might never be deleted |
| BUG-95595 | [HIVE-15563](https://issues.apache.org/jira/browse/HIVE-15563) | Ignore Illegal Operation state transition exception in SQLOperation.runQuery to expose real exception. |
| BUG-95596 | [YARN-4126](https://issues.apache.org/jira/browse/YARN-4126), [YARN-5750](https://issues.apache.org/jira/browse/YARN-5750) | TestClientRMService fails |
| BUG-96019 | [HIVE-18548](https://issues.apache.org/jira/browse/HIVE-18548) | Fix `log4j` import |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After `Datanode` down, In `Namenode` UI `Datanode` tab is throwing warning message. |
| BUG-96502 | [RANGER-1990](https://issues.apache.org/jira/browse/RANGER-1990) | Add One-way SSL MySQL support in Ranger Admin |
| BUG-96718 | [ATLAS-2439](https://issues.apache.org/jira/browse/ATLAS-2439) | Update Sqoop hook to use V2 notifications |
-| BUG-96748 | [HIVE-18587](https://issues.apache.org/jira/browse/HIVE-18587) | insert DML event may attempt to calculate a checksum on directories |
+| BUG-96748 | [HIVE-18587](https://issues.apache.org/jira/browse/HIVE-18587) | insert DML event might attempt to calculate a checksum on directories |
| BUG-96821 | [HBASE-18212](https://issues.apache.org/jira/browse/HBASE-18212) | In Standalone mode with local filesystem HBase logs Warning message: Failed to invoke 'unbuffer' method in class org.apache.hadoop.fs.FSDataInputStream |
| BUG-96847 | [HIVE-18754](https://issues.apache.org/jira/browse/HIVE-18754) | REPL STATUS should support 'with' clause |
| BUG-96873 | [ATLAS-2443](https://issues.apache.org/jira/browse/ATLAS-2443) | Capture required entity attributes in outgoing DELETE messages |
| BUG-96880 | [SPARK-23230](https://issues.apache.org/jira/browse/SPARK-23230) | When hive.default.fileformat is other kinds of file types, create `textfile` table cause a `serde` error |
| BUG-96911 | [OOZIE-2571](https://issues.apache.org/jira/browse/OOZIE-2571), [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792), [OOZIE-2799](https://issues.apache.org/jira/browse/OOZIE-2799), [OOZIE-2923](https://issues.apache.org/jira/browse/OOZIE-2923) | Improve Spark options parsing |
-| BUG-97100 | [RANGER-1984](https://issues.apache.org/jira/browse/RANGER-1984) | HBase audit log records may not show all tags associated with accessed column |
+| BUG-97100 | [RANGER-1984](https://issues.apache.org/jira/browse/RANGER-1984) | HBase audit log records might not show all tags associated with accessed column |
| BUG-97110 | [PHOENIX-3789](https://issues.apache.org/jira/browse/PHOENIX-3789) | Execute cross region index maintenance calls in postBatchMutateIndispensably |
| BUG-97145 | [HIVE-12245](https://issues.apache.org/jira/browse/HIVE-12245), [HIVE-17829](https://issues.apache.org/jira/browse/HIVE-17829) | Support column comments for an HBase backed table |
| BUG-97409 | [HADOOP-15255](https://issues.apache.org/jira/browse/HADOOP-15255) | Upper/Lower case conversion support for group names in LdapGroupsMapping |
Fixed issues represent selected issues that were previously logged via Hortonwor
SSL\_RSA\_WITH\_RC4\_128\_MD5, SSL\_RSA\_WITH\_RC4\_128\_SHA, TLS\_RSA\_WITH\_AES\_128\_CBC\_SHA, SSL\_RSA\_WITH\_3DES\_EDE\_CBC\_SHA

>[!NOTE]
- >The noted values are working examples and may not be indicative of your environment. Ensure that the way you set these properties matches how your environment is configured.
+ >The noted values are working examples and might not be indicative of your environment. Ensure that the way you set these properties matches how your environment is configured.
- **RangerUI: Escape of policy condition text entered in the policy form**
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 10/10/2023 Last updated : 10/26/2023

# Azure HDInsight release notes
This article provides information about the **most recent** Azure HDInsight rele
Azure HDInsight is one of the most popular services among enterprise customers for open-source analytics on Azure. Subscribe to the [HDInsight Release Notes](./subscribe-to-hdi-release-notes-repo.md) for up-to-date information on HDInsight and all HDInsight versions.
+In this release, the HDInsight 5.1 version moves to the General Availability (GA) stage.
+ To subscribe, click the "watch" button in the banner and watch for [HDInsight Releases](https://github.com/Azure/HDInsight/releases).
-## Release date: September 7, 2023
+## Release date: October 26, 2023
-This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2308221128**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+This release applies to HDInsight 4.x and 5.x. The release will be available to all regions over several days. This release is applicable for image number **2310140056**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
-HDInsight uses safe deployment practices, which involve gradual region deployment. it may take up to 10 business days for a new release or a new version to be available in all regions.
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
**OS versions**
For workload specific versions, see
* [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md) * [HDInsight 4.x component versions](./hdinsight-40-component-versioning.md)
-> [!IMPORTANT]
-> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image **2308221128**. Customers are advised to plan accordingly.
+## What's new
-|CVE | Severity| CVE Title| Remark |
-|-|-|-|-|
-|[CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156)| Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |Included on 2308221128 image |
-|[CVE-2023-36419](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36419) | Important | Azure HDInsight Apache Oozie Workflow Scheduler Elevation of Privilege Vulnerability | Apply [Script action](https://hdiconfigactions2.blob.core.windows.net/msrc-script/script_action.sh) on your clusters |
+* HDInsight announces the general availability of HDInsight 5.1 starting October 26, 2023. This release brings a full-stack refresh of the [open source components](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) and the integrations from Microsoft.
+ * Latest open-source versions – [HDInsight 5.1](./hdinsight-5x-component-versioning.md) comes with the latest stable [open-source versions](./hdinsight-5x-component-versioning.md#open-source-components-available-with-hdinsight-5x) available. Customers can benefit from all the latest open-source features, Microsoft performance improvements, and bug fixes.
+ * Secure – The latest versions come with the most recent security fixes, both open-source security fixes and security improvements by Microsoft.
+ * Lower TCO – With performance enhancements, customers can lower their operating costs, along with [enhanced autoscale](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/enhanced-autoscale-capabilities-in-hdinsight-clusters/ba-p/3811271).
+
+* Cluster permissions for secure storage
+ * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to connect to the storage account.
+
+* HDInsight Cluster Creation with Custom VNets.
+ * To improve the overall security posture of HDInsight clusters, clusters that use custom VNets require the user to have `Microsoft.Network/virtualNetworks/subnets/join/action` permission to perform create operations. Customers might face creation failures if this check isn't satisfied.
+
+ * Non-ESP ABFS clusters [Cluster Permissions for World Readable]
+ * Non-ESP ABFS clusters restrict non-Hadoop group users from executing Hadoop commands for storage operations. This change improves the cluster security posture.
## ![Icon showing coming soon.](./media/hdinsight-release-notes/clock.svg) Coming soon
-* The max length of cluster name will be changed to 45 from 59 characters, to improve the security posture of clusters. This change will be implemented by September 30, 2023.
-* Cluster permissions for secure storage
- * Customers can specify (during cluster creation) whether a secure channel should be used for HDInsight cluster nodes to contact the storage account.
+* The maximum length of cluster names will be changed from 59 to 45 characters, to improve the security posture of clusters. This change will be rolled out to all regions starting with the upcoming release.
+ * In-line quota update.
+    * Request quota increases directly from the My Quota page through a direct API call, which is faster. If the API call fails, customers need to create a new support request for the quota increase.
-* HDInsight Cluster Creation with Custom VNets.
- * To improve the overall security posture of the HDInsight clusters, HDInsight clusters using custom VNETs need to ensure that the user needs to have permission for `Microsoft Network/virtualNetworks/subnets/join/action` to perform create operations. Customers would need to plan accordingly as this change would be a mandatory check to avoid cluster creation failures before September 30, 2023.ΓÇ»
+ * Basic and Standard A-series VMs Retirement.
- * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs). To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
-* Non-ESP ABFS clusters [Cluster Permissions for Word Readable]
- * Plan to introduce a change in non-ESP ABFS clusters, which restricts non-Hadoop group users from executing Hadoop commands for storage operations. This change to improve cluster security posture. Customers need to plan for the updates before September 30, 2023.ΓÇ»
+ * On August 31, 2024, we will retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+ * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview). You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight)
-YouΓÇÖre welcome to add more proposals and ideas and other topics here and vote for them - [HDInsight Community (azure.com)](https://feedback.azure.com/d365community/search/?q=HDInsight).
+We are listening: you're welcome to add more ideas and other topics here and vote for them - [HDInsight Ideas](https://feedback.azure.com/d365community/search/?q=HDInsight). Follow us for more updates on the [AzureHDInsight Community](https://www.linkedin.com/groups/14313521/).
+
+> [!NOTE]
+> This release addresses the following CVEs released by [MSRC](https://msrc.microsoft.com/update-guide/vulnerability) on September 12, 2023. The action is to update to the latest image 2308221128 or 2310140056. Customers are advised to plan accordingly.
+
+| CVE | Severity | CVE Title | Remark |
+| - | - | - | - |
+| [CVE-2023-38156](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-38156) | Important | Azure HDInsight Apache Ambari Elevation of Privilege Vulnerability |Included on image 2308221128 or 2310140056 |
+| [CVE-2023-36419](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-36419) | Important | Azure HDInsight Apache Oozie Workflow Scheduler Elevation of Privilege Vulnerability | Apply [Script action](https://hdiconfigactions2.blob.core.windows.net/msrc-script/script_action.sh) on your clusters, or update to 2310140056 image |
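+
+As a hedged sketch of the remediation named in the table, the script action can be applied to an existing cluster with the Azure CLI; the cluster, resource group, and script-action names below are placeholders:
+
+```sh
+# Apply the MSRC remediation script to an existing cluster (names are placeholders).
+# --persist-on-success keeps the script applied to nodes added later by scaling.
+az hdinsight script-action execute \
+    --resource-group MyResourceGroup \
+    --cluster-name mycluster \
+    --name msrc-oozie-remediation \
+    --script-uri "https://hdiconfigactions2.blob.core.windows.net/msrc-script/script_action.sh" \
+    --roles headnode workernode \
+    --persist-on-success
+```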
- > [!NOTE]
- > We advise customers to use to latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
+> [!NOTE]
+> We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open-source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
### Next steps * [Azure HDInsight: Frequently asked questions](./hdinsight-faq.yml)
hdinsight Migrate 5 1 Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/migrate-5-1-versions.md
+
+ Title: Migrate Apache Kafka workloads to Azure HDInsight 5.1
+description: Learn how to migrate Apache Kafka workloads on HDInsight 4.0 to HDInsight 5.1.
++ Last updated : 10/26/2023++
+# Migrate Apache Kafka workloads to Azure HDInsight 5.1
+
+Azure HDInsight 5.1 offers the latest open-source components with significant enhancements in performance, connectivity, and security. This document explains how to migrate Apache Kafka workloads from HDInsight 4.0 (Kafka 2.1) to HDInsight 5.1 (Kafka 3.2).
++
+## Apache Kafka versions
+
+### Apache Kafka 3.2.0
+
+If you migrate from Kafka 2.1 to Kafka 3.2.0, you can take advantage of the following new features:
++++
+- Support for automated consumer offset sync across clusters in MM 2.0, making it easier to migrate or fail over consumers across clusters. (KIP-545)
+- Hint to the partition leader to recover the partition: A new feature that allows the controller to communicate to a newly elected topic partition leader whether it needs to recover its state (KIP-704)
+- Supports TLS 1.2 by default for secure communication
+- ZooKeeper dependency removal: Producers and consumers no longer need the ZooKeeper parameter. Use the `--bootstrap-server` option instead of `--zookeeper` with CLI commands, as shown in the sketch after this list. (KIP-500)
+- Configurable backlog size for creating Acceptor: A new configuration that allows setting the size of the SYN backlog for TCP's acceptor sockets on the brokers (KIP-764)
+- Top-level error code field to DescribeLogDirsResponse: A new error code that makes DescribeLogDirs API consistent with other APIs and allows returning other errors besides CLUSTER_AUTHORIZATION_FAILED (KIP-784)
++
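+For example, administrative CLI calls that previously pointed at ZooKeeper now address the brokers directly. A minimal sketch (host names are placeholders for your cluster's endpoints):
+
+```sh
+# Before (Kafka 2.1 era): kafka-topics.sh could talk to ZooKeeper directly.
+kafka-topics.sh --zookeeper zk0-contoso.example.com:2181 --list
+
+# Kafka 3.2: the --zookeeper option is removed; address the brokers instead.
+kafka-topics.sh --bootstrap-server wn0-contoso.example.com:9092 --list
+```
+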
+For a complete list of updates, see [Apache Kafka 3.2.0 release notes](https://archive.apache.org/dist/kafka/3.2.0/RELEASE_NOTES.html).
++
+## Kafka client compatibility
+
+New Kafka brokers support older clients. [KIP-35 - Retrieving protocol version](https://cwiki.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version) introduced a mechanism for dynamically determining the functionality of a Kafka broker and [KIP-97: Improved Kafka Client RPC Compatibility Policy](https://cwiki.apache.org/confluence/display/KAFKA/KIP-97%3A+Improved+Kafka+Client+RPC+Compatibility+Policy) introduced a new compatibility policy and guarantees for the Java client. Previously, a Kafka client had to interact with a broker of the same version or a newer version. Now, newer versions of the Java clients and other clients that support KIP-35 such as `librdkafka` can fall back to older request types or throw appropriate errors if functionality isn't available.
++
+> [!NOTE]
+> We recommend using a Kafka client version that matches the cluster version. For more information, see [Compatibility Matrix](https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix).
+
+## General migration process
+
+The following migration guidance assumes an Apache Kafka 2.1.1 cluster deployed on HDInsight 4.0 in a single virtual network. The existing broker has some topics and is being actively used by producers and consumers.
+Upgrading the Kafka version on an existing cluster isn't supported. After you create a cluster with HDI 5.1, migrate your Kafka clients to use the new cluster.
++
+To complete the migration, do the following steps:
+
+1. **Deploy a new HDInsight 5.1 cluster and clients for test.** Deploy a new HDInsight 5.1 Kafka cluster. If multiple Kafka cluster versions can be selected, we recommend selecting the latest one. After deployment, set parameters as needed and create a topic with the same name as in your existing environment (see the CLI sketch after the screenshot). Also, set TLS and bring-your-own-key (BYOK) encryption as needed. Then check that everything works correctly with the new cluster.
+
+ :::image type="content" source="./media/migrate-5-1-versions/deploy-new-hdinsight-clusters.png" alt-text="Screenshot shows how to Deploy new HDInsight 5.1 clusters." lightbox="./media/migrate-5-1-versions/deploy-new-hdinsight-clusters.png":::
+
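+    As a minimal sketch, the test cluster can also be deployed with the Azure CLI; all names, credentials, and disk counts below are placeholders to adapt to your environment:
+
+    ```sh
+    # Deploy an HDInsight 5.1 Kafka cluster (all values are placeholders).
+    az hdinsight create \
+        --name kafka-hdi51-test \
+        --resource-group MyResourceGroup \
+        --type kafka \
+        --version 5.1 \
+        --component-version Kafka=3.2 \
+        --workernode-data-disks-per-node 2 \
+        --storage-account mystorageaccount \
+        --http-user admin --http-password "<cluster-password>" \
+        --ssh-user sshuser --ssh-password "<ssh-password>"
+    ```
+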
+1. **Switch the cluster for the producer application, and wait until all the queue data is consumed by the current consumers.** When the new HDInsight 5.1 Kafka cluster is ready, switch the existing producer destination to the new cluster. Leave it as is until the existing consumer app has consumed all the data from the existing cluster.
+
+ :::image type="content" source="./media/migrate-5-1-versions/switch-cluster-producer-app.png" alt-text="Screenshot shows how to Switch cluster for producer app." lightbox="./media/migrate-5-1-versions/switch-cluster-producer-app.png":::
+
+1. **Switch the cluster on the consumer application.** After confirming that the existing consumer application has finished consuming all data from the existing cluster, switch the connection to the new cluster.
+
+ :::image type="content" source="./media/migrate-5-1-versions/switch-cluster-consumer-app.png" alt-text="Screenshot shows how to Switch cluster on consumer app." lightbox="./media/migrate-5-1-versions/switch-cluster-consumer-app.png":::
+
+1. **Remove the old cluster and test applications as needed.** Once the switch is complete and working properly, remove the old HDInsight 4.0 Kafka cluster and the producers and consumers used in the test as needed.
+
+## Next steps
+
+* [Performance optimization for Apache Kafka HDInsight clusters](apache-kafka-performance-tuning.md)
+* [Quickstart: Create Apache Kafka cluster in Azure HDInsight using Azure portal](apache-kafka-get-started.md)
healthcare-apis Dicom Service V2 Api Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md
Failed validation of attributes not required by the API results in the file bein
A warning is given about each failing attribute per instance. When a sequence contains an attribute that fails validation, or when there are multiple issues with a single attribute, only the first failing attribute reason is noted. There are some notable behaviors for optional attributes that fail validation:
- * Searches for the attribute that failed validation returns the study/series/instance.
+ * Searches for the attribute that failed validation return the study/series/instance if the value is corrected in one of the ways [mentioned below](#search-results-might-be-incomplete-for-extended-query-tags-with-validation-warnings).
* The attributes aren't returned when retrieving metadata via WADO `/metadata` endpoints. Retrieving a study/series/instance always returns the original binary files with the original attributes, even if those attributes failed validation.
Single frame retrieval is supported by adding the following `Accept` header:
### Search

#### Search results might be incomplete for extended query tags with validation warnings
-In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](dicom-services-conformance-statement-v2.md#searchable-attributes), subsequent searches containing these tags don't consider any DICOM SOP instance that produced a warning. This behavior might result in incomplete search results. To correct an attribute, delete the stored instance and upload the corrected data.
+In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings for [searchable attributes](dicom-services-conformance-statement-v2.md#searchable-attributes) at the time the instance was stored, those attributes might not be usable to search for the stored instance. However, searchable attributes that failed validation can still return results if the values are overwritten by instances in the same study/series that are stored after the failed one, or if the values were already stored correctly by a previous instance. If the attribute values aren't overwritten, they don't produce any search results.
+
+An attribute can be corrected in the following ways:
+- Delete the stored instance and upload a new instance with the corrected data
+- Upload a new instance in the same study/series with corrected data
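+
+As a hedged sketch of the first option (the service URL, UIDs, token, and file name are all placeholders), deleting and re-uploading over DICOMweb might look like:
+
+```sh
+# 1. Delete the instance whose attribute failed validation (placeholders throughout).
+curl -X DELETE "$SERVICE_URL/v2/studies/$STUDY_UID/series/$SERIES_UID/instances/$INSTANCE_UID" \
+    -H "Authorization: Bearer $TOKEN"
+
+# 2. Store the corrected instance again with STOW-RS.
+curl -X POST "$SERVICE_URL/v2/studies" \
+    -H "Authorization: Bearer $TOKEN" \
+    -H "Accept: application/dicom+json" \
+    -H "Content-Type: application/dicom" \
+    --data-binary @corrected-instance.dcm
+```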
#### Fewer Study, Series, and Instance attributes are returned by default

The set of attributes returned by default has been reduced to improve performance. See the detailed list in the [search response](./dicom-services-conformance-statement-v2.md#search-response) documentation.
The Change Feed API now accepts optional `startTime` and `endTime` parameters to
For v2, it's recommended to always include a time range to improve performance.
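For example, a change feed query scoped to a single day might look like the following sketch, assuming the v2 change feed endpoint at `{service-url}/v2/changefeed`; the service URL, token, and timestamps are placeholders:

```sh
# Query the v2 change feed for one day only (all values are placeholders).
curl "$SERVICE_URL/v2/changefeed?startTime=2023-10-01T00:00:00Z&endTime=2023-10-02T00:00:00Z" \
    -H "Authorization: Bearer $TOKEN" \
    -H "Accept: application/json"
```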
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
The following `Accept` header(s) are supported for searching:
* `application/dicom+json`

### Search changes from v1
-In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag returns `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings on [searchable tags](#searchable-attributes), subsequent searches containing these tags doesn't consider any DICOM SOP instance that produced a warning. This behavior might result in incomplete search results.
-To correct an attribute, delete the stored instance and upload the corrected data.
+In the v1 API and continued for v2, if an [extended query tag](dicom-extended-query-tags-overview.md) has any errors, because one or more of the existing instances had a tag value that couldn't be indexed, then subsequent search queries containing the extended query tag return `erroneous-dicom-attributes` as detailed in the [documentation](dicom-extended-query-tags-overview.md#tag-query-status). However, tags (also known as attributes) with validation warnings from STOW-RS are **not** included in this header. If a store request results in validation warnings for [searchable attributes](#searchable-attributes) at the time the [instance was stored](#store-changes-from-v1), those attributes might not be usable to search for the stored instance. However, searchable attributes that failed validation can still return results if the values are overwritten by instances in the same study/series that are stored after the failed one, or if the values were already stored correctly by a previous instance. If the attribute values aren't overwritten, they don't produce any search results.
+
+An attribute can be corrected in the following ways:
+- Delete the stored instance and upload a new instance with the corrected data
+- Upload a new instance in the same study/series with corrected data
### Supported search parameters
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-extend.md
To extend IoT Central's built-in rules and analytics capabilities, use the data
- Enrich, and transform your IoT data to generate advanced visualizations that provide insights.
- Extract business metrics and use artificial intelligence and machine learning to derive business insights from your IoT data.
-- Monitoring and diagnostics for millions of connected IoT devices.
+- Monitoring and diagnostics for hundreds of thousands of connected IoT devices.
- Combine your IoT data with other business data to build dashboards and reports.

To learn more, see [IoT Central data integration guide](overview-iot-central-solution-builder.md).
iot-central Concepts Faq Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md
# What does it mean for IoT Central to have high availability, disaster recovery (HADR), and elastic scale?
-Azure IoT Central is an application platform as a service (aPaaS) that manages scalability and HADR for you. An IoT Central application can scale to support millions of connected devices. For more information about device and message pricing, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). For more information about the service level agreement, see [SLA for Azure IoT Central](https://azure.microsoft.com/support/legal/sla/iot-central/v1_0/).
+Azure IoT Central is an application platform as a service (aPaaS) that manages scalability and HADR for you. An IoT Central application can scale to support hundreds of thousands of connected devices. For more information about device and message pricing, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). For more information about the service level agreement, see [SLA for Azure IoT Central](https://azure.microsoft.com/support/legal/sla/iot-central/v1_0/).
This article provides background information about how IoT Central scales and delivers HADR. The article also includes guidance on how to take advantage of these capabilities.
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-quotas-limits.md
Title: Azure IoT Central quotas and limits
description: This article lists the key quotas and limits that apply to an IoT Central application including from the underlying DPS and IoT Hub services. Previously updated : 06/12/2023 Last updated : 10/26/2023
There are various quotas and limits that apply to IoT Central applications. IoT Central applications internally use multiple Azure services such as IoT Hub and the Device Provisioning Service (DPS), and these services also have quotas and limits. Where relevant, quotas and limits in the underlying services are called out in this article.
-> [!NOTE]
-> The quotas and limits described in this article apply to the new multiple IoT hub architecture. Currently, there are a few legacy IoT Central applications that were created before April 2021 that haven't yet been migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command in the [Azure CLI](/cli/azure/?view=azure-cli-latest&preserve-view=true) to check if your application still uses a single IoT hub. This triggers an IoT hub failover if your application uses the multiple IoT hub architecture. It returns an error if your application uses the older architecture.
- ## Devices
-| Item | Quota or limit | Notes |
-| - | -- | -- |
-| Number of devices in an application | 1,000,000 | Contact support to discuss increasing this quota for your application. |
-| Number of IoT Central simulated devices in an application | 100 | Contact support to discuss increasing this quota for your application. |
+| Item | Quota or limit |
+| - | -- |
+| Number of devices in an application | 200,000 |
+| Number of IoT Central simulated devices in an application | 100 |
## Telemetry

| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of telemetry messages per second per device| 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
+| Number of messages per second per application | 200 | Individual devices can temporarily send up to 10 messages per second. |
| Maximum size of a device-to-cloud message | 256 KB | The IoT Hub service sets this value. |
| Maximum size of a cloud-to-device message | 64 KB | The IoT Hub service sets this value. |
There are various quotas and limits that apply to IoT Central applications. IoT
## REST API calls
-| Item | Quota or limit | Notes |
-| - | -- | -- |
-| Query API requests per second | 1 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
-| Other API requests per second | 20 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
+| Item | Quota or limit |
+| - | -- |
+| Query API requests per second | 1 |
+| Other API requests per second | 20 |
## Storage
There are various quotas and limits that apply to IoT Central applications. IoT
| Number of data export destinations per job | 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
| Number of filters and enrichments per data export job | 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
+For large volumes of export data, you might experience up to 60 seconds of latency. Typically, the latency is much lower than this.
+
## Device modeling

| Item | Quota or limit | Notes |
There are various quotas and limits that apply to IoT Central applications. IoT
| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of device groups in an application | 1,000 | For performance reasons, you shouldn't exceed this limit. |
+| Number of device groups in an application | 500 | For performance reasons, you shouldn't exceed this limit. |
| Number of filters in a device group | 100 | For performance reasons, you shouldn't exceed this limit. |

## Device provisioning

| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of devices registrations per minute | 200 | The underlying DPS instance sets this quota. Contact support to discuss increasing this quota for your application. |
+| Number of device registrations per minute | 200 | The underlying DPS instance sets this quota. |
## Rules

| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of rules in an application | 50 | Contact support to discuss increasing this quota for your application. |
+| Number of rules in an application | 50 | This quota is fixed and can't be changed. |
| Number of actions in a rule | 5 | This quota is fixed and can't be changed. |
| Number of alerts for an email action | One alert every minute per rule | This quota is fixed and can't be changed. |
| Number of alerts for a webhook action | One alert every 10 seconds per action | This quota is fixed and can't be changed. |
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
# IoT Central device connectivity guide
-An IoT Central application lets you monitor and manage millions of devices throughout their life cycle. This guide is for device developers who implement the code to run on devices that connect to IoT Central.
+An IoT Central application lets you monitor and manage hundreds of thousands of devices throughout their life cycle. This guide is for device developers who implement the code to run on devices that connect to IoT Central.
Devices interact with an IoT Central application by using the following primitives:
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md
# IoT Central device management guide
-An IoT Central application lets you monitor and manage millions of devices throughout their life cycle.
+An IoT Central application lets you monitor and manage hundreds of thousands of devices throughout their life cycle.
IoT Central lets you complete device management tasks such as:
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Scenarios that process IoT data outside of IoT Central to extract business value
- Streaming computation, monitoring, and diagnostics
- IoT Central provides a scalable and reliable infrastructure to capture streaming data from millions of connected devices. Sometimes, you need to run stream computations over the hot or warm data paths to meet business requirements. You can also merge IoT data with data in external stores such as Azure Data explorer to provide enhanced diagnostics.
+ IoT Central provides a scalable and reliable infrastructure to capture streaming data from hundreds of thousands of connected devices. Sometimes, you need to run stream computations over the hot or warm data paths to meet business requirements. You can also merge IoT data with data in external stores such as Azure Data Explorer to provide enhanced diagnostics.
- Analyze and visualize IoT data alongside business data
load-balancer Quickstart Load Balancer Standard Internal Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-bicep.md
This quickstart describes how to use Bicep to create an internal Azure load balancer.
+
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]

## Prerequisites
load-balancer Quickstart Load Balancer Standard Internal Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-template.md
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an internal Azure load balancer.
++
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]

If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template opens in the Azure portal.
load-balancer Quickstart Load Balancer Standard Public Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-bicep.md
Previously updated : 09/27/2023 Last updated : 10/25/2023 #Customer intent: I want to create a load balancer by using a Bicep file so that I can load balance internet traffic to VMs.
Load balancing provides a higher level of availability and scale by spreading in
This quickstart shows you how to deploy a standard load balancer to load balance virtual machines.
+
Using a Bicep file takes fewer steps compared to other deployment methods.

[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
Multiple Azure resources have been defined in the bicep file:
- [**Microsoft.Compute/virtualMachine/extensions**](/azure/templates/microsoft.compute/virtualmachines/extensions) (3): use to configure the Internet Information Server (IIS), and the web pages. > [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
- To find more Bicep files or ARM templates that are related to Azure Load Balancer, see [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Network&pageNumber=1&sort=Popular). ## Deploy the Bicep file
load-balancer Quickstart Load Balancer Standard Public Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-template.md
Previously updated : 12/13/2022 Last updated : 10/25/2023 #Customer intent: I want to create a load balancer by using an Azure Resource Manager template so that I can load balance internet traffic to VMs.
Load balancing provides a higher level of availability and scale by spreading in
This quickstart shows you how to deploy a standard load balancer to load balance virtual machines.
+
Using an ARM template takes fewer steps compared to other deployment methods.

[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
load-balancer Tutorial Load Balancer Port Forwarding Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-load-balancer-port-forwarding-portal.md
In this tutorial, you learn how to:
> * Create a NAT gateway for outbound internet access for the backend pool
> * Install and configure a web server on the VMs to demonstrate the port forwarding and load-balancing rules

## Prerequisites
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
Batch endpoints support Microsoft Entra authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Microsoft Entra authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. The following article explains how to correctly interact with batch endpoints and the security requirements for it.
-## Prerequisites
-
-* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
-
## How authorization works

To invoke a batch endpoint, the user must present a valid Microsoft Entra token representing a __security principal__. This principal can be a __user principal__ or a __service principal__. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. The identity needs the following permissions in order to successfully create a job:
To invoke a batch endpoint, the user must present a valid Microsoft Entra token
> * Read and write from/to data stores. > * Lists datastore secrets.
-You can either use one of the [built-in security roles](../role-based-access-control/built-in-roles.md) or create a new one. In any case, the identity used to invoke the endpoints requires to be granted the permissions explicitly. See [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) for instructions to assign them.
+See [Configure RBAC for batch endpoint invoke](#configure-rbac-for-batch-endpoints-invoke) for a detailed list of RBAC permissions.
> [!IMPORTANT]
> The identity used for invoking a batch endpoint may not be used to read the underlying data depending on how the data store is configured. Please see [Configure compute clusters for data access](#configure-compute-clusters-for-data-access) for more details.
The following examples show different ways to start batch deployment jobs using
> [!IMPORTANT]
> When working in private link-enabled workspaces, batch endpoints can't be invoked from the UI in Azure Machine Learning studio. Please use the Azure Machine Learning CLI v2 instead for job creation.
+### Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+ ### Running jobs using user's credentials In this case, we want to execute a batch endpoint using the identity of the user currently logged in. Follow these steps:
You can also use the Azure CLI to get an authentication token for the managed identity.
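As a minimal sketch (assuming the Azure CLI runs on a compute with that managed identity assigned), the token for the Azure Machine Learning resource can be fetched like this:

```sh
# Sign in as the managed identity, then fetch a token for the
# Azure Machine Learning resource (https://ml.azure.com).
az login --identity
az account get-access-token --resource https://ml.azure.com --query accessToken -o tsv
```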
+## Configure RBAC for Batch Endpoints invoke
+
+Batch endpoints expose a durable API that consumers can use to create jobs. The invoker requires proper permissions to create those jobs. You can either use one of the [built-in security roles](../role-based-access-control/built-in-roles.md) or you can create a custom role for this purpose.
+
+To successfully invoke a batch endpoint, you need the following explicit actions granted to the identity used to invoke the endpoint. See [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md) for instructions to assign them.
+
+```json
+"actions": [
+ "Microsoft.MachineLearningServices/workspaces/read",
+ "Microsoft.MachineLearningServices/workspaces/data/versions/write",
+ "Microsoft.MachineLearningServices/workspaces/datasets/registered/read",
+ "Microsoft.MachineLearningServices/workspaces/datasets/registered/write",
+ "Microsoft.MachineLearningServices/workspaces/datasets/unregistered/read",
+ "Microsoft.MachineLearningServices/workspaces/datasets/unregistered/write",
+ "Microsoft.MachineLearningServices/workspaces/datastores/read",
+ "Microsoft.MachineLearningServices/workspaces/datastores/write",
+ "Microsoft.MachineLearningServices/workspaces/datastores/listsecrets/action",
+ "Microsoft.MachineLearningServices/workspaces/listStorageAccountKeys/action",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/read",
+ "Microsoft.MachineLearningServices/workspaces/batchEndpoints/deployments/read",
+ "Microsoft.MachineLearningServices/workspaces/computes/read",
+ "Microsoft.MachineLearningServices/workspaces/computes/listKeys/action",
+ "Microsoft.MachineLearningServices/workspaces/metadata/secrets/read",
+ "Microsoft.MachineLearningServices/workspaces/metadata/snapshots/read",
+ "Microsoft.MachineLearningServices/workspaces/metadata/artifacts/read",
+ "Microsoft.MachineLearningServices/workspaces/metadata/artifacts/write",
+ "Microsoft.MachineLearningServices/workspaces/experiments/read",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/submit/action",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/read",
+ "Microsoft.MachineLearningServices/workspaces/experiments/runs/write",
+ "Microsoft.MachineLearningServices/workspaces/metrics/resource/write",
+ "Microsoft.MachineLearningServices/workspaces/modules/read",
+ "Microsoft.MachineLearningServices/workspaces/models/read",
+ "Microsoft.MachineLearningServices/workspaces/endpoints/pipelines/read",
+ "Microsoft.MachineLearningServices/workspaces/endpoints/pipelines/write",
+ "Microsoft.MachineLearningServices/workspaces/environments/read",
+ "Microsoft.MachineLearningServices/workspaces/environments/write",
+ "Microsoft.MachineLearningServices/workspaces/environments/build/action"
+ "Microsoft.MachineLearningServices/workspaces/environments/readSecrets/action"
+]
+```
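+
+As a hedged sketch, these actions can be packaged into a custom role and assigned with the Azure CLI; the role name, definition file, and scope below are placeholders:
+
+```sh
+# role.json wraps the "actions" list above with a Name, Description, and AssignableScopes.
+az role definition create --role-definition role.json
+
+# Grant the custom role to the invoking identity at workspace scope (all IDs are placeholders).
+az role assignment create \
+    --assignee "<user-or-service-principal-object-id>" \
+    --role "Batch Endpoint Invoker (custom)" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>"
+```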
+
## Configure compute clusters for data access

Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials might be used to read the underlying data. Use the following table to understand which credentials are used:
machine-learning How To Debug Pipeline Reuse Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-pipeline-reuse-issues.md
The environment can also be compared in the graph comparison feature. We'll cove
### Step 5: Use graph comparison to check if there's any other change to the inputs, parameters, output settings, run settings
-You can compare the input data, parameters, output settings, run settings of the two components using graph compare. To learn more, see [how to enable and use the graph compare feature](./how-to-use-pipeline-ui.md#compare-different-pipelines-to-debug-failure-or-other-unexpected-issues-preview)
+You can compare the input data, parameters, output settings, and run settings of two pipeline jobs or components using the compare feature. To learn more, see [how to enable and use the graph compare feature](./how-to-use-pipeline-ui.md#compare-different-pipelines-to-debug-failure-or-other-unexpected-issues-preview).
+
+To identify any changes in pipeline topology, pipeline input/output, or pipeline settings between two pipelines, select **Compare graph** after adding two pipeline jobs to the compare list.
++
+Furthermore, you can compare two components to observe whether there have been any changes in the component input/output, component settings, or source code. To do this, select **Compare details** after adding two components to the compare list.
:::image type="content" source="./media/how-to-debug-pipeline-reuse/compare.png" alt-text="Screenshot showing detail comparison.":::
machine-learning How To Custom Tool Package Creation And Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-custom-tool-package-creation-and-usage.md
Last updated 09/12/2023
# Custom tool package creation and usage (preview)
-When develop flows, you can not only use the built-in tools provided by Prompt Flow, but also develop your own custom tool. In this article, we'll guide you through the process of developing your own tool package, offering detailed steps and advice on how to utilize your creation.
+When developing flows, you can not only use the built-in tools provided by Prompt flow, but also develop your own custom tool. In this document, we guide you through the process of developing your own tool package, offering detailed steps and advice on how to utilize your creation.
+
+After successful installation, your custom tool shows up in the tool list:
> [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and are not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
When develop flows, you can not only use the built-in tools provided by Prompt F
## Create your own tool package
-Your tool package should be a Python package. To try, see [my-tools-package 0.0.1](https://pypi.org/project/my-tools-package/) and skip this section.
-
-### Prerequisites
-
-Create a new conda environment using Python 3.9 or 3.10. Run the following command to install Prompt Flow dependencies:
-
-```sh
-# eventually only need to install promptflow
-pip install promptflow-sdk promptflow --extra-index-url https://azuremlsdktestpypi.azureedge.net/promptflow/
-```
-
-Install Pytest packages for running tests:
-
-```sh
-pip install pytest
-pip install pytest-mock
-```
-
-### Create custom tool package
-
-Run the following command under root folder to create your tool project quickly:
-
-```sh
-python scripts\generate_tool_package_template.py --destination <your-tool-project> --package-name <your-package-name> --tool-name <your-tool-name> --function-name <your-tool-function-name>
-```
-
-For example:
-
-```sh
-python scripts\generate_tool_package_template.py --destination hello-world-proj --package-name hello-world --tool-name hello_world_tool --function-name get_greeting_message
-```
-
-This autogenerated script will create one tool for you. The parameters _destination_ and _package-name_ are mandatory. The parameters _tool-name_ and _function-name_ are optional. If left unfilled, the _tool-name_ will default to _hello_world_tool_, and the _function-name_ will default to _tool-name_.
-
-The command will generate the tool project as follows with one tool `hello_world_tool.py` in it:
--
-The following points outlined explain the purpose of each folder/file in the package. If your aim is to develop multiple tools within your package, make sure to closely examine bullet hello-world/tools and hello_world/yamls/hello_world_tool.yaml:
--- **hello-world-proj**: This is the source directory. All of your project's source code should be placed in this directory.--- **hello-world/tools**: This directory contains the individual tools for your project. Your tool package can contain either one tool or many tools. When adding a new tool, you should create another *_tool.py under the `tools` folder.--- **hello-world/tools/hello_world_tool.py**: Develop your tool within the def function. Use the `@tool` decorator to identify the function as a tool.
- > [!Note]
- > There are two ways to write a tool. The default and recommended way is the function implemented way. You can also use the class implementation way.
--- **hello-world/tools/utils.py**: This file implements the tool list method, which collects all the tools defined. It's required to have this tool list method, as it allows the User Interface (UI) to retrieve your tools and display them within the UI.-
- > [!Note]
- > There's no need to create your own list method if you maintain the existing folder structure. You can simply use the auto-generated list method provided in the `utils.py` file.
--- **hello_world/yamls/hello_world_tool.yaml**: Tool YAMLs defines the metadata of the tool. The tool list method, as outlined in the `utils.py`, fetches these tool YAMLs.-
- You may want to update `name` and `description` to a better one in `your_tool.yaml`, so that tool can have a great name and description hint in prompt flow UI.
-
- > [!Note]
- > If you create a new tool, don't forget to also create the corresponding tool YAML. you can use the following command under your tool project to auto generate your tool YAML.
-
- ```sh
- python ..\scripts\package_tools_generator.py -m <tool_module> -o <tool_yaml_path>
- ```
-
- For example:
-
- ```sh
- python ..\scripts\package_tools_generator.py -m hello_world.tools.hello_world_tool -o hello_world\yamls\hello_world_tool.yaml
- ```
-
- To populate your tool module, adhere to the pattern `\<package_name\>.tools.\<tool_name\>`, which represents the folder path to your tool within the package.
--- **tests**: This directory contains all your tests, though they aren't required for creating your custom tool package. When adding a new tool, you can also create corresponding tests and place them in this directory. Run the following command under your tool project:-
- ```sh
- pytest tests
- ```
--- **MANIFEST.in**: This file is used to determine which files to include in the distribution of the project. Tool YAML files should be included in MANIFEST.in so that your tool YAMLs would be packaged and your tools can show in the UI.-
- > [!Note]
- > There's no need to update this file if you maintain the existing folder structure.
--- **setup.py**: This file contains metadata about your project like the name, version, author, and more. Additionally, the entry point is automatically configured for you in the `generate_tool_package_template.py` script. In Python, configuring the entry point in `setup.py` helps establish the primary execution point for a package, streamlining its integration with other software.-
- The `package_tools` entry point together with the tool list method are used to retrieve all the tools and display them in the UI.
-
- ```python
- entry_points={
- "package_tools": ["<your_tool_name> = <list_module>:<list_method>"],
- },
- ```
-
- > [!Note]
- > There's no need to update this file if you maintain the existing folder structure.
-
-### Build and share the tool package
-
- Execute the following command in the tool package root directory to build your tool package:
-
- ```sh
- python setup.py sdist bdist_wheel
- ```
-
- This will generate a tool package `<your-package>-0.0.1.tar.gz` and corresponding `whl file` inside the `dist` folder.
-
- [Create an account on PyPI](https://pypi.org/account/register/) if you don't already have one, and install `twine` package by running `pip install twine`.
-
- Upload your package to PyPI by running `twine upload dist/*`, this will prompt you for your Pypi username and password, and then upload your package on PyPI. Once your package is uploaded to PyPI, others can install it using pip by running `pip install your-package-name`. Make sure to replace `your-package-name` with the name of your package as it appears on PyPI.
-
- If you only want to put it on Test PyPI, upload your package by running `twine upload --repository-url https://test.pypi.org/legacy/ dist/*`. Once your package is uploaded to Test PyPI, others can install it using pip by running `pip install --index-url https://test.pypi.org/simple/ your-package-name`.
+Your tool package should be a Python package. To develop your custom tool, follow the steps **Create your own tool package** and **Build and share the tool package** in [Create and Use Tool Package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html). You can also [add a tool icon](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/add-a-tool-icon.html) and [add category and tags](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/add-category-and-tags-for-tool.html) for your tool.
## Prepare runtime
-You can create runtime with CI (Compute Instance) or MIR (Managed Inference Runtime). CI is the recommended way.
+To add the custom tool to your tool list, it's necessary to create a runtime, which is based on a customized environment where your custom tool is preinstalled. Here we use [my-tools-package](https://pypi.org/project/my-tools-package/) as an example to prepare the runtime.
### Create customized environment

1. Create a customized environment with docker context.
- 1. Create a customized environment in Azure Machine Learning studio by going to **Environments** then select **Create**. In the settings tab under *Select environment source*, choose " Create a new docker content".
+ 1. Create a customized environment in Azure Machine Learning studio by going to **Environments**, then select **Create**. In the settings tab under *Select environment source*, choose "Create a new docker context."
Currently we support creating environments with the "Create a new docker context" environment source. "Use existing docker image with optional conda file" has a known [limitation](../how-to-manage-environments-v2.md#create-an-environment-from-a-conda-specification) and isn't supported now.
You can create runtime with CI (Compute Instance) or MIR (Managed Inference Runt
   ```sh
   FROM mcr.microsoft.com/azureml/promptflow/promptflow-runtime:latest
- RUN pip install -i https://test.pypi.org/simple/ my-tools-package==0.0.1
+ RUN pip install my-tools-package==0.0.1
   ```

   :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-customized-environment-step-2.png" alt-text="Screenshot of create environment in Azure Machine Learning studio on the customize step." lightbox="./media/how-to-custom-tool-package-creation-and-usage/create-customized-environment-step-2.png":::
- It will take several minutes to create the environment. After it succeeded, you can copy the Azure Container Registry (ACR) from environment detail page for the next step.
-
-2. Create another environment with inference config. This is to support create MIR runtime with the customized environment and deployment scenario.
+ It takes several minutes to create the environment. After it succeeds, you can copy the Azure Container Registry (ACR) path from the environment detail page for the next step.
- >[!Note]
- > This step can only be done through CLI, AML studio UI doesn't support creating environment with inference_config today.
+### Prepare compute instance runtime
- Create env.yaml file like below example:
- >[!Note]
- > Remember to replace the ACR in the 'image' field.
-
- ```yaml
- $schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
- name: my-tool-env-with-inference
-
- # Once the image build succeed in last step, you will see ACR from environment detail page, replace the ACR path here.
- image: a0e352e5655546debe782dc5cb4a52df.azurecr.io/azureml/azureml_39b1850f1ec09f5500365d2b3be13b96
-
- description: promptflow enrivonment with custom tool packages
-
- # make sure the inference_config is specified in yaml, otherwise the endpoint deployment won't work
- inference_config:
- liveness_route:
- port: 8080
- path: /health
- readiness_route:
- port: 8080
- path: /health
- scoring_route:
- port: 8080
- path: /score
- ```
-
- Run Azure Machine Learning CLI to create environment:
-
- ```cli
- # optional
- az login
-
- # create your environment in workspace
- az ml environment create --subscription <sub-id> -g <resource-group> -w <workspace> -f env.yaml
- ```
-
-### Prepare runtime with CI or MIR
-
-3. Create runtime with CI using the customized environment created in step 2.
- 1. Create a new compute instance. Existing compute instance created long time ago may hit unexpected issue.
- 1. Create runtime on CI with customized environment.
+1. Create a compute instance runtime using the customized environment created in the previous section.
+ 1. Create a new compute instance. An existing compute instance created a long time ago might hit unexpected issues.
+ 2. Create a runtime on the compute instance with the customized environment.
:::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-compute-instance.png" alt-text="Screenshot of add compute instance runtime in Azure Machine Learning studio."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-compute-instance.png":::
-4. Create runtime with MIR using the customized environment created in step 2. To learn how to create a runtime with MIR, see [How to create a manage runtime](how-to-create-manage-runtime.md).
-
- :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-managed-inference-runtime.png" alt-text="Screenshot of add managed online deployment runtime in Azure Machine Learning studio."lightbox = "./media/how-to-custom-tool-package-creation-and-usage/create-runtime-on-managed-inference-runtime.png":::
## Test from Prompt Flow UI
->[!Note]
-> Currently you need to append flight `PFPackageTools` after studio url.
1. Create a standard flow.
2. Select the correct runtime ("my-tool-runtime") and add your tools.
   :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-1.png" alt-text="Screenshot of flow in Azure Machine Learning studio showing the runtime and more tools dropdown."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/test-customer-tool-on-ui-step-1.png":::
You can create runtime with CI (Compute Instance) or MIR (Managed Inference Runt
## Test from VS Code extension
-1. Download the latest version [Prompt flow extension](https://ms.portal.azure.com/#view/Microsoft_Azure_Storage/ContainerMenuBlade/~/overview/storageAccountId/%2Fsubscriptions%2F96aede12-2f73-41cb-b983-6d11a904839b%2Fresourcegroups%2Fpromptflow%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fpfvscextension/path/pf-vscode-extension/etag/%220x8DB7169BD91D29C%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride~/false/defaultId//publicAccessVal/None).
-
-2. Install the extension in VS Code via "Install from VSIX":
- :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/install-vsix.png" alt-text="Screenshot of the VS Code extensions showing install from VSIX under the ellipsis." lightbox = "./media/how-to-custom-tool-package-creation-and-usage/install-vsix.png":::
-
-3. Go to terminal and install your tool package in conda environment of the extension. By default, the conda env name is `prompt-flow`.
+1. Install the Prompt flow VS Code extension.
+ :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/prompt-flow-vs-code-extension.png" alt-text="Screenshot of Prompt flow VS Code extension."lightbox ="./media/how-to-custom-tool-package-creation-and-usage/prompt-flow-vs-code-extension.png":::
+2. Go to the terminal and install your tool package in the conda environment of the extension. This example assumes your conda env name is `prompt-flow`.
   ```sh
   (local_test) PS D:\projects\promptflow\tool-package-quickstart> conda activate prompt-flow
   (prompt-flow) PS D:\projects\promptflow\tool-package-quickstart> pip install .\dist\my_tools_package-0.0.1-py3-none-any.whl
   ```
-4. Go to the extension and open one flow folder. Select 'flow.dag.yaml' and preview the flow. Next, select `+` button and you'll see your tools. You may need to reload the windows to clean previous cache if you don't see your tool in the list.
+3. Go to the extension and open a flow folder. Select 'flow.dag.yaml' and preview the flow. Next, select the `+` button to see your tools. If you don't see your tool in the list, **reload the window** to clear the previous cache.
   :::image type="content" source="./media/how-to-custom-tool-package-creation-and-usage/auto-list-tool-in-extension.png" alt-text="Screenshot of the VS Code showing the tools." lightbox ="./media/how-to-custom-tool-package-creation-and-usage/auto-list-tool-in-extension.png":::
## FAQ
### Why is my custom tool not showing up in the UI?
-- Ensure that you've set the UI flight to `&flight=PFPackageTools`.
You can test your tool package using the following script to ensure that you've packaged your tool YAML files and configured the package tool entry point correctly.
1. Make sure to install the tool package in your conda environment before executing this script.
You can test your tool package using the following script to ensure that you've
test() ```
- 3. Run this script in your conda environment. This will return the metadata of all tools installed in your local environment, and you should verify that your tools are listed.
+ 3. Run this script in your conda environment. It returns the metadata of all tools installed in your local environment, and you should verify that your tools are listed.
- If you're using runtime with CI, try to restart your container with command `docker restart <container_name_or_id>` to see if the issue can be resolved.
### Why am I unable to upload package to PyPI?
- Make sure that the entered username and password of your PyPI account are accurate.
+- If you encounter a `403 Forbidden Error`, it's likely due to a naming conflict with an existing package. You need to choose a different name. Package names must be unique on PyPI to avoid confusion and conflicts among users. Before creating a new package, it's recommended to search PyPI (https://pypi.org/) to verify that your chosen name isn't already taken. If the name you want is unavailable, consider selecting an alternative name or a variation that clearly differentiates your package from the existing one.
## Next steps
machine-learning How To Integrate With Llm App Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-llm-app-devops.md
pfazure run show-metrics --name <evaluation_run_name>
```python
pf.get_metrics("evaluation_run_name")
```
+> [!IMPORTANT]
+> For more information, see [the Prompt flow CLI documentation for Azure](https://microsoft.github.io/promptflow/reference/pfazure-command-reference.html).
+ ## Iterative development from fine-tuning ### Local development and testing
-During iterative development, as you refine and fine-tune your flow or prompts, you might find it beneficial to carry out multiple iterations locally within your code repository. The community version, **Prompt flow VS Code extension** and **Prompt flow local SDK & CLI** is provided to facilitate pure local development and testing without Azure binding.
+During iterative development, as you refine and fine-tune your flow or prompts, it can be beneficial to carry out multiple iterations locally within your code repository. The community version, which includes the **Prompt flow VS Code extension** and the **Prompt flow local SDK & CLI**, is provided to facilitate pure local development and testing without Azure binding.
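For example, a pure local iteration with the local SDK might look like the following sketch; the flow path, inputs, and data file are placeholders for your own repository layout:

```python
# A minimal local-testing sketch with the Prompt flow local SDK.
from promptflow import PFClient

pf = PFClient()

# Test the flow once with sample inputs, entirely on the local machine.
result = pf.test(flow="./my_chatbot_flow", inputs={"question": "What is CI?"})
print(result)

# Kick off a local batch run against a JSONL dataset, then inspect metrics.
run = pf.run(flow="./my_chatbot_flow", data="./data/questions.jsonl")
print(pf.get_metrics(run))
```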
#### Prompt flow VS Code extension
By following this best practice, teams can create a seamless, efficient, and pro
## Next steps - [Set up end-to-end LLMOps with Prompt Flow and GitHub](how-to-end-to-end-llmops-with-prompt-flow.md)
+- [Prompt flow CLI documentation for Azure](https://microsoft.github.io/promptflow/reference/pfazure-command-reference.html)
machine-learning Azure Language Detector Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/azure-language-detector-tool.md
- Title: Azure Language Detector tool in Azure Machine Learning prompt flow (preview)-
-description: Azure Language Detector is a cloud-based service, which you can use to identify the language of a piece of text.
------- Previously updated : 06/30/2023--
-# Azure Language Detector tool (preview)
-
-Azure Language Detector is a cloud-based service, which you can use to identify the language of a piece of text. For more information, see the [Azure Language Detector API](../../../cognitive-services/translator/reference/v3-0-detect.md) for more information.
-
-## Requirements
--- requests-
-## Prerequisites
--- [Create a Translator resource](../../../cognitive-services/translator/create-translator-resource.md).-
-## Inputs
-
-The following are available input parameters:
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| input_text | string | Identify the language of the input text. | Yes |
-
-For more information, see to [Translator 3.0: Detect](../../../cognitive-services/translator/reference/v3-0-detect.md)
--
-## Outputs
-
-The following is an example output returned by the tool:
-
-input_text = "Is this a leap year?"
-
-```
-en
-```
machine-learning Azure Translator Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/azure-translator-tool.md
- Title: Azure Translator tool in Azure Machine Learning prompt flow (preview)-
-description: Reference on Azure Translator in Prompt flow.
------- Previously updated : 06/30/2023--
-# Azure Translator tool (preview)
-
-Azure AI Translator is a cloud-based machine translation service you can use to translate text in with a simple REST API call. See the [Azure Translator API](../../../ai-services/translator/index.yml) for more information.
-
-> [!IMPORTANT]
-> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-## Requirements
--- requests-
-## Prerequisites
--- [Create a Translator resource](../../../ai-services/translator/create-translator-resource.md).-
-## Inputs
-
-The following are available input parameters:
-
-| Name | Type | Description | Required |
-| - | - | -- | -- |
-| input_text | string | The text to translate. | Yes |
-| source_language | string | The language (code) of the input text. | Yes |
-| target_language | string | The language (code) you want the text to be translated too. | Yes |
-
-For more information, please refer to [Translator 3.0: Translate](../../../cognitive-services/translator/reference/v3-0-translate.md#required-parameters)
-
-## Outputs
-
-The following is an example output returned by the tool:
-
-input_text = "Is this a leap year?"
-source_language = "en"
-target_language = "hi"
-
-```
-क्या यह एक छलांग वर्ष है?
-```
machine-learning Embedding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/embedding-tool.md
+
+ Title: Embedding tool in Azure Machine Learning prompt flow (preview)
+
+description: Prompt flow embedding tool uses OpenAI's embedding models to convert text into dense vector representations for various NLP tasks.
+++++++ Last updated : 10/16/2023++
+# Embedding tool (preview)
+OpenAI's embedding models convert text into dense vector representations for various NLP tasks. See the [OpenAI Embeddings API](https://platform.openai.com/docs/api-reference/embeddings) for more information.
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+Create OpenAI resources:
+
+- **OpenAI**
+
+ Sign up for an account on the [OpenAI website](https://openai.com/).
+ Sign in and [find your personal API key](https://platform.openai.com/account/api-keys).
+
+- **Azure OpenAI (AOAI)**
+
+ Create Azure OpenAI resources by following [these instructions](../../../ai-services/openai/how-to/create-resource.md).
+
+## Connections
+
+Set up connections to provisioned resources in the embedding tool.
+
+| Type | Name | API KEY | API Type | API Version |
+|-|-|-|-|-|
+| OpenAI | Required | Required | - | - |
+| AzureOpenAI | Required | Required | Required | Required |
++
+## Inputs
+
+| Name | Type | Description | Required |
+||-|--|-|
+| input | string | the input text to embed | Yes |
+| connection | string | the connection the embedding tool uses to access provisioned resources | Yes |
+| model/deployment_name | string | instance of the text-embedding engine to use. Fill in model name if you use OpenAI connection, or deployment name if you use Azure OpenAI connection. | Yes |
+++
+## Outputs
+
+| Return Type | Description |
+|-||
+| list | The vector representations for inputs |
+
+The following is an example response returned by the embedding tool:
+
+<details>
+ <summary>Output</summary>
+
+```
+[-0.005744616035372019,
+-0.007096089422702789,
+-0.00563855143263936,
+-0.005272455979138613,
+-0.02355326898396015,
+0.03955197334289551,
+-0.014260607771575451,
+-0.011810848489403725,
+-0.023170066997408867,
+-0.014739611186087132,
+...]
+```
+</details>
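For orientation, a direct call to the OpenAI embeddings endpoint with the 2023-era (pre-1.0) `openai` Python SDK looks roughly like the sketch below; the API key is a placeholder, and the tool itself resolves the key and model from your connection:

```python
# Direct embeddings call with the pre-1.0 openai SDK, for orientation only;
# the embedding tool performs an equivalent call using your connection.
import openai

openai.api_key = "<your-openai-api-key>"  # placeholder

response = openai.Embedding.create(
    input="The quick brown fox",
    model="text-embedding-ada-002",
)
vector = response["data"][0]["embedding"]  # a list of floats, as shown above
print(len(vector))
```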
machine-learning Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/llm-tool.md
Prompt flow LLM tool enables you to leverage widely used large language models l
Prompt flow provides a few different LLM APIs:
- **[Completion](https://platform.openai.com/docs/api-reference/completions)**: OpenAI's completion models generate text based on provided prompts.
- **[Chat](https://platform.openai.com/docs/api-reference/chat)**: OpenAI's chat models facilitate interactive conversations with text-based inputs and responses.
-- **[Embedding](https://platform.openai.com/docs/api-reference/embeddings)**: OpenAI's embedding models convert text into dense vector representations for various NLP tasks.
+> [!NOTE]
+> The `embedding` option has been removed from the LLM tool API setting. You can use the embedding API with the [Embedding tool](embedding-tool.md).
> [!IMPORTANT] > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
Set up connections to provisioned resources in Prompt flow.
| Name | Type | Description | Required |
||-|--|-|
-| prompt | string | text prompt that the language model will complete | Yes |
+| prompt | string | text prompt for the language model. | Yes |
| model, deployment_name | string | the language model to use | Yes |
| max\_tokens | integer | the maximum number of tokens to generate in the completion. Default is 16. | No |
| temperature | float | the randomness of the generated text. Default is 1. | No |
Set up connections to provisioned resources in Prompt flow.
| frequency\_penalty | float | value that controls the model's behavior with regard to generating rare phrases. Default is 0. | No |
| logit\_bias | dictionary | the logit bias for the language model. Default is empty dictionary. | No |
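For orientation, here's a rough sketch of how these inputs map onto the underlying OpenAI chat API using the 2023-era (pre-1.0) `openai` Python SDK; the key is a placeholder, and the tool normally issues this call for you through your connection:

```python
# Rough mapping of the LLM tool inputs onto the OpenAI chat API
# (pre-1.0 openai SDK sketch; not the tool's actual implementation).
import openai

openai.api_key = "<your-api-key>"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",                            # model input
    messages=[{"role": "user", "content": "Hello"}],  # rendered prompt
    max_tokens=16,         # default from the table above
    temperature=1,
    top_p=1,
    frequency_penalty=0,
    logit_bias={},
)
print(response["choices"][0]["message"]["content"])
```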
-### Embedding
-
-| Name | Type | Description | Required |
-||-|--|-|
-| input | string | the input text to embed | Yes |
-| model, deployment_name | string | instance of the text-embedding engine to use | Yes |
## Outputs

| API | Return Type | Description |
||-||
| Completion | string | The text of one predicted completion |
| Chat | string | The text of one response of the conversation |
-| Embedding | list | The vector representations for inputs |
## How to use LLM Tool?
machine-learning More Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/more-tools.md
+
+ Title: More tools in Prompt flow
+
+description: More tools in Prompt flow are displayed in the table, along with instructions for custom tool package creation and tool package usage.
+++++++ Last updated : 10/24/2023++
+# More tools in Prompt flow (preview)
+This table provides an index of more tools. If existing tools can't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html).
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+| Tool name | Description | Environment | Package Name |
+||--|-|--|
+| [Python](./python-tool.md) | Run Python code. | Default | -- |
+| [LLM](./llm-tool.md) | Use OpenAI's large language models for text completion or chat. | Default | -- |
+| [Prompt](./prompt-tool.md) | Craft prompt using Jinja as the templating language. | Default | -- |
+| [Embedding](./embedding-tool.md) | Use OpenAI's embedding model to create an embedding vector representing the input text. | Default | -- |
+| [Open Source LLM](./open-source-llm-tool.md) | Use an Open Source model from the Azure Model catalog, deployed to an Azure Machine Learning Online Endpoint for LLM Chat or Completion API calls. | Default | -- |
+| [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | Default | -- |
+| [Content Safety (Text)](./content-safety-text-tool.md) | Use Azure Content Safety to detect harmful content. | Default | [promptflow-contentsafety](https://pypi.org/project/promptflow-contentsafety/) |
+| [Faiss Index Lookup](./faiss-index-lookup-tool.md) | Search vector based query from the FAISS index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector DB Lookup](./vector-db-lookup-tool.md) | Search vector based query from existing Vector Database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector Index Lookup](./vector-index-lookup-tool.md) | Search text or vector based query from Azure Machine Learning Vector Index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+
+To discover more custom tools developed by the open source community, see [more custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
+
+For tools that must be used in a custom environment, see [Custom tool package creation and usage](../how-to-custom-tool-package-creation-and-usage.md#prepare-runtime) to prepare the runtime. The tools are then displayed in the tool list.
+
machine-learning Open Source Llm Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/open-source-llm-tool.md
+
+ Title: Open source LLM tool in Azure Machine Learning prompt flow (preview)
+
+description: The Prompt flow Open Source LLM tool enables you to utilize various Open Source and Foundational Models.
++++++++ Last updated : 10/16/2023++
+# Open Source LLM (preview)
+The Prompt flow Open Source LLM tool enables you to utilize various Open Source and Foundational Models, such as [Falcon](https://aka.ms/AAlc25c) or [Llama 2](https://aka.ms/AAlc258), for natural language processing in Prompt flow.
+
+Here's how it looks in action in the Visual Studio Code Prompt flow extension. In this example, the tool calls a Llama 2 chat endpoint, asking "What is CI?".
++
+The Open Source LLM tool supports two different LLM API types:
+
+- **Chat**: Shown in the example above. The chat API type facilitates interactive conversations with text-based inputs and responses.
+- **Completion**: The Completion API type is used to generate single response text completions based on provided prompt input.
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Quick Overview: How do I use Open Source LLM Tool?
+
+1. Choose a Model from the Azure Machine Learning Model Catalog and deploy.
+2. Setup and select the connections to the model deployment.
+3. Configure the tool with the model settings.
+4. [Prepare the Prompt](./prompt-tool.md#how-to-write-prompt).
+5. Run the flow.
+
+## Prerequisites: Model Deployment
+
+1. Pick the model that matches your scenario from the [Azure Machine Learning model catalog](https://ml.azure.com/model/catalog).
+2. Use the "Deploy" button to deploy the model to an Azure Machine Learning Online Inference endpoint.
+
+To learn more, see [Deploying foundation models to endpoints for inferencing](../../how-to-use-foundation-models.md#deploying-foundation-models-to-endpoints-for-inferencing).
+
+## Prerequisites: Prompt flow Connections
+
+In order for Prompt flow to use your deployed model, you need to set up a connection. Specifically, the Open Source LLM tool uses the CustomConnection.
+
+1. To learn how to create a custom connection, see [create a connection](https://microsoft.github.io/promptflow/how-to-guides/manage-connections.html#create-a-connection).
+
+ The keys to set are:
+
+ 1. **endpoint_url**
+ - This value can be found at the previously created Inferencing endpoint.
+ 2. **endpoint_api_key**
+ - Ensure to set this key as a secret value.
+ - This value can be found at the previously created Inferencing endpoint.
+ 3. **model_family**
+ - Supported values: LLAMA, DOLLY, GPT2, or FALCON
+ - This value is dependent on the type of deployment you're targeting.
+
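For example, a connection carrying these keys can be created with the Prompt flow local SDK, roughly as sketched below; the connection name and endpoint values are placeholders:

```python
# A sketch of creating the CustomConnection described above with the
# Prompt flow local SDK; all endpoint values below are placeholders.
from promptflow import PFClient
from promptflow.connections import CustomConnection

connection = CustomConnection(
    name="open-source-llm-connection",
    configs={
        "endpoint_url": "https://<endpoint>.<region>.inference.ml.azure.com/score",
        "model_family": "LLAMA",
    },
    secrets={"endpoint_api_key": "<endpoint-key>"},  # stored as a secret value
)

pf = PFClient()
pf.connections.create_or_update(connection)
```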
+## Running the Tool: Inputs
+
+The Open Source LLM tool has many parameters, some of which are required. See the following table for details; you can match these parameters to the screenshot above for visual clarity.
+
+| Name | Type | Description | Required |
+|||-|-|
+| api | string | This parameter is the API mode and depends on the model used and the scenario selected. *Supported values: (Completion \| Chat)* | Yes |
+| connection | CustomConnection | This parameter is the name of the connection, which points to the Online Inferencing endpoint. | Yes |
+| model_kwargs | dictionary | This input is used to provide configuration specific to the model used. For example, the Llama 2 model uses {\"temperature\":0.4}. *Default: {}* | No |
+| deployment_name | string | The name of the deployment to target on the Online Inferencing endpoint. If no value is passed, the Inferencing load balancer traffic settings are used. | No |
+| prompt | string | The text prompt that the language model uses to generate its response. | Yes |
+
+## Outputs
+
+| API | Return Type | Description |
+||-||
+| Completion | string | The text of one predicted completion |
+| Chat | string | The text of one response in the conversation |
machine-learning Python Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/python-tool.md
The Python Tool empowers users to offer customized code snippets as self-contain
| Code | string | Python code snippet | Yes | | Inputs | - | List of tool function parameters and its assignments | - |
+### Types
+
+| Type | Python example | Description |
+|--||--|
+| int | param: int | Integer type |
+| bool | param: bool | Boolean type |
+| string | param: str | String type |
+| double | param: float | Double type |
+| list | param: list or param: List[T] | List type |
+| object | param: dict or param: Dict[K, V] | Object type |
+| [Connection](../concept-connections.md) | param: CustomConnection | Connection type will be handled specially |
++
+Parameters with `Connection` type annotation will be treated as connection inputs, which means:
+- Prompt flow extension will show a selector to select the connection.
+- During execution time, prompt flow will try to find the connection whose name matches the parameter value passed in.
+
+> [!Note]
+> `Union[...]` type annotation is supported **ONLY** for connection type, for example, `param: Union[CustomConnection, OpenAIConnection]`.
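As a sketch of these typing rules, the following hypothetical tool uses each annotation from the table, with a `Union` annotation on the connection parameter only:

```python
# Sketch of the parameter typing rules above; the function body is a
# trivial placeholder that simply echoes the typed inputs.
from typing import Dict, List, Union

from promptflow import tool
from promptflow.connections import CustomConnection, OpenAIConnection


@tool
def typed_tool(
    count: int,
    enabled: bool,
    name: str,
    ratio: float,
    tags: List[str],
    options: Dict[str, str],
    conn: Union[CustomConnection, OpenAIConnection],  # Union: connections only
) -> str:
    return (
        f"{name}: count={count}, ratio={ratio}, enabled={enabled}, "
        f"tags={len(tags)}, options={len(options)}, conn={type(conn).__name__}"
    )
```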
## Outputs

The return of the python tool function.
The return of the python tool function.
2. Python Tool Code must contain a function decorated with @tool (tool function), serving as the entry point for execution. The @tool decorator should be applied only once within the snippet.
- *The sample in the next section defines python tool "my_python_tool", decorated with @tool*
+ *The sample in the next section defines python tool "my_python_tool", which is decorated with @tool*
3. Python tool function parameters must be assigned in 'Inputs' section
The return of the python tool function.
### Code
+This snippet shows the basic structure of a tool function. Prompt flow will read the function and extract inputs from function parameters and type annotations.
+ ```python from promptflow import tool
+from promptflow.connections import CustomConnection
# The inputs section will change based on the arguments of the tool function, after you save the code # Adding type to arguments and return value will help the system show the types properly
+# Update the function name/signature as needed
@tool
-def my_python_tool(message: str) -> str:
+def my_python_tool(message: str, my_conn: CustomConnection) -> str:
+ my_conn_dict = dict(my_conn)
+ # Do some function call with my_conn_dict...
    return 'hello ' + message
```
Inputs:
-| Name | Type | Sample Value |
-||--|--|
-| message | string | "world" |
+| Name | Type | Sample Value in Flow Yaml | Value passed to function |
+||--|-|-|
+| message | string | "world" | "world" |
+| my_conn | CustomConnection | "my_conn" | CustomConnection object |
+
+Prompt flow will try to find the connection named 'my_conn' during execution time.
Outputs:
Outputs:
"hello world" ``` - ## How to consume custom connection in Python Tool? If you are developing a python tool that requires calling external services with authentication, you can use the custom connection in prompt flow. It allows you to securely store the access key then retrieve it in your python code.
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
Last updated 06/06/2022
# Reference for configuring Kubernetes cluster for Azure Machine Learning
-This article contains reference information that may be useful when [configuring Kubernetes with Azure Machine Learning](./how-to-attach-kubernetes-anywhere.md).
+This article contains reference information for [configuring Kubernetes with Azure Machine Learning](./how-to-attach-kubernetes-anywhere.md).
## Supported Kubernetes version and region
More information about how to use ARM template can be found from [ARM template d
| Date | Version |Version description | ||||
+|Oct 11, 2023 | 1.1.35| Fix vulnerable image. Bug fixes. |
+|Aug 25, 2023 | 1.1.34| Fix vulnerable image. Return more detailed identity error. Bug fixes. |
|July 18, 2023 | 1.1.29| Add new identity operator errors. Bug fixes. | |June 4, 2023 | 1.1.28 | Improve auto-scaler to handle multiple node pool. Bug fixes. | | Apr 18 , 2023| 1.1.26 | Bug fixes and vulnerabilities fix. |
-| Mar 27, 2023| 1.1.25 | Add Azure machine learning job throttle. Fast fail for training job when SSH setup failed. Reduce Prometheus scrape interval to 30s. Improve error messages for inference. Fix vulnerable image. |
+| Mar 27, 2023| 1.1.25 | Add Azure Machine Learning job throttle. Fast fail for training job when SSH setup failed. Reduce Prometheus scrape interval to 30s. Improve error messages for inference. Fix vulnerable image. |
| Mar 7, 2023| 1.1.23 | Change default instance-type to use 2Gi memory. Update metrics configurations for scoring-fe that add 15s scrape_interval. Add resource specification for mdc sidecar. Fix vulnerable image. Bug fixes.| | Feb 14, 2023 | 1.1.21 | Bug fixes.| | Feb 7, 2023 | 1.1.19 | Improve error return message for inference. Update default instance type to use 2Gi memory limit. Do cluster health check for pod healthiness, resource quota, Kubernetes version and extension version. Bug fixes|
machine-learning Tutorial Online Materialization Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-online-materialization-inference.md
To prepare the notebook environment for development:
1. This code cell starts the Spark session. It needs about 10 minutes to install all dependencies and start the Spark session.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=start-spark-session)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=start-spark-session)]
1. Set up the root directory for the samples
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=root-dir)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=root-dir)]
1. Initialize the `MLClient` for the project workspace, where the tutorial notebook runs. The `MLClient` is used for the create, read, update, and delete (CRUD) operations.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=init-prj-ws-client)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-prj-ws-client)]
1. Initialize the `MLClient` for the feature store workspace, for the create, read, update, and delete (CRUD) operations on the feature store workspace.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=init-fs-ws-client)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-fs-ws-client)]
> [!NOTE] > A **feature store workspace** supports feature reuse across projects. A **project workspace** - the current workspace in use - leverages features from a specific feature store, to train and inference models. Many project workspaces can share and reuse the same feature store workspace. 1. As mentioned earlier, this tutorial uses the Python feature store core SDK (`azureml-featurestore`). This initialized SDK client is used for create, read, update, and delete (CRUD) operations, on feature stores, feature sets, and feature store entities.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=init-fs-core-sdk)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-fs-core-sdk)]
## Prepare Azure Cache for Redis
This tutorial uses Azure Cache for Redis as the online materialization store. Yo
1. Set values for the Azure Cache for Redis resource, to use as online materialization store. In this code cell, define the name of the Azure Cache for Redis resource to create or reuse. You can override other default settings.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=redis-settings)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=redis-settings)]
1. You can create a new Redis instance. You would select the Redis Cache tier (basic, standard, premium, or enterprise). Choose an SKU family available for the cache tier you select. For more information about tiers and cache performance, see [this resource](../azure-cache-for-redis/cache-best-practices-performance.md). For more information about SKU tiers and Azure cache families, see [this resource](https://azure.microsoft.com/pricing/details/cache/). Execute this code cell to create an Azure Cache for Redis with premium tier, SKU family `P`, and cache capacity 2. It may take from five to 10 minutes to prepare the Redis instance.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=provision-redis)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=provision-redis)]
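   For reference, the following is a rough Python equivalent of what this provisioning cell does, using the `azure-mgmt-redis` SDK; the subscription, resource group, and cache name are placeholders:

   ```python
   # Rough equivalent of the provisioning cell, using azure-mgmt-redis;
   # all names and IDs below are placeholders.
   from azure.identity import DefaultAzureCredential
   from azure.mgmt.redis import RedisManagementClient
   from azure.mgmt.redis.models import RedisCreateParameters, Sku

   redis_client = RedisManagementClient(DefaultAzureCredential(), "<subscription-id>")

   poller = redis_client.redis.begin_create(
       resource_group_name="<resource-group>",
       name="<redis-name>",
       parameters=RedisCreateParameters(
           location="eastus",
           sku=Sku(name="Premium", family="P", capacity=2),
       ),
   )
   redis_instance = poller.result()  # takes roughly five to 10 minutes
   ```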
1. Optionally, this code cell reuses an existing Redis instance with the previously defined name.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=reuse-redis)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=reuse-redis)]
1. Retrieve the user-assigned managed identity (UAI) that the feature store used for materialization. This code cell retrieves the principal ID, client ID, and ARM ID property values for the UAI used by the feature store for data materialization.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=retrieve-uai)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=retrieve-uai)]
## Attach online materialization store to the feature store The feature store needs the Azure Cache for Redis as an attached resource, for use as the online materialization store. This code cell handles that step.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=attach-online-store)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=attach-online-store)]
## Materialize the `accounts` feature set data to online store
The feature store needs the Azure Cache for Redis as an attached resource, for u
Earlier in this tutorial series, you did **not** materialize the accounts feature set because it had precomputed features, and only batch inference scenarios used it. This code cell enables online materialization so that the features become available in the online store, with low latency access. For consistency, it also enables offline materialization. Enabling offline materialization is optional.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=enable-accounts-material)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=enable-accounts-material)]
### Backfill the `account` feature set The `begin_backfill` function backfills data to all the materialization stores enabled for this feature set. Here offline and online materialization are both enabled. This code cell backfills the data to both online and offline materialization stores.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=start-accounts-backfill)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=start-accounts-backfill)]
This code cell tracks completion of the backfill job. With the Azure Cache for Redis premium tier provisioned earlier, this step may take approximately 10 minutes to complete.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=track-accounts-backfill)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=track-accounts-backfill)]
## Materialize `transactions` feature set data to the online store
Earlier in this tutorial series, you materialized `transactions` feature set dat
1. This code cell enables the `transactions` feature set online materialization.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=enable-transact-material)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=enable-transact-material)]
1. This code cell backfills the data to both the online and offline materialization store, to ensure that both stores have the latest data. The recurrent materialization job, which you set up in tutorial 2 of this series, now materializes data to both online and offline materialization stores.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=start-transact-material)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=start-transact-material)]
This code cell tracks completion of the backfill job. Using the premium tier Azure Cache for Redis provisioned earlier, this step may take approximately five minutes to complete.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=track-transact-material)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=track-transact-material)]
## Test locally
Now, use your development environment to look up features from the online materi
This code cell parses the list of features from the existing feature retrieval specification.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=parse-feat-list)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=parse-feat-list)]
This code retrieves feature values from the online materialization store.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=init-online-lookup)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=init-online-lookup)]
Prepare some observation data for testing, and use that data to look up features from the online materialization store. During the online look-up, the keys (`accountID`) defined in the observation sample data might not exist in the Redis (due to `TTL`). In this case:
Prepare some observation data for testing, and use that data to look up features
1. Open the console for the Redis instance, and check for existing keys with the `KEYS *` command.
1. Replace the `accountID` values in the sample observation data with the existing keys.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=online-feat-loockup)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=online-feat-loockup)]
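The following is a rough sketch of that lookup, assuming the `init_online_lookup` and `get_online_features` helpers from the `azureml-featurestore` SDK that this tutorial's notebook uses; the account ID is a placeholder that must exist as a key in Redis:

```python
# Online feature lookup sketch; `features` is the list parsed from the
# feature retrieval specification in the earlier cell.
import pandas as pd
from azure.identity import DefaultAzureCredential
from azureml.featurestore import get_online_features, init_online_lookup

init_online_lookup(features, DefaultAzureCredential())

# Observation data keyed by accountID; use keys that exist in Redis.
obs_df = pd.DataFrame({"accountID": ["<existing-account-id>"]})
looked_up_df = get_online_features(features, obs_df)
print(looked_up_df.head())
```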
These steps looked up features from the online store. In the next step, you'll test online features using an Azure Machine Learning managed online endpoint.
Visit [this resource](./how-to-deploy-online-endpoints.md?tabs=azure-cli) to lea
This code cell defines the `fraud-model` managed online endpoint.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=define-endpoint)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=define-endpoint)]
This code cell creates the managed online endpoint defined in the previous code cell.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=create-endpoint)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=create-endpoint)]
### Grant required RBAC permissions
Here, you grant required RBAC permissions to the managed online endpoint on the
This code cell retrieves the managed identity of the managed online endpoint:
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=get-endpoint-identity)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=get-endpoint-identity)]
#### Grant the `Contributor` role to the online endpoint managed identity on the Azure Cache for Redis This code cell grants the `Contributor` role to the online endpoint managed identity on the Redis instance. This RBAC permission is needed to materialize data into the Redis online store.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=endpoint-redis-rbac)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=endpoint-redis-rbac)]
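As a sketch of the role assignment this cell performs, assuming the `azure-mgmt-authorization` SDK; all IDs below are placeholders, and the Contributor role GUID is the built-in constant:

```python
# Sketch of granting Contributor on the Redis resource to the endpoint's
# managed identity; subscription, resource, and principal IDs are placeholders.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Built-in Contributor role definition (constant GUID).
contributor_role = (
    "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/"
    "roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
)

scope = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Cache/Redis/<redis-name>"
)

auth_client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=contributor_role,
        principal_id="<endpoint-managed-identity-principal-id>",
        principal_type="ServicePrincipal",
    ),
)
```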
#### Grant `AzureML Data Scientist` role to the online endpoint managed identity on the feature store This code cell grants the `AzureML Data Scientist` role to the online endpoint managed identity on the feature store. This RBAC permission is required for successful deployment of the model to the online endpoint.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=endpoint-fs-rbac)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=endpoint-fs-rbac)]
#### Deploy the model to the online endpoint
Review the scoring script `project/fraud_model/online_inference/src/scoring.py`.
Next, execute this code cell to create a managed online deployment definition for model deployment.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=define-online-deployment)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=define-online-deployment)]
Deploy the model to online endpoint with this code cell. The deployment may need four to five minutes.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=begin-online-deployment)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=begin-online-deployment)]
### Test online deployment with mock data Execute this code cell to test the online deployment with the mock data. You should see `0` or `1` as the output of this cell.
- [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable online store and run online inference.ipynb?name=test-online-deployment)]
+ [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_only/5. Enable online store and run online inference.ipynb?name=test-online-deployment)]
## Next steps
migrate Tutorial Discover Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-aws.md
ms. Previously updated : 09/15/2023 Last updated : 10/15/2023 -+ #Customer intent: As a server admin I want to discover my AWS instances.
If you just created a free Azure account, you're the owner of your subscription.
1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Image of Search box to search for the Azure subscription.](./media/tutorial-discover-aws/search-subscription.png)
+ :::image type="content" source="./media/tutorial-discover-aws/search-subscription.png" alt-text="Screenshot of Search box to search for the Azure subscription.":::
1. In the **Subscriptions** page, select the subscription in which you want to create a project.
If you just created a free Azure account, you're the owner of your subscription.
| Assign access to | User | | Members | azmigrateuser |
- ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of Add role assignment page in Azure portal.":::
1. To register the appliance, your Azure account needs **permissions to register Microsoft Entra apps.**
Set up a new project.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Screenshot for project name and region.](./media/tutorial-discover-aws/new-project.png)
-
7. Select **Create**.
8. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Server Assessment tool added by default.](./media/tutorial-discover-aws/added-tool.png)
> [!NOTE]
-> If you have already created a project, you can use the same project to register additional appliances to discover and assess more no of servers.[Learn more](create-manage-projects.md#find-a-project)
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
## Set up the appliance
In the configuration manager, select **Set up prerequisites**, and then complete
3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt.

    :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and sign in.":::
+
4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported.
+
   > [!NOTE]
   > If you close the sign in tab accidentally without signing in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
5. After you successfully sign in, return to the browser tab that displays the appliance configuration manager. If the Azure user account that you used to sign in has the required permissions for the Azure resources that were created during key generation, appliance registration starts.
Now, connect from the appliance to the physical servers to be discovered, and st
* Currently Azure Migrate does not support SSH private key file generated by PuTTY. * Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
- ![Image of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
+ :::image type="content" source="./media/tutorial-discover-physical/key-format.png" alt-text="Screenshot of SSH private key supported format.":::
1. If you want to add multiple credentials at once, select **Add more** to save and add more credentials. Multiple credentials are supported for physical servers discovery.
Select **Start discovery**, to kick off discovery of the successfully validated
* It takes approximately 2 minutes to complete discovery of 100 servers and their metadata to appear in the Azure portal. * [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished. * [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
-* Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+* The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself might not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md). * SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
migrate Tutorial Discover Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-gcp.md
ms. Previously updated : 09/15/2023- Last updated : 10/25/2023+ #Customer intent: As a server admin I want to discover my GCP instances.
If you just created a free Azure account, you're the owner of your subscription.
1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Screenshot of Search box to search for the Azure subscription.](./media/tutorial-discover-gcp/search-subscription.png)
+ :::image type="content" source="./media/tutorial-discover-gcp/search-subscription.png" alt-text="Screenshot of Search box to search for the Azure subscription.":::
1. In the **Subscriptions** page, select the subscription in which you want to create a project.
If you just created a free Azure account, you're the owner of your subscription.
| Assign access to | User | | Members | azmigrateuser |
- ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of role assignment page in Azure portal.":::
1. To register the appliance, your Azure account needs **permissions to register Microsoft Entra apps**.
Set up a new project.
4. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 5. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Screenshot to enter project name and region.](./media/tutorial-discover-gcp/new-project.png)
-
6. Select **Create**.
7. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Server Assessment tool added by default.](./media/tutorial-discover-gcp/added-tool.png)
> [!NOTE]
-> If you have already created a project, you can use the same project to register additional appliances to discover and assess more no of servers.[Learn more](create-manage-projects.md#find-a-project).
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
## Set up the appliance
Now, connect from the appliance to the GCP servers to be discovered, and start t
- Currently Azure Migrate doesn't support SSH private key file generated by PuTTY. - Azure Migrate supports OpenSSH format of the SSH private key file as shown below:
- ![Image of SSH private key supported format.](./media/tutorial-discover-physical/key-format.png)
+ :::image type="content" source="./media/tutorial-discover-physical/key-format.png" alt-text="Screenshot of SSH private key supported format.":::
2. If you want to add multiple credentials at once, select **Add more** to save and add more credentials.
+
   > [!Note]
   > By default, the credentials will be used to gather data about the installed applications, roles, and features, and also to collect dependency data from Windows and Linux servers, unless you disable the slider to not perform these features (as instructed in the last step).
3. In **Step 2: Provide physical or virtual server details**, select **Add discovery source** to specify the server **IP address/FQDN** and the friendly name for credentials to connect to the server.
Click **Start discovery**, to kick off discovery of the successfully validated s
* It takes approximately 2 minutes to complete discovery of 100 servers and for their metadata to appear in the Azure portal. * [Software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers is finished. * [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
-* Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+* The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself might not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md). * SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
migrate Tutorial Discover Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md
ms. Previously updated : 09/15/2023 Last updated : 10/25/2023 #Customer intent: As a Hyper-V admin, I want to discover my on-premises servers on Hyper-V.
Before you start this tutorial, check you have these prerequisites in place.
| **Hyper-V host** | Hyper-V hosts on which servers are located can be standalone, or in a cluster.<br/><br/> The host must be running Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2.<br/><br/> Verify inbound connections are allowed on WinRM port 5985 (HTTP), so that the appliance can connect to pull server metadata and performance data, using a Common Information Model (CIM) session. **Appliance deployment** | Hyper-V host needs resources to allocate a server for the appliance:<br/><br/> - 16 GB of RAM, 8 vCPUs, and around 80 GB of disk storage.<br/><br/> - An external virtual switch, and internet access on the appliance, directly or via a proxy.
-**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-hyper-v.md#dependency-analysis-requirements-agentless).<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).For discovery of installed applications and for agentless dependency analysis, Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
+**Servers** | All Windows and Linux OS versions are supported for discovery of configuration and performance metadata. <br /><br /> For application discovery on servers, all Windows and Linux OS versions are supported. Check the [OS versions supported for agentless dependency analysis](migrate-support-matrix-hyper-v.md#dependency-analysis-requirements-agentless).<br /><br /> To discover ASP.NET web apps running on IIS web server, check [supported Windows OS and IIS versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements). For discovery of installed applications and for agentless dependency analysis, Windows servers must have PowerShell version 2.0 or later installed.<br /><br /> To discover Java web apps running on Apache Tomcat web server, check [supported Linux OS and Tomcat versions](migrate-support-matrix-vmware.md#web-apps-discovery-requirements).
**SQL Server access** | To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](migrate-support-matrix-hyper-v.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. ## Prepare an Azure user account
If you just created a free Azure account, you're the owner of your subscription.
1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Screenshot of Search box to search for the Azure subscription.](./media/tutorial-discover-hyper-v/search-subscription.png)
+ :::image type="content" source="./media/tutorial-discover-hyper-v/search-subscription.png" alt-text="Screenshot of Search box to search for the Azure subscription.":::
1. In the **Subscriptions** page, select the subscription in which you want to create a project.
If you just created a free Azure account, you're the owner of your subscription.
| Assign access to | User | | Members | azmigrateuser |
- ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ :::image type="content" source="../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot of add role assignment page in Azure portal.":::
1. To register the appliance, your Azure account needs **permissions to register Microsoft Entra apps.**
Set up a new project.
5. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 6. In **Project Details**, specify the project name and the geography in which you want to create the project. Review supported geographies for [public](migrate-support-matrix.md#public-cloud) and [government clouds](migrate-support-matrix.md#azure-government).
- ![Screenshot of project name and region.](./media/tutorial-discover-hyper-v/new-project.png)
- > [!Note] > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity). 7. Select **Create**. 8. Wait a few minutes for the project to deploy. The **Azure Migrate: Discovery and assessment** tool is added by default to the new project.
-![Page showing Azure Migrate: Discovery and assessment tool added by default.](./media/tutorial-discover-hyper-v/added-tool.png)
+ :::image type="content" source="./media/tutorial-discover-hyper-v/added-tool.png" alt-text="Screenshot showing Azure Migrate: Discovery and assessment tool added by default.":::
> [!NOTE]
-> If you have already created a project, you can use the same project to register additional appliances to discover and assess more no of servers.[Learn more](create-manage-projects.md#find-a-project)
+> If you have already created a project, you can use the same project to register additional appliances to discover and assess more servers. [Learn more](create-manage-projects.md#find-a-project).
## Set up the appliance
In the configuration manager, select **Set up prerequisites**, and then complete
3. To register the appliance, you need to select **Login**. In **Continue with Azure Login**, select **Copy code & Login** to copy the device code (you must have a device code to authenticate with Azure) and open an Azure Login prompt in a new browser tab. Make sure you've disabled the pop-up blocker in the browser to see the prompt. :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Screenshot that shows where to copy the device code and log in.":::+ 4. In a new tab in your browser, paste the device code and sign in by using your Azure username and password. Signing in with a PIN isn't supported. > [!NOTE] > If you close the login tab accidentally without logging in, refresh the browser tab of the appliance configuration manager to display the device code and Copy code & Login button.
Select **Start discovery**, to kick off server discovery from the successfully v
* It takes approximately 2 minutes per host for metadata of discovered servers to appear in the Azure portal. * If you have provided server credentials, [software inventory](how-to-discover-applications.md) (discovery of installed applications) is automatically initiated when the discovery of servers running on Hyper-V host(s)/cluster(s) is finished. * [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
-* Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+* The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself might not need network line of sight.
* The time taken for discovery of installed applications depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * [Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have web server role enabled, Azure Migrate will perform web apps discovery on the server. Web apps configuration data is updated once every 24 hours. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
ms. Previously updated : 10/11/2023 Last updated : 10/25/2023 #Customer intent: As an VMware admin, I want to discover my on-premises servers running in a VMware environment.
In VMware vSphere Web Client, set up a read-only account to use for vCenter Serv
1. Select the user account, and then select **Read-only** to assign the role to the account. Select **OK**. 1. To be able to start discovery of installed applications and agentless dependency analysis, in the menu under **Access Control**, select **Roles**. In the **Roles** pane, under **Roles**, select **Read-only**. Under **Privileges**, select **Guest operations**. To propagate the privileges to all objects in the vCenter Server instance, select the **Propagate to children** checkbox.
- :::image type="content" source="./media/tutorial-discover-vmware/guest-operations.png" alt-text="Screenshot that shows the v sphere web client and how to create a new account and select user roles and privileges.":::
+ :::image type="content" source="./media/tutorial-discover-vmware/guest-operations.png" alt-text="Screenshot that shows the vSphere web client and how to create a new account and select user roles and privileges.":::
> [!NOTE] > - For vCenter Server 7.x and above, you must clone the Read Only system role and add the Guest Operations privileges to the cloned role. Assign the cloned role to the vCenter account. Learn how to [create a custom role in VMware vCenter](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-41E5E52E-A95B-4E81-9724-6AD6800BEF78.html).
To set up a new project:
1. In **Create project**, select your Azure subscription and resource group. Create a resource group if you don't have one. 1. In **Project Details**, specify the project name and the geography where you want to create the project. Review [supported geographies for public clouds](migrate-support-matrix.md#public-cloud) and [supported geographies for government clouds](migrate-support-matrix.md#azure-government).
- :::image type="content" source="./media/tutorial-discover-vmware/new-project.png" alt-text="Screenshot that shows how to add project details for a new Azure Migrate project.":::
- > [!Note] > Use the **Advanced** configuration section to create an Azure Migrate project with private endpoint connectivity. [Learn more](discover-and-assess-using-private-endpoints.md#create-a-project-with-private-endpoint-connectivity).
To start vCenter Server discovery, select **Start discovery**. After the discove
* It takes approximately 20-25 minutes for the discovery of servers across 10 vCenter Servers added to a single appliance. * If you have provided server credentials, software inventory (discovery of installed applications) is automatically initiated when the discovery of servers running on vCenter Server(s) is finished. Software inventory occurs once every 12 hours. * [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours.
-* Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight.
+* The appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself might not need network line of sight.
* Discovery of installed applications might take longer than 15 minutes. The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal. * [Software inventory](how-to-discover-applications.md) identifies web server role existing on discovered servers. If a server is found to have web server role enabled, Azure Migrate will perform web apps discovery on the server. Web apps configuration data is updated once every 24 hours. * During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
To start vCenter Server discovery, select **Start discovery**. After the discove
:::image type="content" source="./media/tutorial-discover-vmware/discovery-assessment-tile.png" alt-text="Screenshot that shows how to refresh data in discovery and assessment tile.":::
-Details such as OS license support status, inventory, database instances, etc are displayed.
+Details such as OS license support status, inventory, database instances, etc. are displayed.
#### View support status
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
Previously updated : 09/29/2023 Last updated : 10/26/2023 #CustomerIntent: As an Azure administrator, I want to diagnose virtual machine (VM) network routing problem that prevents it from communicating with the internet. # Tutorial: Diagnose a virtual machine network routing problem using the Azure portal
-In this tutorial, You use Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that the routing problem is caused by a [custom route](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json#custom-routes).
+In this tutorial, you use the Azure Network Watcher [next hop](network-watcher-next-hop-overview.md) tool to troubleshoot and diagnose a VM routing problem that's preventing it from correctly communicating with other resources. Next hop shows you that a [custom route](../virtual-network/virtual-networks-udr-overview.md?toc=/azure/network-watcher/toc.json#custom-routes) caused the routing problem.
:::image type="content" source="./media/diagnose-vm-network-routing-problem/next-hop-tutorial-diagram.png" alt-text="Diagram shows the resources created in the tutorial.":::
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
Previously updated : 08/23/2023 Last updated : 10/26/2023+ #Customer intent: I want to diagnose a virtual machine (VM) network traffic filter using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM.
IP flow verify checks Azure default and configured security rules. If the checks
When no longer needed, delete the resource group and all of the resources it contains:
-1. In the search box at the top of the portal, enter ***myResourceGroup***. When you see **myResourceGroup** in the search results, select it.
+1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
1. Select **Delete resource group**.
When no longer needed, delete the resource group and all of the resources it con
1. Select **Delete** to confirm the deletion of the resource group and all its resources.
-## Next steps
-
-In this quickstart, you created a virtual machine and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a virtual machine. Learn more about [network security groups](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule).
+## Next step
-Even with the proper network traffic filters in place, communication to a virtual machine can still fail due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-portal.md).
+> [!div class="nextstepaction"]
+> [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem.md)
openshift Howto Infrastructure Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-infrastructure-nodes.md
keywords: infrastructure nodes, aro, deploy, openshift, red hat Previously updated : 09/30/2023 Last updated : 10/26/2023
In order for Azure VMs added to an ARO cluster to be recognized as infrastructur
- Standard_E4s_v5 - Standard_E8s_v5 - Standard_E16s_v5
+ - Standard_E4as_v5
+ - Standard_E8as_v5
+ - Standard_E16as_v5
- There can be no more than three nodes. Any additional nodes are charged an OpenShift fee.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Previously updated : 09/11/2023 Last updated : 10/26/2023 #Customer intent: I need to understand the Azure Red Hat OpenShift support policies for OpenShift 4.0.
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Dasv5|Standard_D8as_v5|8|32| |Dasv5|Standard_D16as_v5|16|64| |Dasv5|Standard_D32as_v5|32|128|
-|Easv4|Standard_E4as_v4|4|32|
|Easv4|Standard_E8as_v4|8|64| |Easv4|Standard_E16as_v4|16|128| |Easv4|Standard_E20as_v4|20|160|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Dasv5|Standard_D32as_v5|32|128| |Dasv5|Standard_D64as_v5|64|256| |Dasv5|Standard_D96as_v5|96|384|
-|Dsv2|Standard_D2s_v3|2|8|
|Dsv3|Standard_D4s_v3|4|16| |Dsv3|Standard_D8s_v3|8|32| |Dsv3|Standard_D16s_v3|16|64|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Esv3|Standard_E8s_v3|8|64| |Esv3|Standard_E16s_v3|16|128| |Esv3|Standard_E32s_v3|32|256|
-|Esv4|Standard_E2s_v4|2|16|
|Esv4|Standard_E4s_v4|4|32| |Esv4|Standard_E8s_v4|8|64| |Esv4|Standard_E16s_v4|16|128|
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|Esv4|Standard_E32s_v4|32|256| |Esv4|Standard_E48s_v4|48|384| |Esv4|Standard_E64s_v4|64|504|
-|Esv5|Standard_E2s_v5|2|16|
|Esv5|Standard_E4s_v5|4|32| |Esv5|Standard_E8s_v5|8|64| |Esv5|Standard_E16s_v5|16|128|
operator-nexus Howto Virtual Machine Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-virtual-machine-image.md
After executing the script, you'll have a VM image tailored for your Virtual Net
## Next steps
- Refer to the [QuickStart guide](./quickstarts-tenant-workload-deployment.md) to deploy a VNF using the image you created.
+ Refer to the [QuickStart guide](./quickstarts-virtual-machine-deployment-cli.md) to deploy a VNF using the image you created.
<!-- LINKS - internal --> [kubernetes-concepts]: ../../../aks/concepts-clusters-workloads.md
operator-nexus Howto Virtual Machine Placement Hints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-virtual-machine-placement-hints.md
You can increase the overall resilience of your application by using anti-affini
## Prerequisites
-Before proceeding with this how-to guide, ensure you have completed all steps in the Azure Operator Nexus virtual machine [QuickStart guide](./quickstarts-tenant-workload-deployment.md).
+Before proceeding with this how-to guide, ensure you have completed all steps in the Azure Operator Nexus virtual machine [QuickStart guide](./quickstarts-virtual-machine-deployment-cli.md).
## Placement hints configuration
The `resourceId` argument in placement hints specifies the target object against
* A Rack: If the target object is a rack, the placement hint is checked against all the bare-metal machines running on that rack. > [!IMPORTANT]
-> The resourceId argument must be specified in the form of an ARM ID, and it must be a valid resource ID for the target object. If the resourceId is incorrect or invalid, the placement hint will not work correctly, and the VM scheduling may fail.
+> The resourceId argument must be specified in the form of an ARM ID, and it must be a valid resource ID for the target object. If the resourceId is incorrect or invalid, the placement hint will not work correctly, and the VM scheduling might fail.
### Scope
The schedulingExecution argument has two possible values: `Hard` or `Soft`.
In this example, we explore the concepts of soft and hard affinities, particularly about placing virtual machines on specific racks. > [!NOTE]
-> In this and the following examples, only variations of the `--placement-hints` argument are provided. For the actual creation of the VM with placement hints, you should add `--placement-hints` to the CLI illustrated in the VM [QuickStart guide](./quickstarts-tenant-workload-deployment.md).
+> In this and the following examples, only variations of the `--placement-hints` argument are provided. For the actual creation of the VM with placement hints, you should add `--placement-hints` to the CLI illustrated in the VM [QuickStart guide](./quickstarts-virtual-machine-deployment-cli.md).
#### Strict scheduling (rack affinity)
This placement hint uses the `Affinity` hintType to ensure that the virtual mach
``` > [!NOTE]
-> The current placement hint configuration with the Affinity hintType ensures that the virtual machine is scheduled exclusively on the specified rack with the provided rack ID. However, it's important to note that the rack affinity cannot be specified for more than one rack with `Hard` scheduling execution. This limitation may influence your deployment strategy, particularly if you are considering placing VMs on multiple racks and allowing the scheduler to select from them.
+> The current placement hint configuration with the Affinity hintType ensures that the virtual machine is scheduled exclusively on the specified rack with the provided rack ID. However, it's important to note that the rack affinity cannot be specified for more than one rack with `Hard` scheduling execution. This limitation might influence your deployment strategy, particularly if you are considering placing VMs on multiple racks and allowing the scheduler to select from them.
#### Preferred scheduling (rack affinity)
In this example, we explore the concepts of soft and hard anti-affinities, parti
#### Strict scheduling (rack anti-affinity)
-This placement hint uses both the `AntiAffinity` hintType and `Hard` schedulingExecution to prevent the virtual machine from being scheduled on the specified rack identified by the rack ID. In this configuration, the scheduler strictly follows these placement hints. However, if the rack ID is incorrect or there's not enough capacity on other racks, the VM placement may fail due to the strict application of the `Hard` scheduling rule
+This placement hint uses both the `AntiAffinity` hintType and `Hard` schedulingExecution to prevent the virtual machine from being scheduled on the specified rack identified by the rack ID. In this configuration, the scheduler strictly follows these placement hints. However, if the rack ID is incorrect or there's not enough capacity on other racks, the VM placement might fail due to the strict application of the `Hard` scheduling rule.
```bash --placement-hints '[{"hintType":"AntiAffinity","resourceId":"/subscriptions/<subscription>/resourceGroups/<managed-resource-group>/providers/Microsoft.NetworkCloud/racks/<compute-rack-2>","schedulingExecution":"Hard","scope":"Rack"}]'
In this example, we explore the concepts of soft and hard anti-affinities, parti
#### Strict scheduling (bare-metal machine anti-affinity)
-This placement hint uses both the `AntiAffinity` hintType and `Hard` schedulingExecution to prevent the virtual machine from being scheduled on the specified bare-metal machine identified by the bare-metal machine ID. In this configuration, the scheduler strictly follows these placement hints. However, if the bare-metal machine ID is incorrect or there's not enough capacity on other bare-metal machines, the VM placement may fail due to the strict application of the `Hard` scheduling rule
+This placement hint uses both the `AntiAffinity` hintType and `Hard` schedulingExecution to prevent the virtual machine from being scheduled on the specified bare-metal machine identified by the bare-metal machine ID. In this configuration, the scheduler strictly follows these placement hints. However, if the bare-metal machine ID is incorrect or there's not enough capacity on other bare-metal machines, the VM placement might fail due to the strict application of the `Hard` scheduling rule.
```bash --placement-hints '[{"hintType":"AntiAffinity","resourceId":"/subscriptions/<subscription>/resourceGroups/<managed-resource-group>/providers/Microsoft.NetworkCloud/bareMetalMachines/<machine-name>","schedulingExecution":"Hard","scope":"Machine"}]'
operator-nexus Quickstarts Virtual Machine Deployment Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-arm.md
+
+ Title: Create an Azure Operator Nexus virtual machine by using Azure Resource Manager template (ARM template)
+description: Learn how to create an Azure Operator Nexus virtual machine (VM) for virtual network function (VNF) workloads by using Azure Resource Manager template (ARM template).
++++ Last updated : 07/30/2023+++
+# Quickstart: Create an Azure Operator Nexus virtual machine by using Azure Resource Manager template (ARM template)
+
+* Deploy an Azure Nexus virtual machine using an Azure Resource Manager template.
+
+This quickstart guide is designed to help you get started with using Nexus virtual machines to host virtual network functions (VNFs). By following the steps outlined in this guide, you can quickly and easily create a customized Nexus virtual machine that meets your specific needs and requirements. Whether you're a beginner or an expert in Nexus networking, this guide helps you learn everything you need to know to create and customize Nexus virtual machines for hosting virtual network functions.
+
+## Before you begin
+
+* Complete the [prerequisites](./quickstarts-tenant-workload-prerequisites.md) for deploying a Nexus virtual machine.
+
+## Review the template
+
+Before deploying the virtual machine template, let's review the content to understand its structure.
++
+Once you have reviewed and saved the template file named ```virtual-machine-arm-template.json```, proceed to the next section to deploy the template.
+
+## Deploy the template
+
+1. Create a file named ```virtual-machine-parameters.json``` and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
++
+2. Deploy the template.
+
+```azurecli
+ az deployment group create --resource-group myResourceGroup --template-file virtual-machine-arm-template.json --parameters @virtual-machine-parameters.json
+```
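+
+Optionally, you can check the template and parameter files before deploying. The following is a minimal sketch, assuming the same resource group and file names as the deployment command above:
+
+```azurecli
+ az deployment group validate --resource-group myResourceGroup --template-file virtual-machine-arm-template.json --parameters @virtual-machine-parameters.json
+```
+
+If validation succeeds, the command returns the evaluated deployment; otherwise, it reports which parameter or template expression failed.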
+
+## Review deployed resources
++
+## Clean up resources
++
+## Next steps
+
+You've successfully created a Nexus virtual machine. You can now use the virtual machine to host virtual network functions (VNFs).
operator-nexus Quickstarts Virtual Machine Deployment Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-bicep.md
+
+ Title: Create an Azure Operator Nexus virtual machine by using Bicep
+description: Learn how to create an Azure Operator Nexus virtual machine (VM) for virtual network function (VNF) workloads by using Bicep
++++ Last updated : 07/30/2023+++
+# Quickstart: Create an Azure Operator Nexus virtual machine by using Bicep
+
+* Deploy an Azure Nexus virtual machine using Bicep
+
+This quickstart guide is designed to help you get started with using Nexus virtual machines to host virtual network functions (VNFs). By following the steps outlined in this guide, you can quickly and easily create a customized Nexus virtual machine that meets your specific needs and requirements. Whether you're a beginner or an expert in Nexus networking, this guide helps you learn everything you need to know to create and customize Nexus virtual machines for hosting virtual network functions.
+
+## Before you begin
+
+* Complete the [prerequisites](./quickstarts-tenant-workload-prerequisites.md) for deploying a Nexus virtual machine.
+
+## Review the template
+
+Before deploying the virtual machine template, let's review the content to understand its structure.
++
+Once you have reviewed and saved the template file named ```virtual-machine-bicep-template.bicep```, proceed to the next section to deploy the template.
+
+## Deploy the template
+
+1. Create a file named ```virtual-machine-parameters.json``` and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
++
+2. Deploy the template.
+
+```azurecli
+ az deployment group create --resource-group myResourceGroup --template-file virtual-machine-bicep-template.bicep --parameters @virtual-machine-parameters.json
+```
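+
+Optionally, you can compile the Bicep file locally to catch syntax errors before deploying. The following is a minimal sketch, assuming the same file name as above:
+
+```azurecli
+ az bicep build --file virtual-machine-bicep-template.bicep
+```
+
+The command writes the equivalent ARM JSON template next to the Bicep file and prints any build warnings or errors.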
+
+## Review deployed resources
++
+## Clean up resources
++
+## Next steps
+
+You've successfully created a Nexus virtual machine. You can now use the virtual machine to host virtual network functions (VNFs).
operator-nexus Quickstarts Virtual Machine Deployment Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-cli.md
+
+ Title: Create an Azure Operator Nexus virtual machine by using Azure CLI
+description: Learn how to create an Azure Operator Nexus virtual machine (VM) for virtual network function (VNF) workloads
++++ Last updated : 07/10/2023+++
+# Quickstart: Create an Azure Operator Nexus virtual machine by using Azure CLI
+
+* Deploy an Azure Nexus virtual machine using Azure CLI
+
+This quickstart guide is designed to help you get started with using Nexus virtual machines to host virtual network functions (VNFs). By following the steps outlined in this guide, you can quickly and easily create a customized Nexus virtual machine that meets your specific needs and requirements. Whether you're a beginner or an expert in Nexus networking, this guide helps you learn everything you need to know to create and customize Nexus virtual machines for hosting virtual network functions.
+
+## Before you begin
+
+* Complete the [prerequisites](./quickstarts-tenant-workload-prerequisites.md) for deploying a Nexus virtual machine.
+
+## Create a Nexus virtual machine
+
+The following example creates a virtual machine named *myNexusVirtualMachine* in resource group *myResourceGroup* in the *eastus* location.
+
+Before you run the commands, you need to set several variables to define the configuration for your virtual machine. Here are the variables you need to set, along with some default values you can use for certain variables:
+
+| Variable | Description |
+| -- | |
+| LOCATION | The Azure region where you want to create your virtual machine. |
+| RESOURCE_GROUP | The name of the Azure resource group where you want to create the virtual machine. |
+| SUBSCRIPTION | The ID of your Azure subscription. |
+| CUSTOM_LOCATION | This argument specifies a custom location of the Nexus instance. |
+| CSN_ARM_ID | CSN ID is the unique identifier for the cloud services network you want to use. |
+| L3_NETWORK_ID | L3 Network ID is the unique identifier for the network interface to be used by the virtual machine. |
+| NETWORK_INTERFACE_NAME | The name of the L3 network interface for the virtual machine. |
+| ADMIN_USERNAME | The username for the virtual machine administrator. |
+| SSH_PUBLIC_KEY | The SSH public key that is used for secure communication with the virtual machine. |
+| CPU_CORES | The number of CPU cores for the virtual machine (even number, max 46 vCPUs) |
+| MEMORY_SIZE | The amount of memory (in GB, max 224 GB) for the virtual machine. |
+| VM_DISK_SIZE | The size (in GB) of the virtual machine disk. |
+| VM_IMAGE | The URL of the virtual machine image. |
+| ACR_URL | The URL of the Azure Container Registry. |
+| ACR_USERNAME | The username for the Azure Container Registry. |
+| ACR_PASSWORD | The password for the Azure Container Registry. |
+
+Once you've defined these variables, you can run the Azure CLI command to create the virtual machine. Add the ```--debug``` flag at the end to provide more detailed output for troubleshooting purposes.
+
+To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:
+
+```bash
+# Azure parameters
+RESOURCE_GROUP="myResourceGroup"
+SUBSCRIPTION="<Azure subscription ID>"
+CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION -o tsv)"
+
+# VM parameters
+VM_NAME="myNexusVirtualMachine"
+
+# VM credentials
+ADMIN_USERNAME="azureuser"
+SSH_PUBLIC_KEY="$(cat ~/.ssh/id_rsa.pub)"
+
+# Network parameters
+CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
+L3_NETWORK_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
+NETWORK_INTERFACE_NAME="mgmt0"
+
+# VM Size parameters
+CPU_CORES=4
+MEMORY_SIZE=12
+VM_DISK_SIZE="64"
+
+# Virtual Machine Image parameters
+VM_IMAGE="<VM image, example: myacr.azurecr.io/ubuntu:20.04>"
+ACR_URL="<Azure container registry URL, example: myacr.azurecr.io>"
+ACR_USERNAME="<Azure container registry username>"
+ACR_PASSWORD="<Azure container registry password>"
+```
+
+> [!IMPORTANT]
+> It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, L3_NETWORK_ID and ACR parameters with your actual values before running these commands.
+
+After defining these variables, you can create the virtual machine by executing the following Azure CLI command.
+
+```bash
+az networkcloud virtualmachine create \
+ --name "$VM_NAME" \
+ --resource-group "$RESOURCE_GROUP" \
+ --subscription "$SUBSCRIPTION" \
+ --extended-location name="$CUSTOM_LOCATION" type="CustomLocation" \
+ --location "$LOCATION" \
+ --admin-username "$ADMIN_USERNAME" \
+ --csn "attached-network-id=$CSN_ARM_ID" \
+ --cpu-cores $CPU_CORES \
+ --memory-size $MEMORY_SIZE \
+ --network-attachments '[{"attachedNetworkId":"'$L3_NETWORK_ID'","ipAllocationMethod":"Dynamic","defaultGateway":"True","networkAttachmentName":"'$NETWORK_INTERFACE_NAME'"}]'\
+ --storage-profile create-option="Ephemeral" delete-option="Delete" disk-size="$VM_DISK_SIZE" \
+ --vm-image "$VM_IMAGE" \
+ --ssh-key-values "$SSH_PUBLIC_KEY" \
+ --vm-image-repository-credentials registry-url="$ACR_URL" username="$ACR_USERNAME" password="$ACR_PASSWORD"
+```
+
+After a few minutes, the command completes and returns information about the virtual machine. You've created the virtual machine. You're now ready to use it.
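+
+If you want to confirm the result from the CLI, the following is a minimal sketch that reuses the shell variables defined above; the `provisioningState` query path is an assumption based on the standard Azure resource shape:
+
+```bash
+# Check the provisioning state of the new virtual machine.
+az networkcloud virtualmachine show \
+  --name "$VM_NAME" \
+  --resource-group "$RESOURCE_GROUP" \
+  --subscription "$SUBSCRIPTION" \
+  --query "provisioningState" --output tsv
+```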
+
+## Review deployed resources
++
+## Clean up resources
++
+## Next steps
+
+You've successfully created a Nexus virtual machine. You can now use the virtual machine to host virtual network functions (VNFs).
operator-nexus Quickstarts Virtual Machine Deployment Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-virtual-machine-deployment-ps.md
+
+ Title: Create an Azure Operator Nexus virtual machine by using Azure PowerShell
+description: Learn how to create an Azure Operator Nexus virtual machine (VM) for virtual network function (VNF) workloads using PowerShell
++++ Last updated : 09/20/2023+++
+# Quickstart: Create an Azure Operator Nexus virtual machine by using Azure PowerShell
+
+* Deploy an Azure Nexus virtual machine using Azure PowerShell
+
+This quickstart guide is designed to help you get started with using Nexus virtual machines to host virtual network functions (VNFs). By following the steps outlined in this guide, you can quickly and easily create a customized Nexus virtual machine that meets your specific needs and requirements. Whether you're a beginner or an expert in Nexus networking, this guide helps you learn everything you need to know to create and customize Nexus virtual machines for hosting virtual network functions.
+
+## Before you begin
+
+* Complete the [prerequisites](./quickstarts-tenant-workload-prerequisites.md) for deploying a Nexus virtual machine.
+
+## Create a Nexus virtual machine
+
+The following example creates a virtual machine named *myNexusVirtualMachine* in resource group *myResourceGroup* in the *eastus* location.
+
+Before you run the commands, you need to set several variables to define the configuration for your virtual machine. Here are the variables you need to set, along with some default values you can use for certain variables:
+
+| Variable | Description |
+| -- | |
+| LOCATION | The Azure region where you want to create your virtual machine. |
+| RESOURCE_GROUP | The name of the Azure resource group where you want to create the virtual machine. |
+| SUBSCRIPTION | The ID of your Azure subscription. |
+| CUSTOM_LOCATION | This argument specifies a custom location of the Nexus instance. |
+| CSN_ARM_ID | CSN ID is the unique identifier for the cloud services network you want to use. |
+| L3_NETWORK_ID | L3 Network ID is the unique identifier for the network interface to be used by the virtual machine. |
+| NETWORK_INTERFACE_NAME | The name of the L3 network interface for the virtual machine. |
+| ADMIN_USERNAME | The username for the virtual machine administrator. |
+| SSH_PUBLIC_KEY | The SSH public key that is used for secure communication with the virtual machine. |
+| CPU_CORES | The number of CPU cores for the virtual machine (even number, max 46 vCPUs) |
+| MEMORY_SIZE | The amount of memory (in GB, max 224 GB) for the virtual machine. |
+| VM_DISK_SIZE | The size (in GB) of the virtual machine disk. |
+| VM_IMAGE | The URL of the virtual machine image. |
+| ACR_URL | The URL of the Azure Container Registry. |
+| ACR_USERNAME | The username for the Azure Container Registry. |
+| ACR_PASSWORD | The password for the Azure Container Registry. |
+| VMDEVICEMODEL | The VM device model. Defaults to T2; available options are T2 (Modern) and T1 (Transitional). |
+| USERDATA | The base64-encoded string of cloud-init user data. |
+| BOOTMETHOD | The method used to boot the virtual machine: UEFI or BIOS. |
+| OS_DISK_CREATE_OPTION | The OS disk create option; specifies the ephemeral disk option. |
+| OS_DISK_DELETE_OPTION | The OS disk delete option; specifies the delete disk option. |
+| IP_ALLOCATION_METHOD | The IP allocation method, valid for L3 networks: Dynamic, Static, or Disabled. |
+| NETWORKATTACHMENTNAME | The name of the network to attach for the workload. |
+| NETWORKDATA | The base64-encoded string of cloud-init network data. |
+
+Once you've defined these variables, you can run the Azure PowerShell command to create the virtual machine. Add the ```-Debug``` flag at the end to provide more detailed output for troubleshooting purposes.
+
+To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:
+
+```azurepowershell-interactive
+# Azure parameters
+$RESOURCE_GROUP="myResourceGroup"
+$SUBSCRIPTION="<Azure subscription ID>"
+$CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+$CUSTOM_LOCATION_TYPE="CustomLocation"
+$LOCATION="<ClusterAzureRegion>"
+
+# VM parameters
+$VM_NAME="myNexusVirtualMachine"
+$BOOT_METHOD="UEFI"
+$OS_DISK_CREATE_OPTION="Ephemeral"
+$OS_DISK_DELETE_OPTION="Delete"
+$NETWORKDATA="bmV0d29ya0RhdGVTYW1wbGU="
+$VMDEVICEMODEL="T2"
+$USERDATA=""
+
+# VM credentials
+$ADMIN_USERNAME="admin"
+$SSH_PUBLIC_KEY = @{
+ KeyData = "$(cat ~/.ssh/id_rsa.pub)"
+}
+
+# Network parameters
+$CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
+$L3_NETWORK_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
+$IP_ALLOCATION_METHOD="Dynamic"
+$CSN_ATTACHMENT_DEFAULTGATEWAY="False"
+$CSN_ATTACHMENT_NAME="<l3Network-name>"
+$ISOLATE_EMULATOR_THREAD="True"
+$VIRTIOINTERFACE="Modern"
+$NETWORKATTACHMENTNAME="mgmt0"
+
+# VM Size parameters
+$CPU_CORES=4
+$MEMORY_SIZE=12
+$VM_DISK_SIZE="64"
+
+# Virtual Machine Image parameters
+$VM_IMAGE="<VM image, example: myacr.azurecr.io/ubuntu:20.04>"
+$ACR_URL="<Azure container registry URL, example: myacr.azurecr.io>"
+$ACR_USERNAME="<Azure container registry username>"
+
+$NETWORKATTACHMENT = New-AzNetworkCloudNetworkAttachmentObject `
+-AttachedNetworkId $L3_NETWORK_ID `
+-IpAllocationMethod $IP_ALLOCATION_METHOD `
+-DefaultGateway "True" `
+-Name $NETWORKATTACHMENTNAME
+
+$SECUREPASSWORD = ConvertTo-SecureString "<YourPassword>" -asplaintext -force
+```
+
+> [!IMPORTANT]
+> It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, L3_NETWORK_ID and ACR parameters with your actual values before running these commands.
+
+After defining these variables, you can create the virtual machine by executing the following Azure PowerShell command.
+
+```azurepowershell-interactive
+New-AzNetworkCloudVirtualMachine -Name $VM_NAME `
+-ResourceGroupName $RESOURCE_GROUP `
+-AdminUsername $ADMIN_USERNAME `
+-CloudServiceNetworkAttachmentAttachedNetworkId $CSN_ARM_ID `
+-CloudServiceNetworkAttachmentIPAllocationMethod $IP_ALLOCATION_METHOD `
+-CpuCore $CPU_CORES `
+-ExtendedLocationName $CUSTOM_LOCATION `
+-ExtendedLocationType $CUSTOM_LOCATION_TYPE `
+-Location $LOCATION `
+-SubscriptionId $SUBSCRIPTION `
+-MemorySizeGb $MEMORY_SIZE `
+-OSDiskSizeGb $VM_DISK_SIZE `
+-VMImage $VM_IMAGE `
+-BootMethod $BOOT_METHOD `
+-CloudServiceNetworkAttachmentDefaultGateway $CSN_ATTACHMENT_DEFAULTGATEWAY `
+-CloudServiceNetworkAttachmentName $CSN_ATTACHMENT_NAME `
+-IsolateEmulatorThread $ISOLATE_EMULATOR_THREAD `
+-NetworkAttachment $NETWORKATTACHMENT `
+-NetworkData $NETWORKDATA `
+-OSDiskCreateOption $OS_DISK_CREATE_OPTION `
+-OSDiskDeleteOption $OS_DISK_DELETE_OPTION `
+-SshPublicKey $SSH_PUBLIC_KEY `
+-UserData $USERDATA `
+-VMDeviceModel $VMDEVICEMODEL `
+-VMImageRepositoryCredentialsUsername $ACR_USERNAME `
+-VMImageRepositoryCredentialsPassword $SECUREPASSWORD `
+-VMImageRepositoryCredentialsRegistryUrl $ACR_URL
+```
+
+After a few minutes, the command completes and returns information about the virtual machine. You've created the virtual machine. You're now ready to use it.
+
+## Review deployed resources
++
+## Clean up resources
++
+## Next steps
+
+You've successfully created a Nexus virtual machine. You can now use the virtual machine to host virtual network functions (VNFs).
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
# Prepare the network for Azure Orbital Ground Station integration
-Azure Orbital Ground Station interfaces with your Azure resources using VNET injection, which is used in both uplink and downlink directions. This page describes how to ensure your Subnet and Azure Orbital Ground Station objects are configured correctly.
+Azure Orbital Ground Station interfaces with your Azure resources using virtual network (VNET) injection, which is used in both uplink and downlink directions. This page describes how to ensure your subnet and Azure Orbital Ground Station resources are configured correctly.
-Ensure the objects comply with the recommendations in this article. Note that these steps do not have to be followed in order.
+In this how-to guide, you'll learn how to:
-## Prepare subnet for VNET injection
+> [!div class="checklist"]
+> * Prepare the subnet for VNET injection
+> * Prepare endpoints
+> * Verify the contact profile
+> * Find IPs of scheduled contacts
+
+Ensure the objects comply with the recommendations in this article. Note that these steps don't have to be followed in order.
+
+## Create and prepare subnet for VNET injection
Prerequisites:-- An entire subnet with no existing IPs allocated or in use that can be dedicated to the Azure Orbital Ground Station service in your virtual network within your resource group.
+- An entire subnet with no existing IPs allocated or in use that can be dedicated to the Azure Orbital Ground Station service, in your virtual network within your resource group. If you need to make a new subnet, follow instructions to [add a subnet](../virtual-network/virtual-network-manage-subnet.md?tabs=azure-portal#add-a-subnet).
-Delegate a subnet to service named: Microsoft.Orbital/orbitalGateways. Follow instructions here: [Add or remove a subnet delegation in an Azure virtual network](../virtual-network/manage-subnet-delegation.md).
+Follow instructions to [add a subnet delegation](../virtual-network/manage-subnet-delegation.md#delegate-a-subnet-to-an-azure-service) in your virtual network. Delegate your subnet to the service named: **Microsoft.Orbital/orbitalGateways**.
> [!NOTE] > Address range needs to be at least /24 (e.g., 10.0.0.0/23)
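+
+If you prefer to apply the delegation from the CLI instead of the portal, the following is a minimal sketch; the resource group, virtual network, and subnet names are placeholders:
+
+```azurecli
+az network vnet subnet update \
+  --resource-group myResourceGroup \
+  --vnet-name myVnet \
+  --name delegated-subnet \
+  --delegations Microsoft.Orbital/orbitalGateways
+```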
-The following is an example of a typical VNET setup with a subnet delegated to Azure Orbital Ground Station.
+The following is an example of a typical VNET setup with a subnet delegated to Azure Orbital Ground Station:
## Prepare endpoints
-Set the MTU of all desired endpoints to at least 3650.
+Set the MTU of all desired endpoints to at least **3650** by sending appropriate commands to your virtual machine within your resource group.
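+
+For example, on a Linux endpoint you might set the MTU as shown in the following sketch; `eth0` is a placeholder interface name, and the change doesn't persist across reboots unless you also configure it in the OS network settings:
+
+```bash
+# List interfaces and their current MTU values.
+ip link show
+
+# Raise the MTU on the interface that carries Azure Orbital Ground Station traffic.
+sudo ip link set dev eth0 mtu 3650
+```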
-## Set up the contact profile
-
-Prerequisites:
-- The subnet/vnet is in the same region as the contact profile.
+## Verify the contact profile
Ensure the contact profile properties are set as follows:
-| **Property** | **Setting** |
-|-||
-| subnetId | Enter the **full ID to the delegated subnet**, which can be found inside the VNET's JSON view. subnetID is found under networkConfiguration. |
-| ipAddress | For each link, enter an **IP for TCP/UDP server mode**. Leave blank for TCP/UDP client mode. See the following section for a detailed explanation of configuring this property. |
-| port | For each link, the port must be within the 49152 and 65535 range and must be unique across all links in the contact profile. |
+### Region
+The VNET/subnet must be in the same region as the contact profile.
-> [!NOTE]
-> You can have multiple links/channels in a contact profile, and you can have multiple IPs. However the combination of port/protocol must be unique. You cannot have two identical ports, even if you have two different destination IPs.
+### Subnet ID
+1. Go to the overview page of your contact profile and select **JSON view**. Find the **networkConfigurations** section, then identify the "**subnetId**".
+2. Go to the overview page of your virtual network and select **JSON view**. Find the section for your **delegated subnet**, then identify the "**id**".
+3. Verify that these IDs are identical. You can also compare them from the CLI, as shown in the sketch after these steps.
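+
+The following is a minimal CLI sketch of the same comparison; the resource names are placeholders, the contact profile command assumes the `az orbital` extension is installed, and the query paths are assumptions based on the JSON views described above (check the JSON view if your output differs):
+
+```azurecli
+# Subnet ID referenced by the contact profile.
+az orbital contact-profile show \
+  --resource-group myResourceGroup \
+  --name myContactProfile \
+  --query "networkConfiguration.subnetId" --output tsv
+
+# ID of the delegated subnet itself; the two values should match.
+az network vnet subnet show \
+  --resource-group myResourceGroup \
+  --vnet-name myVnet \
+  --name delegated-subnet \
+  --query "id" --output tsv
+```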
-For more information, learn about [contact profiles](/azure/orbital/concepts-contact-profile) and [how to configure a contact profile](/azure/orbital/contact-profile).
+### Link flows: IP Address and Port
-## Schedule the contact
+The links/channels must be set up in the following manner, based on direction and TCP or UDP preference.
-The Azure Orbital Ground Station platform pre-reserves IPs in the subnet when a contact is scheduled. These IPs represent the platform side endpoints for each link. IPs are unique between contacts, and if multiple concurrent contacts are using the same subnet, Microsoft guarantees those IPs to be distinct. The service fails to schedule the contact and an error is returned if the service runs out of IPs or cannot allocate an IP.
+> [!NOTE]
+> These settings are for managed modems only.
-When you create a contact, you can find these IPs by viewing the contact properties. Select JSON view in the portal or use the GET contact API call to view the contact properties. Make sure to use the current API version of 2022-03-01. The parameters of interest are below:
+#### Uplink
-| **Parameter** | **Usage** |
-||-|
-| antennaConfiguration.destinationIP | Connect to this IP when you configure the link as tcp/udp client. |
-| antennaConfiguration.sourceIps | Data will come from this IP when you configure the link as tcp/udp server. |
+| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
+|:|:|:-|:|:|
+| Contact Profile: Link/Channel **IP Address** | Blank | Routable IP from delegated subnet | Blank | Not applicable |
+| Contact Profile: Link/Channel **Port** | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable |
+| **Output** | | | | |
+| Contact Resource: **destinationIP** | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
+| Contact Resource: **sourceIP** | Not applicable | Link comes from one of these IPs | Not applicable | Not applicable |
-You can use this information to set up network policies or to distinguish between simultaneous contacts to the same endpoint.
+#### Downlink
+| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
+|:|:|:-|:|:-|
+| Contact Profile: Link/Channel **IP Address** | Blank | Routable IP from delegated subnet | Not applicable | Routable IP from delegated subnet |
+| Contact Profile: Link/Channel **Port** | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable | Unique port in 49152-65535 |
+| **Output** | | | | |
+| Contact Resource: **destinationIP** | Connect to this IP | Not applicable | Not applicable | Not applicable |
+| Contact Resource: **sourceIP** | Not applicable | Link comes from one of these IPs | Not applicable | Link comes from one of these IPs |
> [!NOTE]
-> - The source and destination IPs are always taken from the subnet address range.
-> - Only one destination IP is present. Any link in client mode should connect to this IP and the links are differentiated based on port.
-> - Many source IPs can be present. Links in server mode will connect to your specified IP address in the contact profile. The flows will originate from the source IPs present in this field and target the port as per the link details in the contact profile. There is no fixed assignment of link to source IP so please make sure to allow all IPs in any networking setup or firewalls.
+> You can have multiple links/channels in a contact profile, and you can have multiple IPs. However, the combination of port/protocol must be unique. You can't have two identical ports, even if you have two different destination IPs.
-For more information, learn about [contacts](/azure/orbital/concepts-contact) and [how to schedule a contact](/azure/orbital/schedule-contact).
-
-## Client/Server, TCP/UDP, and link direction
+For more information, learn about [contact profiles](/azure/orbital/concepts-contact-profile) and [how to configure a contact profile](/azure/orbital/contact-profile).
-The following sections describe how to set up the link flows based on direction on TCP or UDP preference.
+## Find IPs of a scheduled contact
-> [!NOTE]
-> These settings are for managed modems only.
+The Azure Orbital Ground Station platform prereserves IPs in the subnet when a contact is scheduled. These IPs represent the platform-side endpoints for each link. IPs are unique between contacts, and if multiple concurrent contacts are using the same subnet, Microsoft guarantees those IPs to be distinct. The service fails to schedule the contact and an error is returned if the service runs out of IPs or can't allocate an IP.
-### Uplink
+When you create a contact, you can find these IPs by viewing the contact properties.
+To view the contact properties, go to the contact resource overview page and select **JSON view** in the portal or use the **GET contact** API call. Make sure to use the current API version of 2022-11-01. The parameters of interest are below:
-| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
-|:-|:|:-|:|:|
-| _Contact Profile Link ipAddress_ | Blank | Routable IP from delegated subnet | Blank | Not applicable |
-| _Contact Profile Link port_ | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable |
-| **Output** | | | | |
-| _Contact Object destinationIP_ | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
-| _Contact Object sourceIP_ | Not applicable | Link will come from one of these IPs | Not applicable | Not applicable |
+| **Parameter** | **Usage** |
+||--|
+| antennaConfiguration.destinationIp | Connect to this IP when you configure the link as **tcp/udp client**. |
+| antennaConfiguration.sourceIps | Data comes from this IP when you configure the link as **tcp/udp server**. |
+You can use this information to set up network policies or to distinguish between simultaneous contacts to the same endpoint.
-### Downlink
+> [!NOTE]
+> - The source and destination IPs are always taken from the subnet address range.
+> - Only one destination IP is present. Any link in client mode should connect to this IP and the links are differentiated based on port.
+> - Many source IPs can be present. Links in server mode connect to the IP address you specify in the contact profile. The flows originate from the source IPs present in this field and target the port given in the link details of the contact profile. There's no fixed assignment of link to source IP, so make sure to allow all IPs in any networking setup or firewalls.
-| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
-|:-|:|:-|:|:|
-| _Contact Profile Link ipAddress_ | Blank | Routable IP from delegated subnet | Not applicable | Routable IP from delegated subnet
-| _Contact Profile Link port_ | Unique port in 49152-65535 | Unique port in 49152-65535 | Not applicable | Unique port in 49152-65535 |
-| **Output** | | | | |
-| _Contact Object destinationIP_ | Connect to this IP | Not applicable | Not applicable | Not applicable |
-| _Contact Object sourceIP_ | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |
+For more information, learn about [contacts](/azure/orbital/concepts-contact) and [how to schedule a contact](/azure/orbital/schedule-contact).
## Next steps
partner-solutions Add Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/add-connectors.md
Title: Azure services and Confluent Cloud integration description: This article describes how to use Azure services and install connectors for Confluent Cloud integration. - Last updated 06/24/2022
# Azure services and Confluent Cloud integrations
-This article describes how to use Azure services like Azure Functions, and install connectors to Azure resources for Confluent Cloud.
+This article describes how to use Azure services like Azure Functions, and how to install connectors to Azure resources for Confluent Cloud.
## Azure Cosmos DB connector
partner-solutions Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-cli.md
Last updated 06/07/2021 -+ # QuickStart: Get started with Apache Kafka for Confluent Cloud - Azure CLI
partner-solutions Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-powershell.md
Last updated 11/03/2021 -+ # QuickStart: Get started with Apache Kafka for Confluent Cloud - Azure PowerShell
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create.md
Last updated 12/14/2021 -+ # QuickStart: Get started with Apache Kafka on Confluent Cloud - Azure portal
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
Title: Manage a Confluent Cloud description: This article describes management of a Confluent Cloud on the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support. -+ Last updated 06/07/2021
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Last updated 01/06/2023 -+
partner-solutions Link To Existing Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/link-to-existing-organization.md
Last updated 06/01/2023 -+ # QuickStart: Link to existing Datadog organization
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/create.md
Last updated 06/01/2023 -+ # QuickStart: Get started with Elastic
partner-solutions New Relic Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-link-to-existing.md
Title: Link Azure Native New Relic Service to an existing account
description: Learn how to link to an existing New Relic account. - Last updated 02/16/2023
partner-solutions Nginx Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-create.md
Last updated 01/18/2023 -+
partner-solutions Nginx Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-manage.md
- Last updated 01/18/2023
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
description: Learn about using the NGINXaaS Cloud-Native Observability Platform
- Last updated 01/18/2023
partner-solutions Nginx Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md
Last updated 01/18/2023 -- # Troubleshooting NGINXaaS integration with Azure
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
description: Introduction to the Azure Native ISV Services.
- Previously updated : 04/19/2023 Last updated : 10/25/2023 # Azure Native ISV Services overview
-Azure Native ISV Services enable you to easily provision, manage, and tightly integrate *independent software vendor* (ISV) software and services on Azure. Azure Native ISV Services are developed and managed by Microsoft and the ISV. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
+Azure Native ISV Services enable you to easily provision, manage, and tightly integrate *independent software vendor* (ISV) software and services on Azure. Each Azure Native ISV Service is developed and managed by Microsoft and the ISV. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
## Features of Azure Native ISV Services
-A list of features of any Azure Native ISV Service is listed below.
+A list of features of any Azure Native ISV Service follows.
### Unified operations
A list of features of any Azure Native ISV Service is listed below.
### Integrations -- Logs and metrics: Seamlessly direct logs and metrics from Azure Monitor to the Azure Native ISV Service using just a few gestures. You can configure auto-discovery of resources to monitor, and set up automatic log forwarding and metrics shipping. You can easily do the setup in Azure, without needing to create additional infrastructure or write custom code.-- VNet injection: Provides private data plane access to Azure Native ISV services from customers' virtual networks.
+- Logs and metrics: Seamlessly direct logs and metrics from Azure Monitor to the Azure Native ISV Service using just a few gestures. You can configure autodiscovery of resources to monitor, and set up automatic log forwarding and metrics shipping. You can easily do the setup in Azure, without needing to create more infrastructure or write custom code.
+- Virtual network injection: Provides private data plane access to Azure Native ISV services from customers' virtual networks.
- Unified billing: Engage with a single entity, Microsoft Azure Marketplace, for billing. No separate license purchase is required to use Azure Native ISV Services.
partner-solutions Qumulo Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-create.md
Title: Get started with Azure Native Qumulo Scalable File Service
description: In this quickstart, learn how to create an instance of Azure Native Qumulo Scalable File Service. - Last updated 01/18/2023
partner-solutions Qumulo How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-how-to-manage.md
Title: Manage Azure Native Qumulo Scalable File Service
description: This article describes how to manage Azure Native Qumulo Scalable File Service in the Azure portal. - Last updated 01/18/2023
partner-solutions Qumulo Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-overview.md
Title: Azure Native Qumulo Scalable File Service overview
description: Learn about what Azure Native Qumulo Scalable File Service offers you. - Previously updated : 01/18/2023 Last updated : 10/25/2023
The Azure Native Qumulo Scalable File Service offering on Azure Marketplace enab
Azure Native Qumulo Scalable File Service provides: - Seamless onboarding: Easily include Qumulo as a natively integrated service on Azure.--- Unified billing: Get a single bill for all resources that you consume on Azure for the Qumulo service.
-<!-- Is the benefit one bill for all Qumulo deployments or one bill for anything you do on Azure including Qumulo? -->
-- Private access: The service is directly connected to your own virtual network (sometimes called *VNet injection*).
+- Unified billing: Get a single bill for all resources that you consume on Azure for the Qumulo service.
+- Private access: The service is directly connected to your own virtual network, sometimes called *VNet injection*.
## Next steps
Azure Native Qumulo Scalable File Service provides:
> [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems) > [!div class="nextstepaction"]
- > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview)
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview)
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
In this article, we provide compelling reasons for single server customers to mi
- **[Superior performance](../flexible-server/overview.md)** - Flexible server runs on Linux VM that is best suited to run PostgreSQL engine as compared to Windows environment, which is the case with Single Server. -- **[Cost Savings](../flexible-server/how-to-deploy-on-azure-free-account.md)** - Flexible server allows you to stop and start server on-demand to lower your TCO. Your compute tier billing is stopped immediately which allows you to have significant cost savings during development, testing and for time-bound predictable production workloads.
+- **[Cost Savings](../flexible-server/how-to-deploy-on-azure-free-account.md)** - Flexible server allows you to stop and start the server on demand to lower your TCO. Your compute tier billing is stopped immediately, which allows you to have significant cost savings during development and testing, and for time-bound, predictable production workloads.
-- **[Support for new PG versions](../flexible-server/concepts-supported-versions.md)** - Flexible server currently supports PG version 11 and onwards till version 14. Newer community versions of PostgreSQL will only be supported in flexible server.
+- **[Support for new PG versions](../flexible-server/concepts-supported-versions.md)** - Flexible server currently supports PG versions 11 through 14. Newer community versions of PostgreSQL will be supported only in flexible server.
- **Minimized Latency** - You can collocate your flexible server in the same availability zone as the application server, which results in minimal latency. This option isn't available in Single server.
Let us first look at the methods you can consider performing the migration from
**Online Migration** - In an online migration, applications connecting to your single server aren't stopped while database(s) are copied to flexible server. The initial copy of the databases is followed by replication to keep flexible server in sync with the single server. A cutover is performed when the flexible server is in complete sync with the single server resulting in minimal downtime.
-The following table gives an overview of Offline vs Online migration.
+The following table gives an overview of Offline and Online modes of migration.
| Mode | Pros | Cons | | : | : | : | | Offline | - Simple, easy and less complex to execute.<br />- Fewer chances of failure.<br />- No restrictions in terms of database objects it can handle | Downtime to applications. |
-| Online | - Very minimal downtime to application.<br />- Ideal for large databases and for customers having limited downtime requirements.<br />| - Replication used in online migration has multiple restrictions listed in this [doc](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) (e.g Primary Keys needed in all tables)<br />- Tough and much complex to execute than offline migration.<br />- Greater chances of failure due to complexity of migration.<br />There is an impact on the source server's storage and compute if the migration runs for a long time. The impact needs to be monitored closely during migration. |
+| Online | - Very minimal downtime to application.<br />- Ideal for large databases and for customers having limited downtime requirements.<br />| - Replication used in online migration has multiple restrictions listed in this [doc](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) (e.g., Primary Keys needed in all tables)<br />- Tougher and more complex to execute than offline migration.<br />- Greater chances of failure due to complexity of migration.<br />- There's an impact on the source server's storage and compute if the migration runs for a long time. The impact needs to be monitored closely during migration. |
> [!IMPORTANT] > Offline migration is the recommended way to perform migrations from single server to flexible server. Customers should consider online migrations only if their downtime requirements are not met.
The following table shows the time for performing offline migrations for databas
> [!IMPORTANT] > In order to perform faster migrations, pick a higher SKU for your flexible server. You can always change the SKU to match the application needs post migration.
+## Pre-migration validations
+Many migrations fail because of setup issues on the source and target servers. Most of these issues fall into the following buckets:
+* Issues related to authentication/permissions for the migration user on the source and target servers.
+* [Prerequisites](#migration-prerequisites) not being taken care of before running the migration.
+* Unsupported features/configurations between the source and target.
+
+Pre-migration validation helps you verify that your migration setup is ready before you perform the migration. Checks are run against a rule set, and any potential problems are shown along with remedial actions so you can take corrective measures.
+
+### How to use pre-migration validation?
+A new parameter called **Migration option** is introduced when you create a migration.
++
+You can pick any of the following options:
+* **Validate** - Use this option to check your server and database readiness for migration to the target. **This option will not start data migration and will not require any downtime to your servers.**
+The result of the Validate option can be:
+ - **Succeeded** - No issues were found, and you can plan for the migration.
+ - **Failed** - Errors were found during validation, which can fail the migration. Go through the list of errors along with their suggested workarounds, and take corrective measures before planning the migration.
+ - **Warning** - Warnings are informative messages that you need to keep in mind while planning the migration.
+
+ Plan your migrations better by performing pre-migration validation in advance, so you know the potential issues you might encounter while performing the migration.
+
+* **Migrate** - Use this option to kickstart the migration without going through the validation process. It's recommended to perform validation before triggering a migration to increase the chances of success. Once validation is done, you can use this option to start the migration process.
+
+* **Validate and Migrate** - In this option, validations are performed first, and migration is triggered only if all checks are in the **Succeeded** or **Warning** state. Validation failures stop the migration from starting.
+
+We recommend that you use pre-migration validation in the following way:
+1) Choose the **Validate** option and run pre-migration validation well ahead of your planned migration date.
+2) Analyze the output and take remedial actions for any errors.
+3) Repeat step 1 until the validation is successful.
+4) Start the migration using the **Validate and Migrate** option on the planned date and time.
+
+> [!NOTE]
+> Pre-migration validation is enabled for flexible servers in the North Europe region. It will be enabled for flexible servers in other Azure regions soon. This functionality is available only in the Azure portal. Support for the CLI will be introduced at a later point in time.
+ ## Migration of users/roles, ownerships and privileges Along with data migration, the tool automatically provides the following built-in capabilities: - Migration of users/roles present on your source server to the target server.
Along with data migration, the tool automatically provides the following built-i
## Limitations -- You can have only one active migration to your flexible server.-- The source and target server must be in the same Azure region. Cross region migrations is enabled only for servers in China regions.
+- You can have only one active migration or validation to your flexible server.
+- The source and target server must be in the same Azure region. Cross region migrations are enabled only for servers in China regions.
- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details and firewall rules. - The migration tool shows the number of tables copied from source to target server. You need to manually validate the data in target server post migration. - The tool only migrates user databases and not system databases like template_0, template_1, azure_sys and azure_maintenance.
Along with data migration, the tool automatically provides the following built-i
> [!NOTE] > The following limitations are applicable only for flexible servers on which the migration of users/roles functionality is enabled. -- AAD users present on your source server will not be migrated to target server. To mitigate this limitation, manually create all AAD users on your target server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before triggering a migration. If AAD users are not created on target server, migration will fail with appropriate error message.-- If the target flexible server uses SCRAM-SHA-256 password encryption method, connection to flexible server using the users/roles on single server will fail since the passwords are encrypted using md5 algorithm. To mitigate this limitation, please choose the option **MD5** for **password_encryption** server parameter on your flexible server.-- Though the ownership of database objects such as tables, views, sequences, etc. are copied to the target server, the owner of the database in your target server will be the migration user of your target server. The limitation can be mitigated by executing the following command -
-```sql
- ALTER DATABASE <dbname> OWNER TO <user>;
-```
- Make sure the user executing the above command is a member of the user to which ownership is being assigned to. This limitation will be fixed in the upcoming releases of the migration tool to match the database owners on your source server.
+- Azure Active Directory users present on your source server won't be migrated to the target server. To mitigate this limitation, manually create all Azure Active Directory users on your target server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before triggering a migration. If Azure Active Directory users aren't created on the target server, the migration fails with an appropriate error message.
+- If the target flexible server uses the SCRAM-SHA-256 password encryption method, connecting to the flexible server using the users/roles on the single server fails, since the passwords are encrypted using the md5 algorithm. To mitigate this limitation, choose the option **MD5** for the **password_encryption** server parameter on your flexible server.
## Experience Get started with the Single to Flex migration tool by using any of the following methods:
Here, we go through the phases of an overall database migration journey, with gu
Single server supports PG versions 9.6, 10, and 11, while Flexible server supports PG versions 11, 12, 13, and 14. Given the differences in supported versions, you might be moving across versions while migrating from single to flexible server. If that's the case, make sure your application works well with the version of flexible server you're trying to migrate to. If there are breaking changes, make sure to fix them on your application before migrating to flexible server. Use this [link](https://www.postgresql.org/docs/14/appendix-obsolete.html) to check for any breaking changes while migrating to the target version.
-#### Database migration planning
-
-The most important thing to consider for performing offline migration using the single to flex migration tool is the downtime incurred by the application.
-
-##### How to calculate the downtime?
-
-In most cases, the non-prod servers (dev, UAT, test, staging) are migrated using offline migrations. Since these servers have less data than the production servers, the migration completes fast. For migration of production server, you need to know the time it would take to complete the migration to plan for it in advance.
-
-The time taken for an offline migration to complete is dependent on several factors that includes the number of databases, size of databases, number of tables inside each database, number of indexes, and the distribution of data across tables. It also depends on the SKU of the source and target server, and the IOPS available on the source and target server. Given the many factors that can affect the migration time, it's hard to estimate the total time for the offline migration to complete. The best approach would be to try it on a server restored from the primary server.
-
-For calculating the total downtime to perform offline migration of production server, the following phases are considered.
--- **Migration of PITR** - The best way to get a good estimate on the time taken to migrate your production database server would be to take a point-in time restore of your production server and run the offline migration on this newly restored server.--- **Migration of Buffer** - After completing the above step, you can plan for actual production migration during a time period when the application traffic is low. This migration can be planned on the same day or probably a week away. By this time, the size of the source server might have increased. Update your estimated migration time for your production server based on the amount of this increase. If the increase is significant, you can consider doing another test using the PITR server. But for most servers the size increase shouldn't be significant enough.--- **Validation** - Once the offline migration completes for the production server, you need to verify if the data in flexible server is an exact copy of the single server. Customers can use opensource/thirdparty tools or can do the validation manually. Prepare the validation steps that you would like to do in advance of the actual migration. Validation can include:
- * Row count match for all the tables involved in the migration.
- * Matching counts for all the database object (tables, sequences, extensions, procedures, indexes)
- * Comparing max or min IDs of key application related columns
-
-> [!NOTE]
-> The size of databases is not the right metric for validation.The source server might have bloats/dead tuples which can bump up the size on the source server. Also, the storage containers used in single and flexible servers are completely different. It is completely normal to have size differences between source and target servers. If there is an issue in the first three steps of validation, it indicates a problem with the migration.
--- **Migration of server settings** - The server parameters, firewall rules (if applicable), tags, alerts need to be manually copied from single server to flexible server.--- **Changing connection strings** - Post successful validation, application should change their connection strings to point to flexible server. This activity is coordinated with the application team to make changes to all the references of connection strings pointing to single server. Note that in the flexible server the user parameter in the connection string no longer needs to be in the **username@servername** format. You should just use the **user=username** format for this parameter in the connection string
-For example
-Psql -h **mysingleserver**.postgres.database.azure.com -u **user1@mysingleserver** -d db1
-should now be of the format
-Psql -h **myflexserver**.postgres.database.azure.com -u user1 -d db1
-
-**Total planned downtime** = **Time to migrate PITR** + **time to migrate Buffer** + **time for Validation** + **time to migrate server settings** + **time to switch connection strings to the flexible server.**
-
-While most frequently a migration runs without a hitch, it's good practice to plan for contingencies if there is additional time required for debugging or if a migration may need to be restarted.
- #### Migration prerequisites The following pre-requisites need to be taken care of before using the Single to Flex Migration tool for migration
The following table summarizes the list of networking scenarios supported by the
| Private Access | Private Access | Yes | **Steps needed to establish connectivity between your Single and Flexible Server**-- If your single server is public access (case #1 and case #2 in the above table), there's nothing needed from your end. The single to flex migration tool automatically establishes connection between single and flexible server and the migration will go through.-- If your single server is in private access, then the only supported scenario is when your Flexible server is inside a VNet. If your flexible server is deployed in the same VNet as the private end point of your Single server, connections between single server and flexible server should automatically work provided there is no network security group(NSGs) blocking the connectivity between subnets. If flexible server is deployed in another VNet, [peering should be established between the VNets](../../virtual-network/tutorial-connect-virtual-networks-portal.md) for the connection to work between Single and Flexible server.
+- If your single server is public access (case #1 and case #2 in the above table), there's nothing needed from your end. The single to flex migration tool automatically establishes connection between single and flexible server and the migration goes through.
+- If your single server is in private access, then the only supported scenario is when your Flexible server is inside a VNet. If your flexible server is deployed in the same VNet as the private endpoint of your Single server, connections between single server and flexible server should automatically work, provided there are no network security groups (NSGs) blocking the connectivity between subnets. If flexible server is deployed in another VNet, [peering should be established between the VNets](../../virtual-network/tutorial-connect-virtual-networks-portal.md) for the connection to work between Single and Flexible server.
##### Allow list required extensions
-The migration tool automatically allow lists all extensions used by your single server databases on your flexible server except for the ones whose libraries need to be loaded at the server start.
+The migration tool automatically allowlists all extensions used by your single server databases on your flexible server, except for the ones whose libraries need to be loaded at server start.
Use the following select command to list all the extensions used on your Single server databases.
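
For illustration, a minimal sketch of such a query (using the standard `pg_extension` catalog; run it once in each database on the Single Server source) looks like this:

```sql
-- List the extensions installed in the current database.
-- Run once per database on the Single Server source.
SELECT extname    AS extension,
       extversion AS version
FROM pg_extension
ORDER BY extname;
```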
Use the **Save and Restart** option and wait for the flexible server to restart.
> [!NOTE] > This pre-requisite is applicable only for flexible servers on which the migration of users/roles functionality is enabled.
-Execute the following query on your source server to get the list of AAD users.
+Execute the following query on your source server to get the list of Azure Active Directory users.
```sql SELECT r.rolname FROM
SELECT r.rolname
'azure_ad_mfa' ); ```
-Create the AAD users on your target flexible server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before creating a migration.
+Create the Azure Active Directory users on your target flexible server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before creating a migration.
+
+#### Database migration planning
+
+The first step in database migration planning is to run pre-migration validation on your source and target servers to check for any errors in the migration setup. Analyze the validation report and take remedial actions if needed. Keep running pre-migration validation until it results in the **Succeeded** state. With the migration setup ready, you can move on to the next phase of planning.
+
+The next phase of planning addresses the downtime incurred by applications while performing an offline migration using the single to flex migration tool.
+
+##### How to calculate the downtime?
+
+In most cases, the non-prod servers (dev, UAT, test, staging) are migrated using offline migrations. Since these servers have less data than the production servers, the migration completes fast. For migration of production server, you need to know the time it would take to complete the migration to plan for it in advance.
+
+The time taken for an offline migration to complete depends on several factors, including the number of databases, the size of the databases, the number of tables inside each database, the number of indexes, and the distribution of data across tables. It also depends on the SKU of the source and target servers, and the IOPS available on them. Given the many factors that can affect the migration time, it's hard to estimate the total time for the offline migration to complete. The best approach is to try it on a server restored from the primary server.
+
+For calculating the total downtime to perform offline migration of the production server, the following phases are considered.
+
+- **Migration of PITR** - The best way to get a good estimate of the time taken to migrate your production database server is to take a point-in-time restore of your production server and run the offline migration on this newly restored server.
+
+- **Migration of Buffer** - After completing the above step, you can plan for actual production migration during a time period when the application traffic is low. This migration can be planned on the same day or probably a week away. By this time, the size of the source server might have increased. Update your estimated migration time for your production server based on the amount of this increase. If the increase is significant, you can consider doing another test using the PITR server. But for most servers the size increase shouldn't be significant enough.
+
+- **Data Validation** - Once the offline migration completes for the production server, you need to verify whether the data in the flexible server is an exact copy of the single server. You can use open-source or third-party tools, or do the validation manually. Prepare the validation steps that you would like to run in advance of the actual migration. Validation can include (see the sketch after this list):
+ * Row count match for all the tables involved in the migration.
+ * Matching counts for all the database objects (tables, sequences, extensions, procedures, indexes).
+ * Comparing max or min IDs of key application-related columns.
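
For example, a quick first-pass row-count comparison can use planner statistics, as in the following sketch; run it on both servers and compare the output (for exact validation, run `SELECT count(*)` against each table instead):

```sql
-- Approximate per-table row counts from planner statistics (fast first pass).
-- Exact validation requires SELECT count(*) on each table.
SELECT schemaname,
       relname,
       n_live_tup AS approximate_rows
FROM pg_stat_user_tables
ORDER BY schemaname, relname;
```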
+
+> [!NOTE]
+> The size of databases is not the right metric for validation. The source server might have bloat/dead tuples, which can bump up the size on the source server. Also, the storage containers used in single and flexible servers are completely different. It is completely normal to have size differences between source and target servers. An issue in the first three steps of validation indicates a problem with the migration.
+
+- **Migration of server settings** - The server parameters, firewall rules (if applicable), tags, alerts need to be manually copied from single server to flexible server.
+
+- **Changing connection strings** - Post successful validation, applications should change their connection strings to point to the flexible server. This activity is coordinated with the application team to change all the references of connection strings pointing to the single server. In the flexible server, the user parameter in the connection string no longer needs to be in the **username@servername** format. You should just use the **user=username** format for this parameter in the connection string.
+For example,
+psql -h **mysingleserver**.postgres.database.azure.com -U **user1@mysingleserver** -d db1
+should now be of the format
+psql -h **myflexserver**.postgres.database.azure.com -U user1 -d db1
+
+**Total planned downtime** = **Time to migrate PITR** + **time to migrate Buffer** + **time for Validation** + **time to migrate server settings** + **time to switch connection strings to the flexible server.**
+
+While a migration most often runs without a hitch, it's good practice to plan for contingencies if more time is required for debugging or if a migration needs to be restarted.
### Migration
Once the pre-migration steps are complete, you're ready to carry out the migrati
- Checkpoint the source server by running the **checkpoint** command and restart the source server. This command ensures any remaining applications or connections are disconnected. Additionally, you can run **select * from pg_stat_activity;** after the restart to ensure no applications are connected to the source server.
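
As a sketch of that check (assuming PostgreSQL 10 or later for the `backend_type` column; on version 9.6, drop that filter), the following statements force the checkpoint and then list any client sessions other than your own:

```sql
-- Force a checkpoint before restarting the source server.
CHECKPOINT;

-- After the restart: list client sessions other than the current one.
-- An empty result means no applications are still connected.
SELECT pid, usename, application_name, client_addr
FROM pg_stat_activity
WHERE backend_type = 'client backend'
  AND pid <> pg_backend_pid();
```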
-Trigger the migration of your production databases using the single to flex migration tool. The migration requires close monitoring, and the monitoring user interface of the migration tool comes in handy. Check the migration status over the period of time to ensure there is progress and wait for the migration to complete.
+Trigger the migration of your production databases using the **Migrate** or **Validate and Migrate** option in the migration tool. The migration requires close monitoring, and the monitoring user interface of the migration tool comes in handy. Check the migration status over time to ensure there's progress, and wait for the migration to complete.
#### Improve migration speed - Parallel migration of tables
-In general, a powerful SKU is recommended for the target as the migration tool runs out of a container on the Flexible server. A powerful SKU enables a greater number of tables to be migrated in parallel. You can scale the SKU back to your preferred configuration after the migration. This section contains steps to improve the migration speed in case the data distribution among the tables is skewed and/or a more powerful SKU does not have a significant impact on the migration speed.
+In general, a powerful SKU is recommended for the target as the migration tool runs out of a container on the Flexible server. A powerful SKU enables a greater number of tables to be migrated in parallel. You can scale the SKU back to your preferred configuration after the migration. This section contains steps to improve the migration speed in case the data distribution among the tables is skewed and/or a more powerful SKU doesn't have a significant impact on the migration speed.
-If the data distribution on the source is highly skewed, with most of the data present in one table, the allocated compute for migration is not fully utilized and it creates a bottleneck. So, we will split large tables into smaller chunks which are then migrated in parallel. This is applicable to tables that have more than 10000000 (10m) tuples. Splitting the table into smaller chunks is possible is possible if one of the following conditions are satisfied.
+If the data distribution on the source is highly skewed, with most of the data present in one table, the allocated compute for migration isn't fully utilized and it creates a bottleneck. So, we split large tables into smaller chunks, which are then migrated in parallel. This is applicable to tables that have more than 10000000 (10m) tuples. Splitting the table into smaller chunks is possible if one of the following conditions is satisfied.
1. The table must have a column with a simple (not composite) primary key or unique index of type int or big int. > [!NOTE] > In case of approaches #2 or #3 below, the user must carefully evaluate the implications of adding a unique index column to the source schema. Only after confirmation that adding a unique index column will not affect the application should the user go ahead with the changes.
-2. If the table does not have a simple primary key or unique index of type int or big int, but has a column that meets the data type criteria, the column can be converted into a unique index using the below command. Note that this command does not require a lock on the table.
+2. If the table doesn't have a simple primary key or unique index of type int or big int, but has a column that meets the data type criteria, the column can be converted into a unique index using the below command. This command does not require a lock on the table.
```sql create unique index concurrently partkey_idx on <table name> (column name); ```
-3. If the table has neither a simple int/big int primary key or unique index nor any column that meets the data type criteria, you can add such a column using [ALTER](https://www.postgresql.org/docs/current/sql-altertable.html) and drop it post-migration. Note that running the ALTER command requires a lock on the table.
+3. If the table has neither a simple int/big int primary key or unique index nor any column that meets the data type criteria, you can add such a column using [ALTER](https://www.postgresql.org/docs/current/sql-altertable.html) and drop it post-migration. Running the ALTER command requires a lock on the table.
```sql alter table <table name> add column <column name> bigserial unique; ```
-If any of the above conditions are satisfied, the table will be migrated in multiple partitions in parallel, which should provide a marked increase in the migration speed.
+If any of the above conditions are satisfied, the table is migrated in multiple partitions in parallel, which should provide a marked increase in the migration speed.
##### How it works - The migration tool looks up the maximum and minimum integer value of the Primary key/Unique index of that table that must be split up and migrated in parallel. - If the difference between the minimum and maximum value is more than 10000000 (10m), then the table is split into multiple parts and each part is migrated separately, in parallel.
-In summary, the Single to Flexible migration tool will migrate a table in parallel threads and reduce the migration time if:
+In summary, the Single to Flexible migration tool migrates a table in parallel threads and reduces the migration time if:
1. The table has a column with a simple primary key or unique index of type int or big int. 2. The table has at least 10000000 (10m) rows so that the difference between the minimum and maximum value of the primary key is more than 10000000 (10m).
-3. The SKU used has idle cores which can be leveraged for migrating the table in parallel.
+3. The SKU used has idle cores, which can be used for migrating the table in parallel.
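
As a rough illustration of these rules (a sketch in which `id` stands in for whatever int/bigint primary key or unique index column your table has), you can check whether a table qualifies for the parallel split:

```sql
-- A table qualifies for chunked, parallel migration when the range
-- between the minimum and maximum key values exceeds 10,000,000.
SELECT min(id)           AS min_key,
       max(id)           AS max_key,
       max(id) - min(id) AS key_range
FROM <table name>;
```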
### Post migration
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
The first tab is **Setup**. Just in case you missed it, allowlist necessary exte
**Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and doesn't accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name.
-The second attribute on the **Source** tab is **Migration mode**. The migration tool offers offline mode of migration as default.
+**Migration Option** gives you the option to perform validations before triggering a migration. You can pick any of the following options:
+ - **Validate** - Checks your server and database readiness for migration to the target.
+ - **Migrate** - Skips validations and starts the migration.
+ - **Validate and Migrate** - Performs validation before triggering a migration. Migration gets triggered only if there are no validation failures.
+
+It's always a good practice to choose the **Validate** or **Validate and Migrate** option to perform pre-migration validations before running the migration. To learn more about pre-migration validation, see this [documentation](./concepts-single-to-flexible.md#pre-migration-validations).
+
+**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. Support for online migrations will be introduced later in the tool.
Select the **Next** button.
The **Source** tab prompts you to give details related to the Single Server that
:::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-source.png" alt-text="Screenshot of source database server details." lightbox="./media/concepts-single-to-flexible/flexible-migration-source.png":::
-After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. Note that you can migrate databases from a Single Server to a target Flexible Server in the same region - cross region migrations are supported only in China regions.
+After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. Note that you can migrate databases from a Single Server to a target Flexible Server in the same region. Cross region migrations are supported only in China regions.
After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are populated automatically. The server admin login name is the admin username used to create the Single Server. In the **Password** box, enter the password for that admin user. The migration tool performs the migration of single server databases as the admin user.
The **Target** tab displays metadata for the Flexible Server target, like subscr
For **Server admin login name**, the tab displays the admin username used during the creation of the Flexible Server target. Enter the corresponding password for the admin user.
->[!NOTE]
-> The migration tool overwrites existing database(s) on the target Flexible server if a database of the same name is already present on the target.
- Select the **Next** button. ### Select Database(s) for Migration tab
Under this tab, there is a list of user databases inside the Single Server. You
:::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-database.png" alt-text="Screenshot of Databases to migrate." lightbox="./media/concepts-single-to-flexible/flexible-migration-database.png":::
-### Review + create tab
+### Review
>[!NOTE] > Gentle reminder to allowlist necessary [extensions](./concepts-single-to-flexible.md#allow-list-required-extensions) before you select **Create** in case it is not yet complete.
-The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration.
+The **Review** tab summarizes all the details for creating the validation or migration. Review the details and select the **Start** button.
:::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-review.png" alt-text="Screenshot of details to review for the migration." lightbox="./media/concepts-single-to-flexible/flexible-migration-review.png"::: ## Monitor the migration
-After you select the **Create** button, a notification appears in a few seconds to say that the migration creation is successful. You are redirected automatically to the **Migration** page of Flexible Server. That page has a new entry for the recently created migration.
+After you select the **Start** button, a notification appears in a few seconds to say that the validation or migration creation is successful. You're redirected automatically to the **Migration** blade of Flexible Server. The blade has a new entry for the recently created validation or migration.
:::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-monitor.png" alt-text="Screenshot of recently created migration details." lightbox="./media/concepts-single-to-flexible/flexible-migration-monitor.png":::
-The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Databases**, and **Start time**. The migrations are in the descending order of migration start time with the most recent migration on top.
+The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Databases**, and **Start time**. The entries are displayed in descending order of start time, with the most recent entry on top.
+
+You can use the refresh button to refresh the status of the validation or migration.
+You can also select the migration name in the grid to see the associated details.
+
+As soon as the validation or migration is created, it moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes 2-3 minutes for the workflow to set up the migration infrastructure and network connections.
+
+Let's look at how to monitor the workflow for each **Migration option**.
+
+### Validate
-You can use the refresh button to refresh the status of the migrations.
-You can also select the migration name in the grid to see the details of that migration.
+After the **PerformingPreRequisiteSteps** substate is completed, the validation moves to the **Validation in Progress** substate, where checks are done on the source and target servers to assess readiness for migration.
+The validation moves to the **Succeeded** state if all validations are either in **Succeeded** or **Warning** state.
-As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes 2-3 minutes for the migration workflow to set up the migration infrastructure and network connections.
+
+The validation grid has the following columns:
+- **Finding** - Represents the validation rules that are used to check readiness for migration.
+- **Finding Status** - Represents the result for each rule, and can have any of three values:
+ - **Succeeded** - No errors were found.
+ - **Failed** - There are validation errors.
+ - **Warning** - There are validation warnings.
+- **Impacted Object** - Represents the object name for which the errors or warnings are raised.
+- **Object Type** - Can have the value **Database** for database-level validations and **Instance** for server-level validations.
+
+The validation moves to the **Validation Failed** state if there are any errors in the validation. Select the **Finding** in the grid whose status is **Failed**; a fan-out pane appears, giving the details and the corrective action you should take to avoid the error.
++
+### Migrate
After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** when the Cloning/Copying of the databases takes place. The time for migration to complete depends on the size and shape of databases that you are migrating. If the data is mostly evenly distributed across all the tables, the migration is quick. Skewed table sizes take a relatively longer time.
Once the migration moves to the **Succeeded** state, migration of schema and dat
:::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-progress-complete.png" alt-text="Screenshot of the completed migrations." lightbox="./media/concepts-single-to-flexible/flexible-migration-progress-complete.png":::
-The Migration grid gives a top-level view of the completed migration.
+### Validate and Migrate
+
+In this option, validations are performed first, before the migration starts. After the **PerformingPreRequisiteSteps** substate is completed, the workflow moves into the **Validation in Progress** substate.
+- If validation has errors, the migration moves into a **Failed** state.
+- If validation completes without any errors, the migration starts and the workflow moves into the **Migrating Data** substate.
+
+You can see the results of validation under the **Validation** tab and monitor the migration under the **Migration** tab.
+ After the migration has moved to the **Succeeded** state, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#post-migration).
Possible migration states include:
- **InProgress**: The migration infrastructure setup is underway, or the actual data migration is in progress. - **Canceled**: The migration is canceled or deleted. - **Failed**: The migration has failed.
+- **Validation Failed**: The validation has failed.
- **Succeeded**: The migration has succeeded and is complete. Possible migration substates include: - **PerformingPreRequisiteSteps**: Infrastructure set up is underway for data migration.
+- **Validation in Progress**: Validation is in progress.
- **MigratingData**: Data migration is in progress. - **CompletingMigration**: Migration is in final stages of completion. - **Completed**: Migration has successfully completed. ## Cancel the migration
-You can cancel any ongoing migrations. To cancel a migration, it must be in the **InProgress** state. You can't cancel a migration that's in the **Succeeded** or **Failed** state.
+You can cancel any ongoing validations or migrations. The workflow must be in the **InProgress** state to be canceled. You can't cancel a validation or migration that's in the **Succeeded** or **Failed** state.
-You can choose multiple ongoing migrations at once and cancel them.
-Cancelling a migration stops further migration activity on your target server. It doesn't drop or roll back any changes on your target server from the migration attempts. Be sure to drop the databases on your target server involved in a canceled migration.
+Canceling a validation stops any further validation activity, and the validation moves to a **Canceled** state.
+Canceling a migration stops further migration activity on your target server, and the migration moves to a **Canceled** state. It doesn't drop or roll back any changes on your target server. Be sure to drop the databases on your target server involved in a canceled migration.
## Migration best practices
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | {regionName}.privatelink.batch.azure.com | {regionName}.batch.azure.com | | Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | {regionName}.service.privatelink.batch.azure.com | {regionName}.service.batch.azure.com | | Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com |
-| Azure Database for MySQL (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
+| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
+| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.azure.com | mariadb.database.azure.com | | Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.net | vault.azure.net <br> vaultcore.azure.net | | Azure Key Vault (Microsoft.KeyVault/managedHSMs) | managedhsm | privatelink.managedhsm.azure.net | managedhsm.azure.net |
For Azure services, use the recommended zone names as described in the following
| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.usgovcloudapi.net | {regionName}.batch.usgovcloudapi.net | | Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.usgovcloudapi.net | {regionName}.service.batch.usgovcloudapi.net | | Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.usgovcloudapi.net | postgres.database.usgovcloudapi.net |
-| Azure Database for MySQL (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net|
+| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
+| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.usgovcloudapi.net | mysql.database.usgovcloudapi.net |
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.usgovcloudapi.net| mariadb.database.usgovcloudapi.net | | Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.usgovcloudapi.net | vault.usgovcloudapi.net <br> vaultcore.usgovcloudapi.net | | Azure Search (Microsoft.Search/searchServices) | searchService | privatelink.search.windows.us | search.windows.us |
For Azure services, use the recommended zone names as described in the following
| Azure Batch (Microsoft.Batch/batchAccounts) | batchAccount | privatelink.batch.chinacloudapi.cn | {region}.batch.chinacloudapi.cn | | Azure Batch (Microsoft.Batch/batchAccounts) | nodeManagement | privatelink.batch.chinacloudapi.cn | {region}.service.batch.chinacloudapi.cn | | Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.chinacloudapi.cn | postgres.database.chinacloudapi.cn |
-| Azure Database for MySQL (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
+| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
+| Azure Database for MySQL - Flexible Server (Microsoft.DBforMySQL/flexibleServers) | mysqlServer | privatelink.mysql.database.chinacloudapi.cn | mysql.database.chinacloudapi.cn |
| Azure Database for MariaDB (Microsoft.DBforMariaDB/servers) | mariadbServer | privatelink.mariadb.database.chinacloudapi.cn | mariadb.database.chinacloudapi.cn | | Azure Key Vault (Microsoft.KeyVault/vaults) | vault | privatelink.vaultcore.azure.cn | vaultcore.azure.cn | | Azure Event Hubs (Microsoft.EventHub/namespaces) | namespace | privatelink.servicebus.chinacloudapi.cn | servicebus.chinacloudapi.cn |
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
The following regions currently support availability zones:
||||||
| Brazil South | France Central | Qatar Central | South Africa North | Australia East |
| Canada Central | Italy North | UAE North | | Central India |
-| Central US | Germany West Central | Israel Central* | | Japan East |
+| Central US | Germany West Central | Israel Central | | Japan East |
| East US | Norway East | | | Korea Central |
| East US 2 | North Europe | | | Southeast Asia |
| South Central US | UK South | | | East Asia |
reliability Cross Region Replication Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/cross-region-replication-azure.md
The table below lists Azure regions without a region pair:
|--|-|
| Qatar | Qatar Central |
| Poland | Poland Central |
-| Israel | Israel Central (Coming soon)|
+| Israel | Israel Central|
| Italy | Italy North |
| Austria | Austria East (Coming soon) |
| Spain | Spain Central (Coming soon) |
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
Supported authentication and clients for App Service, Container Apps and Azure S
## Default environment variable names or application properties and sample code
-Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect your compute services to Azure Cosmos DB for Apache Cassandra. **Please go to beginning of the documentation to choose authentication type.**
+Reference the connection details and sample code in the following tables, according to your connection's authentication type and client type, to connect your compute services to Azure Cosmos DB for Apache Cassandra.
### Connect with System-assigned Managed Identity
service-connector How To Integrate Cosmos Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-sql.md
Previously updated : 09/19/2022 Last updated : 10/24/2023
This page shows the supported authentication types and client types for the Azur
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|--|--|--|--|
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
Use the connection details below to connect your compute services to Azure Cosmos DB for NoSQL. For each example below, replace the placeholder texts `<database-server>`, `<database-name>`, `<account-key>`, `<resource-group-name>`, `<subscription-ID>`, `<client-ID>`, `<SQL-server>`, `<client-secret>`, `<tenant-id>`, and `<access-key>` with your own information.
-### Azure App Service and Azure Container Apps
+### System-assigned managed identity
-#### Secret / Connection string
+#### Spring Boot client type
-| Default environment variable name | Description | Example value |
-|--|-|-|
-| AZURE_COSMOS_CONNECTIONSTRING | Azure Cosmos DB for NoSQL connection string | `AccountEndpoint=https://<database-server>.documents.azure.com:443/;AccountKey=<account-key>` |
+Using a system-assigned managed identity as the authentication type is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| spring.cloud.azure.cosmos.credential.managed-identity-enabled | Whether to enable managed identity | `true` |
+| spring.cloud.azure.cosmos.database | Your database | `<database-name>` |
+| spring.cloud.azure.cosmos.endpoint | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
-#### System-assigned managed identity
+#### Other client types
| Default environment variable name | Description | Example value |
|--|--|--|
Use the connection details below to connect your compute services to the Azure C
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
-#### User-assigned managed identity
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL.
+
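+As an illustrative C# sketch (assuming the `Azure.Identity` and `Microsoft.Azure.Cosmos` packages; the database and container names are placeholders):
+
+```csharp
+using System;
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+// Service Connector injects the endpoint as AZURE_COSMOS_RESOURCEENDPOINT.
+string endpoint = Environment.GetEnvironmentVariable("AZURE_COSMOS_RESOURCEENDPOINT");
+
+// DefaultAzureCredential resolves to the system-assigned managed identity at runtime.
+var client = new CosmosClient(endpoint, new DefaultAzureCredential());
+Container container = client.GetContainer("<database-name>", "<container-name>");
+```
+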
+### User-assigned managed identity
+
+#### Spring Boot client type
+
+Using a user-assigned managed identity as the authentication type is only available for Spring Cloud Azure version 4.0 or higher.
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| spring.cloud.azure.cosmos.credential.managed-identity-enabled | Whether to enable managed identity | `true` |
+| spring.cloud.azure.cosmos.database | Your database | `<database-name>` |
+| spring.cloud.azure.cosmos.endpoint | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+| spring.cloud.azure.cosmos.credential.client-id | Your client ID | `<client-ID>` |
+#### Other client types
| Default environment variable name | Description | Example value |
|--|--|--|
| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` |
-| AZURE_COSMOS_CLIENTID | Your client secret ID | `<client-ID>` |
+| AZURE_COSMOS_CLIENTID | Your client ID | `<client-ID>` |
| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL.
+
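+As a sketch for the user-assigned case (same packages as above), the only difference is that the credential must target the identity's client ID:
+
+```csharp
+using System;
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+string endpoint = Environment.GetEnvironmentVariable("AZURE_COSMOS_RESOURCEENDPOINT");
+string clientId = Environment.GetEnvironmentVariable("AZURE_COSMOS_CLIENTID");
+
+// Point the credential at the user-assigned managed identity.
+var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
+{
+    ManagedIdentityClientId = clientId
+});
+var client = new CosmosClient(endpoint, credential);
+```
+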
+### Connection string
+
+#### Spring Boot client type
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| azure.cosmos.key | The access key for your database for Spring Cloud Azure version below 4.0 | `<access-key>` |
+| azure.cosmos.database | Your database for Spring Cloud Azure version below 4.0 | `<database-name>` |
+| azure.cosmos.uri | Your database URI for Spring Cloud Azure version below 4.0 | `https://<database-server>.documents.azure.com:443/` |
+| spring.cloud.azure.cosmos.key | The access key for your database for Spring Cloud Azure version 4.0 or higher | `<access-key>` |
+| spring.cloud.azure.cosmos.database | Your database for Spring Cloud Azure version 4.0 or higher | `<database-name>` |
+| spring.cloud.azure.cosmos.endpoint | Your database URI for Spring Cloud Azure version 4.0 or higher | `https://<database-server>.documents.azure.com:443/` |
+
+#### Other client types
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| AZURE_COSMOS_CONNECTIONSTRING | Azure Cosmos DB for NoSQL connection string | `AccountEndpoint=https://<database-server>.documents.azure.com:443/;AccountKey=<account-key>` |
+
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL.
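+
+For instance, a minimal C# sketch (assuming the `Microsoft.Azure.Cosmos` package) can pass the injected connection string straight to the client:
+
+```csharp
+using System;
+using Microsoft.Azure.Cosmos;
+
+// Service Connector injects the full connection string.
+string connectionString = Environment.GetEnvironmentVariable("AZURE_COSMOS_CONNECTIONSTRING");
+var client = new CosmosClient(connectionString);
+```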
+
### Service principal
+#### Spring Boot client type
+
+| Default environment variable name | Description | Example value |
+|--|--|--|
+| spring.cloud.azure.cosmos.credential.client-id | Your client ID | `<client-ID>` |
+| spring.cloud.azure.cosmos.credential.client-secret | Your client secret | `<client-secret>` |
+| spring.cloud.azure.cosmos.profile.tenant-id | Your tenant ID | `<tenant-ID>` |
+| spring.cloud.azure.cosmos.database | Your database | `<database-name>` |
+| spring.cloud.azure.cosmos.endpoint | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+
+#### Other client types
| Default environment variable name | Description | Example value |
|--|--|--|
| AZURE_COSMOS_LISTCONNECTIONSTRINGURL | The URL to get the connection string | `https://management.azure.com/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<database-server>/listConnectionStrings?api-version=2021-04-15` |
Use the connection details below to connect your compute services to the Azure C
| AZURE_COSMOS_TENANTID | Your tenant ID | `<tenant-ID>` |
| AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
-### Azure Spring Apps
+#### Sample code
-| Default environment variable name | Description | Example value |
-|--|-|-|
-| azure.cosmos.key | The access key for your database | `<access-key>` |
-| azure.cosmos.database | Your database | `<database-name>` |
-| azure.cosmos.uri | Your database URI | `https://<database-server>.documents.azure.com:443/` |
+Refer to the steps and code below to connect to Azure Cosmos DB for NoSQL.
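+
+As a hedged C# sketch (assuming the `Azure.Identity` and `Microsoft.Azure.Cosmos` packages, and that the client ID and secret variables follow the naming pattern shown above):
+
+```csharp
+using System;
+using Azure.Identity;
+using Microsoft.Azure.Cosmos;
+
+string endpoint = Environment.GetEnvironmentVariable("AZURE_COSMOS_RESOURCEENDPOINT");
+
+// Build a credential from the injected service principal values.
+var credential = new ClientSecretCredential(
+    Environment.GetEnvironmentVariable("AZURE_COSMOS_TENANTID"),
+    Environment.GetEnvironmentVariable("AZURE_COSMOS_CLIENTID"),
+    Environment.GetEnvironmentVariable("AZURE_COSMOS_CLIENTSECRET"));
+var client = new CosmosClient(endpoint, credential);
+```
+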
## Next steps
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
Previously updated : 08/11/2022 Last updated : 10/26/2023

# Integrate Azure Web PubSub with Service Connector
-This page shows all the supported compute services, clients, and authentication types to connect services to Azure Web PubSub instances, using Service Connector. This page also shows the default environment variable names and application properties needed to create service connections. You might still be able to connect to an Azure Web PubSub instance using other programming languages, without using Service Connector. Learn more about the [service connector environment variable naming conventions](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Web PubSub to other cloud services using Service Connector. You might still be able to connect to Azure Web PubSub using other methods. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
## Supported compute services
This page shows all the supported compute services, clients, and authentication
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|-|::|::|::|::|
-| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-
-### [Azure Container Apps](#tab/container-apps)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
|-|::|::|::|::|
| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|-|::|::|::|::|
-| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
--
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
-Use the environment variable names and application properties listed below to connect an Azure service to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder texts `<name>`, `<client-id>`, `<client-secret`, `<access-key>`, and `<tenant-id>` with your own resource name, client ID, client secret, access-key, and tenant ID.
+Use the environment variable names and application properties listed below, according to your connection's authentication type and client type, to connect compute services to Web PubSub using .NET, Java, Node.js, or Python. For each example below, replace the placeholder texts `<name>`, `<client-id>`, `<client-secret>`, `<access-key>`, and `<tenant-id>` with your own resource name, client ID, client secret, access key, and tenant ID.
### System-assigned managed identity
Use the environment variable names and application properties listed below to co
| | | - |
| AZURE_WEBPUBSUB_HOST | Azure Web PubSub host | `<name>.webpubsub.azure.com` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Web PubSub using a system-assigned managed identity.
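+
+As a minimal C# sketch (assuming the `Azure.Messaging.WebPubSub` and `Azure.Identity` packages; the hub name is a placeholder):
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Messaging.WebPubSub;
+
+// Service Connector injects the host as AZURE_WEBPUBSUB_HOST.
+string host = Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_HOST");
+
+// DefaultAzureCredential resolves to the system-assigned managed identity at runtime.
+var client = new WebPubSubServiceClient(new Uri($"https://{host}"), "<hub-name>", new DefaultAzureCredential());
+client.SendToAll("Hello from Service Connector.");
+```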
+
### User-assigned managed identity

| Default environment variable name | Description | Sample value |
Use the environment variable names and application properties listed below to co
| AZURE_WEBPUBSUB_HOST | Azure Web PubSub host | `<name>.webpubsub.azure.com` |
| AZURE_WEBPUBSUB_CLIENTID | Azure Web PubSub client ID | `<client-id>` |
-### Secret/connection string
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Web PubSub using a user-assigned managed identity.
+
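+As a sketch for the user-assigned case (same packages as above), target the identity's client ID:
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Messaging.WebPubSub;
+
+string host = Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_HOST");
+string clientId = Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_CLIENTID");
+
+// Point the credential at the user-assigned managed identity.
+var credential = new DefaultAzureCredential(new DefaultAzureCredentialOptions
+{
+    ManagedIdentityClientId = clientId
+});
+var client = new WebPubSubServiceClient(new Uri($"https://{host}"), "<hub-name>", credential);
+```
+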
+### Connection string
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> | | --| -|
> | AZURE_WEBPUBSUB_CONNECTIONSTRING | Azure Web PubSub connection string | `Endpoint=https://<name>.webpubsub.azure.com;AccessKey=<access-key>;Version=1.0;` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Web PubSub using a connection string.
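+
+For instance, a minimal C# sketch (assuming the `Azure.Messaging.WebPubSub` package; the hub name is a placeholder):
+
+```csharp
+using System;
+using Azure.Messaging.WebPubSub;
+
+// Service Connector injects the full connection string.
+string connectionString = Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_CONNECTIONSTRING");
+var client = new WebPubSubServiceClient(connectionString, "<hub-name>");
+```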
+
### Service principal

| Default environment variable name | Description | Sample value |
Use the environment variable names and application properties listed below to co
| AZURE_WEBPUBSUB_CLIENTSECRET | Azure Web PubSub client secret | `<client-secret>` |
| AZURE_WEBPUBSUB_TENANTID | Azure Web PubSub tenant ID | `<tenant-id>` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Web PubSub using a service principal.
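+
+As a hedged C# sketch (assuming `Azure.Identity` and `Azure.Messaging.WebPubSub`; the client ID variable is assumed to follow the same naming pattern as the other injected values):
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Messaging.WebPubSub;
+
+string host = Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_HOST");
+
+// Build a credential from the injected service principal values.
+var credential = new ClientSecretCredential(
+    Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_TENANTID"),
+    Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_CLIENTID"),
+    Environment.GetEnvironmentVariable("AZURE_WEBPUBSUB_CLIENTSECRET"));
+var client = new WebPubSubServiceClient(new Uri($"https://{host}"), "<hub-name>", credential);
+```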
+
## Next steps

Read the article listed below to learn more about Service Connector.
spring-apps How To Bind Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-bind-cosmos.md
All the connection strings and credentials are injected as environment variables
For the default environment variable names, see the following articles:

* [Azure Cosmos DB for Table](../service-connector/how-to-integrate-cosmos-table.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
-* [Azure Cosmos DB for NoSQL](../service-connector/how-to-integrate-cosmos-sql.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
+* [Azure Cosmos DB for NoSQL](../service-connector/how-to-integrate-cosmos-sql.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
* [Azure Cosmos DB for MongoDB](../service-connector/how-to-integrate-cosmos-db.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
* [Azure Cosmos DB for Gremlin](../service-connector/how-to-integrate-cosmos-gremlin.md?tabs=spring-apps#default-environment-variable-names-or-application-properties)
* [Azure Cosmos DB for Cassandra](../service-connector/how-to-integrate-cosmos-cassandra.md?tabs=spring-apps#default-environment-variable-names-or-application-properties-and-sample-code)
spring-apps How To Configure Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-enterprise-spring-cloud-gateway.md
For other supported environment variables, see the following sources:
- [AppDynamics environment variables](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties)
- [Elastic environment variables](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html)
-#### Manage APM in VMware Spring Cloud Gateway
+#### Configure APM integration on the service instance level (recommended)
+
+To enable APM monitoring in VMware Spring Cloud Gateway, you can create an APM configuration at the service instance level and bind it to Spring Cloud Gateway. This approach lets you configure the APM once, then bind the same APM to both Spring Cloud Gateway and your apps.
+
+##### [Azure portal](#tab/Azure-portal)
+
+Use the following steps to set up APM by using the Azure portal:
+
+1. Configure APM on the service instance level with the APM name, type, and properties. For more information, see the [Manage APMs on the service instance level (recommended)](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#manage-apms-on-the-service-instance-level-recommended) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md).
+
+ :::image type="content" source="media/how-to-configure-enterprise-spring-cloud-gateway/service-level-apm-configure.png" alt-text="Screenshot of Azure portal that shows the Azure Spring Apps APM editor page." lightbox="media/how-to-configure-enterprise-spring-cloud-gateway/service-level-apm-configure.png":::
+
+1. Select **Spring Cloud Gateway** on the navigation pane, then select **APM**.
+
+1. Choose the APM name in the **APM reference names** list. The list includes all the APM names configured in step 1.
+
+1. Select **Save** to bind APM reference names to Spring Cloud Gateway. Your gateway restarts to enable APM monitoring.
+
+##### [Azure CLI](#tab/Azure-CLI)
+
+Use the following steps to set up APM in Spring Cloud Gateway by using the Azure CLI:
+
+1. Configure APM on the service instance level with the APM name, type, and properties. For more information, see the [Manage APMs on the service instance level (recommended)](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#manage-apms-on-the-service-instance-level-recommended) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md).
+
+1. Use the following command to update gateway with APM reference names:
+
+ ```azurecli
+ az spring gateway update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --apms <APM-reference-name>
+ ```
+
+ The value for `--apms` is a space-separated list of APM reference names, which you created in step 1.
+
+ > [!NOTE]
+ > Spring Cloud Gateway is deprecating APM types. Use APM reference names to configure APM in a gateway.
+++
+#### Manage APM in VMware Spring Cloud Gateway (deprecated)
You can use the Azure portal or the Azure CLI to set up APM in VMware Spring Cloud Gateway. You can also specify the types of APM Java agents to use and the corresponding APM environment variables that they support.
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
description: Use Azure Storage lifecycle management policies to create automated
Previously updated : 08/30/2023 Last updated : 10/26/2023
Lifecycle management supports tiering and deletion of current versions, previous
<sup>1</sup> The `enableAutoTierToHotFromCool` action is available only when used with the `daysAfterLastAccessTimeGreaterThan` run condition. That condition is described in the next table.
-<sup>2</sup> When applied to an account with a hierarchical namespace enabled, a `delete` action removes empty directories. If the directory isn't empty, then the `delete` action removes objects that meet the policy conditions within the first 24-hour cycle. If that action results in an empty directory that also meets the policy conditions, then that directory will be removed within the next 24-hour cycle, and so on.
+<sup>2</sup> When applied to an account with a hierarchical namespace enabled, a delete action removes empty directories. If the directory isn't empty, then the delete action removes objects that meet the policy conditions within the first lifecycle policy execution cycle. If that action results in an empty directory that also meets the policy conditions, then that directory will be removed within the next execution cycle, and so on.
<sup>3</sup> A lifecycle management policy will not delete the current version of a blob until any previous versions or snapshots associated with that blob have been deleted. If blobs in your storage account have previous versions or snapshots, then you must include previous versions and snapshots when you specify a delete action as part of the policy.
The run conditions are based on age. Current versions use the last modified time
## Lifecycle policy runs
-The platform runs the lifecycle policy once a day. Once you configure or edit a policy, it can take up to 24 hours for changes to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run. Therefore, the policy actions may take up to 48 hours to complete.
+The platform runs the lifecycle policy once a day. When you configure or edit a lifecycle policy, it can take up to 24 hours for changes to go into effect and for the first execution to start. The time taken for policy actions to complete depends on the number of blobs evaluated and operated on.
If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes and you're billed for any actions that are required to complete the run. See [Regional availability and pricing](#regional-availability-and-pricing).
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Previously updated : 03/22/2023 Last updated : 10/26/2023
Two access keys are assigned so that you can rotate your keys. Having two keys e
> [!WARNING] > Regenerating your access keys can affect any applications or Azure services that are dependent on the storage account key. Any clients that use the account key to access the storage account must be updated to use the new key, including media services, cloud, desktop and mobile applications, and graphical user interface applications for Azure Storage, such as [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
+>
+> Additionally, rotating or regenerating access keys revokes shared access signatures (SAS) that are generated based on that key. After access key rotation, you must regenerate **account** and **service** SAS tokens to avoid disruptions to applications. Note that **user delegation** SAS tokens are secured with Microsoft Entra credentials and aren't affected by key rotation.
If you plan to manually rotate access keys, Microsoft recommends that you set a key expiration policy. For more information, see [Create a key expiration policy](#create-a-key-expiration-policy).
storage Storage Choose Data Transfer Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-choose-data-transfer-solution.md
Last updated 09/25/2020
+<!--
+10/26/23: 100 (869/0)
+Prev score: 87 (850/8)
+-->
+ # Choose an Azure solution for data transfer This article provides an overview of some of the common Azure data transfer solutions. The article also links out to recommended options depending on the network bandwidth in your environment and the size of the data you intend to transfer.
Data transfer can be offline or over the network connection. Choose your solutio
The data movement can be of the following types:

-- **Offline transfer using shippable devices** - Use physical shippable devices when you want to do offline one-time bulk data transfer. Microsoft sends you a disk, or a secure specialized device. Alternatively, you can purchase and ship your own disks. You copy data to the device and then ship it to Azure where the data is uploaded. The available options for this case are Data Box Disk, Data Box, Data Box Heavy, and Import/Export (use your own disks).
+- **Offline transfer using shippable devices** - Use physical shippable devices when you want to do offline one-time bulk data transfer. This use case involves copying data to either a disk or specialized device, and then shipping it to a secure Microsoft facility where the data is uploaded. You can purchase and ship your own disks, or you can order a Microsoft-supplied disk or device. Microsoft-supplied solutions for offline transfer include Azure [Data Box](../../databox/data-box-overview.md), [Data Box Disk](../../databox/data-box-disk-overview.md), and [Data Box Heavy](../../databox/data-box-heavy-overview.md).
-- **Network Transfer** - You transfer your data to Azure over your network connection. This can be done in many ways.
+- **Network Transfer** - You transfer your data to Azure over your network connection. This transfer can be done in many ways.
- - **Graphical interface** - If you occasionally transfer just a few files and do not need to automate the data transfer, you can choose a graphical interface tool such as Azure Storage Explorer or a web-based exploration tool in Azure portal.
- - **Scripted or programmatic transfer** - You can use optimized software tools that we provide or call our REST APIs/SDKs directly. The available scriptable tools are AzCopy, Azure PowerShell, and Azure CLI. For programmatic interface, use one of the SDKs for .NET, Java, Python, Node/JS, C++, Go, PHP or Ruby.
+ - **Hybrid migration service** - [Azure Storage Mover](../../storage-mover/service-overview.md) is a new, fully managed migration service that enables you to migrate your files and folders to Azure Storage while minimizing downtime for your workload. Azure Storage Mover is a hybrid cloud service consisting of a cloud service component and an on-premises migration agent virtual machine (VM). Storage Mover is used for migration scenarios such as *lift-and-shift*, and for cloud migrations that you repeat occasionally.
- **On-premises devices** - We supply you a physical or virtual device that resides in your datacenter and optimizes data transfer over the network. These devices also provide a local cache of frequently used files. The physical device is the Azure Stack Edge and the virtual device is the Data Box Gateway. Both run permanently in your premises and connect to Azure over the network.
+ - **Graphical interface** - If you occasionally transfer just a few files and don't need to automate the data transfer, you can choose a graphical interface tool such as Azure Storage Explorer or a web-based exploration tool in Azure portal.
+ - **Scripted or programmatic transfer** - You can use optimized software tools that we provide or call our REST APIs/SDKs directly. The available scriptable tools are AzCopy, Azure PowerShell, and Azure CLI. For programmatic interface, use one of the SDKs for .NET, Java, Python, Node/JS, C++, Go, PHP or Ruby.
- **Managed data pipeline** - You can set up a cloud pipeline to regularly transfer files between several Azure services, on-premises, or a combination of the two. Use Azure Data Factory to set up and manage data pipelines, and move and transform data for analysis.

The following visual illustrates the guidelines to choose the various Azure data transfer tools depending upon the network bandwidth available for transfer, data size intended for transfer, and frequency of the transfer.
The following visual illustrates the guidelines to choose the various Azure data
Answer the following questions to help select a data transfer solution:

-- Is your available network bandwidth limited or non-existent, and you want to transfer large datasets?
+- Is your available network bandwidth limited or nonexistent, and you want to transfer large datasets?
If yes, see: [Scenario 1: Transfer large datasets with no or low network bandwidth](storage-solution-large-dataset-low-network.md).

- Do you want to transfer large datasets over the network and you have a moderate to high network bandwidth?
Answer the following questions to help select a data transfer solution:
## Data transfer feature in Azure portal
-You can also go to your Azure Storage account in Azure portal and select the **Data transfer** feature. Provide the network bandwidth in your environment, the size of the data you want to transfer, and the frequency of data transfer. You will see the optimum data transfer solutions corresponding to the information that you have provided.
+You can also provide information specific to your scenario and review a list of optimal data transfer solutions. To view the list, navigate to your Azure Storage account within the Azure portal and select the **Data transfer** feature. After providing the network bandwidth in your environment, the size of the data you want to transfer, and the frequency of data transfer, you're shown a list of solutions corresponding to the information that you have provided.
## Next steps
You can also go to your Azure Storage account in Azure portal and select the **D
- [Read an overview of AzCopy](./storage-use-azcopy-v10.md).
- [Quickstart: Upload, download, and list blobs with PowerShell](../blobs/storage-quickstart-blobs-powershell.md)
- [Quickstart: Create, download, and list blobs with Azure CLI](../blobs/storage-quickstart-blobs-cli.md)
+- Learn about:
+ - [Azure Storage Mover](../../storage-mover/service-overview.md), a hybrid migration service.
+ - [Cloud migration using Azure Storage Mover](../../storage-mover/migration-basics.md).
- Learn about:
  - [Azure Data Box, Azure Data Box Disk, and Azure Data Box Heavy for offline transfers](../../databox/index.yml).
  - [Azure Data Box Gateway and Azure Stack Edge for online transfers](../../databox-online/index.yml).
-- [Learn what is Azure Data Factory](../../data-factory/copy-activity-overview.md).
+
+- [Learn about Azure Data Factory](../../data-factory/copy-activity-overview.md).
- Use the REST APIs to transfer data
  - [In .NET](/dotnet/api/overview/azure/storage)
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-performance-checklist.md
# Performance and scalability checklist for Queue Storage
-Microsoft has developed a number of proven practices for developing high-performance applications with Queue Storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you are designing your application and throughout the process.
+Microsoft has developed many proven practices for developing high-performance applications with Queue Storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you're designing your application and throughout the process.
Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/queues/toc.json) and [Scalability and performance targets for Queue Storage](scalability-targets.md).
This article organizes proven practices for performance into a checklist you can
| &nbsp; | Networking | [Do client-side devices have a high quality network link?](#link-quality) |
| &nbsp; | Networking | [Is the client application in the same region as the storage account?](#location) |
| &nbsp; | Direct client access | [Are you using shared access signatures (SAS) and cross-origin resource sharing (CORS) to enable direct access to Azure Storage?](#sas-and-cors) |
-| &nbsp; | .NET configuration | [Are you using .NET Core 2.1 or later for optimum performance?](#use-net-core) |
-| &nbsp; | .NET configuration | [Have you configured your client to use a sufficient number of concurrent connections?](#increase-default-connection-limit) |
-| &nbsp; | .NET configuration | [For .NET applications, have you configured .NET to use a sufficient number of threads?](#increase-the-minimum-number-of-threads) |
+| &nbsp; | .NET configuration | [For .NET Framework applications, have you configured your client to use a sufficient number of concurrent connections?](#increase-default-connection-limit) |
+| &nbsp; | .NET configuration | [For .NET Framework applications, have you configured .NET to use a sufficient number of threads?](#increase-minimum-number-of-threads) |
| &nbsp; | Parallelism | [Have you ensured that parallelism is bounded appropriately so that you don't overload your client's capabilities or approach the scalability targets?](#unbounded-parallelism) |
| &nbsp; | Tools | [Are you using the latest versions of Microsoft-provided client libraries and tools?](#client-libraries-and-tools) |
| &nbsp; | Retries | [Are you using a retry policy with an exponential backoff for throttling errors and timeouts?](#timeout-and-server-busy-errors) |
This article organizes proven practices for performance into a checklist you can
## Scalability targets
-If your application approaches or exceeds any of the scalability targets, it may encounter increased transaction latencies or throttling. When Azure Storage throttles your application, the service begins to return 503 (`Server Busy`) or 500 (`Operation Timeout`) error codes. Avoiding these errors by staying within the limits of the scalability targets is an important part of enhancing your application's performance.
+If your application approaches or exceeds any of the scalability targets, it might encounter increased transaction latencies or throttling. When Azure Storage throttles your application, the service begins to return 503 (`Server Busy`) or 500 (`Operation Timeout`) error codes. Avoiding these errors by staying within the limits of the scalability targets is an important part of enhancing your application's performance.
For more information about scalability targets for Queue Storage, see [Azure Storage scalability and performance targets](./scalability-targets.md#scale-targets-for-queue-storage).
If your application is approaching the scalability targets for a single storage
- If the scalability targets for queues are insufficient for your application, then use multiple queues and distribute messages across them.
- Reconsider the workload that causes your application to approach or exceed the scalability target. Can you design it differently to use less bandwidth or capacity, or fewer transactions?
- If your application must exceed one of the scalability targets, then create multiple storage accounts and partition your application data across those multiple storage accounts. If you use this pattern, then be sure to design your application so that you can add more storage accounts in the future for load balancing. Storage accounts themselves have no cost other than your usage in terms of data stored, transactions made, or data transferred.
-- If your application is approaching the bandwidth targets, consider compressing data on the client side to reduce the bandwidth required to send the data to Azure Storage. While compressing data may save bandwidth and improve network performance, it can also have negative effects on performance. Evaluate the performance impact of the additional processing requirements for data compression and decompression on the client side. Keep in mind that storing compressed data can make troubleshooting more difficult because it may be more challenging to view the data using standard tools.
-- If your application is approaching the scalability targets, then make sure that you are using an exponential backoff for retries. It's best to try to avoid reaching the scalability targets by implementing the recommendations described in this article. However, using an exponential backoff for retries will prevent your application from retrying rapidly, which could make throttling worse. For more information, see the [Timeout and Server Busy errors](#timeout-and-server-busy-errors) section.
+- If your application is approaching the bandwidth targets, consider compressing data on the client side to reduce the bandwidth required to send the data to Azure Storage. While compressing data might save bandwidth and improve network performance, it can also have negative effects on performance. Evaluate the performance impact of the additional processing requirements for data compression and decompression on the client side. Keep in mind that storing compressed data can make troubleshooting more difficult because it might be more challenging to view the data using standard tools.
+- If your application is approaching the scalability targets, then make sure that you're using an exponential backoff for retries. It's best to try to avoid reaching the scalability targets by implementing the recommendations described in this article. However, using an exponential backoff for retries prevents your application from retrying rapidly, which could make throttling worse. For more information, see the [Timeout and Server Busy errors](#timeout-and-server-busy-errors) section.
## Networking
-The physical network constraints of the application may have a significant impact on performance. The following sections describe some of limitations users may encounter.
+The physical network constraints of the application might have a significant impact on performance. The following sections describe some of the limitations users might encounter.
### Client network capability
Bandwidth and the quality of the network link play important roles in applicatio
#### Throughput
-For bandwidth, the problem is often the capabilities of the client. Larger Azure instances have NICs with greater capacity, so you should consider using a larger instance or more VMs if you need higher network limits from a single machine. If you are accessing Azure Storage from an on-premises application, then the same rule applies: understand the network capabilities of the client device and the network connectivity to the Azure Storage location and either improve them as needed or design your application to work within their capabilities.
+For bandwidth, the problem is often the capabilities of the client. Larger Azure instances have NICs with greater capacity, so you should consider using a larger instance or more VMs if you need higher network limits from a single machine. If you're accessing Azure Storage from an on-premises application, then the same rule applies: understand the network capabilities of the client device and the network connectivity to the Azure Storage location and either improve them as needed or design your application to work within their capabilities.
#### Link quality
-As with any network usage, keep in mind that network conditions resulting in errors and packet loss will slow effective throughput. Using Wireshark or Network Monitor may help in diagnosing this issue.
+As with any network usage, keep in mind that network conditions resulting in errors and packet loss slow effective throughput. Using Wireshark or Network Monitor might help in diagnosing this issue.
### Location

In any distributed environment, placing the client near the server delivers the best performance. For accessing Azure Storage with the lowest latency, the best location for your client is within the same Azure region. For example, if you have an Azure web app that uses Azure Storage, then locate them both within a single region, such as West US or Southeast Asia. Co-locating resources reduces the latency and the cost, as bandwidth usage within a single region is free.
-If client applications will access Azure Storage but are not hosted within Azure, such as mobile device apps or on-premises enterprise services, then locating the storage account in a region near to those clients may reduce latency. If your clients are broadly distributed (for example, some in North America, and some in Europe), then consider using one storage account per region. This approach is easier to implement if the data the application stores is specific to individual users, and does not require replicating data between storage accounts.
+If client applications access Azure Storage but aren't hosted within Azure, such as mobile device apps or on-premises enterprise services, then locating the storage account in a region near to those clients might reduce latency. If your clients are broadly distributed (for example, some in North America, and some in Europe), then consider using one storage account per region. This approach is easier to implement if the data the application stores is specific to individual users, and doesn't require replicating data between storage accounts.
## SAS and CORS
Suppose that you need to authorize code such as JavaScript that is running in a
You can avoid using a service application as a proxy for Azure Storage by using shared access signatures (SAS). Using SAS, you can enable your user's device to make requests directly to Azure Storage by using a limited access token. For example, if a user wants to upload a photo to your application, then your service application can generate a SAS and send it to the user's device. The SAS token can grant permission to write to an Azure Storage resource for a specified interval of time, after which the SAS token expires. For more information about SAS, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md).
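
As a sketch of how a service application might mint such a token with the `Azure.Storage.Queues` client library (the queue, account, and key values are placeholders):

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Sas;

// Build a short-lived SAS that only permits adding messages to one queue.
var sasBuilder = new QueueSasBuilder
{
    QueueName = "<queue-name>",
    ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
};
sasBuilder.SetPermissions(QueueSasPermissions.Add);

var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
string sasToken = sasBuilder.ToSasQueryParameters(credential).ToString();
```
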
-Typically, a web browser will not allow JavaScript in a page that is hosted by a website on one domain to perform certain operations, such as write operations, to another domain. Known as the same-origin policy, this policy prevents a malicious script on one page from obtaining access to data on another web page. However, the same-origin policy can be a limitation when building a solution in the cloud. Cross-origin resource sharing (CORS) is a browser feature that enables the target domain to communicate to the browser that it trusts requests originating in the source domain.
+Typically, a web browser won't allow JavaScript in a page that is hosted by a website on one domain to perform certain operations, such as write operations, to another domain. Known as the same-origin policy, this policy prevents a malicious script on one page from obtaining access to data on another web page. However, the same-origin policy can be a limitation when building a solution in the cloud. Cross-origin resource sharing (CORS) is a browser feature that enables the target domain to communicate to the browser that it trusts requests originating in the source domain.
For example, suppose a web application running in Azure makes a request for a resource to an Azure Storage account. The web application is the source domain, and the storage account is the target domain. You can configure CORS for any of the Azure Storage services to communicate to the web browser that requests from the source domain are trusted by Azure Storage. For more information about CORS, see [Cross-origin resource sharing (CORS) support for Azure Storage](/rest/api/storageservices/Cross-Origin-Resource-Sharing--CORS--Support-for-the-Azure-Storage-Services).
Both SAS and CORS can help you avoid unnecessary load on your web application.
## .NET configuration
-If using the .NET Framework, this section lists several quick configuration settings that you can use to make significant performance improvements. If using other languages, check to see whether similar concepts apply in your chosen language.
-
-### Use .NET Core
-
-Develop your Azure Storage applications with .NET Core 2.1 or later to take advantage of performance enhancements. Using .NET Core 3.x is recommended when possible.
-
-For more information on performance improvements in .NET Core, see the following blog posts:
-
-- [Performance improvements in .NET Core 3.0](https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-core-3-0/)
-- [Performance improvements in .NET Core 2.1](https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-core-2-1/)
+For projects using .NET Framework, this section lists some quick configuration settings that you can use to make significant performance improvements. If you're using a language other than .NET, check to see if similar concepts apply in your chosen language.
### Increase default connection limit
-In .NET, the following code increases the default connection limit (which is usually 2 in a client environment or 10 in a server environment) to 100. Typically, you should set the value to approximately the number of threads used by your application.
+> [!NOTE]
+> This section applies to projects using .NET Framework, as connection pooling is controlled by the ServicePointManager class. .NET Core introduced a significant change around connection pool management, where connection pooling happens at the HttpClient level and the pool size is not limited by default. This means that HTTP connections are automatically scaled to satisfy your workload. Using the latest version of .NET is recommended, when possible, to take advantage of performance enhancements.
+
+For projects using .NET Framework, you can use the following code to increase the default connection limit (which is usually 2 in a client environment or 10 in a server environment) to 100. Typically, you should set the value to approximately the number of threads used by your application. Set the connection limit before opening any connections.
```csharp
ServicePointManager.DefaultConnectionLimit = 100; // Or more
```
-Set the connection limit before opening any connections.
-
-For other programming languages, see that language's documentation to determine how to set the connection limit.
+To learn more about connection pool limits in .NET Framework, see [.NET Framework Connection Pool Limits and the new Azure SDK for .NET](https://devblogs.microsoft.com/azure-sdk/net-framework-connection-pool-limits/).
-For more information, see the blog post [Web
+For other programming languages, see the documentation to determine how to set the connection limit.
-### Increase the minimum number of threads
+### Increase minimum number of threads
-If you are using synchronous calls together with asynchronous tasks, you may want to increase the number of threads in the thread pool:
+If you're using synchronous calls together with asynchronous tasks, you might want to increase the number of threads in the thread pool:
```csharp
ThreadPool.SetMinThreads(100, 100); // Determine the right number for your application
```
-For more information, see the [`ThreadPool.SetMinThreads`](/dotnet/api/system.threading.threadpool.setminthreads) method.
+For more information, see the [ThreadPool.SetMinThreads](/dotnet/api/system.threading.threadpool.setminthreads) method.
## Unbounded parallelism
-While parallelism can be great for performance, be careful about using unbounded parallelism, meaning that there is no limit enforced on the number of threads or parallel requests. Be sure to limit parallel requests to upload or download data, to access multiple partitions in the same storage account, or to access multiple items in the same partition. If parallelism is unbounded, your application can exceed the client device's capabilities or the storage account's scalability targets, resulting in longer latencies and throttling.
+While parallelism can be great for performance, be careful about using unbounded parallelism, meaning that there's no limit enforced on the number of threads or parallel requests. Be sure to limit parallel requests to upload or download data, to access multiple partitions in the same storage account, or to access multiple items in the same partition. If parallelism is unbounded, your application can exceed the client device's capabilities or the storage account's scalability targets, resulting in longer latencies and throttling.
## Client libraries and tools
-For best performance, always use the latest client libraries and tools provided by Microsoft. Azure Storage client libraries are available for a variety of languages. Azure Storage also supports PowerShell and Azure CLI. Microsoft actively develops these client libraries and tools with performance in mind, keeps them up-to-date with the latest service versions, and ensures that they handle many of the proven performance practices internally. For more information, see the [Azure Storage reference documentation](./reference.md).
+For best performance, always use the latest client libraries and tools provided by Microsoft. Azure Storage client libraries are available for various languages. Azure Storage also supports PowerShell and Azure CLI. Microsoft actively develops these client libraries and tools with performance in mind, keeps them up-to-date with the latest service versions, and ensures that they handle many of the proven performance practices internally. For more information, see the [Azure Storage reference documentation](./reference.md).
## Handle service errors
-Azure Storage returns an error when the service cannot process a request. Understanding the errors that may be returned by Azure Storage in a given scenario is helpful for optimizing performance.
+Azure Storage returns an error when the service can't process a request. Understanding the errors that might be returned by Azure Storage in a given scenario is helpful for optimizing performance.
### Timeout and Server Busy errors
-Azure Storage may throttle your application if it approaches the scalability limits. In some cases, Azure Storage may be unable to handle a request due to some transient condition. In both cases, the service may return a 503 (`Server Busy`) or 500 (`Timeout`) error. These errors can also occur if the service is rebalancing data partitions to allow for higher throughput. The client application should typically retry the operation that causes one of these errors. However, if Azure Storage is throttling your application because it is exceeding scalability targets, or even if the service was unable to serve the request for some other reason, aggressive retries may make the problem worse. Using an exponential back off retry policy is recommended, and the client libraries default to this behavior. For example, your application may retry after 2 seconds, then 4 seconds, then 10 seconds, then 30 seconds, and then give up completely. In this way, your application significantly reduces its load on the service, rather than exacerbating behavior that could lead to throttling.
+Azure Storage might throttle your application if it approaches the scalability limits. In some cases, Azure Storage might be unable to handle a request due to some transient condition. In both cases, the service might return a 503 (`Server Busy`) or 500 (`Timeout`) error. These errors can also occur if the service is rebalancing data partitions to allow for higher throughput. The client application should typically retry the operation that causes one of these errors. However, if Azure Storage is throttling your application because it's exceeding scalability targets, or even if the service was unable to serve the request for some other reason, aggressive retries might make the problem worse. Using an exponential backoff retry policy is recommended, and the client libraries default to this behavior. For example, your application might retry after 2 seconds, then 4 seconds, then 10 seconds, then 30 seconds, and then give up completely. In this way, your application significantly reduces its load on the service, rather than exacerbating behavior that could lead to throttling.
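+
+As an illustrative sketch, with the `Azure.Storage.Queues` client library this behavior can be tuned through the client options (the values shown are examples, not recommendations):
+
+```csharp
+using System;
+using Azure.Core;
+using Azure.Storage.Queues;
+
+var options = new QueueClientOptions();
+options.Retry.Mode = RetryMode.Exponential;        // back off exponentially between attempts
+options.Retry.MaxRetries = 5;
+options.Retry.Delay = TimeSpan.FromSeconds(2);     // initial delay
+options.Retry.MaxDelay = TimeSpan.FromSeconds(30); // ceiling for the delay between attempts
+
+var queueClient = new QueueClient("<connection-string>", "<queue-name>", options);
+```
+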
-Connectivity errors can be retried immediately, because they are not the result of throttling and are expected to be transient.
+Connectivity errors can be retried immediately, because they aren't the result of throttling and are expected to be transient.
### Non-retryable errors
-The client libraries handle retries with an awareness of which errors can be retried and which cannot. However, if you are calling the Azure Storage REST API directly, there are some errors that you should not retry. For example, a 400 (`Bad Request`) error indicates that the client application sent a request that could not be processed because it was not in the expected form. Resending this request results the same response every time, so there is no point in retrying it. If you are calling the Azure Storage REST API directly, be aware of potential errors and whether they should be retried.
+The client libraries handle retries with an awareness of which errors can be retried and which can't. However, if you're calling the Azure Storage REST API directly, there are some errors that you shouldn't retry. For example, a 400 (`Bad Request`) error indicates that the client application sent a request that couldn't be processed because it wasn't in the expected form. Resending this request results in the same response every time, so there's no point in retrying it. If you're calling the Azure Storage REST API directly, be aware of potential errors and whether they should be retried.
For more information on Azure Storage error codes, see [Status and error codes](/rest/api/storageservices/status-and-error-codes2).

## Disable Nagle's algorithm
-Nagle's algorithm is widely implemented across TCP/IP networks as a means to improve network performance. However, it is not optimal in all circumstances (such as highly interactive environments). Nagle's algorithm has a negative impact on the performance of requests to Azure Table Storage, and you should disable it if possible.
+Nagle's algorithm is widely implemented across TCP/IP networks as a means to improve network performance. However, it isn't optimal in all circumstances (such as highly interactive environments). Nagle's algorithm has a negative impact on the performance of requests to Azure Table Storage, and you should disable it if possible.
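If your application runs on .NET Framework, one way to disable the algorithm is through the `ServicePointManager` class, which manages outgoing connections there (on modern .NET, `ServicePointManager` settings don't affect `HttpClient`-based clients). The following is a minimal sketch under that assumption; the storage account name is a placeholder.

```csharp
using System;
using System.Net;

// A sketch for .NET Framework, where ServicePointManager controls outgoing
// connections; the storage account name is a placeholder.
Uri tableServiceUri = new Uri("https://<storage-account>.table.core.windows.net");
ServicePoint tableServicePoint = ServicePointManager.FindServicePoint(tableServiceUri);

// Disable Nagle's algorithm for this endpoint before opening any connections.
tableServicePoint.UseNagleAlgorithm = false;
```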
## Message size
You can retrieve up to 32 messages from a queue in a single operation. Batch ret
## Queue polling interval
-Most applications poll for messages from a queue, which can be one of the largest sources of transactions for that application. Select your polling interval wisely: polling too frequently could cause your application to approach the scalability targets for the queue. However, at 200,000 transactions for $0.01 (at the time of writing), a single processor polling once every second for a month would cost less than 15 cents so cost is not typically a factor that affects your choice of polling interval.
+Most applications poll for messages from a queue, which can be one of the largest sources of transactions for that application. Select your polling interval wisely: polling too frequently could cause your application to approach the scalability targets for the queue. However, at 200,000 transactions for $0.01 (at the time of writing), a single processor polling once every second for a month would cost less than 15 cents, so cost isn't typically a factor that affects your choice of polling interval.
For up-to-date cost information, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/).
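To illustrate both the batch retrieval and polling guidance above, here's a minimal sketch using the `QueueClient` type from the Azure.Storage.Queues library; the connection string, queue name, and one-second interval are placeholder choices, not recommendations.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

// A sketch of a polling loop; the connection string, queue name, and
// one-second interval are placeholders.
var queueClient = new QueueClient("<connection-string>", "<queue-name>");

while (true)
{
    // Retrieve up to 32 messages in one operation to reduce the transaction count.
    QueueMessage[] messages = await queueClient.ReceiveMessagesAsync(maxMessages: 32);

    foreach (QueueMessage message in messages)
    {
        // Process the message, then delete it so it isn't delivered again.
        await queueClient.DeleteMessageAsync(message.MessageId, message.PopReceipt);
    }

    // An empty result means the queue is drained; wait before polling again.
    if (messages.Length == 0)
    {
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}
```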
storage Storage Performance Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/storage-performance-checklist.md
# Performance and scalability checklist for Table storage
-Microsoft has developed a number of proven practices for developing high-performance applications with Table storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you are designing your application and throughout the process.
+Microsoft has developed many proven practices for developing high-performance applications with Table storage. This checklist identifies key practices that developers can follow to optimize performance. Keep these practices in mind while you're designing your application and throughout the process.
Azure Storage has scalability and performance targets for capacity, transaction rate, and bandwidth. For more information about Azure Storage scalability targets, see [Scalability and performance targets for standard storage accounts](../common/scalability-targets-standard-account.md?toc=/azure/storage/tables/toc.json) and [Scalability and performance targets for Table storage](scalability-targets.md).
This article organizes proven practices for performance into a checklist you can
| &nbsp; |Networking |[Is the client application in the same region as the storage account?](#location) |
| &nbsp; |Direct Client Access |[Are you using shared access signatures (SAS) and cross-origin resource sharing (CORS) to enable direct access to Azure Storage?](#sas-and-cors) |
| &nbsp; |Batching |[Is your application batching updates by using entity group transactions?](#batch-transactions) |
-| &nbsp; |.NET configuration |[Are you using .NET Core 2.1 or later for optimum performance?](#use-net-core) |
-| &nbsp; |.NET configuration |[Have you configured your client to use a sufficient number of concurrent connections?](#increase-default-connection-limit) |
-| &nbsp; |.NET configuration |[For .NET applications, have you configured .NET to use a sufficient number of threads?](#increase-minimum-number-of-threads) |
+| &nbsp; |.NET configuration |[For .NET Framework applications, have you configured your client to use a sufficient number of concurrent connections?](#increase-default-connection-limit) |
+| &nbsp; |.NET configuration |[For .NET Framework applications, have you configured .NET to use a sufficient number of threads?](#increase-minimum-number-of-threads) |
| &nbsp; |Parallelism |[Have you ensured that parallelism is bounded appropriately so that you don't overload your client's capabilities or approach the scalability targets?](#unbounded-parallelism) |
| &nbsp; |Tools |[Are you using the latest versions of Microsoft-provided client libraries and tools?](#client-libraries-and-tools) |
| &nbsp; |Retries |[Are you using a retry policy with an exponential backoff for throttling errors and timeouts?](#timeout-and-server-busy-errors) |
This article organizes proven practices for performance into a checklist you can
| &nbsp; |Hot partitions |[Are your inserts/updates spread across many partitions?](#high-traffic-data) |
| &nbsp; |Query scope |[Have you designed your schema to allow for point queries to be used in most cases, and table queries to be used sparingly?](#query-scope) |
| &nbsp; |Query density |[Do your queries typically only scan and return rows that your application will use?](#query-density) |
-| &nbsp; |Limiting returned data |[Are you using filtering to avoid returning entities that are not needed?](#limiting-the-amount-of-data-returned) |
-| &nbsp; |Limiting returned data |[Are you using projection to avoid returning properties that are not needed?](#limiting-the-amount-of-data-returned) |
+| &nbsp; |Limiting returned data |[Are you using filtering to avoid returning entities that aren't needed?](#limiting-the-amount-of-data-returned) |
+| &nbsp; |Limiting returned data |[Are you using projection to avoid returning properties that aren't needed?](#limiting-the-amount-of-data-returned) |
| &nbsp; |Denormalization |[Have you denormalized your data such that you avoid inefficient queries or multiple read requests when trying to get data?](#denormalization) |
| &nbsp; |Insert, update, and delete |[Are you batching requests that need to be transactional or can be done at the same time to reduce round-trips?](#batching) |
| &nbsp; |Insert, update, and delete |[Are you avoiding retrieving an entity just to determine whether to call insert or update?](#upsert) |
-| &nbsp; |Insert, update, and delete |[Have you considered storing series of data that will frequently be retrieved together in a single entity as properties instead of multiple entities?](#storing-data-series-in-a-single-entity) |
-| &nbsp; |Insert, update, and delete |[For entities that will always be retrieved together and can be written in batches (for example, time series data), have you considered using blobs instead of tables?](#storing-structured-data-in-blobs) |
+| &nbsp; |Insert, update, and delete |[Have you considered storing series of data that are frequently retrieved together in a single entity as properties instead of multiple entities?](#storing-data-series-in-a-single-entity) |
+| &nbsp; |Insert, update, and delete |[For entities that are retrieved together and can be written in batches (for example, time series data), have you considered using blobs instead of tables?](#storing-structured-data-in-blobs) |
## Scalability targets
-If your application approaches or exceeds any of the scalability targets, it may encounter increased transaction latencies or throttling. When Azure Storage throttles your application, the service begins to return 503 (Server busy) or 500 (Operation timeout) error codes. Avoiding these errors by staying within the limits of the scalability targets is an important part of enhancing your application's performance.
+If your application approaches or exceeds any of the scalability targets, it might encounter increased transaction latencies or throttling. When Azure Storage throttles your application, the service begins to return 503 (Server busy) or 500 (Operation timeout) error codes. Avoiding these errors by staying within the limits of the scalability targets is an important part of enhancing your application's performance.
For more information about scalability targets for the Table service, see [Scalability and performance targets for Table storage](scalability-targets.md).
If your application is approaching the scalability targets for a single storage
- Reconsider the workload that causes your application to approach or exceed the scalability target. Can you design it differently to use less bandwidth or capacity, or fewer transactions?
- If your application must exceed one of the scalability targets, then create multiple storage accounts and partition your application data across those multiple storage accounts. If you use this pattern, then be sure to design your application so that you can add more storage accounts in the future for load balancing. Storage accounts themselves have no cost other than your usage in terms of data stored, transactions made, or data transferred.
- If your application is approaching the bandwidth targets, consider compressing data on the client side to reduce the bandwidth required to send the data to Azure Storage.
- While compressing data may save bandwidth and improve network performance, it can also have negative effects on performance. Evaluate the performance impact of the additional processing requirements for data compression and decompression on the client side. Keep in mind that storing compressed data can make troubleshooting more difficult because it may be more challenging to view the data using standard tools.
+ While compressing data might save bandwidth and improve network performance, it can also have negative effects on performance. Evaluate the performance impact of the additional processing requirements for data compression and decompression on the client side. Keep in mind that storing compressed data can make troubleshooting more difficult because it might be more challenging to view the data using standard tools.
- If your application is approaching the scalability targets, then make sure that you are using an exponential backoff for retries. It's best to try to avoid reaching the scalability targets by implementing the recommendations described in this article. However, using an exponential backoff for retries will prevent your application from retrying rapidly, which could make throttling worse. For more information, see the section titled [Timeout and Server Busy errors](#timeout-and-server-busy-errors).

### Targets for data operations
-Azure Storage load balances as the traffic to your storage account increases, but if the traffic exhibits sudden bursts, you may not be able to get this volume of throughput immediately. Expect to see throttling and/or timeouts during the burst as Azure Storage automatically load balances your table. Ramping up slowly generally provides better results, as the system has time to load balance appropriately.
+Azure Storage load balances as the traffic to your storage account increases, but if the traffic exhibits sudden bursts, you might not be able to get this volume of throughput immediately. Expect to see throttling and/or timeouts during the burst as Azure Storage automatically load balances your table. Ramping up slowly generally provides better results, as the system has time to load balance appropriately.
#### Entities per second (storage account)
Within a single partition, the scalability target for accessing tables is 2,000
## Networking
-The physical network constraints of the application may have a significant impact on performance. The following sections describe some of limitations users may encounter.
+The physical network constraints of the application might have a significant impact on performance. The following sections describe some of the limitations users might encounter.
### Client network capability
Bandwidth and the quality of the network link play important roles in applicatio
#### Throughput
-For bandwidth, the problem is often the capabilities of the client. Larger Azure instances have NICs with greater capacity, so you should consider using a larger instance or more VMs if you need higher network limits from a single machine. If you are accessing Azure Storage from an on premises application, then the same rule applies: understand the network capabilities of the client device and the network connectivity to the Azure Storage location and either improve them as needed or design your application to work within their capabilities.
+For bandwidth, the problem is often the capabilities of the client. Larger Azure instances have NICs with greater capacity, so you should consider using a larger instance or more VMs if you need higher network limits from a single machine. If you're accessing Azure Storage from an on-premises application, then the same rule applies: understand the network capabilities of the client device and the network connectivity to the Azure Storage location, and either improve them as needed or design your application to work within their capabilities.
#### Link quality
-As with any network usage, keep in mind that network conditions resulting in errors and packet loss will slow effective throughput. Using WireShark or NetMon may help in diagnosing this issue.
+As with any network usage, keep in mind that network conditions resulting in errors and packet loss slow effective throughput. Using Wireshark or NetMon might help in diagnosing this issue.
### Location

In any distributed environment, placing the client near the server delivers the best performance. For accessing Azure Storage with the lowest latency, the best location for your client is within the same Azure region. For example, if you have an Azure web app that uses Azure Storage, then locate them both within a single region, such as US West or Asia Southeast. Co-locating resources reduces the latency and the cost, as bandwidth usage within a single region is free.
-If client applications will access Azure Storage but are not hosted within Azure, such as mobile device apps or on premises enterprise services, then locating the storage account in a region near to those clients may reduce latency. If your clients are broadly distributed (for example, some in North America, and some in Europe), then consider using one storage account per region. This approach is easier to implement if the data the application stores is specific to individual users, and does not require replicating data between storage accounts.
+If client applications will access Azure Storage but aren't hosted within Azure, such as mobile device apps or on-premises enterprise services, then locating the storage account in a region near those clients might reduce latency. If your clients are broadly distributed (for example, some in North America, and some in Europe), then consider using one storage account per region. This approach is easier to implement if the data the application stores is specific to individual users, and doesn't require replicating data between storage accounts.
## SAS and CORS
Suppose that you need to authorize code such as JavaScript that is running in a
You can avoid using a service application as a proxy for Azure Storage by using shared access signatures (SAS). Using SAS, you can enable your user's device to make requests directly to Azure Storage by using a limited access token. For example, if a user wants to upload a photo to your application, then your service application can generate a SAS and send it to the user's device. The SAS token can grant permission to write to an Azure Storage resource for a specified interval of time, after which the SAS token expires. For more information about SAS, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md).
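As an illustration, here's a minimal sketch that generates a table-scoped SAS with the Azure.Data.Tables client library; it assumes the client is constructed with a shared key credential, and the account name, account key, and table name are placeholders.

```csharp
using System;
using Azure.Data.Tables;
using Azure.Data.Tables.Sas;

// A sketch of generating a table-scoped SAS; account, key, and table names are
// placeholders, and the client must be constructed with a shared key credential.
var tableClient = new TableClient(
    new Uri("https://<storage-account>.table.core.windows.net"),
    "<table-name>",
    new TableSharedKeyCredential("<storage-account>", "<account-key>"));

// Grant write access for one hour; hand the resulting URI to the user's device.
Uri sasUri = tableClient.GenerateSasUri(TableSasPermissions.Write, DateTimeOffset.UtcNow.AddHours(1));
```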
-Typically, a web browser will not allow JavaScript in a page that is hosted by a website on one domain to perform certain operations, such as write operations, to another domain. Known as the same-origin policy, this policy prevents a malicious script on one page from obtaining access to data on another web page. However, the same-origin policy can be a limitation when building a solution in the cloud. Cross-origin resource sharing (CORS) is a browser feature that enables the target domain to communicate to the browser that it trusts requests originating in the source domain.
+Typically, a web browser won't allow JavaScript in a page that is hosted by a website on one domain to perform certain operations, such as write operations, to another domain. Known as the same-origin policy, this policy prevents a malicious script on one page from obtaining access to data on another web page. However, the same-origin policy can be a limitation when building a solution in the cloud. Cross-origin resource sharing (CORS) is a browser feature that enables the target domain to communicate to the browser that it trusts requests originating in the source domain.
For example, suppose a web application running in Azure makes a request for a resource to an Azure Storage account. The web application is the source domain, and the storage account is the target domain. You can configure CORS for any of the Azure Storage services to communicate to the web browser that requests from the source domain are trusted by Azure Storage. For more information about CORS, see [Cross-origin resource sharing (CORS) support for Azure Storage](/rest/api/storageservices/Cross-Origin-Resource-Sharing--CORS--Support-for-the-Azure-Storage-Services).
The Table service supports batch transactions on entities that are in the same t
## .NET configuration
-If using the .NET Framework, this section lists several quick configuration settings that you can use to make significant performance improvements. If using other languages, check to see if similar concepts apply in your chosen language.
-
-### Use .NET Core
-
-Develop your Azure Storage applications with .NET Core 2.1 or later to take advantage of performance enhancements. Using .NET Core 3.x is recommended when possible.
-
-For more information on performance improvements in .NET Core, see the following blog posts:
-- [Performance Improvements in .NET Core 3.0](https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-core-3-0/)
-- [Performance Improvements in .NET Core 2.1](https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-core-2-1/)
+For projects using .NET Framework, this section lists some quick configuration settings that you can use to make significant performance improvements. If you're using a language other than .NET, check to see if similar concepts apply in your chosen language.
### Increase default connection limit
-In .NET, the following code increases the default connection limit (which is usually 2 in a client environment or 10 in a server environment) to 100. Typically, you should set the value to approximately the number of threads used by your application.
+> [!NOTE]
+> This section applies to projects using .NET Framework, as connection pooling is controlled by the ServicePointManager class. .NET Core introduced a significant change around connection pool management, where connection pooling happens at the HttpClient level and the pool size is not limited by default. This means that HTTP connections are automatically scaled to satisfy your workload. Using the latest version of .NET is recommended, when possible, to take advantage of performance enhancements.
+
+For projects using .NET Framework, you can use the following code to increase the default connection limit (which is usually 2 in a client environment or 10 in a server environment) to 100. Typically, you should set the value to approximately the number of threads used by your application. Set the connection limit before opening any connections.
```csharp
ServicePointManager.DefaultConnectionLimit = 100; //(Or More)
```
-Set the connection limit before opening any connections.
-
-For other programming languages, see that language's documentation to determine how to set the connection limit.
+To learn more about connection pool limits in .NET Framework, see [.NET Framework Connection Pool Limits and the new Azure SDK for .NET](https://devblogs.microsoft.com/azure-sdk/net-framework-connection-pool-limits/).
-For more information, see the blog post [Web
+For other programming languages, see the documentation to determine how to set the connection limit.
### Increase minimum number of threads
-If you are using synchronous calls together with asynchronous tasks, you may want to increase the number of threads in the thread pool:
+If you're using synchronous calls together with asynchronous tasks, you might want to increase the number of threads in the thread pool:
```csharp
ThreadPool.SetMinThreads(100, 100); //(Determine the right number for your application)
```
For more information, see the [ThreadPool.SetMinThreads](/dotnet/api/system.thre
## Unbounded parallelism
-While parallelism can be great for performance, be careful about using unbounded parallelism, meaning that there is no limit enforced on the number of threads or parallel requests. Be sure to limit parallel requests to upload or download data, to access multiple partitions in the same storage account, or to access multiple items in the same partition. If parallelism is unbounded, your application can exceed the client device's capabilities or the storage account's scalability targets, resulting in longer latencies and throttling.
+While parallelism can be great for performance, be careful about using unbounded parallelism, meaning that there's no limit enforced on the number of threads or parallel requests. Be sure to limit parallel requests to upload or download data, to access multiple partitions in the same storage account, or to access multiple items in the same partition. If parallelism is unbounded, your application can exceed the client device's capabilities or the storage account's scalability targets, resulting in longer latencies and throttling.
## Client libraries and tools
-For best performance, always use the latest client libraries and tools provided by Microsoft. Azure Storage client libraries are available for a variety of languages. Azure Storage also supports PowerShell and Azure CLI. Microsoft actively develops these client libraries and tools with performance in mind, keeps them up-to-date with the latest service versions, and ensures that they handle many of the proven performance practices internally.
+For best performance, always use the latest client libraries and tools provided by Microsoft. Azure Storage client libraries are available for various languages. Azure Storage also supports PowerShell and Azure CLI. Microsoft actively develops these client libraries and tools with performance in mind, keeps them up-to-date with the latest service versions, and ensures that they handle many of the proven performance practices internally.
## Handle service errors
-Azure Storage returns an error when the service cannot process a request. Understanding the errors that may be returned by Azure Storage in a given scenario is helpful for optimizing performance.
+Azure Storage returns an error when the service can't process a request. Understanding the errors returned by Azure Storage in a given scenario is helpful for optimizing performance.
### Timeout and Server Busy errors
-Azure Storage may throttle your application if it approaches the scalability limits. In some cases, Azure Storage may be unable to handle a request due to some transient condition. In both cases, the service may return a 503 (Server Busy) or 500 (Timeout) error. These errors can also occur if the service is rebalancing data partitions to allow for higher throughput. The client application should typically retry the operation that causes one of these errors. However, if Azure Storage is throttling your application because it is exceeding scalability targets, or even if the service was unable to serve the request for some other reason, aggressive retries may make the problem worse. Using an exponential back off retry policy is recommended, and the client libraries default to this behavior. For example, your application may retry after 2 seconds, then 4 seconds, then 10 seconds, then 30 seconds, and then give up completely. In this way, your application significantly reduces its load on the service, rather than exacerbating behavior that could lead to throttling.
+Azure Storage might throttle your application if it approaches the scalability limits. In some cases, Azure Storage might be unable to handle a request due to some transient condition. In both cases, the service might return a 503 (Server Busy) or 500 (Timeout) error. These errors can also occur if the service is rebalancing data partitions to allow for higher throughput. The client application should typically retry the operation that causes one of these errors. However, if Azure Storage is throttling your application because it's exceeding scalability targets, or even if the service was unable to serve the request for some other reason, aggressive retries might make the problem worse. Using an exponential back off retry policy is recommended, and the client libraries default to this behavior. For example, your application might retry after 2 seconds, then 4 seconds, then 10 seconds, then 30 seconds, and then give up completely. In this way, your application significantly reduces its load on the service, rather than exacerbating behavior that could lead to throttling.
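The following sketch shows one way to configure this default behavior explicitly with the Azure.Data.Tables client library; the delay values are illustrative only, and the connection string is a placeholder.

```csharp
using System;
using Azure.Core;
using Azure.Data.Tables;

// A sketch of configuring exponential backoff on the client; the delay values
// are illustrative and the connection string is a placeholder.
var options = new TableClientOptions();
options.Retry.Mode = RetryMode.Exponential;
options.Retry.MaxRetries = 5;
options.Retry.Delay = TimeSpan.FromSeconds(2);     // base delay between retries
options.Retry.MaxDelay = TimeSpan.FromSeconds(30); // cap on the backoff

var serviceClient = new TableServiceClient("<connection-string>", options);
```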
-Connectivity errors can be retried immediately, because they are not the result of throttling and are expected to be transient.
+Connectivity errors can be retried immediately, because they aren't the result of throttling and are expected to be transient.
### Non-retryable errors
-The client libraries handle retries with an awareness of which errors can be retried and which cannot. However, if you are calling the Azure Storage REST API directly, there are some errors that you should not retry. For example, a 400 (Bad Request) error indicates that the client application sent a request that could not be processed because it was not in the expected form. Resending this request results the same response every time, so there is no point in retrying it. If you are calling the Azure Storage REST API directly, be aware of potential errors and whether they should be retried.
+The client libraries handle retries with an awareness of which errors can be retried and which can't. However, if you're calling the Azure Storage REST API directly, there are some errors that you shouldn't retry. For example, a 400 (Bad Request) error indicates that the client application sent a request that couldn't be processed because it wasn't in the expected form. Resending this request results in the same response every time, so there's no point in retrying it. If you're calling the Azure Storage REST API directly, be aware of potential errors and whether they should be retried.
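If you use a client library, the service's HTTP status code surfaces on the exception, which makes this distinction straightforward. Here's a sketch using Azure.Data.Tables, where `RequestFailedException.Status` carries the status code; the method and its handling strategy are illustrative only.

```csharp
using System.Threading.Tasks;
using Azure;
using Azure.Data.Tables;

// A sketch of separating retryable from non-retryable errors; the method and
// its handling strategy are illustrative only.
static async Task AddEntityWithErrorHandlingAsync(TableClient tableClient, TableEntity entity)
{
    try
    {
        await tableClient.AddEntityAsync(entity);
    }
    catch (RequestFailedException ex) when (ex.Status == 500 || ex.Status == 503)
    {
        // Timeout or Server Busy: transient, so retry with an exponential backoff.
    }
    catch (RequestFailedException ex) when (ex.Status == 400)
    {
        // Bad Request: the request itself is malformed; retrying returns the same
        // response every time, so fix the request instead of retrying.
        throw;
    }
}
```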
For more information on Azure Storage error codes, see [Status and error codes](/rest/api/storageservices/status-and-error-codes2).
For more information, see the post [Microsoft Azure Tables: Introducing JSON](/a
### Disable Nagle
-Nagle's algorithm is widely implemented across TCP/IP networks as a means to improve network performance. However, it is not optimal in all circumstances (such as highly interactive environments). Nagle's algorithm has a negative impact on the performance of requests to the Azure Table service, and you should disable it if possible.
+Nagle's algorithm is widely implemented across TCP/IP networks as a means to improve network performance. However, it isn't optimal in all circumstances (such as highly interactive environments). Nagle's algorithm has a negative impact on the performance of requests to the Azure Table service, and you should disable it if possible.
## Schema
How you represent and query your data is the biggest single factor that affects
Tables are divided into partitions. Every entity stored in a partition shares the same partition key and has a unique row key to identify it within that partition. Partitions provide benefits but also introduce scalability limits.

- Benefits: You can update entities in the same partition in a single, atomic, batch transaction that contains up to 100 separate storage operations (limit of 4 MB total size). Assuming the same number of entities to be retrieved, you can also query data within a single partition more efficiently than data that spans partitions (though read on for further recommendations on querying table data).
-- Scalability limit: Access to entities stored in a single partition cannot be load-balanced because partitions support atomic batch transactions. For this reason, the scalability target for an individual table partition is lower than for the Table service as a whole.
+- Scalability limit: Access to entities stored in a single partition can't be load-balanced because partitions support atomic batch transactions. For this reason, the scalability target for an individual table partition is lower than for the Table service as a whole.
Because of these characteristics of tables and partitions, you should adopt the following design principles:

-- Locate data that your client application frequently updates or queries in the same logical unit of work in the same partition. For example, locate data in the same partition if your application is aggregating writes or you are performing atomic batch operations. Also, data in a single partition can be more efficiently queried in a single query than data across partitions.
-- Locate data that your client application does not insert, update, or query in the same logical unit of work (that is, in a single query or batch update) in separate partitions. Keep in mind that there is no limit to the number of partition keys in a single table, so having millions of partition keys is not a problem and will not impact performance. For example, if your application is a popular website with user login, using the User ID as the partition key could be a good choice.
+- Locate data that your client application frequently updates or queries in the same logical unit of work in the same partition. For example, locate data in the same partition if your application is aggregating writes or you're performing atomic batch operations. Also, data in a single partition can be more efficiently queried in a single query than data across partitions.
+- Locate data that your client application doesn't insert, update, or query in the same logical unit of work (that is, in a single query or batch update) in separate partitions. Keep in mind that there's no limit to the number of partition keys in a single table, so having millions of partition keys isn't a problem and won't impact performance. For example, if your application is a popular website with user sign-in, using the User ID as the partition key could be a good choice.
#### Hot partitions
-A hot partition is one that is receiving a disproportionate percentage of the traffic to an account, and cannot be load balanced because it is a single partition. In general, hot partitions are created one of two ways:
+A hot partition is one that receives a disproportionate percentage of the traffic to an account, and can't be load balanced because it's a single partition. In general, hot partitions are created in one of two ways:
#### Append Only and Prepend Only patterns
-The "Append Only" pattern is one where all (or nearly all) of the traffic to a given partition key increases and decreases according to the current time. For example, suppose that your application uses the current date as a partition key for log data. This design results in all of the inserts going to the last partition in your table, and the system cannot load balance properly. If the volume of traffic to that partition exceeds the partition-level scalability target, then it will result in throttling. It's better to ensure that traffic is sent to multiple partitions, to enable load balance the requests across your table.
+The "Append Only" pattern is one where all (or nearly all) of the traffic to a given partition key increases and decreases according to the current time. For example, suppose that your application uses the current date as a partition key for log data. This design results in all of the inserts going to the last partition in your table, and the system can't load balance properly. If the volume of traffic to that partition exceeds the partition-level scalability target, then it results in throttling. It's better to ensure that traffic is sent to multiple partitions, to enable load balance the requests across your table.
#### High-traffic data
-If your partitioning scheme results in a single partition that just has data that is far more used than other partitions, you may also see throttling as that partition approaches the scalability target for a single partition. It's better to make sure that your partition scheme results in no single partition approaching the scalability targets.
+If your partitioning scheme results in a single partition containing data that's used far more than data in other partitions, you might also see throttling as that partition approaches the scalability target for a single partition. It's better to make sure that your partition scheme results in no single partition approaching the scalability targets.
### Querying
There are several ways to specify the range of entities to query. The following
- **Point queries:** A point query retrieves exactly one entity by specifying both the partition key and row key of the entity to retrieve. These queries are efficient, and you should use them wherever possible.
- **Partition queries:** A partition query is a query that retrieves a set of data that shares a common partition key. Typically, the query specifies a range of row key values or a range of values for some entity property in addition to a partition key. These queries are less efficient than point queries, and should be used sparingly.
-- **Table queries:** A table query is a query that retrieves a set of entities that does not share a common partition key. These queries are not efficient and you should avoid them if possible.
+- **Table queries:** A table query is a query that retrieves a set of entities that doesn't share a common partition key. These queries aren't efficient and you should avoid them if possible.
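To illustrate the most efficient case, here's a minimal point query sketch using the Azure.Data.Tables client library; the connection string, table name, and key values are placeholders.

```csharp
using Azure.Data.Tables;

// A sketch of a point query; connection string, table, and key values are placeholders.
var tableClient = new TableClient("<connection-string>", "<table-name>");

// Specifying both keys lets the service locate the entity directly, without a scan.
TableEntity entity = await tableClient.GetEntityAsync<TableEntity>("<partition-key>", "<row-key>");
```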
In general, avoid scans (queries larger than a single entity), but if you must scan, try to organize your data so that your scans retrieve the data you need without scanning or returning significant amounts of entities you don't need.

#### Query density
-Another key factor in query efficiency is the number of entities returned as compared to the number of entities scanned to find the returned set. If your application performs a table query with a filter for a property value that only 1% of the data shares, the query will scan 100 entities for every one entity it returns. The table scalability targets discussed previously all relate to the number of entities scanned, and not the number of entities returned: a low query density can easily cause the Table service to throttle your application because it must scan so many entities to retrieve the entity you are looking for. For more information on how to avoid throttling, see the section titled [Denormalization](#denormalization).
+Another key factor in query efficiency is the number of entities returned as compared to the number of entities scanned to find the returned set. If your application performs a table query with a filter for a property value that only 1% of the data shares, the query scans 100 entities for every one entity it returns. The table scalability targets discussed previously all relate to the number of entities scanned, and not the number of entities returned: a low query density can easily cause the Table service to throttle your application because it must scan so many entities to retrieve the entity you're looking for. For more information on how to avoid throttling, see the section titled [Denormalization](#denormalization).
#### Limiting the amount of data returned
-When you know that a query will return entities that you don't need in the client application, consider using a filter to reduce the size of the returned set. While the entities not returned to the client still count toward the scalability limits, your application performance will improve because of the reduced network payload size and the reduced number of entities that your client application must process. Keep in mind that the scalability targets relate to the number of entities scanned, so a query that filters out many entities may still result in throttling, even if few entities are returned. For more information on making queries efficient, see the section titled [Query density](#query-density).
+When you know that a query returns entities that you don't need in the client application, consider using a filter to reduce the size of the returned set. While the entities not returned to the client still count toward the scalability limits, your application performance improves because of the reduced network payload size and the reduced number of entities that your client application must process. Keep in mind that the scalability targets relate to the number of entities scanned, so a query that filters out many entities might still result in throttling, even if few entities are returned. For more information on making queries efficient, see the section titled [Query density](#query-density).
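Here's a minimal sketch of a filtered query, combined with the projection described next, using the Azure.Data.Tables client library; the filter expression and the `Email` and `Status` property names are hypothetical.

```csharp
using Azure.Data.Tables;

// A sketch of a filtered, projected query; the filter expression and the
// Email and Status property names are hypothetical.
var tableClient = new TableClient("<connection-string>", "<table-name>");

await foreach (TableEntity entity in tableClient.QueryAsync<TableEntity>(
    filter: "PartitionKey eq '<partition-key>' and Status eq 'Active'",
    select: new[] { "Email", "Status" }))
{
    // Only the projected properties are populated on the returned entities.
}
```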
If your client application needs only a limited set of properties from the entities in your table, you can use projection to limit the size of the returned data set. As with filtering, projection helps to reduce network load and client processing.

#### Denormalization
-Unlike working with relational databases, the proven practices for efficiently querying table data lead to denormalizing your data. That is, duplicating the same data in multiple entities (one for each key you may use to find the data) to minimize the number of entities that a query must scan to find the data the client needs, rather than having to scan large numbers of entities to find the data your application needs. For example, in an e-commerce website, you may want to find an order both by the customer ID (give me this customer's orders) and by the date (give me orders on a date). In Table Storage, it is best to store the entity (or a reference to it) twice – once with Table Name, PK, and RK to facilitate finding by customer ID, once to facilitate finding it by the date.
+Unlike working with relational databases, the proven practices for efficiently querying table data lead to denormalizing your data. That is, duplicating the same data in multiple entities (one for each key you might use to find the data) to minimize the number of entities that a query must scan to find the data the client needs, rather than having to scan large numbers of entities to find the data your application needs. For example, in an e-commerce website, you might want to find an order both by the customer ID (give me this customer's orders) and by the date (give me orders on a date). In Table Storage, it's best to store the entity (or a reference to it) twice – once with Table Name, PK, and RK to facilitate finding by customer ID, once to facilitate finding it by the date.
### Insert, update, and delete
This section describes proven practices for modifying entities stored in the Tab
#### Batching
-Batch transactions are known as entity group transactions in Azure Storage. All operations within an entity group transaction must be on a single partition in a single table. Where possible, use entity group transactions to perform inserts, updates, and deletes in batches. Using entity group transactions reduces the number of round trips from your client application to the server, reduces the number of billable transactions (an entity group transaction counts as a single transaction for billing purposes and can contain up to 100 storage operations), and enables atomic updates (all operations succeed or all fail within an entity group transaction). Environments with high latencies such as mobile devices will benefit greatly from using entity group transactions.
+Batch transactions are known as entity group transactions in Azure Storage. All operations within an entity group transaction must be on a single partition in a single table. Where possible, use entity group transactions to perform inserts, updates, and deletes in batches. Using entity group transactions reduces the number of round trips from your client application to the server, reduces the number of billable transactions (an entity group transaction counts as a single transaction for billing purposes and can contain up to 100 storage operations), and enables atomic updates (all operations succeed or all fail within an entity group transaction). Environments with high latencies such as mobile devices benefit greatly from using entity group transactions.
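Here's a minimal sketch of an entity group transaction using the Azure.Data.Tables client library; the connection string, table name, and entities are placeholders.

```csharp
using System.Collections.Generic;
using Azure.Data.Tables;

// A sketch of an entity group transaction; connection string, table, and
// entity values are placeholders. All actions must target the same partition.
var tableClient = new TableClient("<connection-string>", "<table-name>");

var actions = new List<TableTransactionAction>
{
    new TableTransactionAction(TableTransactionActionType.Add, new TableEntity("<partition-key>", "row-1")),
    new TableTransactionAction(TableTransactionActionType.Add, new TableEntity("<partition-key>", "row-2"))
};

// One round trip and one billable transaction; the actions succeed or fail together.
await tableClient.SubmitTransactionAsync(actions);
```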
#### Upsert

Use table **Upsert** operations wherever possible. There are two types of **Upsert**, both of which can be more efficient than a traditional **Insert** and **Update** operations:

-- **InsertOrMerge**: Use this operation when you want to upload a subset of the entity's properties, but aren't sure whether the entity already exists. If the entity exists, this call updates the properties included in the **Upsert** operation, and leaves all existing properties as they are, if the entity does not exist, it inserts the new entity. This is similar to using projection in a query, in that you only need to upload the properties that are changing.
-- **InsertOrReplace**: Use this operation when you want to upload an entirely new entity, but you aren't sure whether it already exists. Use this operation when you know that the newly uploaded entity is entirely correct because it completely overwrites the old entity. For example, you want to update the entity that stores a user's current location regardless of whether or not the application has previously stored location data for the user; the new location entity is complete, and you do not need any information from any previous entity.
+- **InsertOrMerge**: Use this operation when you want to upload a subset of the entity's properties, but aren't sure whether the entity already exists. If the entity exists, this call updates the properties included in the **Upsert** operation, and leaves all existing properties as they are; if the entity doesn't exist, it inserts the new entity. This is similar to using projection in a query, in that you only need to upload the properties that are changing.
+- **InsertOrReplace**: Use this operation when you want to upload an entirely new entity, but you aren't sure whether it already exists. Use this operation when you know that the newly uploaded entity is entirely correct because it completely overwrites the old entity. For example, you want to update the entity that stores a user's current location regardless of whether or not the application has previously stored location data for the user; the new location entity is complete, and you don't need any information from any previous entity.
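Here's a minimal sketch of both modes using the Azure.Data.Tables client library, where **InsertOrMerge** and **InsertOrReplace** correspond to `TableUpdateMode.Merge` and `TableUpdateMode.Replace`; the client, entity, and `Location` property are placeholders.

```csharp
using Azure.Data.Tables;

// A sketch of the two upsert modes; connection string, table, keys, and the
// Location property are placeholders.
var tableClient = new TableClient("<connection-string>", "<table-name>");
var entity = new TableEntity("<partition-key>", "<row-key>")
{
    ["Location"] = "Seattle"
};

// InsertOrMerge: updates only the uploaded properties; inserts if the entity doesn't exist.
await tableClient.UpsertEntityAsync(entity, TableUpdateMode.Merge);

// InsertOrReplace: overwrites the entity completely; inserts if it doesn't exist.
await tableClient.UpsertEntityAsync(entity, TableUpdateMode.Replace);
```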
#### Storing data series in a single entity

Sometimes, an application stores a series of data that it frequently needs to retrieve all at once: for example, an application might track CPU usage over time in order to plot a rolling chart of the data from the last 24 hours. One approach is to have one table entity per hour, with each entity representing a specific hour and storing the CPU usage for that hour. To plot this data, the application needs to retrieve the entities holding the data from the 24 most recent hours.
-Alternatively, your application could store the CPU usage for each hour as a separate property of a single entity: to update each hour, your application can use a single **InsertOrMerge Upsert** call to update the value for the most recent hour. To plot the data, the application only needs to retrieve a single entity instead of 24, making for an efficient query. For more information on query efficiency, see the section titled [Query scope](#query-scope)).
+Alternatively, your application could store the CPU usage for each hour as a separate property of a single entity: to update each hour, your application can use a single **InsertOrMerge Upsert** call to update the value for the most recent hour. To plot the data, the application only needs to retrieve a single entity instead of 24, making for an efficient query. For more information on query efficiency, see the section titled [Query scope](#query-scope).
#### Storing structured data in blobs
-If you are performing batch inserts and then retrieving ranges of entities together, consider using blobs instead of tables. A good example is a log file. You can batch several minutes of logs and insert them, and then retrieve several minutes of logs at a time. In this case, performance is better if you use blobs instead of tables, since you can significantly reduce the number of objects written to or read, and also possibly the number of requests that need made.
+If you're performing batch inserts and then retrieving ranges of entities together, consider using blobs instead of tables. A good example is a log file. You can batch several minutes of logs and insert them, and then retrieve several minutes of logs at a time. In this case, performance is better if you use blobs instead of tables, since you can significantly reduce the number of objects written or read, and also possibly the number of requests that need to be made.
## Next steps
virtual-desktop Create Application Group Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-application-group-workspace.md
- Title: Create an application group, a workspace, and assign users - Azure Virtual Desktop
-description: Learn how to create an application group and a workspace, and assign users in Azure Virtual Desktop by using the Azure portal, Azure CLI, or Azure PowerShell.
- Previously updated: 03/22/2023
-# Create an application group, a workspace, and assign users in Azure Virtual Desktop
-
-This article shows you how to create an application group and a workspace, then add the application group to a workspace and assign users by using the Azure portal, Azure CLI, or Azure PowerShell. Before you complete these steps, you should have already [created a host pool](create-host-pool.md).
-
-For more information on the terminology used in this article, see [Azure Virtual Desktop terminology](environment-setup.md).
-
-## Prerequisites
-
-Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required. In addition, you'll need:
-- An Azure account with an active subscription.
-- An existing host pool. See [Create a host pool](create-host-pool.md) to find out how to create one.
-- The account must have the following built-in role-based access control (RBAC) roles on the resource group, or on a subscription to create the resources.
- | Resource type | RBAC role |
- |--|--|
- | Workspace | [Desktop Virtualization Workspace Contributor](rbac.md#desktop-virtualization-workspace-contributor) |
- | Application group | [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
-
- Alternatively you can assign the [Desktop Virtualization Contributor](rbac.md#desktop-virtualization-contributor) RBAC role to create all of these resource types.
-- To assign users to the application group, you'll also need `Microsoft.Authorization/roleAssignments/write` permissions on the application group. Built-in RBAC roles that include this permission are [*User Access Administrator*](../role-based-access-control/built-in-roles.md#user-access-administrator) and [*Owner*](../role-based-access-control/built-in-roles.md#owner).
-- If you want to use Azure CLI or Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
-## Create an application group
-
-To create an application group, select the relevant tab for your scenario and follow the steps.
-
-# [Portal](#tab/portal)
-
-Here's how to create an application group using the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-
-1. Select **Application groups**, then select **Create**.
-
-1. On the **Basics** tab, complete the following information:
-
- | Parameter | Value/Description |
- |--|--|
- | Subscription | Select the subscription you want to create the application group in from the drop-down list. |
- | Resource group | Select an existing resource group or select **Create new** and enter a name. |
- | Host pool | Select the host pool for the application group. |
- | Location | Metadata is stored in the same location as the host pool. |
- | Application group type | Select the [application group type](environment-setup.md#app-groups) for this host pool from *Desktop* or *RemoteApp*. |
- | Application group name | Enter a name for the application group, for example *Session Desktop*. |
-
- > [!TIP]
- > Once you've completed this tab, select **Next: Review + create**. You don't need to complete the other tabs to create an application group, but you'll need to [create a workspace](#create-a-workspace), [add an application group to a workspace](#add-an-application-group-to-a-workspace) and [assign users to the application group](#assign-users-to-an-application-group) before users can access the resources.
- >
- > If you created an application group for RemoteApp, you will also need to add applications. For more information, see [Add applications to an application group](manage-app-groups.md)
-
-1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment.
-
-1. Select **Create** to create the application group.
-
-1. Once the application group has been created, select **Go to resource** to go to the overview of your new application group, then select **Properties** to view its properties.
-
-# [Azure PowerShell](#tab/powershell)
-
-Here's how to create an application group using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Get the resource ID of the host pool you want to create an application group for and store it in a variable by running the following command:
-
- ```azurepowershell
    - $hostPoolArmPath = (Get-AzWvdHostPool -Name <HostPoolName> -ResourceGroupName <ResourceGroupName>).Id
- ```
-
-3. Use the `New-AzWvdApplicationGroup` cmdlet with the following examples to create an application group. For more information, see the [New-AzWvdApplicationGroup PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdapplicationgroup).
-
- 1. To create a Desktop application group in the Azure region UK South, run the following command:
-
- ```azurepowershell
- $parameters = @{
- Name = '<Name>'
- ResourceGroupName = '<ResourceGroupName>'
- ApplicationGroupType = 'Desktop'
- HostPoolArmPath = $hostPoolArmPath
- Location = 'uksouth'
- }
-
- New-AzWvdApplicationGroup @parameters
- ```
-
- 1. To create a RemoteApp application group in the Azure region UK South, run the following command. You can only create a RemoteApp application group with a pooled host pool.
-
- ```azurepowershell
- $parameters = @{
- Name = '<Name>'
- ResourceGroupName = '<ResourceGroupName>'
- ApplicationGroupType = 'RemoteApp'
- HostPoolArmPath = $hostPoolArmPath
- Location = 'uksouth'
- }
-
- New-AzWvdApplicationGroup @parameters
- ```
-
-4. You can view the properties of your new application group by running the following command:
-
- ```azurepowershell
- Get-AzWvdApplicationGroup -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
- ```
-
-# [Azure CLI](#tab/cli)
-
-Here's how to create an application group using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Get the resource ID of the host pool you want to create an application group for and store it in a variable by running the following command:
-
- ```azurecli
- hostPoolArmPath=$(az desktopvirtualization hostpool show \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --query [id] \
- --output tsv)
- ```
-
-3. Use the `az desktopvirtualization applicationgroup create` command with the following examples to create an application group. For more information, see the [az desktopvirtualization applicationgroup Azure CLI reference](/cli/azure/desktopvirtualization/applicationgroup).
-
- 1. To create a Desktop application group in the Azure region UK South, run the following command:
-
- ```azurecli
- az desktopvirtualization applicationgroup create \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --application-group-type Desktop \
- --host-pool-arm-path $hostPoolArmPath \
- --location uksouth
- ```
-
- 1. To create a RemoteApp application group in the Azure region UK South, run the following command. You can only create a RemoteApp application group with a pooled host pool.
-
- ```azurecli
- az desktopvirtualization applicationgroup create \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --application-group-type RemoteApp \
- --host-pool-arm-path $hostPoolArmPath \
- --location uksouth
- ```
-
-4. You can view the properties of your new application group by running the following command:
-
- ```azurecli
- az desktopvirtualization applicationgroup show --name <Name> --resource-group <ResourceGroupName>
- ```
---
-## Create a workspace
-
-Next, to create a workspace, select the relevant tab for your scenario and follow the steps.
-
-# [Portal](#tab/portal)
-
-Here's how to create a workspace using the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-
-1. Select **Workspaces**, then select **Create**.
-
-1. On the **Basics** tab, complete the following information:
-
- | Parameter | Value/Description |
- |--|--|
- | Subscription | Select the subscription you want to create the workspace in from the drop-down list. |
- | Resource group | Select an existing resource group or select **Create new** and enter a name. |
- | Workspace name | Enter a name for the workspace, for example *workspace01*. |
- | Friendly name | *Optional*: Enter a friendly name for the workspace. |
- | Description | *Optional*: Enter a description for the workspace. |
- | Location | Select the Azure region where your workspace will be deployed. |
-
- > [!TIP]
- > Once you've completed this tab, select **Next: Review + create**. You don't need to complete the other tabs to create a workspace, but you'll need to [add an application group to a workspace](#add-an-application-group-to-a-workspace) and [assign users to the application group](#assign-users-to-an-application-group) before they can access its applications.
-
-1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment.
-
-1. Select **Create** to create the workspace.
-
-1. Once the workspace has been created, select **Go to resource** to go to the overview of your new workspace, then select **Properties** to view its properties.
--
-# [Azure PowerShell](#tab/powershell)
-
-Here's how to create a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Use the `New-AzWvdWorkspace` cmdlet with the following example to create a workspace. More parameters are available, such as parameters to register existing application groups. For more information, see the [New-AzWvdWorkspace PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdworkspace).
-
- ```azurepowershell
- New-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName>
- ```
-
-3. You can view the properties of your new workspace by running the following command:
-
- ```azurepowershell
- Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
- ```
-
-# [Azure CLI](#tab/cli)
-
-Here's how to create a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Use the `az desktopvirtualization workspace create` command with the following example to create a workspace. More parameters are available, such as parameters to register existing application groups. For more information, see the [az desktopvirtualization workspace Azure CLI reference](/cli/azure/desktopvirtualization/workspace).
-
- ```azurecli
- az desktopvirtualization workspace create --name <Name> --resource-group <ResourceGroupName>
- ```
-
-3. You can view the properties of your new workspace by running the following command:
-
- ```azurecli
- az desktopvirtualization workspace show --name <Name> --resource-group <ResourceGroupName>
- ```
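-
-If you manage more than one workspace, you can also list them all at a glance. A small sketch, assuming the `list` subcommand of the same extension:
-
-```azurecli
-# List all workspaces in the resource group as a table
-az desktopvirtualization workspace list \
-    --resource-group <ResourceGroupName> \
-    --output table
-```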
---
-## Add an application group to a workspace
-
-Next, to add an application group to a workspace, select the relevant tab for your scenario and follow the steps.
-
-# [Portal](#tab/portal)
-
-Here's how to add an application group to a workspace using the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-
-1. Select **Workspaces**, then select the name of the workspace you want to assign an application group to.
-
-1. From the workspace overview, select **Application groups**, then select **+ Add**.
-
-1. Select the plus icon (**+**) next to an application group from the list. Only application groups that aren't already assigned to a workspace are listed.
-
-1. Select **Select**. The application group will be added to the workspace.
--
-# [Azure PowerShell](#tab/powershell)
-
-Here's how to add an application group to a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Use the `Update-AzWvdWorkspace` cmdlet with the following example to add an application group to a workspace:
-
- ```azurepowershell
- # Get the resource ID of the application group you want to add to the workspace
-    $appGroupPath = (Get-AzWvdApplicationGroup -Name <Name> -ResourceGroupName <ResourceGroupName>).Id
-
- # Add the application group to the workspace
- Update-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> -ApplicationGroupReference $appGroupPath
- ```
-
-3. You can view the properties of your workspace by running the following command. The key **ApplicationGroupReference** contains an array of the application groups added to the workspace.
-
- ```azurepowershell
- Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
- ```
-
-# [Azure CLI](#tab/cli)
-
-Here's how to add an application group to a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Use the `az desktopvirtualization workspace update` command with the following example to add an application group to a workspace (see the sketch after these steps for preserving any existing application group references):
-
- ```azurecli
- # Get the resource ID of the application group you want to add to the workspace
- appGroupPath=$(az desktopvirtualization applicationgroup show \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --query [id] \
- --output tsv)
-
- # Add the application group to the workspace
- az desktopvirtualization workspace update \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --application-group-references $appGroupPath
- ```
-
-3. You can view the properties of your workspace by running the following command. The key **applicationGroupReferences** contains an array of the application groups added to the workspace.
-
- ```azurecli
-    az desktopvirtualization workspace show \
- --name <Name> \
- --resource-group <ResourceGroupName>
- ```
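-
-Note that `--application-group-references` in step 2 sets the complete list of application groups on the workspace. Here's a hedged sketch of preserving any existing references when you add a new one, assuming the parameter accepts a space-separated list of resource IDs:
-
-```azurecli
-# Get the application groups already registered to the workspace
-existingRefs=$(az desktopvirtualization workspace show \
-    --name <Name> \
-    --resource-group <ResourceGroupName> \
-    --query "applicationGroupReferences" \
-    --output tsv)
-
-# Re-apply the existing references together with the new application group
-az desktopvirtualization workspace update \
-    --name <Name> \
-    --resource-group <ResourceGroupName> \
-    --application-group-references $existingRefs $appGroupPath
-```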
---
-## Assign users to an application group
-
-Finally, to assign users or user groups to an application group, select the relevant tab for your scenario and follow the steps. We recommend you assign user groups to application groups to make ongoing management simpler.
-
-# [Portal](#tab/portal)
-
-Here's how to assign users or user groups to an application group using the Azure portal.
-
-1. From the host pool overview, select **Application groups**.
-
-1. Select the application group from the list.
-
-1. From the application group overview, select **Assignments**.
-
-1. Select **+ Add**, then search for and select the user account or user group you want to assign to this application group.
-
-1. Finish by selecting **Select**.
--
-# [Azure PowerShell](#tab/powershell)
-
-Here's how to assign users or user groups to an application group using the [Az.Resources](/powershell/module/az.resources) PowerShell module.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Use the `New-AzRoleAssignment` cmdlet with the following examples to assign users or user groups to an application group.
-
- 1. To assign users to the application group, run the following commands:
-
- ```azurepowershell
- $parameters = @{
- SignInName = '<UserPrincipalName>'
- ResourceName = '<ApplicationGroupName>'
- ResourceGroupName = '<ResourceGroupName>'
- RoleDefinitionName = 'Desktop Virtualization User'
- ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups'
- }
-
- New-AzRoleAssignment @parameters
- ```
-
- 1. To assign user groups to the application group, run the following commands:
-
- ```azurepowershell
- # Get the object ID of the user group you want to assign to the application group
- $userGroupId = (Get-AzADGroup -DisplayName "<UserGroupName>").Id
-
-    # Assign the user group to the application group
- $parameters = @{
- ObjectId = $userGroupId
- ResourceName = '<ApplicationGroupName>'
- ResourceGroupName = '<ResourceGroupName>'
- RoleDefinitionName = 'Desktop Virtualization User'
- ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups'
- }
-
- New-AzRoleAssignment @parameters
- ```
-
-# [Azure CLI](#tab/cli)
-
-Here's how to assign users or user groups to an application group using the [az role assignment](/cli/azure/role/assignment) commands in Azure CLI.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
--
-2. Use the `az role assignment create` command with the following examples to assign users or user groups to an application group.
-
- 1. To assign users to the application group, run the following commands:
-
- ```azurecli
-    # Get the resource ID of the application group you want to assign users to
- appGroupPath=$(az desktopvirtualization applicationgroup show \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --query [id] \
- --output tsv)
-
- # Assign users to the application group
- az role assignment create \
- --assignee '<UserPrincipalName>' \
- --role 'Desktop Virtualization User' \
- --scope $appGroupPath
- ```
-
- 1. To assign user groups to the application group, run the following commands:
-
- ```azurecli
-    # Get the resource ID of the application group you want to assign the user group to
- appGroupPath=$(az desktopvirtualization applicationgroup show \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --query [id] \
- --output tsv)
-
- # Get the object ID of the user group you want to assign to the application group
- userGroupId=$(az ad group show \
- --group <UserGroupName> \
- --query [id] \
- --output tsv)
-
-    # Assign the user group to the application group
- az role assignment create \
- --assignee $userGroupId \
- --role 'Desktop Virtualization User' \
- --scope $appGroupPath
- ```
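-
-To confirm the assignment took effect, you can list the role assignments scoped to the application group. For example, reusing the `$appGroupPath` variable from the previous steps:
-
-```azurecli
-# List who holds the Desktop Virtualization User role on the application group
-az role assignment list \
-    --scope $appGroupPath \
-    --role 'Desktop Virtualization User' \
-    --output table
-```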
---
-## Next steps
-
-Now that you've created an application group and a workspace, added the application group to a workspace and assigned users, you'll need to:
-
-- [Add session hosts to the host pool](add-session-hosts-host-pool.md), if you haven't done so already.
-
-- [Add applications to an application group](manage-app-groups.md), if you created a RemoteApp application group.
virtual-desktop Create Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pool.md
- Title: Create a host pool - Azure Virtual Desktop
-description: Learn how to create a host pool in Azure Virtual Desktop by using the Azure portal, Azure CLI, or Azure PowerShell.
-
-
- Previously updated : 07/11/2023
-
-
-# Create a host pool in Azure Virtual Desktop
-
-This article shows you how to create a host pool by using the Azure portal, Azure CLI, or Azure PowerShell. When using the Azure portal, you can optionally create session hosts and a workspace, register the default desktop application group from this host pool, and enable diagnostics settings in the same process, but you can also do these tasks separately.
-
-For more information on the terminology used in this article, see [Azure Virtual Desktop terminology](environment-setup.md).
-
-You can create host pools in the following Azure regions:
-
- :::column:::
- - Australia East
- - Canada Central
- - Canada East
- - Central India
- - Central US
- - East US
- - East US 2
- - Japan East
- - North Central US
- :::column-end:::
- :::column:::
- - North Europe
- - South Central US
- - UK South
- - UK West
- - West Central US
- - West Europe
- - West US
- - West US 2
- - West US 3
- :::column-end:::
-
-This list of regions is where the *metadata* for the host pool will be stored. Session hosts added to a host pool can be located in any Azure region, and on-premises when using [Azure Virtual Desktop on Azure Stack HCI](azure-stack-hci-overview.md).
-
-## Prerequisites
-
-Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required, such as operating systems, virtual networks, and identity providers. Select the relevant tab for your scenario.
-
-# [Portal](#tab/portal)
-
-In addition, you'll need:
-
-- The Azure account you use must have the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
-
- | Resource type | RBAC role(s) |
- |--|--|
- | Host pool | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor)<br />[Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
- | Workspace | [Desktop Virtualization Workspace Contributor](rbac.md#desktop-virtualization-workspace-contributor) |
- | Application group | [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
- | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
-
- Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
-
-- Don't disable [Windows Remote Management](/windows/win32/winrm/about-windows-remote-management) (WinRM) when creating session hosts using the Azure portal, as it's required by [PowerShell DSC](/powershell/dsc/overview).
-
-# [Azure PowerShell](#tab/powershell)
-
-In addition, you'll need:
-
-- The account must have the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
-
- | Resource type | RBAC role |
- |--|--|
- | Host pool | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) |
- | Workspace | [Desktop Virtualization Workspace Contributor](rbac.md#desktop-virtualization-workspace-contributor) |
- | Application group | [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
- | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
-
- Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
-
-- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
-
-> [!IMPORTANT]
-> If you want to create Microsoft Entra joined session hosts, we only support this using the Azure portal with the Azure Virtual Desktop service.
-
-# [Azure CLI](#tab/cli)
-
-In addition, you'll need:
-
-- The account must have the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you'll need to create this first.
-
- | Resource type | RBAC role |
- |--|--|
- | Host pool | [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) |
- | Workspace | [Desktop Virtualization Workspace Contributor](rbac.md#desktop-virtualization-workspace-contributor) |
- | Application group | [Desktop Virtualization Application Group Contributor](rbac.md#desktop-virtualization-application-group-contributor) |
- | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
-
- Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
-
-- If you want to use Azure CLI locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
-
-> [!IMPORTANT]
-> If you want to create Microsoft Entra joined session hosts, we only support this using the Azure portal with the Azure Virtual Desktop service.
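-
-As an example of the role prerequisite above, here's a hedged sketch of assigning one of the granular roles with Azure CLI, assuming you have permission to create role assignments and substituting your own subscription ID:
-
-```azurecli
-# Assign the Desktop Virtualization Host Pool Contributor role at resource group scope
-az role assignment create \
-    --assignee '<UserPrincipalName>' \
-    --role 'Desktop Virtualization Host Pool Contributor' \
-    --scope "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>"
-```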
---
-## Create a host pool
-
-To create a host pool, select the relevant tab for your scenario and follow the steps.
-
-# [Portal](#tab/portal)
-
-Here's how to create a host pool using the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-
-1. Select **Host pools**, then select **Create**.
-
-1. On the **Basics** tab, complete the following information:
-
- | Parameter | Value/Description |
- |--|--|
- | Subscription | Select the subscription you want to create the host pool in from the drop-down list. |
- | Resource group | Select an existing resource group or select **Create new** and enter a name. |
- | Host pool name | Enter a name for the host pool, for example *hostpool01*. |
- | Location | Select the Azure region where your host pool will be deployed. |
- | Validation environment | Select **Yes** to create a host pool that is used as a [validation environment](create-validation-host-pool.md).<br /><br />Select **No** (*default*) to create a host pool that isn't used as a validation environment. |
- | Preferred app group type | Select the preferred [application group type](environment-setup.md#app-groups) for this host pool from *Desktop* or *RemoteApp*. |
- | Host pool type | Select whether your host pool will be Personal or Pooled.<br /><br />If you select **Personal**, a new option will appear for **Assignment type**. Select either **Automatic** or **Direct**.<br /><br />If you select **Pooled**, two new options will appear for **Load balancing algorithm** and **Max session limit**.<br /><br />- For **Load balancing algorithm**, choose either **breadth-first** or **depth-first**, based on your usage pattern.<br /><br />- For **Max session limit**, enter the maximum number of users you want load-balanced to a single session host. |
-
- > [!TIP]
- > Once you've completed this tab, you can continue to optionally configure networking, create session hosts, a workspace, register the default desktop application group from this host pool, and enable diagnostics settings. Alternatively, if you want to create and configure these separately, select **Next: Review + create** and go to step 10.
-
-1. *Optional*: On the **Networking** tab, select how end users and session hosts will connect to the Azure Virtual Desktop service. You also need to configure Azure Private Link to use private access. For more information, see [Azure Private Link with Azure Virtual Desktop](private-link-overview.md).
-
- | Parameter | Value/Description |
- |--|--|
- | **Enable public access from all networks** | End users can access the feed and session hosts securely over the public internet or the private endpoints. |
- | **Enable public access for end users, use private access for session hosts** | End users can access the feed securely over the public internet but must use private endpoints to access session hosts. |
- | **Disable public access and use private access** | End users can only access the feed and session hosts over the private endpoints. |
-
- Once you've completed this tab, select **Next: Virtual Machines**.
-
-1. *Optional*: If you want to add session hosts in this process, on the **Virtual machines** tab, complete the following information:
-
- | Parameter | Value/Description |
- |--|--|
- | Add Azure virtual machines | Select **Yes**. This shows several new options. |
- | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab, but you can also select an alternative. |
- | Name prefix | Enter a name for your session hosts, for example **me-id-hp01-sh**.<br /><br />This will be used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **me-id-hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
- | Virtual machine location | Select the Azure region where your session host VMs will be deployed. This must be the same region that your virtual network is in. |
- | Availability options | Select from **[availability zones](../reliability/availability-zones-overview.md)**, **[availability set](../virtual-machines/availability-set-overview.md)**, or **No infrastructure dependency required**. If you select availability zones or availability set, complete the extra parameters that appear. |
- | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. |
- | Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). |
-   | Virtual machine size | Select a SKU. If you want to use a different SKU, select **Change size**, then select from the list. |
- | Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session host VMs at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). |
- | OS disk type | Select the disk type to use for your session hosts. We recommend only **Premium SSD** is used for production workloads. |
- | Confidential computing encryption | If you're using a confidential VM, you must select the **Confidential compute encryption** check box to enable OS disk encryption.<br /><br />This check box only appears if you selected **Confidential virtual machines** as your security type. |
- | Boot Diagnostics | Select whether you want to enable [boot diagnostics](../virtual-machines/boot-diagnostics.md). |
- | **Network and security** | |
- | Virtual network | Select your virtual network. An option to select a subnet will appear. |
- | Subnet | Select a subnet from your virtual network. |
- | Network security group | Select whether you want to use a network security group (NSG).<br /><br />- **None** won't create a new NSG.<br /><br />- **Basic** will create a new NSG for the VM NIC.<br /><br />- **Advanced** enables you to select an existing NSG.<br /><br />We recommend that you don't create an NSG here, but [create an NSG on the subnet instead](../virtual-network/manage-network-security-group.md). |
- | Public inbound ports | You can select a port to allow from the list. Azure Virtual Desktop doesn't require public inbound ports, so we recommend you select **No**. |
- | **Domain to join** | |
- | Select which directory you would like to join | Select from **Microsoft Entra ID** or **Active Directory** and complete the relevant parameters for the option you select. |
- | **Virtual Machine Administrator account** | |
- | Username | Enter a name to use as the local administrator account for the new session host VMs. |
- | Password | Enter a password for the local administrator account. |
- | Confirm password | Re-enter the password. |
- | **Custom configuration** | |
-   | ARM template file URL | If you want to use an extra ARM template during deployment, you can enter the URL here. |
- | ARM template parameter file URL | Enter the URL to the parameters file for the ARM template. |
-
- Once you've completed this tab, select **Next: Workspace**.
-
-1. *Optional*: If you want to create a workspace and register the default desktop application group from this host pool in this process, on the **Workspace** tab, complete the following information:
-
- | Parameter | Value/Description |
- |--|--|
- | Register desktop app group | Select **Yes**. This registers the default desktop application group to the selected workspace. |
-   | To this workspace | Select an existing workspace from the list, or select **Create new** and enter a name, for example **ws01**. |
-
- Once you've completed this tab, select **Next: Advanced**.
-
-1. *Optional*: If you want to enable diagnostics settings in this process, on the **Advanced** tab, complete the following information:
-
- | Parameter | Value/Description |
- |--|--|
- | Enable diagnostics settings | Check the box. |
- | Choosing destination details to send logs to | Select one of the following:<br /><br />- Send to Log Analytics workspace<br /><br />- Archive to storage account<br /><br />- Stream to an event hub |
-
- Once you've completed this tab, select **Next: Tags**.
-
-1. *Optional*: On the **Tags** tab, you can enter any name/value pairs you need, then select **Next: Review + create**.
-
-1. On the **Review + create** tab, ensure validation passes and review the information that will be used during deployment.
-
-1. Select **Create** to create the host pool.
-
-1. Once the host pool has been created, select **Go to resource** to go to the overview of your new host pool, then select **Properties** to view its properties.
-
-## Optional: Post deployment
-
-If you also added session hosts to your host pool, there's some extra configuration you may need to do, which is covered in the following sections.
--
-# [Azure PowerShell](#tab/powershell)
-
-Here's how to create a host pool using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. The following examples show you how to create a pooled host pool and a personal host pool.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
-
-2. Use the `New-AzWvdHostPool` cmdlet with the following examples to create a host pool. More parameters are available; for more information, see the [New-AzWvdHostPool PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdhostpool).
-
- 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
-
- ```azurepowershell
- $parameters = @{
- Name = '<Name>'
- ResourceGroupName = '<ResourceGroupName>'
- HostPoolType = 'Pooled'
- LoadBalancerType = 'BreadthFirst'
- PreferredAppGroupType = 'Desktop'
- MaxSessionLimit = '<value>'
- Location = '<AzureRegion>'
- }
-
- New-AzWvdHostPool @parameters
- ```
-
- 1. To create a personal host pool using the *Automatic* assignment type and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
-
- ```azurepowershell
- $parameters = @{
- Name = '<Name>'
- ResourceGroupName = '<ResourceGroupName>'
- HostPoolType = 'Personal'
- LoadBalancerType = 'Persistent'
- PreferredAppGroupType = 'Desktop'
- PersonalDesktopAssignmentType = 'Automatic'
- Location = '<AzureRegion>'
- }
-
- New-AzWvdHostPool @parameters
- ```
-
-3. You can view the properties of your new host pool by running the following command:
-
- ```azurepowershell
- Get-AzWvdHostPool -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
- ```
-
-# [Azure CLI](#tab/cli)
-
-Here's how to create a host pool using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. The following examples show you how to create a pooled host pool and a personal host pool.
-
-> [!IMPORTANT]
-> In the following examples, you'll need to change the `<placeholder>` values for your own.
-
-2. Use the `az desktopvirtualization hostpool create` command with the following examples to create a host pool. More parameters are available; for more information, see the [az desktopvirtualization hostpool Azure CLI reference](/cli/azure/desktopvirtualization/hostpool).
-
- 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
-
- ```azurecli
- az desktopvirtualization hostpool create \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --host-pool-type Pooled \
- --load-balancer-type BreadthFirst \
- --preferred-app-group-type Desktop \
- --max-session-limit <value> \
- --location <AzureRegion>
- ```
-
- 1. To create a personal host pool using the *Automatic* assignment type, run the following command:
-
- ```azurecli
- az desktopvirtualization hostpool create \
- --name <Name> \
- --resource-group <ResourceGroupName> \
- --host-pool-type Personal \
- --load-balancer-type Persistent \
- --preferred-app-group-type Desktop \
- --personal-desktop-assignment-type Automatic \
- --location <AzureRegion>
- ```
-
-3. You can view the properties of your new host pool by running the following command:
-
- ```azurecli
- az desktopvirtualization hostpool show --name <Name> --resource-group <ResourceGroupName>
- ```
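-
-If you create several host pools, you can compare them side by side by listing them in table form. A small sketch, assuming the `list` subcommand of the same extension:
-
-```azurecli
-# List all host pools in the resource group as a table
-az desktopvirtualization hostpool list \
-    --resource-group <ResourceGroupName> \
-    --output table
-```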
---
-## Next steps
-
-# [Portal](#tab/portal)
-
-If you didn't complete the optional sections when creating a host pool, you'll still need to do the following tasks separately:
-
-- [Create an application group and a workspace, then add the application group to a workspace and assign users](create-application-group-workspace.md).
-
-- [Add session hosts to a host pool](add-session-hosts-host-pool.md).
-
-- [Enable diagnostics settings](diagnostics-log-analytics.md).
-
-
-# [Azure PowerShell](#tab/powershell)
-
-Now that you've created a host pool, you'll still need to do the following tasks:
-
-- [Create an application group and a workspace, then add the application group to a workspace and assign users](create-application-group-workspace.md).
-
-- [Add session hosts to a host pool](add-session-hosts-host-pool.md).
-
-- [Enable diagnostics settings](diagnostics-log-analytics.md).
-
-# [Azure CLI](#tab/cli)
-
-Now that you've created a host pool, you'll still need to do the following tasks:
-
-- [Create an application group and a workspace, then add the application group to a workspace and assign users](create-application-group-workspace.md).
-
-- [Add session hosts to a host pool](add-session-hosts-host-pool.md).
-
-- [Enable diagnostics settings](diagnostics-log-analytics.md).
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
+
+ Title: Deploy Azure Virtual Desktop - Azure Virtual Desktop
+description: Learn how to deploy Azure Virtual Desktop by creating a host pool, workspace, application group, session hosts, and assign users.
+
+
+ Last updated : 10/25/2023
+
+
+# Deploy Azure Virtual Desktop
+
+This article shows you how to deploy Azure Virtual Desktop by using the Azure portal, Azure CLI, or Azure PowerShell. You create a host pool, workspace, application group, and session hosts, and can optionally enable diagnostics settings. You also assign users or groups to the application group so that users can access their desktops and applications. You can do all these tasks in the same process when using the Azure portal, but you can also do them separately.
+
+The process covered in this article is an in-depth and adaptable approach to deploying Azure Virtual Desktop. If you want a simpler approach to deploy a sample Windows 11 desktop in Azure Virtual Desktop, see [Tutorial: Deploy a sample Azure Virtual Desktop infrastructure with a Windows 11 desktop](tutorial-try-deploy-windows-11-desktop.md) or use the [getting started feature](getting-started-feature.md).
+
+For more information on the terminology used in this article, see [Azure Virtual Desktop terminology](environment-setup.md), and to learn about the service architecture and resilience of the Azure Virtual Desktop service, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
+
+## Prerequisites
+
+Review the [Prerequisites for Azure Virtual Desktop](prerequisites.md) for a general idea of what's required and supported, such as operating systems, virtual networks, and identity providers. It also includes a list of the [supported Azure regions](prerequisites.md#azure-regions) in which you can deploy host pools, workspaces, and application groups. This list of regions is where the *metadata* for the host pool can be stored. However, session hosts can be located in any Azure region, and on-premises when using [Azure Virtual Desktop on Azure Stack HCI](azure-stack-hci-overview.md). For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md).
+
+Select the relevant tab for your scenario for more prerequisites.
+
+# [Portal](#tab/portal)
+
+In addition, you need:
+
+- The Azure account you use must be assigned the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you need to create this first.
+
+ | Resource type | RBAC role |
+ |--|--|
+ | Host pool, workspace, and application group | [Desktop Virtualization Contributor](rbac.md#desktop-virtualization-contributor) |
+ | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+
+ Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
+
+ For ongoing management of host pools, workspaces, and application groups, you can use more granular roles for each resource type. For more information, see [Built-in Azure RBAC roles for Azure Virtual Desktop](rbac.md).
+
+- Don't disable [Windows Remote Management](/windows/win32/winrm/about-windows-remote-management) (WinRM) when creating session hosts using the Azure portal, as [PowerShell DSC](/powershell/dsc/overview) requires it.
+
+# [Azure PowerShell](#tab/powershell)
+
+In addition, you need:
+
+- The Azure account you use must be assigned the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you need to create this first.
+
+ | Resource type | RBAC role |
+ |--|--|
+ | Host pool, workspace, and application group | [Desktop Virtualization Contributor](rbac.md#desktop-virtualization-contributor) |
+ | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+
+ Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
+
+ For ongoing management of host pools, workspaces, and application groups, you can use more granular roles for each resource type. For more information, see [Built-in Azure RBAC roles for Azure Virtual Desktop](rbac.md).
+
+- If you want to use Azure PowerShell locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
+
+> [!IMPORTANT]
+> If you want to create Microsoft Entra joined session hosts, we only support this using the Azure portal.
+
+# [Azure CLI](#tab/cli)
+
+In addition, you need:
+
+- The Azure account you use must be assigned the following built-in role-based access control (RBAC) roles as a minimum on a resource group or subscription to create the following resource types. If you want to assign the roles to a resource group, you need to create this first.
+
+ | Resource type | RBAC role |
+ |--|--|
+ | Host pool, workspace, and application group | [Desktop Virtualization Contributor](rbac.md#desktop-virtualization-contributor) |
+ | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+
+ Alternatively you can assign the [Contributor](../role-based-access-control/built-in-roles.md#contributor) RBAC role to create all of these resource types.
+
+ For ongoing management of host pools, workspaces, and application groups, you can use more granular roles for each resource type. For more information, see [Built-in Azure RBAC roles for Azure Virtual Desktop](rbac.md).
+
+- If you want to use Azure CLI locally, see [Use Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md).
+
+> [!IMPORTANT]
+> If you want to create Microsoft Entra joined session hosts, we only support this using the Azure portal.
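+
+As an example of the role prerequisite above, here's a hedged Azure CLI sketch of assigning the Desktop Virtualization Contributor role at resource group scope, assuming you have permission to create role assignments and substituting your own subscription ID:
+
+```azurecli
+# Assign the Desktop Virtualization Contributor role on the resource group
+az role assignment create \
+    --assignee '<UserPrincipalName>' \
+    --role 'Desktop Virtualization Contributor' \
+    --scope "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>"
+```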
+++
+## Create a host pool
+
+To create a host pool, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to create a host pool using the Azure portal.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select **Create**.
+
+1. On the **Basics** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | Select the subscription you want to create the host pool in from the drop-down list. |
+ | Resource group | Select an existing resource group or select **Create new** and enter a name. |
+ | Host pool name | Enter a name for the host pool, for example **hp01**. |
+ | Location | Select the Azure region where you want to create your host pool. |
+ | Validation environment | Select **Yes** to create a host pool that is used as a [validation environment](create-validation-host-pool.md).<br /><br />Select **No** (*default*) to create a host pool that isn't used as a validation environment. |
+ | Preferred app group type | Select the preferred [application group type](environment-setup.md#app-groups) for this host pool from *Desktop* or *RemoteApp*. |
+ | Host pool type | Select whether you want your host pool to be Personal or Pooled.<br /><br />If you select **Personal**, a new option appears for **Assignment type**. Select either **Automatic** or **Direct**.<br /><br />If you select **Pooled**, two new options appear for **Load balancing algorithm** and **Max session limit**.<br /><br />- For **Load balancing algorithm**, choose either **breadth-first** or **depth-first**, based on your usage pattern.<br /><br />- For **Max session limit**, enter the maximum number of users you want load-balanced to a single session host. |
+
+ > [!TIP]
+ > Once you've completed this tab, you can continue to optionally create session hosts, a workspace, register the default desktop application group from this host pool, and enable diagnostics settings by selecting **Next: Virtual Machines**. Alternatively, if you want to create and configure these separately, select **Next: Review + create** and go to step 9.
+
+1. *Optional*: On the **Virtual machines** tab, if you want to add session hosts, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Add Azure virtual machines | Select **Yes**. This shows several new options. |
+ | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab, but you can also select an alternative. |
+ | Name prefix | Enter a name for your session hosts, for example **hp01-sh**.<br /><br />This value is used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
+ | Virtual machine location | Select the Azure region where you want to deploy your session host VMs. This must be the same region that your virtual network is in. |
+ | Availability options | Select from **[availability zones](../reliability/availability-zones-overview.md)**, **[availability set](../virtual-machines/availability-set-overview.md)**, or **No infrastructure dependency required**. If you select availability zones or availability set, complete the extra parameters that appear. |
+ | Security type | Select from **Standard**, **[Trusted launch virtual machines](../virtual-machines/trusted-launch.md)**, or **[Confidential virtual machines](../confidential-computing/confidential-vm-overview.md)**.<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. |
+ | Image | Select the OS image you want to use from the list, or select **See all images** to see more, including any images you've created and stored as an [Azure Compute Gallery shared image](../virtual-machines/shared-image-galleries.md) or a [managed image](../virtual-machines/windows/capture-image-resource.md). |
+   | Virtual machine size | Select a SKU. If you want to use a different SKU, select **Change size**, then select from the list. |
+ | Number of VMs | Enter the number of virtual machines you want to deploy. You can deploy up to 400 session host VMs at this point if you wish (depending on your [subscription quota](../quotas/view-quotas.md)), or you can add more later.<br /><br />For more information, see [Azure Virtual Desktop service limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-virtual-desktop-service-limits) and [Virtual Machines limits](../azure-resource-manager/management/azure-subscription-service-limits.md#virtual-machines-limitsazure-resource-manager). |
+ | OS disk type | Select the disk type to use for your session hosts. We recommend only **Premium SSD** is used for production workloads. |
+ | Confidential computing encryption | If you're using a confidential VM, you must select the **Confidential compute encryption** check box to enable OS disk encryption.<br /><br />This check box only appears if you selected **Confidential virtual machines** as your security type. |
+ | Boot Diagnostics | Select whether you want to enable [boot diagnostics](../virtual-machines/boot-diagnostics.md). |
+ | **Network and security** | |
+ | Virtual network | Select your virtual network. An option to select a subnet appears. |
+ | Subnet | Select a subnet from your virtual network. |
+ | Network security group | Select whether you want to use a network security group (NSG).<br /><br />- **None** doesn't create a new NSG.<br /><br />- **Basic** creates a new NSG for the VM NIC.<br /><br />- **Advanced** enables you to select an existing NSG.<br /><br />We recommend that you don't create an NSG here, but [create an NSG on the subnet instead](../virtual-network/manage-network-security-group.md). |
+ | Public inbound ports | You can select a port to allow from the list. Azure Virtual Desktop doesn't require public inbound ports, so we recommend you select **No**. |
+ | **Domain to join** | |
+ | Select which directory you would like to join | Select from **Microsoft Entra ID** or **Active Directory** and complete the relevant parameters for the option you select. |
+ | **Virtual Machine Administrator account** | |
+ | Username | Enter a name to use as the local administrator account for the new session host VMs. |
+ | Password | Enter a password for the local administrator account. |
+ | Confirm password | Reenter the password. |
+ | **Custom configuration** | |
+   | ARM template file URL | If you want to use an extra ARM template during deployment, you can enter the URL here. |
+ | ARM template parameter file URL | Enter the URL to the parameters file for the ARM template. |
+
+ Once you've completed this tab, select **Next: Workspace**.
+
+1. *Optional*: On the **Workspace** tab, if you want to create a workspace and register the default desktop application group from this host pool, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Register desktop app group | Select **Yes**. This registers the default desktop application group to the selected workspace. |
+ | To this workspace | Select an existing workspace from the list, or select **Create new** and enter a name, for example **ws01**. |
+
+ Once you've completed this tab, select **Next: Advanced**.
+
+1. *Optional*: On the **Advanced** tab, if you want to enable diagnostics settings, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Enable diagnostics settings | Check the box. |
+ | Choosing destination details to send logs to | Select one of the following destinations:<br /><br />- Send to Log Analytics workspace<br /><br />- Archive to storage account<br /><br />- Stream to an event hub |
+
+ Once you've completed this tab, select **Next: Tags**.
+
+1. *Optional*: On the **Tags** tab, you can enter any name/value pairs you need, then select **Next: Review + create**.
+
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment.
+
+1. Select **Create** to create the host pool.
+
+1. Once the host pool has been created, select **Go to resource** to go to the overview of your new host pool, then select **Properties** to view its properties.
+
+### Optional: Post deployment
+
+If you also added session hosts to your host pool, there's some extra configuration you might need to do, which is covered in the following sections.
++
+> [!NOTE]
+> - If you created a host pool, workspace, and registered the default desktop application group from this host pool in the same process, go to the section [Assign users to an application group](#assign-users-to-an-application-group) and complete the rest of the article.
+>
+> - If you created a host pool and workspace in the same process, but didn't register the default desktop application group from this host pool, go to the section [Create an application group](#create-an-application-group) and complete the rest of the article.
+>
+> - If you didn't create a workspace, continue to the next section and complete the rest of the article.
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to create a host pool using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module. The following examples show you how to create a pooled host pool and a personal host pool.
+
+> [!IMPORTANT]
+> In the following examples, you'll need to change the `<placeholder>` values for your own.
++
+2. Use the `New-AzWvdHostPool` cmdlet with the following examples to create a host pool. More parameters are available; for more information, see the [New-AzWvdHostPool PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdhostpool).
+
+ 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<Name>'
+ ResourceGroupName = '<ResourceGroupName>'
+ HostPoolType = 'Pooled'
+ LoadBalancerType = 'BreadthFirst'
+ PreferredAppGroupType = 'Desktop'
+ MaxSessionLimit = '<value>'
+ Location = '<AzureRegion>'
+ }
+
+ New-AzWvdHostPool @parameters
+ ```
+
+ 1. To create a personal host pool using the *Automatic* assignment type and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<Name>'
+ ResourceGroupName = '<ResourceGroupName>'
+ HostPoolType = 'Personal'
+ LoadBalancerType = 'Persistent'
+ PreferredAppGroupType = 'Desktop'
+ PersonalDesktopAssignmentType = 'Automatic'
+ Location = '<AzureRegion>'
+ }
+
+ New-AzWvdHostPool @parameters
+ ```
+
+3. You can view the properties of your new host pool by running the following command:
+
+ ```azurepowershell
+ Get-AzWvdHostPool -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
+ ```
+
+# [Azure CLI](#tab/cli)
+
+Here's how to create a host pool using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI. The following examples show you how to create a pooled host pool and a personal host pool.
+
+> [!IMPORTANT]
+> In the following examples, you'll need to change the `<placeholder>` values for your own.
++
+2. Use the `az desktopvirtualization hostpool create` command with the following examples to create a host pool. More parameters are available; for more information, see the [az desktopvirtualization hostpool Azure CLI reference](/cli/azure/desktopvirtualization/hostpool).
+
+ 1. To create a pooled host pool using the *breadth-first* [load-balancing algorithm](host-pool-load-balancing.md) and *Desktop* as the preferred [app group type](environment-setup.md#app-groups), run the following command:
+
+ ```azurecli
+ az desktopvirtualization hostpool create \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --host-pool-type Pooled \
+ --load-balancer-type BreadthFirst \
+ --preferred-app-group-type Desktop \
+ --max-session-limit <value> \
+ --location <AzureRegion>
+ ```
+
+ 1. To create a personal host pool using the *Automatic* assignment type, run the following command:
+
+ ```azurecli
+ az desktopvirtualization hostpool create \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --host-pool-type Personal \
+ --load-balancer-type Persistent \
+ --preferred-app-group-type Desktop \
+ --personal-desktop-assignment-type Automatic \
+ --location <AzureRegion>
+ ```
+
+3. You can view the properties of your new host pool by running the following command:
+
+ ```azurecli
+ az desktopvirtualization hostpool show --name <Name> --resource-group <ResourceGroupName>
+ ```
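+
+If you only want to check the key settings rather than read the full JSON output, you can filter the result with a JMESPath query. A minimal sketch; the property names follow the camel-cased JSON returned by the command above:
+
+```azurecli
+# Show only the core host pool settings, formatted as a table
+az desktopvirtualization hostpool show \
+    --name <Name> \
+    --resource-group <ResourceGroupName> \
+    --query "{name:name, type:hostPoolType, loadBalancer:loadBalancerType, maxSessions:maxSessionLimit}" \
+    --output table
+```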
+++
+## Create a workspace
+
+Next, to create a workspace, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to create a workspace using the Azure portal.
+
+1. From the Azure Virtual Desktop overview, select **Workspaces**, then select **Create**.
+
+1. On the **Basics** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | Select the subscription you want to create the workspace in from the drop-down list. |
+ | Resource group | Select an existing resource group or select **Create new** and enter a name. |
+ | Workspace name | Enter a name for the workspace, for example *workspace01*. |
+ | Friendly name | *Optional*: Enter a friendly name for the workspace. |
+ | Description | *Optional*: Enter a description for the workspace. |
+ | Location | Select the Azure region where you want to deploy your workspace. |
+
+ > [!TIP]
+   > Once you've completed this tab, you can continue to optionally register an existing application group to this workspace, if you have one, and enable diagnostics settings by selecting **Next: Application groups**. Alternatively, if you want to create and configure these separately, select **Review + create** and go to step 6.
+
+1. *Optional*: On the **Application groups** tab, if you want to register an existing application group to this workspace, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Register application groups | Select **Yes**, then select **+ Register application groups**. In the new pane that opens, select the **Add** icon for the application group(s) you want to add, then select **Select**. |
+
+ Once you've completed this tab, select **Next: Advanced**.
+
+1. *Optional*: On the **Advanced** tab, if you want to enable diagnostics settings, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Enable diagnostics settings | Check the box. |
+ | Choosing destination details to send logs to | Select one of the following destinations:<br /><br />- Send to Log Analytics workspace<br /><br />- Archive to storage account<br /><br />- Stream to an event hub |
+
+ Once you've completed this tab, select **Next: Tags**.
+
+1. *Optional*: On the **Tags** tab, you can enter any name/value pairs you need, then select **Next: Review + create**.
+
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment.
+
+1. Select **Create** to create the workspace.
+
+1. Once the workspace has been created, select **Go to resource** to go to the overview of your new workspace, then select **Properties** to view its properties.
+
+> [!NOTE]
+> - If you added an application group to this workspace, go to the section [Assign users to an application group](#assign-users-to-an-application-group) and complete the rest of the article.
+>
+> - If you didn't add an application group to this workspace, continue to the next section and complete the rest of the article.
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to create a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
+
+1. In the same PowerShell session, use the `New-AzWvdWorkspace` cmdlet with the following example to create a workspace. More parameters are available, such as parameters to register existing application groups. For more information, see the [New-AzWvdWorkspace PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdworkspace).
+
+ ```azurepowershell
+ New-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName>
+ ```
+
+1. You can view the properties of your new workspace by running the following command:
+
+ ```azurepowershell
+ Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
+ ```
+
+# [Azure CLI](#tab/cli)
+
+Here's how to create a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI.
+
+1. In the same CLI session, use the `az desktopvirtualization workspace create` command with the following example to create a workspace. More parameters are available, such as parameters to register existing application groups. For more information, see the [az desktopvirtualization workspace Azure CLI reference](/cli/azure/desktopvirtualization/workspace).
+
+ ```azurecli
+ az desktopvirtualization workspace create --name <Name> --resource-group <ResourceGroupName>
+ ```
+
+1. You can view the properties of your new workspace by running the following command:
+
+ ```azurecli
+ az desktopvirtualization workspace show --name <Name> --resource-group <ResourceGroupName>
+ ```
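+
+Step 1 mentions that more parameters are available, such as registering existing application groups at creation time. Here's a sketch of that, assuming the create command accepts the same `--application-group-references` parameter that the update command uses:
+
+```azurecli
+# Create the workspace and register an existing application group in one step
+az desktopvirtualization workspace create \
+    --name <Name> \
+    --resource-group <ResourceGroupName> \
+    --application-group-references "<ApplicationGroupResourceID>"
+```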
+++
+## Create an application group
+
+To create an application group, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to create an application group using the Azure portal.
+
+1. From the Azure Virtual Desktop overview, select **Application groups**, then select **Create**.
+
+1. On the **Basics** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Subscription | Select the subscription you want to create the application group in from the drop-down list. |
+ | Resource group | Select an existing resource group or select **Create new** and enter a name. |
+ | Host pool | Select the host pool for the application group. |
+ | Location | Metadata is stored in the same location as the host pool. |
+ | Application group type | Select the [application group type](environment-setup.md#app-groups) for the host pool you selected from *Desktop* or *RemoteApp*. |
+ | Application group name | Enter a name for the application group, for example *Session Desktop*. |
+
+ > [!TIP]
+ > Once you've completed this tab, select **Next: Review + create**. You don't need to complete the other tabs to create an application group, but you'll need to [create a workspace](#create-a-workspace), [add an application group to a workspace](#add-an-application-group-to-a-workspace) and [assign users to the application group](#assign-users-to-an-application-group) before users can access the resources.
+ >
+    > If you created an application group for RemoteApp, you'll also need to add applications. For more information, see [Add applications to an application group](manage-app-groups.md).
+
+1. *Optional*: If you selected to create a RemoteApp application group, you can add applications to this application group. On the **Application groups** tab, select **+ Add applications**, then select an application. For more information on the application parameters, see [Publish applications with RemoteApp](manage-app-groups.md). At least one session host in the host pool must be powered on and available in Azure Virtual Desktop.
+
+ Once you've completed this tab, or if you're creating a desktop application group, select **Next: Assignments**.
+
+1. *Optional*: On the **Assignments** tab, if you want to assign users or groups to this application group, select **+ Add Microsoft Entra users or user groups**. In the new pane that opens, check the box next to the users or groups you want to add, then select **Select**.
+
+ Once you've completed this tab, select **Next: Workspace**.
+
+1. *Optional*: On the **Workspace** tab, if you're creating a desktop application group, you can register the default desktop application group from the host pool you selected by completing the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Register application group | Select **Yes**. This registers the default desktop application group to the selected workspace. |
+    | To this workspace | Select an existing workspace from the list. |
+
+ Once you've completed this tab, select **Next: Advanced**.
+
+1. *Optional*: If you want to enable diagnostics settings, on the **Advanced** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Enable diagnostics settings | Check the box. |
+    | Destination details | Select one of the following destinations:<br /><br />- Send to Log Analytics workspace<br /><br />- Archive to storage account<br /><br />- Stream to an event hub |
+
+ Once you've completed this tab, select **Next: Tags**.
+
+1. *Optional*: On the **Tags** tab, you can enter any name/value pairs you need, then select **Next: Review + create**.
+
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment.
+
+1. Select **Create** to create the application group.
+
+1. Once the application group has been created, select **Go to resource** to go to the overview of your new application group, then select **Properties** to view its properties.
+
+> [!NOTE]
+> - If you created a desktop application group, assigned users or groups, and registered the default desktop application group to a workspace, your assigned users can connect to the desktop and you don't need to complete the rest of the article.
+>
+> - If you created a RemoteApp application group, added applications, and assigned users or groups, go to the section [Add an application group to a workspace](#add-an-application-group-to-a-workspace) and complete the rest of the article.
+>
+> - If you didn't add applications, assign users or groups, or register the application group to a workspace, continue to the next section and complete the rest of the article.
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to create an application group using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
+
+1. In the same PowerShell session, get the resource ID of the host pool you want to create an application group for and store it in a variable by running the following command:
+
+ ```azurepowershell
+    $hostPoolArmPath = (Get-AzWvdHostPool -Name <HostPoolName> -ResourceGroupName <ResourceGroupName>).Id
+ ```
+
+1. Use the `New-AzWvdApplicationGroup` cmdlet with the following examples to create an application group. For more information, see the [New-AzWvdApplicationGroup PowerShell reference](/powershell/module/az.desktopvirtualization/new-azwvdapplicationgroup).
+
+ 1. To create a Desktop application group in the Azure region UK South, run the following command:
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<Name>'
+ ResourceGroupName = '<ResourceGroupName>'
+ ApplicationGroupType = 'Desktop'
+ HostPoolArmPath = $hostPoolArmPath
+ Location = 'uksouth'
+ }
+
+ New-AzWvdApplicationGroup @parameters
+ ```
+
+ 1. To create a RemoteApp application group in the Azure region UK South, run the following command. You can only create a RemoteApp application group with a pooled host pool.
+
+ ```azurepowershell
+ $parameters = @{
+ Name = '<Name>'
+ ResourceGroupName = '<ResourceGroupName>'
+ ApplicationGroupType = 'RemoteApp'
+ HostPoolArmPath = $hostPoolArmPath
+ Location = 'uksouth'
+ }
+
+ New-AzWvdApplicationGroup @parameters
+ ```
+
+1. You can view the properties of your new application group by running the following command:
+
+ ```azurepowershell
+ Get-AzWvdApplicationGroup -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
+ ```
+
+# [Azure CLI](#tab/cli)
+
+Here's how to create an application group using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI.
+
+1. In the same CLI session, get the resource ID of the host pool you want to create an application group for and store it in a variable by running the following command:
+
+ ```azurecli
+ hostPoolArmPath=$(az desktopvirtualization hostpool show \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+ ```
+
+1. Use the `az desktopvirtualization applicationgroup create` command with the following examples to create an application group. For more information, see the [az desktopvirtualization applicationgroup Azure CLI reference](/cli/azure/desktopvirtualization/applicationgroup).
+
+ 1. To create a Desktop application group in the Azure region UK South, run the following command:
+
+ ```azurecli
+ az desktopvirtualization applicationgroup create \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --application-group-type Desktop \
+ --host-pool-arm-path $hostPoolArmPath \
+ --location uksouth
+ ```
+
+ 1. To create a RemoteApp application group in the Azure region UK South, run the following command. You can only create a RemoteApp application group with a pooled host pool.
+
+ ```azurecli
+ az desktopvirtualization applicationgroup create \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --application-group-type RemoteApp \
+ --host-pool-arm-path $hostPoolArmPath \
+ --location uksouth
+ ```
+
+1. You can view the properties of your new application group by running the following command:
+
+ ```azurecli
+ az desktopvirtualization applicationgroup show --name <Name> --resource-group <ResourceGroupName>
+ ```
+++
+## Add an application group to a workspace
+
+Next, to add an application group to a workspace, select the relevant tab for your scenario and follow the steps.
+
+# [Portal](#tab/portal)
+
+Here's how to add an application group to a workspace using the Azure portal.
+
+1. From the Azure Virtual Desktop overview, select **Workspaces**, then select the name of the workspace you want to assign an application group to.
+
+1. From the workspace overview, select **Application groups**, then select **+ Add**.
+
+1. Select the plus icon (**+**) next to an application group from the list. Only application groups that aren't already assigned to a workspace are listed.
+
+1. Select **Select**. The application group is added to the workspace.
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to add an application group to a workspace using the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) PowerShell module.
+
+1. In the same PowerShell session, use the `Update-AzWvdWorkspace` cmdlet with the following example to add an application group to a workspace:
+
+ ```azurepowershell
+ # Get the resource ID of the application group you want to add to the workspace
+    $appGroupPath = (Get-AzWvdApplicationGroup -Name <Name> -ResourceGroupName <ResourceGroupName>).Id
+
+ # Add the application group to the workspace
+ Update-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> -ApplicationGroupReference $appGroupPath
+ ```
+
+1. You can view the properties of your workspace by running the following command. The key **ApplicationGroupReference** contains an array of the application groups added to the workspace.
+
+ ```azurepowershell
+ Get-AzWvdWorkspace -Name <Name> -ResourceGroupName <ResourceGroupName> | FL *
+ ```
+
+# [Azure CLI](#tab/cli)
+
+Here's how to add an application group to a workspace using the [desktopvirtualization](/cli/azure/desktopvirtualization) extension for Azure CLI.
+
+1. In the same CLI session, use the `az desktopvirtualization workspace update` command with the following example to add an application group to a workspace:
+
+ ```azurecli
+ # Get the resource ID of the application group you want to add to the workspace
+ appGroupPath=$(az desktopvirtualization applicationgroup show \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Add the application group to the workspace
+ az desktopvirtualization workspace update \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --application-group-references $appGroupPath
+ ```
+
+1. You can view the properties of your workspace by running the following command. The key **applicationGroupReferences** contains an array of the application groups added to the workspace.
+
+ ```azurecli
+    az desktopvirtualization workspace show \
+ --name <Name> \
+ --resource-group <ResourceGroupName>
+ ```
+++
+## Assign users to an application group
+
+Finally, to assign users or user groups to an application group, select the relevant tab for your scenario and follow the steps. We recommend you assign user groups to application groups to make ongoing management simpler.
+
+# [Portal](#tab/portal)
+
+Here's how to assign users or user groups to an application group using the Azure portal.
+
+1. From the Azure Virtual Desktop overview, select **Application groups**.
+
+1. Select the application group from the list.
+
+1. From the application group overview, select **Assignments**.
+
+1. Select **+ Add**, then search for and select the user account or user group you want to assign to this application group.
+
+1. Finish by selecting **Select**.
+
+# [Azure PowerShell](#tab/powershell)
+
+Here's how to assign users or user groups to an application group using the [Az.Resources](/powershell/module/az.resources) PowerShell module.
+
+1. In the same PowerShell session, use the `New-AzRoleAssignment` cmdlet with the following examples to assign users or user groups to an application group.
+
+ 1. To assign users to the application group, run the following commands:
+
+ ```azurepowershell
+ $parameters = @{
+ SignInName = '<UserPrincipalName>'
+ ResourceName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ RoleDefinitionName = 'Desktop Virtualization User'
+ ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups'
+ }
+
+ New-AzRoleAssignment @parameters
+ ```
+
+ 1. To assign user groups to the application group, run the following commands:
+
+ ```azurepowershell
+ # Get the object ID of the user group you want to assign to the application group
+ $userGroupId = (Get-AzADGroup -DisplayName "<UserGroupName>").Id
+
+      # Assign the user group to the application group
+ $parameters = @{
+ ObjectId = $userGroupId
+ ResourceName = '<ApplicationGroupName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ RoleDefinitionName = 'Desktop Virtualization User'
+ ResourceType = 'Microsoft.DesktopVirtualization/applicationGroups'
+ }
+
+ New-AzRoleAssignment @parameters
+ ```
+
+# [Azure CLI](#tab/cli)
+
+Here's how to assign users or user groups to an application group using the [az role assignment](/cli/azure/role/assignment) commands in Azure CLI.
+
+1. In the same CLI session, use the `az role assignment create` command with the following examples to assign users or user groups to an application group.
+
+ 1. To assign users to the application group, run the following commands:
+
+ ```azurecli
+      # Get the resource ID of the application group you want to assign users to
+ appGroupPath=$(az desktopvirtualization applicationgroup show \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Assign users to the application group
+ az role assignment create \
+ --assignee '<UserPrincipalName>' \
+ --role 'Desktop Virtualization User' \
+ --scope $appGroupPath
+ ```
+
+ 1. To assign user groups to the application group, run the following commands:
+
+ ```azurecli
+      # Get the resource ID of the application group you want to assign the user group to
+ appGroupPath=$(az desktopvirtualization applicationgroup show \
+ --name <Name> \
+ --resource-group <ResourceGroupName> \
+ --query [id] \
+ --output tsv)
+
+ # Get the object ID of the user group you want to assign to the application group
+ userGroupId=$(az ad group show \
+ --group <UserGroupName> \
+ --query [id] \
+ --output tsv)
+
+      # Assign the user group to the application group
+ az role assignment create \
+ --assignee $userGroupId \
+ --role 'Desktop Virtualization User' \
+ --scope $appGroupPath
+ ```
+++
+## Next steps
+
+Once you've deployed Azure Virtual Desktop, your users can connect. There are several platforms you can connect from, including from a web browser. For more information, see [Remote Desktop clients for Azure Virtual Desktop](users/remote-desktop-clients-overview.md) and [Connect to Azure Virtual Desktop with the Remote Desktop Web client](users/connect-web.md).
+
+Here are some extra tasks you might want to do:
+
+- Configure profile management with FSLogix. To learn more, see [FSLogix profile containers](fslogix-containers-azure-files.md).
+
+- [Add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+- [Enable diagnostics settings](diagnostics-log-analytics.md).
virtual-desktop Getting Started Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/getting-started-feature.md
Title: Deploy Azure Virtual Desktop with the getting started feature
+ Title: Use the getting started feature to create a sample infrastructure - Azure Virtual Desktop
description: A quickstart guide for how to quickly set up Azure Virtual Desktop with the Azure portal's getting started feature.
-# Deploy Azure Virtual Desktop with the getting started feature
+# Use the getting started feature to create a sample infrastructure
You can quickly deploy Azure Virtual Desktop with the *getting started* feature in the Azure portal. This can be used in smaller scenarios with a few users and apps, or you can use it to evaluate Azure Virtual Desktop in larger enterprise scenarios. It works with existing Active Directory Domain Services (AD DS) or Microsoft Entra Domain Services deployments, or it can deploy Microsoft Entra Domain Services for you. Once you've finished, a user will be able to sign in to a full virtual desktop session, consisting of one host pool (with one or more session hosts), one application group, and one user. To learn about the terminology used in Azure Virtual Desktop, see [Azure Virtual Desktop terminology](environment-setup.md).
-Joining session hosts to Microsoft Entra ID with the getting started feature is not supported. If you want to want to join session hosts to Microsoft Entra ID, follow the [tutorial to create a host pool](create-host-pools-azure-marketplace.md).
+Joining session hosts to Microsoft Entra ID with the getting started feature is not supported. If you want to join session hosts to Microsoft Entra ID, follow the [tutorial to create a host pool](create-host-pools-azure-marketplace.md).
> [!TIP]
> Enterprises should plan an Azure Virtual Desktop deployment using information from [Enterprise-scale support for Microsoft Azure Virtual Desktop](/azure/cloud-adoption-framework/scenarios/wvd/enterprise-scale-landing-zone). You can also find a more granular deployment process in a [series of tutorials](create-host-pools-azure-marketplace.md), which also cover programmatic methods and require fewer permissions.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
Title: Prerequisites for Azure Virtual Desktop description: Find what prerequisites you need to complete to successfully connect your users to their Windows desktops and applications.- Previously updated : 05/03/2023++ - Last updated : 10/25/2023 # Prerequisites for Azure Virtual Desktop There are a few things you need to start using Azure Virtual Desktop. Here you can find what prerequisites you need to complete to successfully provide your users with desktops and applications.
-At a high level, you'll need:
+At a high level, you need:
> [!div class="checklist"] > - An Azure account with an active subscription
At a high level, you'll need:
You need an Azure account with an active subscription to deploy Azure Virtual Desktop. If you don't have one already, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-To deploy Azure Virtual Desktop, you need to assign the relevant Azure role-based access control (RBAC) roles. The specific role requirements are covered in each related article for deploying Azure Virtual Desktop, which are listed in the [Next steps](#next-steps) section.
+To deploy Azure Virtual Desktop, you need to assign the relevant Azure role-based access control (RBAC) roles. The specific role requirements are covered in each of the related articles for deploying Azure Virtual Desktop, which are listed in the [Next steps](#next-steps) section.
Also make sure you've registered the *Microsoft.DesktopVirtualization* resource provider for your subscription. To check the status of the resource provider and register if needed, select the relevant tab for your scenario and follow the steps.
Also make sure you've registered the *Microsoft.DesktopVirtualization* resource
[!INCLUDE [include-cloud-shell-local-cli](includes/include-cloud-shell-local-cli.md)]
-2. Register the **Microsoft.DesktopVirtualization** resource provider by running the following command. You can run this even if the resource provider is already registered.
+2. Register the **Microsoft.DesktopVirtualization** resource provider by running the following command. You can run this command even if the resource provider is already registered.
```azurecli-interactive az provider register --namespace Microsoft.DesktopVirtualization
Also make sure you've registered the *Microsoft.DesktopVirtualization* resource
[!INCLUDE [include-cloud-shell-local-powershell](includes/include-cloud-shell-local-powershell.md)]
-2. Register the **Microsoft.DesktopVirtualization** resource provider by running the following command. You can run this even if the resource provider is already registered.
+2. Register the **Microsoft.DesktopVirtualization** resource provider by running the following command. You can run this command even if the resource provider is already registered.
```azurepowershell-interactive Register-AzResourceProvider -ProviderNamespace Microsoft.DesktopVirtualization
To join session hosts to Microsoft Entra ID or an Active Directory domain, you n
### Users
-Your users need accounts that are in Microsoft Entra ID. If you're also using AD DS or Microsoft Entra Domain Services in your deployment of Azure Virtual Desktop, these accounts will need to be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means the user accounts are synchronized. You'll need to keep the following things in mind based on which identity provider you use:
+Your users need accounts that are in Microsoft Entra ID. If you're also using AD DS or Microsoft Entra Domain Services in your deployment of Azure Virtual Desktop, these accounts need to be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md), which means the user accounts are synchronized. You need to keep the following things in mind based on which identity provider you use:
-- If you're using Microsoft Entra ID with AD DS, you'll need to configure [Microsoft Entra Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to synchronize user identity data between AD DS and Microsoft Entra ID.
+- If you're using Microsoft Entra ID with AD DS, you need to configure [Microsoft Entra Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to synchronize user identity data between AD DS and Microsoft Entra ID.
- If you're using Microsoft Entra ID with Microsoft Entra Domain Services, user accounts are synchronized one way from Microsoft Entra ID to Microsoft Entra Domain Services. This synchronization process is automatic. ### Supported identity scenarios
The following table summarizes identity scenarios that Azure Virtual Desktop cur
| Microsoft Entra ID + Microsoft Entra Domain Services | Joined to Microsoft Entra ID | In Microsoft Entra ID and Microsoft Entra Domain Services, synchronized| | Microsoft Entra-only | Joined to Microsoft Entra ID | In Microsoft Entra ID |
-To use [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial) when joining your session hosts to Microsoft Entra ID, you will need to [store profiles on Azure Files](create-profile-container-azure-ad.md) and your user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md). This means you must create these accounts in AD DS and synchronize them to Microsoft Entra ID. To learn more about deploying FSLogix Profile Container with different identity scenarios, see the following articles:
+To use [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial) when joining your session hosts to Microsoft Entra ID, you need to [store profiles on Azure Files](create-profile-container-azure-ad.md) and your user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md). You must create these accounts in AD DS and synchronize them to Microsoft Entra ID. To learn more about deploying FSLogix Profile Container with different identity scenarios, see the following articles:
- [Set up FSLogix Profile Container with Azure Files and Active Directory Domain Services or Microsoft Entra Domain Services](fslogix-profile-container-configure-azure-files-active-directory.md). - [Set up FSLogix Profile Container with Azure Files and Microsoft Entra ID](create-profile-container-azure-ad.md).
To use [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial
### Deployment parameters
-You'll need to enter the following identity parameters when deploying session hosts:
+You need to enter the following identity parameters when deploying session hosts:
- Domain name, if using AD DS or Microsoft Entra Domain Services. - Credentials to join session hosts to the domain.
You have a choice of operating systems (OS) that you can use for session hosts t
|Operating system |User access rights| ||| |<ul><li>[Windows 11 Enterprise multi-session](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 11 Enterprise](/lifecycle/products/windows-11-enterprise-and-education)</li><li>[Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education)</li><li>[Windows 10 Enterprise](/lifecycle/products/windows-10-enterprise-and-education)</li><ul>|License entitlement:<ul><li>Microsoft 365 E3, E5, A3, A5, F3, Business Premium, Student Use Benefit</li><li>Windows Enterprise E3, E5</li><li>Windows VDA E3, E5</li><li>Windows Education A3, A5</li></ul>External users can use [per-user access pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/) by enrolling an Azure subscription instead of license entitlement.</li></ul>|
-|<ul><li>[Windows Server 2022](/lifecycle/products/windows-server-2022)</li><li>[Windows Server 2019](/lifecycle/products/windows-server-2019)</li><li>[Windows Server 2016](/lifecycle/products/windows-server-2016)</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing is not available for Windows Server operating systems.|
+|<ul><li>[Windows Server 2022](/lifecycle/products/windows-server-2022)</li><li>[Windows Server 2019](/lifecycle/products/windows-server-2019)</li><li>[Windows Server 2016](/lifecycle/products/windows-server-2016)</li></ul>|License entitlement:<ul><li>Remote Desktop Services (RDS) Client Access License (CAL) with Software Assurance (per-user or per-device), or RDS User Subscription Licenses.</li></ul>Per-user access pricing isn't available for Windows Server operating systems.|
> [!IMPORTANT] > - The following items are not supported:
You can deploy a virtual machines (VMs) to be used as session hosts from these i
- Manually by [adding session hosts to an existing host pool](add-session-hosts-host-pool.md?tabs=portal%2Cgui) in the Azure portal. - Programmatically, with [Azure CLI](add-session-hosts-host-pool.md?tabs=cli%2Ccmd) or [Azure PowerShell](add-session-hosts-host-pool.md?tabs=powershell%2Ccmd).
-If your license entitles you to use Azure Virtual Desktop, you don't need to install or apply a separate license, however if you're using per-user access pricing for external users, you will need to [enroll an Azure Subscription](remote-app-streaming/per-user-access-pricing.md). You will need to make sure the Windows license used on your session hosts is correctly assigned in Azure and the operating system is activated. For more information, see [Apply Windows license to session host virtual machines](apply-windows-license.md).
+If your license entitles you to use Azure Virtual Desktop, you don't need to install or apply a separate license, however if you're using per-user access pricing for external users, you need to [enroll an Azure Subscription](remote-app-streaming/per-user-access-pricing.md). You need to make sure the Windows license used on your session hosts is correctly assigned in Azure and the operating system is activated. For more information, see [Apply Windows license to session host virtual machines](apply-windows-license.md).
> [!TIP] > To simplify user access rights during initial development and testing, Azure Virtual Desktop supports [Azure Dev/Test pricing](https://azure.microsoft.com/pricing/dev-test/). If you deploy Azure Virtual Desktop in an Azure Dev/Test subscription, end users may connect to that deployment without separate license entitlement in order to perform acceptance tests or provide feedback. ## Network
-There are several network requirements you'll need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their desktops and applications while also giving them the best possible user experience.
+There are several network requirements you need to meet to successfully deploy Azure Virtual Desktop. This lets users connect to their desktops and applications while also giving them the best possible user experience.
Users connecting to Azure Virtual Desktop securely establish a reverse connection to the service, which means you don't need to open any inbound ports. Transmission Control Protocol (TCP) on port 443 is used by default; however, RDP Shortpath can be used for [managed networks](shortpath.md) and [public networks](shortpath-public.md), which establishes a direct User Datagram Protocol (UDP)-based transport.
-To successfully deploy Azure Virtual Desktop, you'll need to meet the following network requirements:
+To successfully deploy Azure Virtual Desktop, you need to meet the following network requirements:
-- You'll need a virtual network and subnet for your session hosts. If you create your session hosts at the same time as a host pool, you must create this virtual network in advance for it to appear in the drop-down list. Your virtual network must be in the same Azure region as the session host.
+- You need a virtual network and subnet for your session hosts. If you create your session hosts at the same time as a host pool, you must create this virtual network in advance for it to appear in the drop-down list. Your virtual network must be in the same Azure region as the session host.
-- Make sure this virtual network can connect to your domain controllers and relevant DNS servers if you're using AD DS or Microsoft Entra Domain Services, since you'll need to join session hosts to the domain.
+- Make sure this virtual network can connect to your domain controllers and relevant DNS servers if you're using AD DS or Microsoft Entra Domain Services, since you need to join session hosts to the domain.
- Your session hosts and users need to be able to connect to the Azure Virtual Desktop service. These connections also use TCP on port 443 to a specific list of URLs. For more information, see [Required URL list](safe-url-list.md). You must make sure these URLs aren't blocked by network filtering or a firewall in order for your deployment to work properly and be supported. If your users need to access Microsoft 365, make sure your session hosts can connect to [Microsoft 365 endpoints](/microsoft-365/enterprise/microsoft-365-endpoints). Also consider the following: -- Your users may need access to applications and data that is hosted on different networks, so make sure your session hosts can connect to them.
+- Your users might need access to applications and data that is hosted on different networks, so make sure your session hosts can connect to them.
- Round-trip time (RTT) latency from the client's network to the Azure region that contains the host pools should be less than 150 ms. Use the [Experience Estimator](https://azure.microsoft.com/services/virtual-desktop/assessment/) to view your connection health and recommended Azure region. To optimize for network performance, we recommend you create session hosts in the Azure region closest to your users.
Also consider the following:
- To help secure your Azure Virtual Desktop environment in Azure, we recommend you don't open inbound port 3389 on your session hosts. Azure Virtual Desktop doesn't require any inbound ports to be open. If you must open port 3389 for troubleshooting purposes, we recommend you use [just-in-time VM access](../security-center/security-center-just-in-time.md). We also recommend you don't assign a public IP address to your session hosts.
+To learn more, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
+ > [!NOTE] > To keep Azure Virtual Desktop reliable and scalable, we aggregate traffic patterns and usage to check the health and performance of the infrastructure control plane. We aggregate this information from all locations where the service infrastructure is, then send it to the US region. The data sent to the US region includes scrubbed data, but not customer data. For more information, see [Data locations for Azure Virtual Desktop](data-locations.md).
-To learn more, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md).
- ## Session host management
-Consider the following when managing session hosts:
+Consider the following points when managing session hosts:
-- Don't enable any policies or configurations that disable *Windows Installer*. If you disable Windows Installer, the service won't be able to install agent updates on your session hosts, and your session hosts won't function properly.
+- Don't enable any policies or configurations that disable *Windows Installer*. If you disable Windows Installer, the service can't install agent updates on your session hosts, and your session hosts won't function properly.
-- If you're joining session hosts to an AD DS domain and you want to manage them using [Intune](/mem/intune/fundamentals/what-is-intune), you'll need to configure [Microsoft Entra Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to enable [Microsoft Entra hybrid join](../active-directory/devices/hybrid-join-plan.md).
+- If you're joining session hosts to an AD DS domain and you want to manage them using [Intune](/mem/intune/fundamentals/what-is-intune), you need to configure [Microsoft Entra Connect](../active-directory/hybrid/whatis-azure-ad-connect.md) to enable [Microsoft Entra hybrid join](../active-directory/devices/hybrid-join-plan.md).
- If you're joining session hosts to a Microsoft Entra Domain Services domain, you can't manage them using [Intune](/mem/intune/fundamentals/what-is-intune). -- If you're using Microsoft Entra join with Windows Server for your session hosts, you can't enroll them in Intune as Windows Server is not supported with Intune. You'll need to use Microsoft Entra hybrid join and Group Policy from an Active Directory domain, or local Group Policy on each session host.
+- If you're using Microsoft Entra join with Windows Server for your session hosts, you can't enroll them in Intune as Windows Server isn't supported with Intune. You need to use Microsoft Entra hybrid join and Group Policy from an Active Directory domain, or local Group Policy on each session host.
+
+## Azure regions
+
+You can deploy session hosts in any Azure region to use with Azure Virtual Desktop. For host pools, workspaces, and application groups, you can deploy them in the following Azure regions:
+
+ :::column:::
+ - Australia East
+ - Canada Central
+ - Canada East
+ - Central India
+ - Central US
+ - East US
+ - East US 2
+ - Japan East
+ - North Central US
+ :::column-end:::
+ :::column:::
+ - North Europe
+ - South Central US
+ - UK South
+ - UK West
+ - West Central US
+ - West Europe
+ - West US
+ - West US 2
+ - West US 3
+ :::column-end:::
+
+This list of regions is where the *metadata* for the host pool can be stored. However, session hosts can be located in any Azure region, and on-premises when using [Azure Virtual Desktop on Azure Stack HCI](azure-stack-hci-overview.md). For more information about the types of data and locations, see [Data locations for Azure Virtual Desktop](data-locations.md).
+
+To learn more about the architecture and resilience of the Azure Virtual Desktop service, see [Azure Virtual Desktop service architecture and resilience](service-architecture-resilience.md).
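+
+If you're scripting deployments, you can also check these locations programmatically. Here's a minimal Azure CLI sketch that queries the resource provider manifest; the JMESPath projection is illustrative, and the output reflects whatever locations your subscription currently reports:
+
+```azurecli
+# List the Azure Virtual Desktop resource types and the locations
+# each one supports, as reported by the resource provider.
+az provider show \
+    --namespace Microsoft.DesktopVirtualization \
+    --query "resourceTypes[].{type:resourceType, locations:locations}" \
+    --output json
+```
+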
## Remote Desktop clients
-Your users will need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to desktops and applications. The following clients support Azure Virtual Desktop:
+Your users need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to desktops and applications. The following clients support Azure Virtual Desktop:
- [Windows Desktop client](./users/connect-windows.md) - [Azure Virtual Desktop Store app for Windows](./users/connect-windows-azure-virtual-desktop-app.md)
To learn which URLs clients use to connect and that you must allow through firew
## Next steps -- For a simple way to get started with Azure Virtual Desktop by creating a sample infrastructure, see [Tutorial: Try Azure Virtual Desktop with a Windows 11 desktop](tutorial-create-connect-personal-desktop.md).
+- For a simple way to get started with Azure Virtual Desktop by creating a sample infrastructure, see [Tutorial: Deploy a sample Azure Virtual Desktop infrastructure with a Windows 11 desktop](tutorial-create-connect-personal-desktop.md).
-- For a more in-depth and adaptable approach to deploying Azure Virtual Desktop, see [Create a host pool in Azure Virtual Desktop](create-host-pool.md).
+- For a more in-depth and adaptable approach to deploying Azure Virtual Desktop, see [Deploy Azure Virtual Desktop](create-host-pool.md).
virtual-desktop Tutorial Try Deploy Windows 11 Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/tutorial-try-deploy-windows-11-desktop.md
+
+ Title: "Tutorial: Try Azure Virtual Desktop with Windows 11"
+description: This tutorial shows you how to deploy Azure Virtual Desktop with a Windows 11 desktop in a sample infrastructure with the Azure portal.
+++ Last updated : 10/25/2023++
+# Tutorial: Deploy a sample Azure Virtual Desktop infrastructure with a Windows 11 desktop
+
+Azure Virtual Desktop enables you to access desktops and applications from virtually anywhere. This tutorial shows you how to deploy a *Windows 11 Enterprise* desktop in Azure Virtual Desktop using the Azure portal and how to connect to it. To learn more about the terminology used for Azure Virtual Desktop, see [Azure Virtual Desktop terminology](environment-setup.md) and [What is Azure Virtual Desktop?](overview.md)
+
+You'll deploy a sample infrastructure by:
+
+> [!div class="checklist"]
+> * Creating a personal host pool.
+> * Creating a session host virtual machine (VM) joined to your Microsoft Entra tenant with Windows 11 Enterprise and adding it to the host pool.
+> * Creating a workspace and an application group that publishes a desktop to the session host VM.
+> * Assigning users to the application group.
+> * Connecting to the desktop.
+
+> [!TIP]
+> This tutorial shows a simple way you can get started with Azure Virtual Desktop. It doesn't provide an in-depth guide of the different options and you can't publish a RemoteApp in addition to the desktop. For a more in-depth and adaptable approach to deploying Azure Virtual Desktop, see [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md), or for suggestions of what else you can configure, see the articles we list in [Next steps](#next-steps).
+
+## Prerequisites
+
+You need:
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+- The Azure account must be assigned the following built-in role-based access control (RBAC) roles as a minimum on the subscription, or on a resource group. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). If you want to assign the roles to a resource group, you need to create the resource group first.
+
+ | Resource type | RBAC role |
+ |--|--|
+ | Host pool, workspace, and application group | [Desktop Virtualization Contributor](rbac.md#desktop-virtualization-contributor) |
+ | Session hosts | [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md#virtual-machine-contributor) |
+
+  Alternatively, if you already have the [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner) RBAC role, you can create all of these resource types.
+
+- A [virtual network](../virtual-network/quick-create-portal.md) in the same Azure region you want to deploy your session hosts to.
+
+- A user account in Microsoft Entra ID you can use for connecting to the desktop. This account must be assigned the *Virtual Machine User Login* or *Virtual Machine Administrator Login* RBAC role on the subscription. Alternatively you can assign the role to the account on the session host VM or the resource group containing the VM after deployment.
+
+- A Remote Desktop client installed on your device to connect to the desktop. You can find a list of supported clients in [Remote Desktop clients for Azure Virtual Desktop](users/remote-desktop-clients-overview.md). Alternatively you can use the [Remote Desktop Web client](users/connect-web.md), which you can use through a supported web browser without installing any extra software.
+
+## Create a personal host pool, workspace, application group, and session host VM
+
+To create a personal host pool, workspace, application group, and session host VM running Windows 11:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. From the Azure Virtual Desktop overview page, select **Create a host pool**.
+
+1. On the **Basics** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | **Project details** | |
+ | Subscription | Select the subscription you want to deploy your host pool, session hosts, workspace, and application group in from the drop-down list. |
+ | Resource group | Select an existing resource group or select **Create new** and enter a name. |
+ | Host pool name | Enter a name for the host pool, for example **hp01**. |
+ | Location | Select the Azure region from the list where you want to create your host pool, workspace, and application group. |
+ | Validation environment | Select **No**. This setting enables your host pool to receive service updates before all other production host pools, but isn't needed for this tutorial.|
+    | Preferred app group type | Select **Desktop**. With this personal host pool, you publish a desktop, but you can't also add a RemoteApp application group for the same host pool to publish applications. See [Next steps](#next-steps) for more advanced scenarios. |
+ | **Host pool type** | |
+ | Host pool type | Select **Personal**. This means that end users have a dedicated assigned session host that they always connect to. Selecting **Personal** shows a new option for **Assignment type**. |
+ | Assignment type | Select **Automatic**. Automatic assignment means that a user automatically gets assigned the first available session host when they first sign in, which is then dedicated to that user. |
+
+ Once you've completed this tab, select **Next: Networking**.
+
+1. On the **Networking** tab, select **Enable public access from all networks**, which lets end users access the feed and session hosts securely over the public internet. Once you've completed this tab, select **Next: Virtual Machines**.
+
+1. On the **Virtual machines** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Add Azure virtual machines | Select **Yes**. This shows several new options. |
+ | Resource group | This automatically defaults to the resource group you chose your host pool to be in on the *Basics* tab. |
+ | Name prefix | Enter a name for your session hosts, for example **hp01-sh**.<br /><br />This name prefix is used as the prefix for your session host VMs. Each session host has a suffix of a hyphen and then a sequential number added to the end, for example **hp01-sh-0**.<br /><br />The prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. |
+ | Virtual machine location | Select the Azure region where you want to deploy your session host VMs. It must be the same region that your virtual network is in. |
+ | Availability options | Select **No infrastructure dependency required**. This means that your session host VMs aren't deployed in an availability set or in availability zones. |
+ | Security type | Select **Trusted launch virtual machines**. Leave the subsequent defaults of **Enable secure boot** and **Enable vTPM** checked, and **Integrity monitoring** unchecked. For more information, see [Trusted launch](security-guide.md#trusted-launch). |
+ | Image | Select **Windows 11 Enterprise, version 22H2**. |
+ | Virtual machine size | Accept the default SKU. If you want to use a different SKU, select **Change size**, then select from the list. |
+ | Number of VMs | Enter **1** as a minimum. You can deploy up to 400 session host VMs at this point if you wish, or you can add more separately.<br /><br />With a personal host pool, each session host can only be assigned to one user, so you need one session host for each user connecting to this host pool. Once you've completed this tutorial, you can create a pooled host pool, where multiple users can connect to the same session host. |
+ | OS disk type | Select **Premium SSD** for best performance. |
+ | Boot Diagnostics | Select **Enable with managed storage account (recommended)**. |
+ | **Network and security** | |
+ | Virtual network | Select your virtual network and subnet to connect session hosts to. |
+ | Network security group | Select **Basic**. |
+ | Public inbound ports | Select **No** as you don't need to open inbound ports to connect to Azure Virtual Desktop. Learn more at [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md). |
+ | **Domain to join** | |
+ | Select which directory you would like to join | Select **Microsoft Entra ID**. |
+ | Enroll VM with Intune | Select **No.** |
+ | **Virtual Machine Administrator account** | |
+ | Username | Enter a name to use as the local administrator account for these session host VMs. |
+ | Password | Enter a password for the local administrator account. |
+ | Confirm password | Reenter the password. |
+ | **Custom configuration** | |
+ | Custom configuration script URL | Leave this blank. |
+
+ Once you've completed this tab, select **Next: Workspace**.
+
+1. On the **Workspace** tab, complete the following information:
+
+ | Parameter | Value/Description |
+ |--|--|
+ | Register desktop app group | Select **Yes**. This registers the default desktop application group to the selected workspace. |
+ | To this workspace | Select **Create new** and enter a name, for example **ws01**. |
+
+ Once you've completed this tab, select **Next: Review + create**. You don't need to complete the other tabs.
+
+1. On the **Review + create** tab, ensure validation passes and review the information that is used during deployment. If validation doesn't pass, review the error message and check what you entered in each tab.
+
+1. Select **Create**. A host pool, workspace, application group, and session host are created. Once your deployment is complete, select **Go to resource** to go to the host pool overview.
+
+1. Finally, from the host pool overview, select **Session hosts** and verify the status of the session hosts is **Available**.
+
+## Assign users to the application group
+
+Once your host pool, workspace, application group, and session host VM(s) have been deployed, you need to assign users to the application group that was automatically created. After users are assigned to the application group, they'll automatically be assigned to an available session host VM because *Assignment type* was set to **Automatic** when the host pool was created.
+
+1. From the host pool overview, select **Application groups**.
+
+1. Select the application group from the list, for example **hp01-DAG**.
+
+1. From the application group overview, select **Assignments**.
+
+1. Select **+ Add**, then search for and select the user account you want to be assigned to this application group.
+
+1. Finish by selecting **Select**.
+
+## Enable connections from Remote Desktop clients
+
+> [!TIP]
+> This section is optional if you're going to use a Windows device to connect to Azure Virtual Desktop that is joined to the same Microsoft Entra tenant as your session host VMs and you're using the [Remote Desktop client for Windows](users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json).
+
+To enable connections from all of the Remote Desktop clients, you need to add an RDP property to your host pool configuration. A scripted alternative is shown after these steps.
+
+1. Go back to the host pool overview, then select **RDP Properties**.
+
+1. Select the **Advanced** tab.
+
+1. In the **RDP Properties** box, add `targetisaadjoined:i:1;` to the start of the text in the box.
+
+1. Select **Save**.
+
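+If you prefer to script this change, this is a minimal sketch using the *desktopvirtualization* Azure CLI extension. It assumes the extension's `--custom-rdp-property` parameter, which replaces the host pool's entire custom RDP property string, so include any other properties you already set:
+
+```azurecli
+# Allow connections from all Remote Desktop clients to Microsoft Entra
+# joined session hosts by setting the targetisaadjoined RDP property.
+az desktopvirtualization hostpool update \
+    --name hp01 \
+    --resource-group <ResourceGroupName> \
+    --custom-rdp-property "targetisaadjoined:i:1;"
+```
+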
+## Connect to the desktop
+
+You're ready to connect to the desktop. The desktop takes longer to load the first time as the profile is being created, but subsequent connections are quicker.
+
+> [!IMPORTANT]
+> Make sure the user account you're using to connect has been assigned the *Virtual Machine User Login* or *Virtual Machine Administrator Login* RBAC role on the subscription, session host VM, or the resource group containing the VM, as mentioned in the prerequisites; otherwise, you won't be able to connect.
+
+Select the relevant tab and follow the steps, depending on which Remote Desktop client you're using. We've only listed the steps here for Windows, Web, and macOS, but if you want to connect using one of our other Remote Desktop clients, see [Remote Desktop clients for Azure Virtual Desktop](users/remote-desktop-clients-overview.md).
+
+# [Windows](#tab/windows-client)
+
+1. Open the **Remote Desktop** app on your device.
+
+1. Select the three dots in the top right-hand corner, then select **Subscribe with URL**.
+
+1. In the **Email or Workspace URL** box, enter `https://rdweb.wvd.microsoft.com`. After a few seconds, the message **We found Workspaces at the following URLs** should be displayed.
+
+1. Select **Next**.
+
+1. Sign in with the user account you assigned to the application group. After a few seconds, the workspace should show with an icon named **SessionDesktop**.
+
+1. Double-click **SessionDesktop** to launch a desktop session. You need to enter the password for the user account again.
+
+# [Web](#tab/web-client)
+
+1. Open a [supported web browser](users/connect-web.md#prerequisites) and go to [**https://client.wvd.microsoft.com/arm/webclient**](https://client.wvd.microsoft.com/arm/webclient).
+
+1. Sign in with the user account you assigned to the application group. After a few seconds, the workspace should show with an icon named **SessionDesktop**.
+
+1. Select **SessionDesktop** to launch a desktop session.
+
+1. A prompt shows asking you for permission to **Access local resources**. You can also select whether you want to allow access to your microphone, clipboard, printer, and/or file transfer in the remote session. Once you've made your selections, select **Allow**. If you allowed access to the microphone and/or clipboard, an extra prompt shows requesting further confirmation.
+
+1. The **Log in** prompt shows with your username prepopulated. You need to enter the password for the user account again, then select **Submit**.
+
+# [macOS](#tab/macos-client)
+
+1. Open the **Microsoft Remote Desktop** app on your device.
+
+1. Select **Workspaces**.
+
+1. Select **+**, then select **Add Workspace**.
+
+1. In the **Email or Workspace URL** box, enter `https://rdweb.wvd.microsoft.com`. After a few seconds, the message **We found Workspaces at the following URLs** should be displayed.
+
+1. Select **Add**.
+
+1. Sign in with the user account you assigned to the application group. After a few seconds, the workspace should show with an icon named **SessionDesktop**.
+
+1. Double-click **SessionDesktop** to launch a desktop session. You need to enter the password for the user account again, then select **Continue**.
+++
+## Next steps
+
+Now that you've created and connected to a Windows 11 desktop with Azure Virtual Desktop there's much more you can do. For a more in-depth and adaptable approach to deploying Azure Virtual Desktop, see [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md), or for suggestions of what else you can configure, see:
+
+- [Add session hosts to a host pool](add-session-hosts-host-pool.md).
+
+- [Publish applications](manage-app-groups.md).
+
+- Manage user profiles using [FSLogix profile containers and Azure Files](create-profile-container-azure-ad.md).
+
+- [Understand network connectivity](network-connectivity.md).
+
+- Learn about [supported identities and authentication methods](authentication.md)
+
+- [Set up email discovery to subscribe to Azure Virtual Desktop](/windows-server/remote/remote-desktop-services/rds-email-discovery?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json).
+
+- [Configure single sign-on for Azure Virtual Desktop using Microsoft Entra authentication](configure-single-sign-on.md).
+
+- Learn about [session host virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json).
+
+- [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md).
+
+- [Monitor your deployment with Azure Virtual Desktop Insights](azure-monitor.md).
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Previously updated : 10/17/2023 Last updated : 10/26/2023 # Azure Virtual Machine Scale Set automatic OS image upgrades
The platform orchestrated updates process is followed for rolling out supported
### Upgrading VMs in a scale set

The region of a scale set becomes eligible to get image upgrades either through the availability-first process for platform images or replicating new custom image versions for Share Image Gallery. The image upgrade is then applied to an individual scale set in a batched manner as follows:
+
1. Before you begin the upgrade process, the orchestrator will ensure that no more than 20% of instances in the entire scale set are unhealthy (for any reason).
2. The upgrade orchestrator identifies the batch of VM instances to upgrade, with any one batch having a maximum of 20% of the total instance count, subject to a minimum batch size of one virtual machine. There is no minimum scale set size requirement and scale sets with 5 or fewer instances will have 1 VM per upgrade batch (minimum batch size).
3. The OS disk of every VM in the selected upgrade batch is replaced with a new OS disk created from the latest image. All specified extensions and configurations in the scale set model are applied to the upgraded instance.
To modify the default settings associated with Rolling Upgrades, review Azure's
> [!NOTE] >Automatic OS upgrade does not upgrade the reference image Sku on the scale set. To change the Sku (such as Ubuntu 18.04-LTS to 20.04-LTS), you must update the [scale set model](virtual-machine-scale-sets-upgrade-scale-set.md#the-scale-set-model) directly with the desired image Sku. Image publisher and offer can't be changed for an existing scale set.
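+
+For illustration, here's a hedged Azure CLI sketch of that model update; the SKU value shown is an assumed example, so substitute a SKU your image publisher actually offers:
+
+```azurecli
+# Update the image reference SKU in the scale set model. New and
+# upgraded instances are created from the new SKU.
+az vmss update \
+    --resource-group myResourceGroup \
+    --name myScaleSet \
+    --set virtualMachineProfile.storageProfile.imageReference.sku=20_04-lts
+```
+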
+## OS image upgrade versus reimage
+
+Both **OS Image Upgrade** and **[Reimage](/rest/api/compute/virtual-machine-scale-sets/reimage)** are methods used to update VMs within a scale set, but they serve different purposes and have distinct impacts.
+
+OS image upgrade involves updating the underlying operating system image that is used to create new instances in a scale set. When you perform an OS image upgrade, Azure creates new VM instances with the updated OS image and gradually replaces the old VM instances in the scale set with the new ones. This process is typically performed in stages to ensure high availability. OS image upgrades are a non-disruptive way to apply updates or changes to the underlying OS of the VMs in a scale set. Existing VM instances aren't affected until they're replaced with the new instances.
+
+Reimaging a VM instance in a scale set is a more immediate and disruptive action. When you choose to reimage a VM instance, Azure stops the selected VM instance, performs the reimage operation, and then restarts the VM using the same OS image. This effectively reinstalls the OS on that specific VM instance. Reimaging is typically used when you need to troubleshoot or reset a specific VM instance due to issues with that instance.
+
+**Key differences:**
+- OS Image Upgrade is a gradual and non-disruptive process that updates the OS image for the entire Virtual Machine Scale Set over time, ensuring minimal impact on running workloads.
+- Reimage is a more immediate and disruptive action that affects only the selected VM instance, stopping it temporarily and reinstalling the OS.
+
+**When to use each method:**
+- Use OS Image Upgrade when you want to update the OS image for the entire scale set while maintaining high availability.
+- Use Reimage when you need to troubleshoot or reset a specific VM instance within the Virtual Machine Scale Set.
+
+It's essential to carefully plan and choose the appropriate method based on your specific requirements to minimize any disruption to your applications and services running in a Virtual Machine Scale Set.
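+
+As a sketch of the reimage path, here's the Azure CLI command for a single instance; the resource names and instance ID are placeholders:
+
+```azurecli
+# Reimage one VM instance in the scale set. The instance is restarted
+# from its current OS image, so changes on the local OS disk are lost.
+az vmss reimage \
+    --resource-group myResourceGroup \
+    --name myScaleSet \
+    --instance-ids 0
+```
+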
+ ## Supported OS images Only certain OS platform images are currently supported. Custom images [are supported](virtual-machine-scale-sets-automatic-upgrade.md#automatic-os-image-upgrade-for-custom-images) if the scale set uses custom images through [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md).
az vmss rolling-upgrade start --resource-group "myResourceGroup" --name "myScale
## Investigate and Resolve Auto Upgrade Errors
-The platform can return errors on VMs while performing Automatic Image Upgrade with Rolling Upgrade policy. The [Get Instance View](/rest/api/compute/virtual-machine-scale-sets/get-instance-view) of a VM contains the detailed error message to investigate and resolve an error. The [Rolling Upgrades - Get Latest](/rest/api/compute/virtual-machine-scale-sets/get) can provide more details on rolling upgrade configuration and status. The [Get OS Upgrade History](/rest/api/compute/virtual-machine-scale-sets/get) provides details on the last image upgrade operation on the scale set. Below are the top most errors that can result in Rolling Upgrades.
+The platform can return errors on VMs while performing Automatic Image Upgrade with a Rolling Upgrade policy. The [Get Instance View](/rest/api/compute/virtual-machine-scale-sets/get-instance-view) of a VM contains the detailed error message you can use to investigate and resolve an error. [Rolling Upgrades - Get Latest](/rest/api/compute/virtual-machine-scale-sets/get) provides more details on rolling upgrade configuration and status, and [Get OS Upgrade History](/rest/api/compute/virtual-machine-scale-sets/get) provides details on the last image upgrade operation on the scale set. The following are the most common errors that can occur during rolling upgrades.
**RollingUpgradeInProgressWithFailedUpgradedVMs** - This error is triggered when a VM fails during the upgrade.
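The same details that these REST views expose can also be pulled with Azure CLI; a quick sketch with placeholder names:

```azurecli
# Instance view of a single VM, including any upgrade error details.
az vmss get-instance-view --resource-group myResourceGroup \
  --name myScaleSet --instance-id 0

# Configuration and status of the most recent rolling upgrade.
az vmss rolling-upgrade get-latest --resource-group myResourceGroup \
  --name myScaleSet
```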
The platform can return errors on VMs while performing Automatic Image Upgrade w
## Next steps > [!div class="nextstepaction"]
-> [Learn about the Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md)
+> [Learn about the Application Health Extension](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md)
virtual-machines Windows Desktop Multitenant Hosting Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/windows-desktop-multitenant-hosting-deployment.md
win11-22h2-pron Windows-11 MicrosoftWindowsDesktop westus
For more information on available images, see [Find and use Azure Marketplace VM images with Azure PowerShell](./cli-ps-findimage.md).
+> [!NOTE]
+> If you're upgrading to a newer version of Windows 11 with Trusted Launch enabled and you're currently on a Windows 11 version without Trusted Launch enabled, the VM needs to be deallocated before you proceed with the upgrade. For more information, see [Enabling Trusted Launch on existing Azure VMs](../../virtual-machines/trusted-launch-existing-vm.md).
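For example, deallocation can be done from Azure CLI before the upgrade (a sketch with placeholder names):

```azurecli
# Deallocate (not just stop) the VM so Trusted Launch can be enabled
# before upgrading to the newer Windows 11 version.
az vm deallocate --resource-group myResourceGroup --name myWin11Vm
```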
+ ## Uploading Windows 11 VHD to Azure If you're uploading a generalized Windows 11 VHD, note that Windows 11 doesn't have the built-in administrator account enabled by default. To enable the built-in administrator account, include the following command as part of the Custom Script extension.
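The command itself isn't included in this excerpt; a typical approach (an assumption, with placeholder names) is to run `net user administrator /active:yes` through the Custom Script extension:

```azurecli
# Assumed sketch: deploy the Custom Script extension to run the Windows
# command that activates the built-in administrator account.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myWin11Vm \
  --publisher Microsoft.Compute \
  --name CustomScriptExtension \
  --settings '{"commandToExecute": "net user administrator /active:yes"}'
```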
virtual-network-manager Concept Network Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-groups.md
In this article, you learn about *network groups* and how they can help you group virtual networks together for easier management. Also, you learn about *Static group membership* and *Dynamic group membership* and how to use each type of membership.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Network group
virtual-network-manager Concept Network Manager Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-manager-scope.md
In this article, you learn about how Azure Virtual Network Manager uses the concept of *scope* to enable management groups or subscriptions to use certain features of Virtual Network Manager. Also, you learn about *hierarchy* and how that can affect your users when using Virtual Network Manager.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Network Manager
virtual-network-manager Concept Remove Components Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-remove-components-checklist.md
In this article, you see a checklist of steps you need to complete to remove or update a configuration component of Azure Virtual Network Manager.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## <a name="remove"></a>Remove components checklist
virtual-network-manager Concept Security Admins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md
In this article, you'll learn about security admin rules in Azure Virtual Network Manager. Security admin rules are used to define global network security rules that apply to all virtual networks within a [network group](concept-network-groups.md). You learn about what security admin rules are, how they work, and when to use them.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## What is a security admin rule?
virtual-network-manager Concept Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-use-cases.md
Learn about use cases for Azure Virtual Network Manager including managing connectivity of virtual networks, and securing network traffic.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
virtual-network-manager Concept Virtual Network Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-virtual-network-flow-logs.md
Learn more about [VNet flow logs (Preview)](../network-watcher/vnet-flow-logs-ov
> [!IMPORTANT] > VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Enable VNet flow logs (Preview)
virtual-network-manager Create Virtual Network Manager Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-bicep.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
:::image type="content" source="media/create-virtual-network-manager-portal/virtual-network-manager-resources-diagram.png" alt-text="Diagram of resources deployed for a mesh virtual network topology with Azure virtual network manager.":::
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Bicep Template Modules
virtual-network-manager Create Virtual Network Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-cli.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
:::image type="content" source="media/create-virtual-network-manager-portal/virtual-network-manager-resources-diagram.png" alt-text="Diagram of resources deployed for a mesh virtual network topology with Azure virtual network manager.":::
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
:::image type="content" source="media/create-virtual-network-manager-portal/virtual-network-manager-resources-diagram.png" alt-text="Diagram of resources deployed for a mesh virtual network topology with Azure virtual network manager.":::
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager Create Virtual Network Manager Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-powershell.md
In this quickstart, you deploy three virtual networks and use Azure Virtual Netw
:::image type="content" source="media/create-virtual-network-manager-portal/virtual-network-manager-resources-diagram.png" alt-text="Diagram of resources deployed for a mesh virtual network topology with Azure virtual network manager.":::
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager Create Virtual Network Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-template.md
Get started with Azure Virtual Network Manager by using Azure Resource Manager t
In this quickstart, an Azure Resource Manager template is used to deploy Azure Virtual Network Manager with different connectivity topology and network group membership types. Use deployment parameters to specify the type of configuration to deploy.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
virtual-network-manager Create Virtual Network Manager Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-terraform.md
Get started with Azure Virtual Network Manager by using Terraform to provision c
In this quickstart, you deploy three virtual networks and use Azure Virtual Network Manager to create a mesh network topology. Then, you verify that the connectivity configuration was applied.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview.
->
-> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
[!INCLUDE [Terraform abstract](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)]
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
## General
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)..
### Which Azure regions support Azure Virtual Network Manager?
virtual-network-manager How To Block High Risk Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-high-risk-ports.md
In this article, you learn to block high risk network ports using [Azure Virtual
While this article focuses on a single port, SSH, you can protect any high-risk ports in your environment with the same steps. To learn more, review this list of [high-risk ports](concept-security-admins.md#protect-high-risk-ports).
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites - You understand how to create an [Azure Virtual Network Manager](./create-virtual-network-manager-portal.md)
virtual-network-manager How To Block Network Traffic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-network-traffic-portal.md
This article shows you how to create a security admin rule to block inbound network traffic on RDP port 3389 that you can add to a rule collection. For more information, see [Security admin rules](concept-security-admins.md).
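As a rough illustration only, such a rule can also be created with the `virtual-network-manager` Azure CLI extension; exact parameter names vary by extension version, and all resource names here are placeholders:

```azurecli
# Assumed sketch: a Deny rule for inbound TCP 3389 inside an existing
# security admin configuration and rule collection.
az network manager security-admin-config rule-collection rule create \
  --resource-group myResourceGroup \
  --network-manager-name myNetworkManager \
  --configuration-name mySecurityConfig \
  --rule-collection-name myRuleCollection \
  --rule-name DenyRdp \
  --kind Custom \
  --access Deny \
  --direction Inbound \
  --protocol Tcp \
  --priority 1 \
  --dest-port-ranges 3389
```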
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager How To Block Network Traffic Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-network-traffic-powershell.md
This article shows you how to create a security rule to block outbound network traffic to port 80 and 443 that you can add to your rule collections. For more information, see [Security admin rules](concept-security-admins.md).
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
In this article, you'll learn how to create [cross-tenant connections](concept-c
First, you'll create the scope connection on the central network manager. Then, you'll create the network manager connection on the connecting tenant and verify the connection. Last, you'll add virtual networks from different tenants and verify. After you complete all the tasks, you can centrally manage the resources of other tenants from your network manager.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager How To Create Mesh Network Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-mesh-network-powershell.md
In this article, you'll learn how to create a mesh network topology with Azure Virtual Network Manager using Azure PowerShell. With this configuration, all the virtual networks of the same region in the same network group can communicate with one another. You can enable cross region connectivity by enabling the global mesh setting in the connectivity configuration.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager How To Define Network Group Membership Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-define-network-group-membership-azure-policy.md
In this article, you learn how to use Azure Policy conditional statements to cre
[Azure Policy](../governance/policy/overview.md) is a service to enable you to enforce per-resource governance at scale. It can be used to specify conditional expressions that define group membership, as opposed to explicit lists of virtual networks. This condition continues to power your network groups dynamically, allowing virtual networks to join and leave the group automatically as their fulfillment of the condition changes, with no Network Manager operation required.
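As a hedged sketch of what such a conditional statement can look like, the policy rule below (placeholder IDs, assuming the `Microsoft.Network.Data` policy mode and the `addToNetworkGroup` effect used for network group membership) adds every virtual network whose name contains `prod` to a network group:

```azurecli
# Assumed sketch: define an Azure Policy that dynamically adds matching
# virtual networks to a network group. IDs are placeholders.
cat > rules.json <<'EOF'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
      { "field": "name", "contains": "prod" }
    ]
  },
  "then": {
    "effect": "addToNetworkGroup",
    "details": {
      "networkGroupId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/networkManagers/<manager>/networkGroups/<group>"
    }
  }
}
EOF
az policy definition create \
  --name "prod-vnets-to-network-group" \
  --mode Microsoft.Network.Data \
  --rules rules.json
```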
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Prerequisites
virtual-network-manager How To View Applied Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-view-applied-configurations.md
Azure Virtual Network Manager provides a few different ways for you to verify if configurations are being applied correctly. In this article, we'll look at how you can verify configurations applied both at virtual network and virtual machine level. We'll also go over operations you'll see in the activity log.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Virtual network visibility Effective network group membership and applied configurations can be viewed on the per virtual network level.
virtual-network-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/overview.md
Azure Virtual Network Manager is a management service that enables you to group, configure, deploy, and manage virtual networks globally across subscriptions. With Virtual Network Manager, you can define network groups to identify and logically segment your virtual networks. Then you can determine the connectivity and security configurations you want and apply them across all the selected virtual networks in network groups at once.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is now in General Availability for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in Public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## How does Azure Virtual Network Manager work?
virtual-network-manager Resource Manager Template Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/resource-manager-template-samples.md
For the JSON syntax and properties to use in templates, see [Microsoft.Network r
> [!IMPORTANT] > In cases where a template is deploying connectivity or security configurations, the template requires a custom deployment script to deploy the configuration. The script is located at the end of the ARM template, and it uses the `Microsoft.Resources/deploymentScripts` resource type. For more information on deployment scripts, review [Use deployment scripts in ARM templates](../azure-resource-manager/templates/deployment-script-template.md).
+ ## Samples

| Example | Description |
| -- | -- |
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
In this tutorial, you create a hub and spoke network topology using Azure Virtual Network Manager. You then deploy a virtual network gateway in the hub virtual network to allow resources in the spoke virtual networks to communicate with remote networks using VPN. Also, you configure a security configuration to block outbound network traffic to the internet on ports 80 and 443. Last, you verify that configurations were applied correctly by looking at the virtual network and virtual machine settings.
-> [!IMPORTANT]
-> Azure Virtual Network Manager is now in General Availability for Virtual Network Manager and hub and spoke connectivity configurations.
->
-> Mesh connectivity configurations and security admin rules remain in Public preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this tutorial, you learn how to:
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
The public IPv4 address used for the access is called the default outbound acces
## When is default outbound access provided?
-If you deploy a virtual machine in Azure and it doesn't have explicit outbound connectivity, it's assigned a default outbound access IP.
+If you deploy a virtual machine in Azure and it doesn't have explicit outbound connectivity, it's assigned a default outbound access IP. The following diagram shows the logic used to decide which outbound method applies, with default outbound access as the last resort.
:::image type="content" source="./media/default-outbound-access/decision-tree-load-balancer.svg" alt-text="Diagram of decision tree for default outbound access.":::
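One way to avoid falling through to default outbound access is to give the subnet an explicit outbound method; a minimal sketch with placeholder names, using a NAT gateway:

```azurecli
# Create a public IP and NAT gateway, then attach the NAT gateway to the
# subnet so its VMs get explicit (not default) outbound connectivity.
az network public-ip create --resource-group myResourceGroup \
  --name myNatIp --sku Standard
az network nat gateway create --resource-group myResourceGroup \
  --name myNatGateway --public-ip-addresses myNatIp
az network vnet subnet update --resource-group myResourceGroup \
  --vnet-name myVnet --name mySubnet --nat-gateway myNatGateway
```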
vpn-gateway Gateway Sku Change https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-change.md
+
+Title: 'Change a gateway SKU'
+description: Learn how to change a gateway SKU.
+Last updated: 10/24/2023
+
+# Change a gateway SKU
+
+This article helps you change a VPN Gateway virtual network gateway SKU. Before you begin the workflow to change your SKU, check the table in the [Considerations](#considerations) section of this article to see whether you can [resize](gateway-sku-resize.md) your SKU instead. If resizing is an option, use that method rather than changing the SKU.
++
+> [!NOTE]
+> The steps in this article apply to current Resource Manager deployments and not to legacy classic (service management) deployments.
+
+## Considerations
+
+There are a number of things to consider when moving to a new gateway SKU. This section outlines the main items and also provides a table that helps you select the best method to use.
++
+The following table helps you understand the required method to move from one SKU to another.
++
+## Workflow
+
+The following steps illustrate the workflow to change a SKU.
+
+1. Remove any connections to the virtual network gateway.
+1. Delete the old VPN gateway.
+1. Create the new VPN gateway.
+1. Update your on-premises VPN devices with the new VPN gateway IP address (for site-to-site connections).
+1. Update the gateway IP address value for any VNet-to-VNet local network gateways that connect to this gateway.
+1. Download new client VPN configuration packages for point-to-site clients connecting to the virtual network through this VPN gateway.
+1. Recreate the connections to the virtual network gateway.
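For the delete-and-recreate steps, a minimal Azure CLI sketch (placeholder names; your gateway settings will differ):

```azurecli
# Remove the connection, delete the old gateway, then create the new
# gateway with the desired SKU in the same virtual network.
az network vpn-connection delete --resource-group myResourceGroup \
  --name mySiteToSiteConnection
az network vnet-gateway delete --resource-group myResourceGroup \
  --name myOldGateway
az network vnet-gateway create --resource-group myResourceGroup \
  --name myNewGateway --vnet myVnet --public-ip-address myGatewayIp \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw2
```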
+
+## Next steps
+
+For more information about SKUs, see [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md).
vpn-gateway Gateway Sku Resize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/gateway-sku-resize.md
description: Learn how to resize a gateway SKU.
Previously updated: 10/20/2023 | Last updated: 10/25/2023
-# Resize a gateway SKU for VPN Gateway
+# Resize a gateway SKU
-This article helps you resize a gateway SKU. Resizing a gateway SKU is a relatively fast process. You don't need to delete and recreate your existing VPN gateway to resize. However, there are certain limitations and restrictions for resizing and not all SKUs are available when resizing.
+This article helps you resize a VPN Gateway virtual network gateway SKU. Resizing a gateway SKU is a relatively fast process. You don't need to delete and recreate your existing VPN gateway to resize. However, there are certain limitations and restrictions for resizing and not all SKUs are available when resizing.
+ When using the portal to resize your SKU, notice that the dropdown list of available SKUs is based on the SKU you currently have. If the SKU you want to resize to isn't listed, you can't resize; instead, you have to change to a new SKU. For more information, see [About VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md).
+> [!NOTE]
+> The steps in this article apply to current Resource Manager deployments and not to legacy classic (service management) deployments.
+
+## Considerations
+
+There are a number of things to consider when moving to a new gateway SKU. This section outlines the main items and also provides a table that helps you select the best method to use.
++
+The following table helps you understand the required method to move from one SKU to another.
++ ## Resize a SKU
+Resizing a SKU takes about 45 minutes to complete.
+ 1. Go to the **Configuration** page for your virtual network gateway.
1. On the right side of the page, click the dropdown arrow to show a list of available SKUs. The options listed are based on the starting SKU and SKU Generation. :::image type="content" source="./media/gateway-sku-resize/resize-sku.png" alt-text="Screenshot showing how to resize the gateway SKU." lightbox ="./media/gateway-sku-resize/resize-sku.png":::
1. Select the SKU from the dropdown.
1. **Save** your changes.
+1. Wait for the resize to complete before making further changes to the gateway.
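Resizing can also be done outside the portal; a sketch with placeholder names, updating the SKU in place (only resize-compatible SKUs succeed):

```azurecli
# Resize the gateway by updating its SKU; no delete/recreate is needed.
az network vnet-gateway update --resource-group myResourceGroup \
  --name myGateway --sku VpnGw2
```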
## Next steps
-For more information about SKUs, see [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md).
+For more information about SKUs, see [VPN Gateway settings](vpn-gateway-about-vpn-gateway-settings.md).