Updates from: 11/09/2024 02:05:40
Service Microsoft Docs article Related commit history on GitHub Change details
api-center Enable Managed Api Analysis Linting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-managed-api-analysis-linting.md
Title: Managed API linting and analysis - Azure API Center
description: Enable managed linting of API definitions in your API center to analyze compliance of APIs with the organization's API style guide. Previously updated : 08/23/2024 Last updated : 11/01/2024
In this scenario:
* Currently, only OpenAPI specification documents in JSON or YAML format are analyzed.
* By default, you enable analysis with the [`spectral:oas` ruleset](https://docs.stoplight.io/docs/spectral/4dec24461f3af-open-api-rules). To learn more about the built-in rules, see the [Spectral GitHub repo](https://github.com/stoplightio/spectral/blob/develop/docs/reference/openapi-rules.md). (A local linting sketch follows this list.)
* Currently, you configure a single ruleset, and it's applied to all OpenAPI definitions in your API center.
-* The following are limits for maximum number of API definitions linted per 4 hours:
- * Free tier: 10
- * Standard tier: 100
+* There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/api-center/toc.json&bc=/azure/api-center/breadcrumb/toc.json#api-center-limits) for the maximum number of API definitions analyzed. Analysis can take a few minutes to up to 24 hours to complete.
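If you want to preview the kinds of issues the built-in ruleset reports before relying on managed analysis, one option is to run the open-source Spectral CLI locally against the same definition. This is a minimal sketch, assuming Node.js is installed; the file names `openapi.json` and `.spectral.yaml` are placeholders, and local results may not match the managed report exactly.

```bash
# Install the Spectral CLI (ships as an npm package).
npm install -g @stoplight/spectral-cli

# Create a ruleset that extends the same built-in OpenAPI rules.
cat > .spectral.yaml <<'EOF'
extends: ["spectral:oas"]
EOF

# Lint a local OpenAPI definition with that ruleset.
spectral lint openapi.json --ruleset .spectral.yaml
```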
## Prerequisites
api-center Synchronize Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/synchronize-api-management-apis.md
When you link an API Management instance as an API source, the following happens
API Management APIs automatically synchronize to the API center whenever existing APIs' settings change (for example, new versions are added), new APIs are created, or APIs are deleted. This synchronization is one-way from API Management to your Azure API center, meaning API updates in the API center aren't synchronized back to the API Management instance.

> [!NOTE]
-> * API updates in API Management can take a few minutes to synchronize to your API center.
> * There are [limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=/azure/api-center/toc.json&bc=/azure/api-center/breadcrumb/toc.json#api-center-limits) for the number of linked API Management instances (API sources).
+> * API updates in API Management can take a few minutes to up to 24 hours to synchronize to your API center.
### Entities synchronized from API Management
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Previously updated : 01/31/2023 Last updated : 09/06/2024
Backup-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $api
Backup is a long-running operation that may take several minutes to complete. During this time, the API gateway continues to handle requests, but the state of the service is Updating.
+### [CLI](#tab/cli)
+
+[Sign in](/cli/azure/authenticate-azure-cli) with Azure CLI.
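For reference, a minimal sign-in sequence looks like the following; the subscription value is a placeholder.

```azurecli-interactive
az login
az account set --subscription "<subscription-name-or-id>"
```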
+
+In the following examples:
+
+* An API Management instance named *myapim* is in resource group *apimresourcegroup*.
+* A storage account named *backupstorageaccount* is in resource group *storageresourcegroup*. The storage account has a container named *backups*.
+* A backup blob will be created with name *ContosoBackup.apimbackup*.
+
+Set variables in Bash:
+
+```azurecli-interactive
+apiManagementName="myapim"
+apiManagementResourceGroup="apimresourcegroup"
+storageAccountName="backupstorageaccount"
+storageResourceGroup="storageresourcegroup"
+containerName="backups"
+backupName="ContosoBackup.apimbackup"
+```
+
+### Access using storage access key
+
+```azurecli-interactive
+storageKey=$(az storage account keys list --resource-group $storageResourceGroup --account-name $storageAccountName --query [0].value --output tsv)
+
+az apim backup --resource-group $apiManagementResourceGroup --name $apiManagementName \
+ --storage-account-name $storageAccountName --storage-account-key $storageKey --storage-account-container $containerName --backup-name $backupName
+```
+
+Backup is a long-running operation that may take several minutes to complete. During this time, the API gateway continues to handle requests, but the state of the service is Updating.
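If you want to watch for completion from the same shell, one option is to poll the service state, which, as noted above, reads *Updating* while the backup runs. This is an illustrative check using the variables set earlier; the `provisioningState` property queried here is an assumption and may differ by CLI version.

```azurecli-interactive
az apim show --resource-group $apiManagementResourceGroup --name $apiManagementName \
    --query provisioningState --output tsv
```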
+ ### [REST](#tab/rest)

See [Azure REST API reference](/rest/api/azure/) for information about authenticating and calling Azure REST APIs.
Restore-AzApiManagement -ResourceGroupName $apiManagementResourceGroup -Name $ap
Restore is a long-running operation that may take 45 minutes or more to complete.
+### [CLI](#tab/cli)
+
+In the following examples:
+
+* An API Management instance named *myapim* is restored from the backup blob named *ContosoBackup.apimbackup* in storage account *backupstorageaccount*.
+* The backup blob is in a container named *backups*.
+
+Set variables in Bash:
+
+```azurecli-interactive
+apiManagementName="myapim"
+apiManagementResourceGroup="apimresourcegroup"
+storageAccountName="backupstorageaccount"
+storageResourceGroup="storageresourcegroup"
+containerName="backups"
+backupName="ContosoBackup.apimbackup"
+```
+
+### Access using storage access key
+
+```azurecli-interactive
+storageKey=$(az storage account keys list --resource-group $storageResourceGroup --account-name $storageAccountName --query [0].value --output tsv)
+
+az apim restore --resource-group $apiManagementResourceGroup --name $apiManagementName \
+ --storage-account-name $storageAccountName --storage-account-key $storageKey --storage-account-container $containerName --backup-name $backupName
+```
+
+Restore is a long-running operation that may take 45 minutes or more to complete.
+ ### [REST](#tab/rest)

To restore an API Management service from a previously created backup, make the following HTTP request:
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Because subnet size can't be changed after assignment, use a subnet that's large
With multi plan subnet join (MPSJ), you can join multiple App Service plans into the same subnet. All App Service plans must be in the same subscription, but the virtual network/subnet can be in a different subscription. Each instance from each App Service plan requires an IP address from the subnet, and MPSJ requires a subnet size of at least `/26`. If you plan to join many plans or large-scale plans, you should plan for larger subnet ranges.
+> [!IMPORTANT]
+> Due to a known bug, MPSJ fails if multiple sites are created and attempt to integrate with the virtual network at the same time. A fix will be deployed soon. In the meantime, you can work around the issue with either of the following methods:
+> * If you create sites manually, create and integrate the sites one by one (see the CLI sketch after this note).
+> * If you create sites programmatically, for example using Terraform or ARM templates, add a [dependsOn](/azure/azure-resource-manager/templates/resource-dependency#dependson) element to each site in your templates (for all but the first site) so that each site depends on the creation of the previous one. This creates a delay between site creation and virtual network integration for each site, so the deployment isn't blocked by the known bug. For more information, see [Define the order for deploying resources in ARM templates](/azure/azure-resource-manager/templates/resource-dependency).
+>
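As an illustration of the first workaround, the following Azure CLI sketch creates each app and completes its virtual network integration before moving on to the next one; the resource group, plan, network, and app names are placeholders.

```azurecli-interactive
resourceGroup="my-resource-group"
plan="my-app-service-plan"
vnet="my-vnet"
subnet="integration-subnet"

for app in contoso-site-1 contoso-site-2 contoso-site-3; do
    # Create the app on the shared App Service plan.
    az webapp create --resource-group $resourceGroup --plan $plan --name $app

    # Integrate the app with the subnet before creating the next app.
    az webapp vnet-integration add --resource-group $resourceGroup --name $app \
        --vnet $vnet --subnet $subnet
done
```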
+ ### Windows Containers specific limits

Windows Containers uses an extra IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have, for example, 10 Windows Container App Service plan instances with four apps running, you need 50 IP addresses (10 instance addresses plus 10 x 4 per-app addresses) and extra addresses to support horizontal (in/out) scale.
application-gateway Alb Controller Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/alb-controller-release-notes.md
Previously updated : 5/9/2024 Last updated : 11/7/2024
Instructions for new or existing deployments of ALB Controller are found in the
| ALB Controller Version | Gateway API Version | Kubernetes Version | Release Notes |
| - | - | - | - |
-| 1.2.3| v1.1 | v1.26, v1.27, v1.28, v1.29, v1.30 | Gateway API v1.1, gRPC support, frontend mutual authentication, readiness probe fixes, custom health probe port and TLS mode |
+| 1.3.7| v1.1 | v1.26, v1.27, v1.28, v1.29, v1.30 | Minor fixes and improvements |
## Release history

| ALB Controller Version | Gateway API Version | Kubernetes Version | Release Notes |
| - | - | - | - |
+| 1.2.3| v1.1 | v1.26, v1.27, v1.28, v1.29, v1.30 | Gateway API v1.1, gRPC support, frontend mutual authentication, readiness probe fixes, custom health probe port and TLS mode |
| 1.0.2| v1 | v1.26, v1.27, v1.28, v1.29 | ECDSA + RSA certificate support for both Ingress and Gateway API, Ingress fixes, Server-sent events support |
| 1.0.0| v1 | v1.26, v1.27, v1.28 | General Availability! URL redirect for both Gateway and Ingress API, v1beta1 -> v1 of Gateway API, quality improvements<br/>Breaking Changes: TLS Policy for Gateway API [PolicyTargetReference](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io%2fv1alpha2.PolicyTargetReferenceWithSectionName)<br/>Listener is now referred to as [SectionName](https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.SectionName)<br/>Fixes: Request timeout of 3 seconds, [HealthCheckPolicy interval](https://github.com/Azure/AKS/issues/4086), [pod crash for missing API fields](https://github.com/Azure/AKS/issues/4087) |
| 0.6.3 | v1beta1 | v1.25 | Hotfix to address handling of Application Gateway for Containers frontends during controller restart in managed scenario |
application-gateway How To Backend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-backend-mtls-gateway-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
See the following figure:
Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate backend mutual authentication (mTLS).

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/end-to-end-ssl-with-backend-mtls/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/end-to-end-ssl-with-backend-mtls/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To End To End Tls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-end-to-end-tls-gateway-api.md
+
+ Title: End-to-end TLS Azure Application Gateway for Containers - Gateway API
+description: Learn how to encrypt traffic to and from Application Gateway for Containers using Gateway API.
++++ Last updated : 11/5/2024+++
+# End-to-end TLS with Application Gateway for Containers - Gateway API
+
+This document helps set up an example application that uses the following resources from Gateway API. Steps are provided to:
+
+- Create a [Gateway](https://gateway-api.sigs.k8s.io/concepts/api-overview/#gateway) resource with one HTTPS listener.
+- Create an [HTTPRoute](https://gateway-api.sigs.k8s.io/api-types/httproute) that references a backend service.
+
+## Background
+
+Application Gateway for Containers enables end-to-end TLS for improved privacy and security. In this design, traffic between the client and the Application Gateway for Containers frontend is encrypted, and traffic proxied from Application Gateway for Containers to the backend target is also encrypted. See the following example scenario:
+
+![A figure showing end-to-end TLS with Application Gateway for Containers.](./media/how-to-end-to-end-tls-gateway-api/e2e-tls.png)
+
+## Prerequisites
+
+1. If following the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure that you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTPS application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.
+
+ ```bash
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/end-to-end-tls/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - one service called `https-app` in the `test-infra` namespace
+ - one deployment called `https-app` in the `test-infra` namespace
+ - one configmap called `https-app-cm` in the `test-infra` namespace
+ - one secret called `contoso.com` in the `test-infra` namespace
+ - one secret called `contoso.xyz` in the `test-infra` namespace
+
+## Deploy the required Gateway API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+1. Create a Gateway
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+ alb.networking.azure.io/alb-name: alb-test
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: contoso.com
+ EOF
+ ```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+
+1. Set the following environment variables
+
+ ```bash
+ RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+ RESOURCE_NAME='alb-test'
+
+ RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+ FRONTEND_NAME='frontend'
+ ```
+
+2. Create a Gateway
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: gateway.networking.k8s.io/v1
+ kind: Gateway
+ metadata:
+ name: gateway-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ spec:
+ gatewayClassName: azure-alb-external
+ listeners:
+ - name: https-listener
+ port: 443
+ protocol: HTTPS
+ allowedRoutes:
+ namespaces:
+ from: Same
+ tls:
+ mode: Terminate
+ certificateRefs:
+ - kind : Secret
+ group: ""
+ name: contoso.com
+ addresses:
+ - type: alb.networking.azure.io/alb-frontend
+ value: $FRONTEND_NAME
+ EOF
+ ```
+++
+When the gateway resource is created, ensure the status is valid, the listener is _Programmed_, and an address is assigned to the gateway.
+
+```bash
+kubectl get gateway gateway-01 -n test-infra -o yaml
+```
+
+Example output of successful gateway creation.
+
+```yaml
+status:
+ addresses:
+ - type: Hostname
+ value: xxxx.yyyy.alb.azure.com
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Valid Gateway
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ listeners:
+ - attachedRoutes: 0
+ conditions:
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Listener is accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T21:04:55Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ name: https-listener
+ supportedKinds:
+ - group: gateway.networking.k8s.io
+ kind: HTTPRoute
+```
+
+Once the gateway is created, create an HTTPRoute resource.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: gateway.networking.k8s.io/v1
+kind: HTTPRoute
+metadata:
+ name: https-route
+ namespace: test-infra
+spec:
+ parentRefs:
+ - name: gateway-01
+ rules:
+ - backendRefs:
+ - name: https-app
+ port: 443
+EOF
+```
+
+Once the HTTPRoute resource is created, ensure the route is _Accepted_ and the Application Gateway for Containers resource is _Programmed_.
+
+```bash
+kubectl get httproute https-route -n test-infra -o yaml
+```
+
+Verify the Application Gateway for Containers resource is successfully updated.
+
+```yaml
+status:
+ parents:
+ - conditions:
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: ""
+ observedGeneration: 1
+ reason: ResolvedRefs
+ status: "True"
+ type: ResolvedRefs
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Route is Accepted
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+ - lastTransitionTime: "2023-06-19T22:18:23Z"
+ message: Application Gateway For Containers resource has been successfully updated.
+ observedGeneration: 1
+ reason: Programmed
+ status: "True"
+ type: Programmed
+ controllerName: alb.networking.azure.io/alb-controller
+ parentRef:
+ group: gateway.networking.k8s.io
+ kind: Gateway
+ name: gateway-01
+ namespace: test-infra
+ ```
+
+Create a BackendTLSPolicy
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: BackendTLSPolicy
+metadata:
+ name: https-app-tls-policy
+ namespace: test-infra
+spec:
+ targetRef:
+ group: ""
+ kind: Service
+ name: https-app
+ namespace: test-infra
+ default:
+ sni: contoso.xyz
+ ports:
+ - port: 443
+EOF
+```
+
+Once the BackendTLSPolicy object is created, check the status on the object to ensure that the policy is valid:
+
+```bash
+kubectl get backendtlspolicy -n test-infra https-app-tls-policy -o yaml
+```
+
+Example output of valid BackendTLSPolicy object creation:
+
+```yaml
+status:
+ conditions:
+ - lastTransitionTime: "2023-06-29T16:54:42Z"
+ message: Valid BackendTLSPolicy
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get gateway gateway-01 -n test-infra -o jsonpath='{.status.addresses[0].value}')
+```
+
+Curling this FQDN should return responses from the backend as configured on the HTTPRoute.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:443:$fqdnIp https://contoso.com
+```
+
+The following result should be present:
+
+```
+Hello world!
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application, and routed traffic to the application via the gateway on Application Gateway for Containers.
application-gateway How To End To End Tls Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-end-to-end-tls-ingress-api.md
+
+ Title: End-to-end TLS Azure Application Gateway for Containers - Ingress API
+description: Learn how to encrypt traffic to and from Application Gateway for Containers using Ingress API.
++++ Last updated : 11/5/2024+++
+# End-to-end TLS with Application Gateway for Containers - Ingress API
+
+This document helps set up an example application that uses the _Ingress_ resource from [Ingress API](https://kubernetes.io/docs/concepts/services-networking/ingress/).
+
+## Background
+
+Application Gateway for Containers enables end-to-end TLS for improved privacy and security. In this design, traffic between the client and the Application Gateway for Containers frontend is encrypted, and traffic proxied from Application Gateway for Containers to the backend target is also encrypted. See the following example scenario:
+
+![A figure showing end-to-end TLS with Application Gateway for Containers.](./media/how-to-end-to-end-tls-ingress-api/e2e-tls.png)
+
+## Prerequisites
+
+1. If following the BYO deployment strategy, ensure that you set up your Application Gateway for Containers resources and [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md)
+2. If following the ALB managed deployment strategy, ensure that you provision your [ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md) and the Application Gateway for Containers resources via the [ApplicationLoadBalancer custom resource](quickstart-create-application-gateway-for-containers-managed-by-alb-controller.md).
+3. Deploy sample HTTPS application
+ Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.
+
+ ```bash
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/end-to-end-tls/deployment.yaml
+ ```
+
+ This command creates the following on your cluster:
+ - a namespace called `test-infra`
+ - one service called `https-app` in the `test-infra` namespace
+ - one deployment called `https-app` in the `test-infra` namespace
+ - one configmap called `https-app-cm` in the `test-infra` namespace
+ - one secret called `contoso.com` in the `test-infra` namespace
+ - one secret called `contoso.xyz` in the `test-infra` namespace
+
+## Deploy the required Ingress API resources
+
+# [ALB managed deployment](#tab/alb-managed)
+
+1. Create an Ingress
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-name: alb-test
+ alb.networking.azure.io/alb-namespace: alb-test-infra
+spec:
+ ingressClassName: azure-alb-external
+ tls:
+ - hosts:
+ - contoso.com
+ secretName: contoso.com
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: https-app
+ port:
+ number: 443
+EOF
+```
++
+# [Bring your own (BYO) deployment](#tab/byo)
+
+1. Set the following environment variables
+
+```bash
+RESOURCE_GROUP='<resource group name of the Application Gateway For Containers resource>'
+RESOURCE_NAME='alb-test'
+
+RESOURCE_ID=$(az network alb show --resource-group $RESOURCE_GROUP --name $RESOURCE_NAME --query id -o tsv)
+FRONTEND_NAME='frontend'
+```
+
+2. Create an Ingress resource.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: ingress-01
+ namespace: test-infra
+ annotations:
+ alb.networking.azure.io/alb-id: $RESOURCE_ID
+ alb.networking.azure.io/alb-frontend: $FRONTEND_NAME
+spec:
+ ingressClassName: azure-alb-external
+ tls:
+ - hosts:
+ - contoso.com
+ secretName: contoso.com
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - path: /
+ pathType: Prefix
+ backend:
+ service:
+ name: https-app
+ port:
+ number: 443
+EOF
+```
+++
+When the ingress resource is created, ensure the status shows the hostname of your load balancer and that both ports are listening for requests.
+
+```bash
+kubectl get ingress ingress-01 -n test-infra -o yaml
+```
+
+Example output of successful Ingress creation.
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ annotations:
+ alb.networking.azure.io/alb-frontend: FRONTEND_NAME
+ alb.networking.azure.io/alb-id: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
+:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"contoso.com","http":{"paths":[{"backend":{"service":{"name":"https-app","port":{"number":443}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["contoso.com"],"secretName":"contoso.com"}]}}
+ creationTimestamp: "2023-07-22T18:02:13Z"
+ generation: 2
+ name: ingress-01
+ namespace: test-infra
+ resourceVersion: "278238"
+ uid: 17c34774-1d92-413e-85ec-c5a8da45989d
+spec:
+ ingressClassName: azure-alb-external
+ rules:
+ - host: contoso.com
+ http:
+ paths:
+ - backend:
+ service:
+ name: https-app
+ port:
+ number: 443
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - contoso.com
+ secretName: contoso.com
+status:
+ loadBalancer:
+ ingress:
+ - hostname: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.fzyy.alb.azure.com
+ ports:
+ - port: 443
+ protocol: TCP
+```
+
+Create an IngressExtension to configure Application Gateway for Containers to initiate connections to the backend server over TLS.
+
+```bash
+kubectl apply -f - <<EOF
+apiVersion: alb.networking.azure.io/v1
+kind: IngressExtension
+metadata:
+ name: https-ingress
+ namespace: test-infra
+spec:
+ backendSettings:
+ - service: https-app
+ ports:
+ - port: 443
+ protocol: HTTPS
+ trustedRootCertificate: contoso.xyz
+EOF
+```
+
+Once the IngressExtension resource is created, check the status on the object to ensure that the policy is valid:
+
+```bash
+kubectl get IngressExtension https-ingress -n test-infra -o yaml
+```
+
+Example output of valid IngressExtension object creation:
+
+```yaml
+status:
+ conditions:
+ - lastTransitionTime: "2023-06-29T16:54:42Z"
+ message: Valid IngressExtension
+ observedGeneration: 1
+ reason: Accepted
+ status: "True"
+ type: Accepted
+```
+
+## Test access to the application
+
+Now we're ready to send some traffic to our sample application, via the FQDN assigned to the frontend. Use the following command to get the FQDN.
+
+```bash
+fqdn=$(kubectl get ingress ingress-01 -n test-infra -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+```
+
+Curling this FQDN should return responses from the backend as configured on the Ingress.
+
+```bash
+fqdnIp=$(dig +short $fqdn)
+curl -k --resolve contoso.com:443:$fqdnIp https://contoso.com
+```
+
+The following result should be present:
+
+```
+Hello world!
+```
+
+Congratulations, you have installed ALB Controller, deployed a backend application and routed traffic to the application via the ingress on Application Gateway for Containers.
application-gateway How To Frontend Mtls Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-frontend-mtls-gateway-api.md
Previously updated : 9/18/2024 Last updated : 11/5/2024
The revoked client certificate flow shows a client presenting a revoked certific
Apply the following deployment.yaml file on your cluster to create a sample web application and deploy sample secrets to demonstrate frontend mutual authentication (mTLS).

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Header Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-gateway-api.md
description: Learn how to rewrite headers in Gateway API for Application Gateway
- Previously updated : 5/9/2024+ Last updated : 11/5/2024
The following figure illustrates a request with a specific user agent being rewr
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate the header rewrite.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Header Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-ingress-api.md
description: Learn how to rewrite headers in Ingress API for Application Gateway
- Previously updated : 5/9/2024+ Last updated : 11/5/2024
The following figure illustrates an example of a request with a specific user ag
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate the header rewrite.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
kind: Ingress
metadata: annotations: alb.networking.azure.io/alb-frontend: FRONTEND_NAME
- alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ alb.networking.azure.io/alb-id: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz", "alb.networking.azure.io/alb-ingress-extension":"header-rewrite"},"name"
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz", "alb.networking.azure.io/alb-ingress-extension":"header-rewrite"},"name"
:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"contoso.com","http":{"paths":[{"backend":{"service":{"name":"backend-v1","port":{"number":8080}}},"path":"/","pathType":"Prefix"}]}}]}} creationTimestamp: "2023-07-22T18:02:13Z" generation: 2
application-gateway How To Multiple Site Hosting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-gateway-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
Application Gateway for Containers enables multi-site hosting by allowing you to
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Multiple Site Hosting Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-multiple-site-hosting-ingress-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
Application Gateway for Containers enables multi-site hosting by allowing you to
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
kind: Ingress
metadata: annotations: alb.networking.azure.io/alb-frontend: FRONTEND_NAME
- alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ alb.networking.azure.io/alb-id: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"example.com","http":{"paths":[{"backend":{"service":{"name":"echo","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["example.com"],"secretName":"listener-tls-secret"}]}} creationTimestamp: "2023-07-22T18:02:13Z" generation: 2
application-gateway How To Path Header Query String Routing Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-path-header-query-string-routing-gateway-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
Application Gateway for Containers enables traffic routing based on URL path, qu
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Ssl Offloading Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-gateway-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Ssl Offloading Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-ssl-offloading-ingress-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
Application Gateway for Containers enables SSL [offloading](/azure/architecture/
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate TLS/SSL offloading.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
kind: Ingress
metadata: annotations: alb.networking.azure.io/alb-frontend: FRONTEND_NAME
- alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ alb.networking.azure.io/alb-id: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"example.com","http":{"paths":[{"backend":{"service":{"name":"echo","port":{"number":80}}},"path":"/","pathType":"Prefix"}]}}],"tls":[{"hosts":["example.com"],"secretName":"listener-tls-secret"}]}} creationTimestamp: "2023-07-22T18:02:13Z" generation: 2
application-gateway How To Traffic Splitting Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-traffic-splitting-gateway-api.md
Previously updated : 02/27/2024 Last updated : 11/5/2024
Application Gateway for Containers enables you to set weights and shift traffic
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Redirect Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-gateway-api.md
description: Learn how to redirect URLs in Gateway API for Application Gateway f
- Previously updated : 5/9/2024+ Last updated : 11/5/2024
The following figure illustrates an example of a request destined for _contoso.c
Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Redirect Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-redirect-ingress-api.md
description: Learn how to redirect URLs in Ingress API for Application Gateway f
- Previously updated : 9/16/2024+ Last updated : 11/5/2024
The following figure illustrates an example of a request destined for _contoso.c
Apply the following deployment.yaml file on your cluster to deploy a sample TLS certificate to demonstrate redirect capabilities.

```bash
- kubectl apply -f kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/https-scenario/ssl-termination/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/https-scenario/ssl-termination/deployment.yaml
```

This command creates the following on your cluster:
application-gateway How To Url Rewrite Gateway Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-gateway-api.md
description: Learn how to rewrite URLs in Gateway API for Application Gateway fo
- Previously updated : 09/16/2024+ Last updated : 11/5/2024
The following figure illustrates an example of a request destined for _contoso.c
Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate traffic splitting / weighted round robin support.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
Via the response we should see:
} ```
-Congratulations, you have installed ALB Controller, deployed a backend application and used filtering to rewrite the client requested URL, prior to traffic being set to the target on Application Gateway for Containers.
+Congratulations, you have installed ALB Controller, deployed a backend application, and used filtering to rewrite the client-requested URL before traffic is sent to the target on Application Gateway for Containers.
application-gateway How To Url Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-ingress-api.md
description: Learn how to rewrite URLs in Ingress API for Application Gateway fo
- Previously updated : 02/27/2024+ Last updated : 11/5/2024
URL rewrite enables you to translate an incoming request to a different URL when
The following figure illustrates a request destined for _contoso.com/shop_ being rewritten to _contoso.com/ecommerce_ when the request is initiated to the backend target by Application Gateway for Containers:
-[ ![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png) ](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox)
+[![A diagram showing the Application Gateway for Containers rewriting a URL to the backend.](./media/how-to-url-rewrite-gateway-api/url-rewrite.png)](./media/how-to-url-rewrite-gateway-api/url-rewrite.png#lightbox)
## Prerequisites
The following figure illustrates a request destined for _contoso.com/shop_ being
3. Deploy sample HTTP application:<br> Apply the following deployment.yaml file on your cluster to create a sample web application to demonstrate path, query, and header based routing.

```bash
- kubectl apply -f https://trafficcontrollerdocs.blob.core.windows.net/examples/traffic-split-scenario/deployment.yaml
+ kubectl apply -f https://learn.microsoft.com/azure/application-gateway/for-containers/examples/traffic-split-scenario/deployment.yaml
```

This command creates the following on your cluster:
kind: Ingress
metadata: annotations: alb.networking.azure.io/alb-frontend: FRONTEND_NAME
- alb.networking.azure.io/alb-id: /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
+ alb.networking.azure.io/alb-id: /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz
kubectl.kubernetes.io/last-applied-configuration: |
- {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
+ {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"alb.networking.azure.io/alb-frontend":"FRONTEND_NAME","alb.networking.azure.io/alb-id":"/subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourcegroups/yyyyyyyy/providers/Microsoft.ServiceNetworking/trafficControllers/zzzzzz"},"name"
:"ingress-01","namespace":"test-infra"},"spec":{"ingressClassName":"azure-alb-external","rules":[{"host":"contoso.com","http":{"paths":[{"backend":{"service":{"name":"backend-v2","port":{"number":8080}}},"path":"/","pathType":"Prefix"}]}}]}} creationTimestamp: "2023-07-22T18:02:13Z" generation: 2
application-gateway Quickstart Deploy Application Gateway For Containers Alb Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/quickstart-deploy-application-gateway-for-containers-alb-controller.md
Previously updated : 9/16/2024 Last updated : 11/7/2024
You need to complete the following tasks before deploying Application Gateway fo
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm install alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace $HELM_NAMESPACE \
- --version 1.2.3 \
+ --version 1.3.7 \
--set albController.namespace=$CONTROLLER_NAMESPACE \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
You need to complete the following tasks before deploying Application Gateway fo
az aks get-credentials --resource-group $RESOURCE_GROUP --name $AKS_NAME helm upgrade alb-controller oci://mcr.microsoft.com/application-lb/charts/alb-controller \ --namespace $HELM_NAMESPACE \
- --version 1.2.3 \
+ --version 1.3.7 \
--set albController.namespace=$CONTROLLER_NAMESPACE \ --set albController.podIdentity.clientID=$(az identity show -g $RESOURCE_GROUP -n azure-alb-identity --query clientId -o tsv) ```
application-gateway Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md
Previously updated : 10/15/2024 Last updated : 11/7/2024
Example output:
| NAME | READY | UP-TO-DATE | AVAILABLE | AGE | CONTAINERS | IMAGES | SELECTOR |
| - | - | - | - | - | - | - | - |
-| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**1.2.3** | app=alb-controller |
-| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**1.2.3** | app=alb-controller-bootstrap |
+| alb-controller | 2/2 | 2 | 2 | 18d | alb-controller | mcr.microsoft.com/application-lb/images/alb-controller:**1.3.7** | app=alb-controller |
+| alb-controller-bootstrap | 1/1 | 1 | 1 | 18d | alb-controller-bootstrap | mcr.microsoft.com/application-lb/images/alb-controller-bootstrap:**1.3.7** | app=alb-controller-bootstrap |
-In this example, the ALB controller version is **1.2.3**.
+In this example, the ALB controller version is **1.3.7**.
The ALB Controller version can be upgraded by running the `helm upgrade alb-controller` command. For more information, see [Install the ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#install-the-alb-controller).
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
Title: Add controls to a map | Microsoft Azure Maps
description: How to add zoom control, pitch control, rotate control and a style picker to a map in Microsoft Azure Maps. Previously updated : 05/15/2023 Last updated : 11/05/2024
map.controls.add(new atlas.control.CompassControl(), {
> [!VIDEO //codepen.io/azuremaps/embed/GBEoRb/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] ->
+## Add scale control
+
+A scale control adds a scale bar to the map. The following code sample creates an instance of the [ScaleControl] class and adds it to the bottom-left corner of the map.
+
+```javascript
+//Construct a scale control and add it to the map.
+map.controls.add(new atlas.control.ScaleControl(), {
+ position: 'bottom-left'
+});
+```
+
+## Add fullscreen control
+
+A fullscreen control adds a button to toggle the map or specified HTML element between fullscreen and normal view. The following code sample creates an instance of the [FullscreenControl] class and adds it to the top-right corner of the map.
+
+```javascript
+//Construct a fullscreen control and add it to the map.
+map.controls.add(new atlas.control.FullscreenControl(), {
+ position: 'top-right'
+});
+```
+ ## A Map with all controls

Multiple controls can be put into an array and added to the map all at once and positioned in the same area of the map to simplify development. The following code snippet adds the standard navigation controls to the map using this approach.

```javascript
-map.controls.add([
+map.controls.add(
+ [
new atlas.control.ZoomControl(),
- new atlas.control.CompassControl(),
new atlas.control.PitchControl(),
- new atlas.control.StyleControl()
-], {
- position: "top-right"
-});
+ new atlas.control.CompassControl(),
+ new atlas.control.StyleControl(),
+ new atlas.control.FullscreenControl(),
+ new atlas.control.ScaleControl(),
+ ],
+ {
+ position: 'top-right',
+ }
+);
```
-The following image shows a map with the zoom, compass, pitch, and style picker controls in the top-right corner of the map. Notice how they automatically stack. The order of the control objects in the script dictates the order in which they appear on the map. To change the order of the controls on the map, you can change their order in the array.
+The following image shows a map with the zoom, pitch, compass, style, fullscreen, and scale controls in the top-right corner of the map. Notice how they automatically stack. The order of the control objects in the script dictates the order in which they appear on the map. To change the order of the controls on the map, you can change their order in the array.
<!- <br/>
The style picker control is defined by the [StyleControl] class. For more inform
The [Navigation Control Options] sample is a tool to test out the various options for customizing the controls. For the source code for this sample, see [Navigation Control Options source code]. +
+The [Fullscreen Control Options] sample provides a tool to test out the options for customizing the fullscreen control. For the source code for this sample, see [Fullscreen Control Options source code].
+ <!- <br/>
If you want to create customized navigation controls, create a class that extend
Learn more about the classes and methods used in this article:
+> [!div class="nextstepaction"]
+> [ZoomControl]
+ > [!div class="nextstepaction"] > [CompassControl]
Learn more about the classes and methods used in this article:
> [StyleControl] > [!div class="nextstepaction"]
-> [ZoomControl]
+> [ScaleControl]
+
+> [!div class="nextstepaction"]
+> [FullscreenControl]
See the following articles for full code:
See the following articles for full code:
[PitchControl]: /javascript/api/azure-maps-control/atlas.control.pitchcontrol [CompassControl]: /javascript/api/azure-maps-control/atlas.control.compasscontrol [StyleControl]: /javascript/api/azure-maps-control/atlas.control.stylecontrol
+[ScaleControl]: /javascript/api/azure-maps-control/atlas.control.scalecontrol
+[FullscreenControl]: /javascript/api/azure-maps-control/atlas.control.fullscreencontrol
[Navigation Control Options]: https://samples.azuremaps.com/controls/map-navigation-control-options [Navigation Control Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Controls/Map%20Navigation%20Control%20Options/Map%20Navigation%20Control%20Options.html
+[Fullscreen Control Options]: https://samples.azuremaps.com/controls/fullscreen-control-options
+[Fullscreen Control Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Controls/Fullscreen%20control%20options/Fullscreen%20control%20options.html
[choose a map style]: choose-map-style.md [Add a pin]: map-add-pin.md [Add a popup]: map-add-popup.md
azure-netapp-files Understand Path Lengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-path-lengths.md
Instead of mapping the SMB share to the top level of the volume to navigate down
### Special character considerations
-Azure NetApp Files volumes use a language type of [C.UTF-8](/cpp/build/reference/utf-8-set-source-and-executable-character-sets-to-utf-8), which covers many countries and languages including German, Cyrillic, Hebrew, and most Chinese/Japanese/Korean (CJK). Most common text characters in Unicode are 3 bytes or less. Special characters--such as emojis, musical symbols, and mathematical symbols--are often larger than 3 bytes. Some use [UTF-16 surrogate pair logic](/windows/win32/intl/surrogates-and-supplementary-characters).
+Azure NetApp Files volumes use a language type of [C.UTF-8](/cpp/build/reference/utf-8-set-source-and-executable-character-sets-to-utf-8), which covers many countries/regions and languages including German, Cyrillic, Hebrew, and most Chinese/Japanese/Korean (CJK). Most common text characters in Unicode are 3 bytes or less. Special characters--such as emojis, musical symbols, and mathematical symbols--are often larger than 3 bytes. Some use [UTF-16 surrogate pair logic](/windows/win32/intl/surrogates-and-supplementary-characters).
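To check how many bytes a particular character uses in UTF-8, you can count the raw bytes from a shell with a UTF-8 locale; the characters below are only examples.

```bash
# printf writes the character's raw UTF-8 bytes; wc -c counts them.
printf '%s' 'a' | wc -c    # 1 byte  (basic Latin)
printf '%s' 'ä' | wc -c    # 2 bytes (Latin-1 supplement)
printf '%s' '愛' | wc -c   # 3 bytes (CJK)
printf '%s' '😀' | wc -c   # 4 bytes (emoji; uses UTF-16 surrogate pair logic in Windows APIs)
```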
If you use a character that Azure NetApp Files doesn't support, you might see a warning requesting a different file name.
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
AKS backup integrates with Backup center in Azure, providing a single view that can help you govern, monitor, operate, and analyze backups at scale. Your backups are also available in the Azure portal under **Settings** in the service menu for an AKS instance.
->[!Note]
->Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
## How does AKS backup work?

Use AKS backup to back up your AKS workloads and persistent volumes that are deployed in AKS clusters. The solution requires the [Backup extension](/azure/azure-arc/kubernetes/conceptual-extensions) to be installed inside the AKS cluster. The Backup vault communicates to the extension to complete operations that are related to backup and restore. Using the Backup extension is mandatory, and the extension must be installed inside the AKS cluster to enable backup and restore for the cluster. When you configure AKS backup, you add values for a storage account and a blob container where backups are stored.
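As a rough sketch of that configuration, the extension is typically installed with `az k8s-extension create`; the extension type and the configuration-setting keys shown here are assumptions based on the AKS backup setup flow, and every name and ID is a placeholder.

```azurecli-interactive
az k8s-extension create \
    --name azure-aks-backup \
    --extension-type microsoft.dataprotection.kubernetes \
    --scope cluster \
    --cluster-type managedClusters \
    --cluster-name myAksCluster \
    --resource-group myResourceGroup \
    --release-train stable \
    --configuration-settings \
        blobContainer=aksbackupcontainer \
        storageAccount=aksbackupstorage \
        storageAccountResourceGroup=myResourceGroup \
        storageAccountSubscriptionId="<subscription-id>"
```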
Azure Backup for AKS supports two storage tiers as backup datastores:
- **Operational Tier**: The Backup Extension installed in the AKS cluster first takes the backup by taking Volume snapshots via CSI Driver and stores cluster state in a blob container in your own tenant. This tier supports lower RPO with the minimum duration between two backups of four hours. Additionally, for Azure Disk-based volumes, Operational Tier supports quicker restores.
-- **Vault standard Tier (preview)**: To store backup data for longer duration at lower cost than snapshots, AKS backup supports Vault-standard datastore. As per the retention rules set in the backup policy, the first successful backup (of a day, week, month, or year) is moved to a blob container outside your tenant. This datastore not only allows longer retention, but also provides ransomware protection. You can also move backups stored in the vault to another region (Azure Paired Region) for recovery by enabling *Geo redundancy* and *Cross Region Restore* in the Backup vault.
+- **Vault standard Tier**: To store backup data for longer duration at lower cost than snapshots, AKS backup supports Vault-standard datastore. As per the retention rules set in the backup policy, the first successful backup (of a day, week, month, or year) is moved to a blob container outside your tenant. This datastore not only allows longer retention, but also provides ransomware protection. You can also move backups stored in the vault to another region (Azure Paired Region) for recovery by enabling *Geo redundancy* and *Cross Region Restore* in the Backup vault.
> [!Note] > You can store the backup data in a vault-standard datastore via Backup Policy by defining retention rules. Only one scheduled recovery point per day is moved to Vault Tier. However, you can move any number of on-demand backups to the Vault as per the rule selected.
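As an illustration only, the following Azure CLI sketch (assuming the `dataprotection` CLI extension and placeholder file names) adds a 30-day Vault-standard retention rule to the default AKS policy template:

```azurecli
# Sketch: generate the default AKS policy template, then add a 30-day Vault-standard retention rule.
az dataprotection backup-policy get-default-policy-template \
    --datasource-type AzureKubernetesService > akspolicy.json

az dataprotection backup-policy retention-rule create-lifecycle \
    --count 30 --retention-duration-type Days \
    --source-datastore OperationalStore --target-datastore VaultStore \
    --copy-option ImmediateCopyOption > retentionrule.json

# Write the updated policy to a new file to avoid truncating the input file.
az dataprotection backup-policy retention-rule set \
    --lifecycles ./retentionrule.json --name Daily \
    --policy ./akspolicy.json > ./akspolicy-vaulted.json
```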
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
This article describes the prerequisites for Azure Kubernetes Service (AKS) back
Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations. Based on the least privileged security model, a Backup vault must have *Trusted Access* enabled to communicate with the AKS cluster.
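For illustration, a minimal Azure CLI sketch (placeholder variable values) of creating the Trusted Access role binding between the Backup vault and the AKS cluster:

```azurecli
# Sketch: create a Trusted Access role binding so the Backup vault can reach the AKS cluster.
az aks trustedaccess rolebinding create \
    --cluster-name $akscluster --name backuprolebinding \
    --resource-group $aksclusterresourcegroup \
    --roles Microsoft.DataProtection/backupVaults/backup-operator \
    --source-resource-id /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/BackupVaults/$backupvault
```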
->[!Note]
->Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
- ## Backup Extension - The extension enables backup and restore capabilities for the containerized workloads and persistent volumes used by the workloads running in AKS clusters.
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete
- Operational Tier support for AKS backup is supported in all the following Azure public cloud regions: East US, North Europe, West Europe, South East Asia, West US 2, East US 2, West US, North Central US, Central US, France Central, Korea Central, Australia East, UK South, East Asia, West Central US, Japan East, South Central US, West US 3, Canada Central, Canada East, Australia South East, Central India, Norway East, Germany West Central, Switzerland North, Sweden Central, Japan West, UK West, Korea South, South Africa North, South India, France South, Brazil South, UAE North, China East 2, China East 3, China North 2, China North 3, USGov Virginia, USGov Arizona and USGov Texas. -- Vault Tier and Cross Region Restore support (preview) for AKS backup are available in the following regions: East US, West US, West US 3, North Europe, West Europe, North Central US, South Central US, West Central US, East US 2, Central US, UK South, UK West, East Asia, South-East Asia, Japan East South India, Central India, Canada Central and Norway East.
+- Vault Tier and Cross Region Restore support for AKS backup are available in the following regions: East US, West US, West US 3, North Europe, West Europe, North Central US, South Central US, West Central US, East US 2, Central US, UK South, UK West, East Asia, Southeast Asia, Japan East, South India, Central India, Canada Central and Norway East.
>[!Note]
- >Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
- >
>To access backups stored in Vault Tier in the Azure paired region, enable the Cross Region Restore capability for your Backup vault. See the [list of Azure paired regions](../reliability/cross-region-replication-azure.md#azure-paired-regions). ## Limitations
You can use [Azure Backup](./backup-overview.md) to help protect Azure Kubernete
- Configuration of a storage account with private endpoint is supported.
-### Additional limitations for Vaulted backup and Cross Region Restore (preview)
+### Additional limitations for Vaulted backup and Cross Region Restore
- Only Azure Disk-based Persistent Volumes of size <= 1 TB are eligible to be moved to the Vault Tier; larger volumes are skipped in the backup data.
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
Title: Back up Azure Kubernetes Service by using Azure Backup
description: Learn how to back up Azure Kubernetes Service (AKS) by using Azure Backup. -
- - ignite-2023
Previously updated : 01/03/2024 Last updated : 11/04/2024
This article describes how to configure and back up Azure Kubernetes Service (AK
You can use Azure Backup to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) by using the Backup extension, which must be installed in the cluster. The Backup vault communicates with the cluster via the Backup extension to perform backup and restore operations.
-> [!NOTE]
-> Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
+## Prerequisites
-## Before you begin
+Before you configure backup for an AKS cluster, ensure that the following prerequisites are met:
- Currently, AKS Backup supports only Azure Disk Storage-based persistent volumes enabled by CSI driver. The backups are stored in an operational datastore only (backup data is stored in your tenant and isn't moved to a vault). The Backup vault and AKS cluster must be in the same region. - AKS Backup uses a blob container and a resource group to store the backups. The blob container holds the AKS cluster resources. Persistent volume snapshots are stored in the resource group. The AKS cluster and the storage locations must be in the same region. Learn [how to create a blob container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container).
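If you prefer the command line, a minimal Azure CLI sketch (placeholder variable values) of creating the storage account and blob container might look like this:

```azurecli
# Sketch: create a storage account and a blob container to hold AKS cluster backups.
az storage account create --name $storageaccount --resource-group $storageaccountresourcegroup --location $region --sku Standard_LRS
az storage container create --name $blobcontainer --account-name $storageaccount --auth-mode login
```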
backup Azure Kubernetes Service Cluster Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore.md
This article describes how to restore backed-up Azure Kubernetes Service (AKS).
Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations.
-> [!NOTE]
-> Vaulted backup and Cross Region Restore for AKS using Azure Backup are currently in preview.
- ## Before you start - AKS backup allows you to restore to original AKS cluster (that was backed up) and to an alternate AKS cluster. AKS backup allows you to perform a full restore and item-level restore. You can utilize [restore configurations](#restore-configurations) to define parameters based on the cluster resources that are to be restored.
Azure Backup for AKS currently supports the following two options when doing a r
> [!NOTE] > AKS backup currently doesn't delete and recreate resources in the target cluster if they already exist. If you attempt to restore Persistent Volumes in the original location, delete the existing Persistent Volumes, and then do the restore operation.
-## Restore in secondary region (preview)
+## Restore in secondary region
-To restore the AKS cluster in the secondary region, [configure Geo redundancy and Cross Region Restore in the Backup vault](azure-kubernetes-service-cluster-backup.md#create-a-backup-vault), and then [trigger restore](tutorial-restore-aks-backups-across-regions.md#restore-in-secondary-region-preview).
+To restore the AKS cluster in the secondary region, [configure Geo redundancy and Cross Region Restore in the Backup vault](azure-kubernetes-service-cluster-backup.md#create-a-backup-vault), and then [trigger restore](tutorial-restore-aks-backups-across-regions.md#restore-in-secondary-region).
## Next steps
backup Quick Backup Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-aks.md
Title: "Quickstart: Configure an Azure Kubernetes Services cluster backup" description: Learn how to configure backup for an Azure Kubernetes Service (AKS) cluster, and then use Azure Backup to back up specific items in the cluster. Previously updated : 10/01/2024 Last updated : 11/04/2024 -
- - ignite-2023
# Quickstart: Configure backup for an AKS cluster
-In this quickstart, you configure backup for an Azure Kubernetes Service (AKS) cluster, and then use the Azure Backup configuration to back up specific items in the cluster.
+In this quickstart, you configure vaulted backup for an Azure Kubernetes Service (AKS) cluster, and then use the Azure Backup configuration to back up specific items in the cluster.
You can use Azure Backup to back up AKS clusters by installing the Backup extension. The extension must be installed in the cluster. An AKS cluster backup includes cluster resources and persistent volumes that are attached to the cluster.
The Backup vault communicates with the cluster via the Backup extension to compl
## Prerequisites
+Before you configure vaulted backup for an AKS cluster, ensure that the following prerequisites are met:
+ - Identify or [create a Backup vault](create-manage-backup-vault.md) in the same region where you want to back up an AKS cluster. - [Install the Backup extension](quick-install-backup-extension.md) in the AKS cluster that you want to back up.
-## Configure backup for an AKS cluster
+## Configure vaulted backup for an AKS cluster
-1. In the Azure portal, go to the AKS cluster that you want to back up.
+1. In the [Azure portal](https://portal.azure.com), go to the AKS cluster that you want to back up.
1. In the resource menu, select **Backup**, and then select **Configure Backup**.
The Backup vault communicates with the cluster via the Backup extension to compl
:::image type="content" source="./media/quick-backup-aks/backup-vault-review.png" alt-text="Screenshot that shows the review page for Configure Backup." lightbox="./media/quick-backup-aks/backup-vault-review.png":::
- > [!NOTE]
- > Before you enable Trusted Access, enable the `TrustedAccessPreview` feature flag for the `Microsoft.ContainerServices` resource provider on the subscription.
-
-1. Select a backup policy, which defines the schedule for backups and their retention period. Then select **Next**.
+1. Select a backup policy, which defines the schedule for backups and their retention period in both the Operational and Vault-standard datastores. Then select **Next**.
:::image type="content" source="./media/azure-kubernetes-service-cluster-backup/select-backup-policy.png" alt-text="Screenshot that shows the Backup policy tab." lightbox="./media/azure-kubernetes-service-cluster-backup/select-backup-policy.png":::
The Backup vault communicates with the cluster via the Backup extension to compl
:::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validate-snapshot-resource-group-selection.png" alt-text="Screenshot that shows the Snapshot resource group dropdown." lightbox="./media/azure-kubernetes-service-cluster-backup/validate-snapshot-resource-group-selection.png":::
-1. When validation is finished, if required roles aren't assigned to the vault in the snapshot resource group, an error appears.
-
- :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error." lightbox="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png":::
+When validation is finished, if required roles aren't assigned to the vault in the snapshot resource group, an error appears.
+ :::image type="content" source="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png" alt-text="Screenshot that shows a validation error." lightbox="./media/azure-kubernetes-service-cluster-backup/validation-error-permissions-not-assigned.png":::
1. To resolve the error, under **Datasource name**, select the datasource, and then select **Assign missing roles**.
backup Quick Kubernetes Backup Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-kubernetes-backup-arm.md
+
+ Title: Quickstart - Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Backup via Azure Resource Manager
+description: Learn how to quickly configure backup for a Kubernetes cluster using Azure Resource Manager.
++ Last updated : 05/31/2024++++++
+# Quickstart: Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Resource Manager
+
+This quickstart describes how to configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Resource Manager.
+
+Azure Backup for AKS is a cloud-native, enterprise-ready, application-centric backup service that lets you quickly configure backup for AKS clusters. [Azure Backup](backup-azure-mysql-flexible-server-about.md) allows you to back up your AKS clusters using multiple options, such as the Azure portal, PowerShell, CLI, Azure Resource Manager, Bicep, and so on. This quickstart describes how to back up an AKS cluster with an Azure Resource Manager template and Azure PowerShell. For more information on developing ARM templates, see the [Azure Resource Manager documentation](../azure-resource-manager/index.yml).
+
+An Azure Resource Manager (ARM) template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. You describe your intended deployment without writing the sequence of programming commands to create the deployment.
+
+## Review the template
+
+This template enables you to configure backup for an AKS cluster. In this template, we create a backup vault with a backup policy for the AKS cluster with a *four-hour* schedule and a *seven-day* retention duration.
+
+```JSON
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "resourceGroupName": { "type": "string" },
+ "resourceGroupLocation": { "type": "string" },
+ "backupResourceGroupName": { "type": "string" },
+ "backupResourceGroupLocation": { "type": "string" },
+ "aksClusterName": { "type": "string" },
+ "dnsPrefix": { "type": "string" },
+ "nodeCount": { "type": "int" },
+ "backupVaultName": { "type": "string" },
+ "datastoreType": { "type": "string" },
+ "redundancy": { "type": "string" },
+ "backupPolicyName": { "type": "string" },
+ "backupExtensionName": { "type": "string" },
+ "backupExtensionType": { "type": "string" },
+ "storageAccountName": { "type": "string" }
+ },
+ "variables": {
+ "backupContainerName": "tfbackup"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/resourceGroups",
+ "apiVersion": "2021-04-01",
+ "location": "[parameters('resourceGroupLocation')]",
+ "name": "[parameters('resourceGroupName')]"
+ },
+ {
+ "type": "Microsoft.Resources/resourceGroups",
+ "apiVersion": "2021-04-01",
+ "location": "[parameters('backupResourceGroupLocation')]",
+ "name": "[parameters('backupResourceGroupName')]"
+ },
+ {
+ "type": "Microsoft.ContainerService/managedClusters",
+ "apiVersion": "2023-05-01",
+ "location": "[parameters('resourceGroupLocation')]",
+ "name": "[parameters('aksClusterName')]",
+ "properties": {
+ "dnsPrefix": "[parameters('dnsPrefix')]",
+ "agentPoolProfiles": [
+ {
+ "name": "agentpool",
+ "count": "[parameters('nodeCount')]",
+ "vmSize": "Standard_D2_v2",
+ "type": "VirtualMachineScaleSets",
+ "mode": "System"
+ }
+ ],
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "networkProfile": {
+ "networkPlugin": "kubenet",
+ "loadBalancerSku": "standard"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Resources/resourceGroups', parameters('resourceGroupName'))]",
+ "[resourceId('Microsoft.Resources/resourceGroups', parameters('backupResourceGroupName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.DataProtection/backupVaults",
+ "apiVersion": "2023-01-01",
+ "location": "[parameters('resourceGroupLocation')]",
+ "name": "[parameters('backupVaultName')]",
+ "identity": {
+ "type": "SystemAssigned"
+ },
+ "properties": {
+ "dataStoreType": "[parameters('datastoreType')]",
+ "redundancy": "[parameters('redundancy')]"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.ContainerService/managedClusters', parameters('aksClusterName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.DataProtection/backupVaults/backupPolicies",
+ "apiVersion": "2023-01-01",
+ "name": "[concat(parameters('backupVaultName'), '/', parameters('backupPolicyName'))]",
+ "properties": {
+ "repeatingTimeIntervals": ["R/2024-04-14T06:33:16+00:00/PT4H"],
+ "defaultRetentionRule": {
+ "lifeCycle": {
+ "duration": "P7D",
+ "dataStoreType": "OperationalStore"
+ }
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.DataProtection/backupVaults', parameters('backupVaultName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2022-05-01",
+ "location": "[parameters('backupResourceGroupLocation')]",
+ "name": "[parameters('storageAccountName')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2",
+ "dependsOn": [
+ "[resourceId('Microsoft.ContainerService/managedClusters', parameters('aksClusterName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.Storage/storageAccounts/blobServices/containers",
+ "apiVersion": "2021-04-01",
+ "name": "[concat(parameters('storageAccountName'), '/default/', variables('backupContainerName'))]",
+ "properties": {
+ "publicAccess": "None"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "apiVersion": "2023-05-01",
+ "name": "[concat(parameters('aksClusterName'), '/', parameters('backupExtensionName'))]",
+ "properties": {
+ "extensionType": "[parameters('backupExtensionType')]",
+ "configurationSettings": {
+ "configuration.backupStorageLocation.bucket": "[variables('backupContainerName')]",
+ "configuration.backupStorageLocation.config.storageAccount": "[parameters('storageAccountName')]",
+ "configuration.backupStorageLocation.config.resourceGroup": "[parameters('backupResourceGroupName')]",
+ "configuration.backupStorageLocation.config.subscriptionId": "[subscription().subscriptionId]",
+ "credentials.tenantId": "[subscription().tenantId]"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts/blobServices/containers', parameters('storageAccountName'), 'default', variables('backupContainerName'))]"
+ ]
+ }
+ ],
+ "outputs": {
+ "aksClusterId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.ContainerService/managedClusters', parameters('aksClusterName'))]"
+ },
+ "backupVaultId": {
+ "type": "string",
+ "value": "[resourceId('Microsoft.DataProtection/backupVaults', parameters('backupVaultName'))]"
+ }
+ }
+}
+```
+
+## Deploy the template
+
+To deploy the template, store the template in a GitHub repository and then paste the following PowerShell script into the shell window.
+
+```azurepowershell-interactive
+$projectName = Read-Host -Prompt "Enter a project name (limited to eight characters) that is used to generate Azure resource names"
+$location = Read-Host -Prompt "Enter the location (for example, centralus)"
+
+$resourceGroupName = "${projectName}rg"
+$templateUri = "https://templateuri"
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri
+```
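If you prefer Azure CLI, a minimal sketch of an equivalent deployment follows; the resource group name and template URI are placeholders.

```azurecli
# Sketch: deploy the same ARM template with Azure CLI (placeholder values).
az group create --name exampleaksbackuprg --location centralus
az deployment group create --resource-group exampleaksbackuprg --template-uri "https://templateuri"
```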
+
+## Next steps
+
+- [Restore Azure Kubernetes Service cluster using PowerShell](azure-kubernetes-service-cluster-restore-using-powershell.md)
+- [Manage Azure Kubernetes Service cluster backups](azure-kubernetes-service-cluster-manage-backups.md)
+- [About Azure Kubernetes Service cluster backup](azure-kubernetes-service-cluster-backup-concept.md)
backup Quick Kubernetes Backup Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-kubernetes-backup-bicep.md
+
+ Title: Quickstart - Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Backup via Azure Bicep
+description: Learn how to quickly configure backup for a Kubernetes cluster using Azure Bicep.
++ Last updated : 05/31/2024++++++
+# Quickstart: Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Bicep
+
+This quickstart describes how to configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Bicep.
+
+Azure Backup for AKS is a cloud-native, enterprise-ready, application-centric backup service that lets you quickly configure backup for AKS clusters. [Azure Backup](backup-azure-mysql-flexible-server-about.md) allows you to back up your AKS clusters using multiple options, such as the Azure portal, PowerShell, CLI, Azure Resource Manager, Bicep, and so on. This quickstart describes how to back up an AKS cluster with a Bicep template and Azure PowerShell. For more information on developing Bicep templates, see the [Bicep documentation](../azure-resource-manager/bicep/deploy-cli.md).
+
+Bicep is a language for declaratively deploying Azure resources. You can use Bicep instead of JSON to develop your Azure Resource Manager templates (ARM templates). Bicep syntax reduces the complexity and improves the development experience. Bicep is a transparent abstraction over ARM template JSON that provides all JSON template capabilities. During deployment, the Bicep CLI converts a Bicep file into an ARM template JSON. A Bicep file states the Azure resources and resource properties, without writing a sequence of programming commands to create resources.
+
+Resource types, API versions, and properties that are valid in an ARM template are also valid in a Bicep file.
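For example, you can compile a Bicep file to ARM template JSON yourself; this sketch assumes a local file named `main.bicep`:

```azurecli
# Sketch: compile a Bicep file to ARM template JSON (writes main.json next to the source file).
az bicep build --file main.bicep
```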
+
+## Prerequisites
+
+To set up your environment for Bicep development, see [Install Bicep tools](../azure-resource-manager/bicep/install.md).
+
+>[!Note]
+>Install the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az) and the Bicep CLI as detailed in that article.
+
+## Review the template
+
+This template enables you to configure backup for an AKS cluster. In this template, we create a backup vault with a backup policy for the AKS cluster with a *four-hour* schedule and a *seven-day* retention duration.
+
+```bicep
+@description('Location for the resource group')
+param resourceGroupLocation string
+@description('Name of the resource group for AKS and Backup Vault')
+param resourceGroupName string
+@description('Name of the resource group for storage account and snapshots')
+param backupResourceGroupName string
+@description('Location for the backup resource group')
+param backupResourceGroupLocation string
+@description('AKS Cluster name')
+param aksClusterName string
+@description('DNS prefix for AKS')
+param dnsPrefix string
+@description('Node count for the AKS Cluster')
+param nodeCount int
+@description('Name of the Backup Vault')
+param backupVaultName string
+@description('Datastore type for the Backup Vault')
+param datastoreType string
+@description('Redundancy type for the Backup Vault')
+param redundancy string
+@description('Backup policy name')
+param backupPolicyName string
+@description('Name of the Backup Extension')
+param backupExtensionName string
+@description('Type of Backup Extension')
+param backupExtensionType string
+@description('Name of the Storage Account')
+param storageAccountName string
+
+var backupContainerName = 'tfbackup'
+
+resource rg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
+ location: resourceGroupLocation
+ name: resourceGroupName
+}
+
+resource backupRg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
+ location: backupResourceGroupLocation
+ name: backupResourceGroupName
+}
+
+resource aksCluster 'Microsoft.ContainerService/managedClusters@2023-05-01' = {
+ location: resourceGroupLocation
+ name: aksClusterName
+ properties: {
+ dnsPrefix: dnsPrefix
+ agentPoolProfiles: [
+ {
+ name: 'agentpool'
+ count: nodeCount
+ vmSize: 'Standard_D2_v2'
+ type: 'VirtualMachineScaleSets'
+ mode: 'System'
+ }
+ ]
+ identity: {
+ type: 'SystemAssigned'
+ }
+ networkProfile: {
+ networkPlugin: 'kubenet'
+ loadBalancerSku: 'standard'
+ }
+ }
+ dependsOn: [
+ rg
+ backupRg
+ ]
+}
+
+resource backupVault 'Microsoft.DataProtection/backupVaults@2023-01-01' = {
+ location: resourceGroupLocation
+ name: backupVaultName
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ dataStoreType: datastoreType
+ redundancy: redundancy
+ }
+ dependsOn: [
+ aksCluster
+ ]
+}
+
+resource backupPolicy 'Microsoft.DataProtection/backupVaults/backupPolicies@2023-01-01' = {
+ name: '${backupVaultName}/${backupPolicyName}'
+ properties: {
+ backupRepeatingTimeIntervals: ['R/2024-04-14T06:33:16+00:00/PT4H']
+ defaultRetentionRule: {
+ lifeCycle: {
+ duration: 'P7D'
+ dataStoreType: 'OperationalStore'
+ }
+ }
+ }
+ dependsOn: [
+ backupVault
+ ]
+}
+
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' = {
+ location: backupResourceGroupLocation
+ name: storageAccountName
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ dependsOn: [
+ aksCluster
+ ]
+}
+
+resource backupContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-04-01' = {
+ name: '${storageAccount.name}/default/${backupContainerName}'
+ properties: {
+ publicAccess: 'None'
+ }
+ dependsOn: [
+ storageAccount
+ ]
+}
+
+resource backupExtension 'Microsoft.KubernetesConfiguration/extensions@2023-05-01' = {
+ name: '${aksClusterName}/${backupExtensionName}'
+ properties: {
+ extensionType: backupExtensionType
+ configurationSettings: {
+ 'configuration.backupStorageLocation.bucket': backupContainerName
+ 'configuration.backupStorageLocation.config.storageAccount': storageAccountName
+ 'configuration.backupStorageLocation.config.resourceGroup': backupResourceGroupName
+ 'configuration.backupStorageLocation.config.subscriptionId': subscription().subscriptionId
+ 'credentials.tenantId': subscription().tenantId
+ }
+ }
+ dependsOn: [
+ backupContainer
+ ]
+}
+
+output aksClusterId string = aksCluster.id
+output backupVaultId string = backupVault.id
+
+```
++
+## Deploy the template
+
+To deploy this template, store it in GitHub or your preferred location and then paste the following PowerShell script in the shell window. To paste the code, right-click the shell window and then select **Paste**.
++
+```azurepowershell
+$projectName = Read-Host -Prompt "Enter a project name (limited to eight characters) that is used to generate Azure resource names"
+$location = Read-Host -Prompt "Enter the location (for example, centralus)"
+
+$resourceGroupName = "${projectName}rg"
+$templateUri = "templateURI"
+
+New-AzResourceGroup -Name $resourceGroupName -Location $location
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri
+
+```
+
+## Next steps
+
+- [Restore Azure Kubernetes Service cluster using PowerShell](azure-kubernetes-service-cluster-restore-using-powershell.md)
+- [Manage Azure Kubernetes Service cluster backups](azure-kubernetes-service-cluster-manage-backups.md)
+- [About Azure Kubernetes Service cluster backup](azure-kubernetes-service-cluster-backup-concept.md)
backup Quick Kubernetes Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-kubernetes-backup-cli.md
+
+ Title: Quickstart - Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Backup via Azure CLI
+description: Learn how to quickly configure backup for a Kubernetes cluster using Azure CLI.
++ Last updated : 05/31/2024+++++++
+# Quickstart: Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure CLI
+
+This quickstart describes how to configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure CLI.
+
+Azure Backup for AKS is a cloud-native, enterprise-ready, application-centric backup service that lets you quickly configure backup for AKS clusters.
+
+## Before you start
+
+Before you configure vaulted backup for an AKS cluster, ensure that the following prerequisites are met:
+
+- Complete [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating a backup operation for AKS backup.
+
+## Create a Backup vault
+
+To create the Backup vault, run the following command:
+
+```azurecli
+az dataprotection backup-vault create --resource-group $backupvaultresourcegroup --vault-name $backupvault --location $region --type SystemAssigned --storage-settings datastore-type="VaultStore" type="GloballyRedundant"
+```
+
+The newly created vault has its storage settings set to Globally Redundant, so backups stored in the Vault tier are available in the Azure paired region. Once the vault creation is complete, create a backup policy to protect AKS clusters.
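Optionally, you can confirm the vault's storage settings; this is a sketch using the same placeholder variables:

```azurecli
# Sketch: confirm the vault's redundancy and datastore settings.
az dataprotection backup-vault show --resource-group $backupvaultresourcegroup --vault-name $backupvault --query "properties.storageSettings"
```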
+
+## Create a backup policy
+
+Retrieve the policy template using the command `az dataprotection backup-policy get-default-policy-template`.
+
+```azurecli
+az dataprotection backup-policy get-default-policy-template --datasource-type AzureKubernetesService > akspolicy.json
+```
+
+Update the default backup policy template by adding a retention rule that retains the **first successful backup of each day** in the **Vault tier** for 30 days. Write the updated policy to a new file so that the input file isn't truncated by the output redirection:
+
+```azurecli
+
+az dataprotection backup-policy retention-rule create-lifecycle --count 30 --retention-duration-type Days --copy-option ImmediateCopyOption --target-datastore VaultStore --source-datastore OperationalStore > ./retentionrule.json
+
+az dataprotection backup-policy retention-rule set --lifecycles ./retentionrule.json --name Daily --policy ./akspolicy.json > ./akspolicy-vaulted.json
+
+```
+
+Once the policy JSON has all the required values, create a new policy from the updated policy file.
+
+```azurecli
+az dataprotection backup-policy create -g $backupvaultresourcegroup --vault-name $backupvault -n $backuppolicy --policy ./akspolicy-vaulted.json
+```
+
+## Prepare AKS cluster for backup
+
+Once the vault and policy creation are complete, you need to perform the following prerequisites to get the AKS cluster ready for backup:
+
+1. **Create a storage account and blob container**.
+
+ Backup for AKS stores Kubernetes resources in a blob container as backups. To get the AKS cluster ready for backup, you need to install an extension in the cluster. This extension requires the storage account and blob container as inputs.
+
+ To create a new storage account, run the following command:
+
+ ```azurecli
+ az storage account create --name $storageaccount --resource-group $storageaccountresourcegroup --location $region --sku Standard_LRS
+ ```
+
+ Once the storage account creation is complete, create a blob container inside by running the following command:
+
+ ```azurecli
+ az storage container create --name $blobcontainer --account-name $storageaccount --auth-mode login
+ ```
+
+2. **Install Backup Extension**.
+
+    The Backup extension must be installed in the AKS cluster to perform any backup and restore operations. The Backup extension creates a namespace `dataprotection-microsoft` in the cluster and uses it to deploy its resources. The extension requires the storage account and blob container as inputs for installation.
+
+ ```azurecli
+ az k8s-extension create --name azure-aks-backup --extension-type microsoft.dataprotection.kubernetes --scope cluster --cluster-type managedClusters --cluster-name $akscluster --resource-group $aksclusterresourcegroup --release-train stable --configuration-settings blobContainer=$blobcontainer storageAccount=$storageaccount storageAccountResourceGroup=$storageaccountresourcegroup storageAccountSubscriptionId=$subscriptionId
+ ```
+
+ As part of extension installation, a user identity is created in the AKS cluster's Node Pool Resource Group. For the extension to access the storage account, you need to provide this identity the **Storage Blob Data Contributor** role. To assign the required role, run the following command:
+
+ ```azurecli
+ az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name $akscluster --resource-group $aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Blob Data Contributor' --scope /subscriptions/$subscriptionId/resourceGroups/$storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/$storageaccount
+ ```
+
+3. **Enable Trusted Access**
+
+ For the Backup vault to connect with the AKS cluster, you must enable *Trusted Access* as it allows the Backup vault to have a direct line of sight to the AKS cluster.
++
+ To enable Trusted Access, run the following command:
+
+ ```azurecli
+ az aks trustedaccess rolebinding create --cluster-name $akscluster --name backuprolebinding --resource-group $aksclusterresourcegroup --roles Microsoft.DataProtection/backupVaults/backup-operator --source-resource-id /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/BackupVaults/$backupvault
+ ```
+
+## Configure vaulted backups for AKS cluster
+
+With the created Backup vault and backup policy, and the AKS cluster in *ready-to-be-backed-up* state, you can now start to back up your AKS cluster.
+
+### Prepare the request
+
+The configuration of backup is performed in two steps:
+
+1. Prepare backup configuration to define which cluster resources are to be backed up using the `az dataprotection backup-instance initialize-backupconfig` command. The command generates a JSON, which you can update to define backup configuration for your AKS cluster as required.
+
+ ```azurecli
+ az dataprotection backup-instance initialize-backupconfig --datasource-type AzureKubernetesService > aksbackupconfig.json
+ ```
+
+
+2. Prepare the backup instance request with the relevant vault, policy, AKS cluster, backup configuration, and snapshot resource group by using the `az dataprotection backup-instance initialize` command.
+
+ ```azurecli
+ az dataprotection backup-instance initialize --datasource-id /subscriptions/$subscriptionId/resourceGroups/$aksclusterresourcegroup/providers/Microsoft.ContainerService/managedClusters/$akscluster --datasource-location $region --datasource-type AzureKubernetesService --policy-id /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault/backupPolicies/$backuppolicy --backup-configuration ./aksbackupconfig.json --friendly-name ecommercebackup --snapshot-resource-group-name $snapshotresourcegroup > backupinstance.json
+ ```
+
+Now, use the JSON output of this command to configure backup for the AKS cluster.
+
+### Assign required permissions and validate
+
+With the request prepared, first you need to validate if the required roles are assigned to the resources involved by running the following command:
+
+```azurecli
+az dataprotection backup-instance validate-for-backup --backup-instance ./backupinstance.json --ids /subscriptions/$subscriptionId/resourceGroups/$backupvaultresourcegroup/providers/Microsoft.DataProtection/backupVaults/$backupvault
+```
+
+If the validation fails and there are certain permissions missing, then you can assign them by running the following command:
+
+```azurecli
+az dataprotection backup-instance update-msi-permissions --datasource-type AzureKubernetesService --operation Backup --permissions-scope ResourceGroup --vault-name $backupvault --resource-group $backupvaultresourcegroup --backup-instance backupinstance.json
+
+```
+
+Once the permissions are assigned, revalidate using the earlier *validate for backup* command and then proceed to configure backup:
+
+```azurecli
+az dataprotection backup-instance create --backup-instance backupinstance.json --resource-group $backupvaultresourcegroup --vault-name $backupvault
+```
+
+## Next steps
+
+- [Restore Azure Kubernetes Service cluster using Azure CLI](azure-kubernetes-service-cluster-restore-using-cli.md)
+- [Manage Azure Kubernetes Service cluster backups](azure-kubernetes-service-cluster-manage-backups.md)
+- [About Azure Kubernetes Service cluster backup](azure-kubernetes-service-cluster-backup-concept.md)
backup Quick Kubernetes Backup Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-kubernetes-backup-powershell.md
+
+ Title: Quickstart - Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Backup via PowerShell
+description: Learn how to quickly configure backup for a Kubernetes cluster using PowerShell.
++ Last updated : 05/31/2024+++++++
+# Quickstart: Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using PowerShell
+
+This quickstart describes how to configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using PowerShell.
+
+Azure Backup for AKS is a cloud-native, enterprise-ready, application-centric backup service that lets you quickly configure backup for AKS clusters.
+
+## Before you start
+
+Before you configure vaulted backup for an AKS cluster, ensure that the following prerequisites are met:
+
+- Complete [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating a backup or restore operation for AKS backup.
+
+## Create a Backup vault
+
+To create the Backup vault, run the following command:
+
+```azurepowershell
+$storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type GloballyRedundant -DataStoreType VaultStore
+
+New-AzDataProtectionBackupVault -ResourceGroupName testBkpVaultRG -VaultName TestBkpVault -Location westus -StorageSetting $storageSetting
+
+$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName TestBkpVault
+```
+
+The newly created vault has its storage settings set to Globally Redundant, so backups stored in the Vault tier are available in the Azure paired region. Once the vault creation is complete, create a backup policy to protect AKS clusters.
+
+## Create a backup policy
+
+Retrieve the policy template using the command `Get-AzDataProtectionPolicyTemplate`.
+
+```azurepowershell
+$policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureKubernetesService
+```
+
+The policy template consists of trigger criteria (which decide the factors that trigger the backup job) and a lifecycle (which decides when to delete, copy, or move the backups). In AKS backup, the default trigger is a schedule of *every 4 hours (PT4H)*, and the default retention of each backup is *seven days*. For vaulted backups, add a retention rule for the Vault datastore.
+
+```azurepowershell
+New-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name aksBkpPolicy -Policy $policyDefn
+
+$aksBkpPol = Get-AzDataProtectionBackupPolicy -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -Name "aksBkpPolicy"
+```
+
+
+## Prepare AKS cluster for backup
+
+Once the vault and policy creation are complete, you need to perform the following prerequisites to get the AKS cluster ready for backup:
+
+1. **Create a storage account and blob container**.
+
+ Backup for AKS stores Kubernetes resources in a blob container as backups. To get the AKS cluster ready for backup, you need to install an extension in the cluster. This extension requires the storage account and blob container as inputs.
+
+ To create a new storage account and a blob container, see [these steps](../storage/blobs/blob-containers-powershell.md#create-a-container).
+
+2. **Install Backup Extension**.
+
+    The Backup extension must be installed in the AKS cluster to perform any backup and restore operations. The Backup extension creates a namespace `dataprotection-microsoft` in the cluster and uses it to deploy its resources. The extension requires the storage account and blob container as inputs for installation. Learn about the [extension installation commands](./azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension).
+
+    As part of extension installation, a user identity is created in the AKS cluster's Node Pool Resource Group. For the extension to access the storage account, you need to grant this identity the **Storage Account Contributor** role. To assign the required role, [run these commands](azure-kubernetes-service-cluster-manage-backups.md#grant-permission-on-storage-account).
+
+3. **Enable Trusted Access**
+
+ For the Backup vault to connect with the AKS cluster, you must enable Trusted Access as it allows the Backup vault to have a direct line of sight to the AKS cluster. Learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#trusted-access-related-operations).
+
+> [!NOTE]
+> For Backup Extension installation and Trusted Access enablement, the commands are available in Azure CLI only.
+
+## Configure backups
+
+With the created Backup vault and backup policy, and the AKS cluster in *ready-to-be-backed-up* state, you can now start to back up your AKS cluster.
+
+### Key entities
+
+- **AKS cluster to be protected**
+
+ Fetch the Azure Resource Manager ID of the AKS cluster to be protected. This serves as the identifier of the cluster. In this example, let's use an AKS cluster named *PSTestAKSCluster*, under a resource group *aksrg*, in a different subscription:
+
+ ```azurepowershell
+    $sourceClusterId = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/aksrg/providers/Microsoft.ContainerService/managedClusters/PSTestAKSCluster"
+ ```
+
+- **Snapshot resource group**
+
+    The persistent volume snapshots are stored in a resource group in your subscription. We recommend that you create a dedicated resource group to serve as the snapshot datastore used by the Azure Backup service.
+
+ ```azurepowershell
+ $snapshotrg = "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx/resourcegroups/snapshotrg"
+ ```
+
+### Prepare the request
+
+The configuration of backup is performed in two steps:
+
+1. Prepare the backup configuration to define which cluster resources are to be backed up by using the `New-AzDataProtectionBackupConfigurationClientObject` cmdlet. In this example, we snapshot persistent volumes, include cluster-scoped resources, and back up only the resources labeled `env=prod`.
+
+ ```azurepowershell
+ $backupConfig = New-AzDataProtectionBackupConfigurationClientObject -SnapshotVolume $true -IncludeClusterScopeResource $true -DatasourceType AzureKubernetesService -LabelSelector "env=prod"
+ ```
+
+2. Prepare the backup instance request with the relevant vault, policy, AKS cluster, backup configuration, and snapshot resource group by using the `Initialize-AzDataProtectionBackupInstance` cmdlet.
+
+ ```azurepowershell
+    $backupInstance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureKubernetesService -DatasourceLocation $dataSourceLocation -PolicyId $aksBkpPol.Id -DatasourceId $sourceClusterId -SnapshotResourceGroupId $snapshotrg -FriendlyName $friendlyName -BackupConfiguration $backupConfig
+ ```
+
+### Assign required permissions and validate
+
+With the request prepared, first you need to assign the required roles on the resources involved by running the following command:
+
+```azurepowershell
+Set-AzDataProtectionMSIPermission -BackupInstance $backupInstance -VaultResourceGroup "testBkpVaultRG" -VaultName $TestBkpVault.Name -PermissionsScope "ResourceGroup"
+```
++
+Once permissions are assigned, run the following cmdlet to test the readiness of the instance created.
+
+```azurepowershell
+Test-AzDataProtectionBackupInstanceReadiness -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstance $backupInstance.Property
+```
+
+When the validation is successful, you can submit the request to protect the AKS cluster using the `New-AzDataProtectionBackupInstance` cmdlet.
+
+```azurepowershell
+New-AzDataProtectionBackupInstance -ResourceGroupName "testBkpVaultRG" -VaultName $TestBkpVault.Name -BackupInstance $backupInstance
+```
+
+## Next steps
+
+- [Restore Azure Kubernetes Service cluster using PowerShell](azure-kubernetes-service-cluster-restore-using-powershell.md)
+- [Manage Azure Kubernetes Service cluster backups](azure-kubernetes-service-cluster-manage-backups.md)
+- [About Azure Kubernetes Service cluster backup](azure-kubernetes-service-cluster-backup-concept.md)
backup Quick Kubernetes Backup Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-kubernetes-backup-terraform.md
Title: Quickstart - Configure backup for an Azure Kubernetes Service (AKS) cluster using Azure Backup via Terraform
+ Title: Quickstart - Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Azure Backup via Terraform
description: Learn how to quickly configure backup for a Kubernetes cluster using Terraform. Previously updated : 05/31/2024 Last updated : 11/04/2024
content_well_notification:
#Customer intent: As a developer or backup operator, I want to quickly configure backup for an Azure Kubernetes Cluster using Azure Backup for AKS.
-# Quickstart: Configure backup for an Azure Kubernetes Service (AKS) cluster using Terraform
+# Quickstart: Configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Terraform
-This quickstart describes how to configure backup for an Azure Kubernetes Service (AKS) cluster using Terraform.
+This quickstart describes how to configure vaulted backup for an Azure Kubernetes Service (AKS) cluster using Terraform.
Azure Backup for AKS is a cloud-native, enterprise-ready, application-centric backup service that lets you quickly configure backup for AKS clusters.
backup Tutorial Restore Aks Backups Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-aks-backups-across-regions.md
Title: Tutorial - Enable Vault Tier protection for Azure Kubernetes Cluster (AKS) clusters and restore backups in secondary region using Azure Backup description: Learn how to enable Vault Tier protection for AKS clusters and restore backups in secondary region using Azure Backup. Previously updated : 12/25/2023 Last updated : 11/04/2023 -
- - ignite-2023
-# Tutorial: Enable Vault Tier backups for AKS and restore across regions by using Azure Backup (preview)
+# Tutorial: Enable Vault Tier backups for AKS and restore across regions by using Azure Backup
This tutorial describes how to create backups for an AKS cluster that are stored in the secondary region (Azure paired region), and then perform a Cross Region Restore to recover the AKS cluster during a regional disaster.
-Azure Backup allows you to store AKS cluster backups in both **Operational Tier as snapshot** and **Vault Tier as blobs** (preview). This feature enables you to move snapshot-based AKS backups stored in Operational Tier to a Vault-standard Tier. You can use the backup policy, to define whether to store backups just in Operational Tier as snapshots or also protect them in Vault Tier along with Operational. Vaulted backups are stored offsite, which protects them from tenant compromise, malicious attacks, and ransomware threats. You can also retain the backup data for long term. Additionally, you can perform Cross Region Restore by configuring the Backup vault with storage redundancy set as global and Cross Region Restore property enabled. [Learn more](azure-kubernetes-service-backup-overview.md).
+Azure Backup allows you to store AKS cluster backups in both **Operational Tier as snapshot** and **Vault Tier as blobs**. This feature enables you to move snapshot-based AKS backups stored in the Operational Tier to a Vault-standard Tier. You can use the backup policy to define whether to store backups only in the Operational Tier as snapshots or to also protect them in the Vault Tier along with the Operational Tier. Vaulted backups are stored offsite, which protects them from tenant compromise, malicious attacks, and ransomware threats. You can also retain the backup data for the long term. Additionally, you can perform Cross Region Restore by configuring the Backup vault with storage redundancy set to globally redundant and the Cross Region Restore property enabled. [Learn more](azure-kubernetes-service-backup-overview.md).
## Consideration
For backups to be available in Secondary region (Azure Paired Region), [create a
:::image type="content" source="./media/azure-kubernetes-service-cluster-backup/enable-cross-region-restore.png" alt-text="Screenshot shows how to enable the Cross Region Restore parameter.":::
-## Configure Vault Tier backup (preview)
+## Configure Vault Tier backup
To use AKS backup for regional disaster recovery, store the backups in Vault Tier. You can enable this capability by [creating a backup policy](azure-kubernetes-service-cluster-backup.md#create-a-backup-policy) with retention policy set for Vault-standard datastore.
To set the retention policy in a backup policy, follow these steps:
With the new backup policy, you can [configure protection for the AKS cluster](azure-kubernetes-service-cluster-backup.md#configure-backup) and store backups in both the Operational Tier (as snapshots) and the Vault Tier (as blobs). Once the configuration is complete, the backups stored in the vault are available in the secondary region (an [Azure paired region](../reliability/cross-region-replication-azure.md#azure-paired-regions)) and can be used for restore during a regional outage.
-## Restore in secondary region (preview)
+## Restore in secondary region
If there is an outage in the primary region, you can use the recovery points stored in Vault Tier in the secondary region to restore the AKS cluster. Follow these steps:
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Azure Backup is constantly improving and releasing new features that enhance the
You can learn more about the new releases by bookmarking this page or by [subscribing to updates here](https://azure.microsoft.com/updates/?query=backup). ## Updates summary
+- November 2024
+ - [Vaulted backup and Cross Region Restore support for AKS is generally available.](#vaulted-backup-and-cross-region-restore-support-for-aks-is-generally-available)
- October 2024 - [GRS and CRR support for Azure VMs using Premium SSD v2 and Ultra Disk is now generally available.](#grs-and-crr-support-for-azure-vms-using-premium-ssd-v2-and-ultra-disk-is-now-generally-available) - [Back up Azure VMs with Extended Zones](#back-up-azure-vms-with-extended-zones-preview)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Vaulted backup and Cross Region Restore support for AKS is generally available
+
+Azure Backup supports storing AKS backups offsite in a vault, which protects them against tenant compromise, malicious attacks, and ransomware threats. Along with storing backups in a vault, you can also use them to recover your clusters in a regional disaster scenario.
+
+Once the feature is enabled, your snapshot-based AKS backups stored in the Operational Tier are converted into blobs and moved to a Vault-standard tier outside of your tenant. You can enable or disable this feature by updating the retention rules of your backup policy. This feature also allows you to retain backup data for the long term to meet compliance and regulatory requirements. With this feature, you can also configure a Backup vault as *Globally redundant* with *Cross Region Restore*, so that your vaulted backups are available in an Azure paired region for restore. If there's a primary region outage, you can use these backups to restore your AKS clusters in the secondary region.
+
+For more information, see [Overview of AKS backup](azure-kubernetes-service-backup-overview.md).
## GRS and CRR support for Azure VMs using Premium SSD v2 and Ultra Disk is now generally available.
communication-services Number Lookup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-concept.md
description: Learn about Communication Services Number Lookup API concepts.
- Last updated 05/02/2023
Key features of Azure Communication Services Number Lookup include:
- **High Accuracy** We gather data from the most reliable suppliers to ensure that you receive accurate data. Our data is updated regularly to guarantee the highest quality possible. - **High Velocity** Our API is designed to deliver fast and accurate data, even when dealing with high volumes of data. It is optimized for speed and performance to ensure you always receive the information you need quickly and reliably. - **Number Capability Check** Our API provides the associated number type that generally can help determine if an SMS can be sent to a particular number. This helps to avoid frustrating attempts to send messages to non-SMS-capable numbers.-- **Carrier Details** We provide information about the country of destination and carrier information which helps to estimate potential costs and find alternative messaging methods (e.g., sending an email).
+- **Carrier Details** We provide information about the country or region of destination and carrier information which helps to estimate potential costs and find alternative messaging methods (e.g., sending an email).
## Value Proposition
data-factory Connector Deprecation Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-deprecation-frequently-asked-questions.md
Title: Connector deprecation FAQ
-description: Get answers to frequently asked questions about connector deprecation.
+ Title: Connector upgrade FAQ
+description: Get answers to frequently asked questions about connector upgrade.
Previously updated : 10/17/2024 Last updated : 11/08/2024
-# Connector deprecation FAQ
+# Connector upgrade FAQ
-This article provides answers to frequently asked questions about connector deprecation.
+This article provides answers to frequently asked questions about connector upgrade.
## Why does Azure Data Factory (ADF) release new connectors and ask users to upgrade their existing connectors?
databox-online Azure Stack Edge Gpu 2008 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2008-release-notes.md
description: Describes critical open issues and resolutions for the Azure Stack
-+ Last updated 03/05/2021
databox Data Box Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-rest.md
# Tutorial: Use REST APIs to Copy data to Azure Data Box Blob storage
-> [!IMPORTANT]
-> Azure Data Box now supports access tier assignment at the blob level. The steps contained within this tutorial reflect the updated data copy process and are specific to block blobs.
->
->For help with determining the appropriate access tier for your block blob data, refer to the [Determine appropriate access tiers for block blobs](#determine-appropriate-access-tiers-for-block-blobs) section. Follow the steps containined within the [Copy data to Data Box](#copy-data-to-data-box) section to copy your data to the appropriate access tier.
->
-> The information contained within this section applies to orders placed after April 1, 2024.
- > [!CAUTION] > This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](/azure/virtual-machines/workloads/centos/centos-end-of-life).
Before you begin, make sure that:
3. You review the [system requirements for Data Box Blob storage](data-box-system-requirements-rest.md) and are familiar with supported versions of APIs, SDKs, and tools. 4. You have access to a host computer that has the data that you want to copy over to Data Box. Your host computer must: * Run a [Supported operating system](data-box-system-requirements.md).
- * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. If a 10-GbE connection isn't available, a 1-GbE data link can be used but the copy speeds are impacted.
+ * Be connected to a high-speed network. We strongly recommend that you have at least one 10-GbE connection. You can use a 1-GbE data link if a 10-GbE connection isn't available, though copy speeds are impacted.
5. [Download AzCopy V10](../storage/common/storage-use-azcopy-v10.md) on your host computer. AzCopy is used to copy data to Azure Data Box Blob storage from your host computer. ## Connect via http or https
Use the Azure portal to download certificate.
### Import certificate
-Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after it's imported into the system's certificate store, while other applications don't make use of that mechanism.
+Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after importing it into the system's certificate store, while other applications don't make use of that mechanism.
Specific information for some applications is mentioned in this section. For more information on other applications, see the documentation for the application and the operating system used.
Follow the same steps to [add device IP address and blob service endpoint when c
Follow the steps to [Configure partner software that you used while connecting over *http*](#verify-connection-and-configure-partner-software). The only difference is that you should leave the *Use http option* unchecked.
-## Determine appropriate access tiers for block blobs
-
-> [!IMPORTANT]
-> The information contained within this section applies to orders placed after April 1<sup>st</sup>, 2024.
-
-Azure Storage allows you to store block blob data in multiple access tiers within the same storage account. This ability allows data to be organized and stored more efficiently based on how often it's accessed. The following table contains information and recommendations about Azure Storage access tiers.
-
-| Tier | Recommendation | Best practice |
-||-||
-| Hot | Useful for online data accessed or modified frequently. This tier has the highest storage costs, but the lowest access costs. | Data in this tier should be in regular and active use. |
-| Cool | Useful for online data accessed or modified infrequently. This tier has lower storage costs and higher access costs than the hot tier. | Data in this tier should be stored for at least 30 days. |
-| Cold | Useful for online data accessed or modified rarely but still requiring fast retrieval. This tier has lower storage costs and higher access costs than the cool tier.| Data in this tier should be stored for a minimum of 90 days. |
-| Archive | Useful for offline data rarely accessed and having lower latency requirements. | Data in this tier should be stored for a minimum of 180 days. Data removed from the archive tier within 180 days is subject to an early deletion charge. |
-
-For more information about blob access tiers, see [Access tiers for blob data](../storage/blobs/access-tiers-overview.md). For more detailed best practices, see [Best practices for using blob access tiers](../storage/blobs/access-tiers-best-practices.md).
-
-You can transfer your block blob data to the appropriate access tier by copying it to the corresponding folder within Data Box. This process is discussed in greater detail within the [Copy data to Azure Data Box](#copy-data-to-data-box) section.
- ## Copy data to Data Box
-After connecting to one or more Data Box shares, the next step is to copy data. Before you begin the data copy, consider the following limitations:
+After one or more Data Box shares are connected, the next step is to copy data. Before you initiate data copy operations, consider the following limitations:
* While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box limits](data-box-limits.md). * Simultaneous uploads by Data Box and another non-Data Box application could potentially result in upload job failures and data corruption.
The first step is to create a container, because blobs are always uploaded into
![Blob Containers context menu, Create Blob Container](media/data-box-deploy-copy-data-via-rest/create-blob-container-1.png)
-4. A text box appears below the **Blob Containers** folder. Enter the name for your blob container. See the [Create the container and set permissions](../storage/blobs/storage-quickstart-blobs-dotnet.md) for information on rules and restrictions on naming blob containers.
-5. Press **Enter** when done to create the blob container, or **Esc** to cancel. After the blob container is successfully created, it's displayed under the **Blob Containers** folder for the selected storage account.
+4. A text box appears below the **Blob Containers** folder. Enter the name for your blob container. See the [Create the container and set permissions](../storage/blobs/storage-quickstart-blobs-dotnet.md) for information on rules and restrictions on naming blob containers.
+5. Press **Enter** when done to create the blob container, or **Esc** to cancel. After successful creation, the blob container is displayed under the selected storage account's **Blob Containers** folder.
![Blob container created](media/data-box-deploy-copy-data-via-rest/create-blob-container-2.png)
In this tutorial, you learned about Azure Data Box topics such as:
> > * Prerequisites for copy data to Azure Data Box Blob storage using REST APIs > * Connecting to Data Box Blob storage via *http* or *https*
-> * Determining appropriate access tiers for block blobs
> * Copy data to Data Box Advance to the next tutorial to learn how to ship your Data Box back to Microsoft.
hdinsight Hdinsight Plan Virtual Network Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-plan-virtual-network-deployment.md
description: Learn how to plan an Azure Virtual Network deployment to connect HD
Previously updated : 09/06/2024 Last updated : 09/19/2024 # Plan a virtual network for Azure HDInsight
The following are the questions that you must answer when planning to install HD
* Do you need to install HDInsight into an existing virtual network? Or are you creating a new network?
- If you're using an existing virtual network, you may need to modify the network configuration before you can install HDInsight. For more information, see the [add HDInsight to an existing virtual network](#existingvnet) section.
+ If you're using an existing virtual network, you may need to modify the network configuration before you can install HDInsight. For more information, see the [add HDInsight to an existing virtual network](#existingvnet) section.
* Do you want to connect the virtual network containing HDInsight to another virtual network or your on-premises network?
Use the steps in this section to discover how to add a new HDInsight to an exist
As a managed service, HDInsight requires unrestricted access to several IP addresses in the Azure data center. To allow communication with these IP addresses, update any existing network security groups or user-defined routes.
- HDInsight hosts multiple services, which use a variety of ports. Don't block traffic to these ports. For a list of ports to allow through virtual appliance firewalls, see the Security section.
+ HDInsight hosts multiple services, which use various ports. Don't block traffic to these ports. For a list of ports to allow through virtual appliance firewalls, see the Security section.
To find your existing security configuration, use the following Azure PowerShell or Azure CLI commands:
Use the steps in this section to discover how to add a new HDInsight to an exist
az network nsg list --resource-group RESOURCEGROUP ```
- For more information, see the [Troubleshoot network security groups](../virtual-network/diagnose-network-traffic-filter-problem.md) document.
+ For more information, see [Troubleshoot network security groups](../virtual-network/diagnose-network-traffic-filter-problem.md).
> [!IMPORTANT] > Network security group rules are applied in order based on rule priority. The first rule that matches the traffic pattern is applied, and no others are applied for that traffic. Order rules from most permissive to least permissive. For more information, see the [Filter network traffic with network security groups](../virtual-network/network-security-groups-overview.md) document.
Use the steps in this section to discover how to add a new HDInsight to an exist
az network route-table list --resource-group RESOURCEGROUP ```
- For more information, see the [Troubleshoot routes](../virtual-network/diagnose-network-routing-problem.md) document.
+ For more information, see the [Diagnose a virtual machine routing problem](../virtual-network/diagnose-network-routing-problem.md) document.
-3. Create an HDInsight cluster and select the Azure Virtual Network during configuration. Use the steps in the following documents to understand the cluster creation process:
+3. Create an HDInsight cluster and select the Azure Virtual Network during configuration. Use the steps in the following documents to understand the cluster creation process:
* [Create HDInsight using the Azure portal](hdinsight-hadoop-create-linux-clusters-portal.md) * [Create HDInsight using Azure PowerShell](hdinsight-hadoop-create-linux-clusters-azure-powershell.md)
For more information, see the [Name Resolution for VMs and Role Instances](../vi
## Directly connect to Apache Hadoop services
-You can connect to the cluster at `https://CLUSTERNAME.azurehdinsight.net`. This address uses a public IP, which may not be reachable if you have used NSGs to restrict incoming traffic from the internet. Additionally, when you deploy the cluster in a VNet you can access it using the private endpoint `https://CLUSTERNAME-int.azurehdinsight.net`. This endpoint resolves to a private IP inside the VNet for cluster access.
+You can connect to the cluster at `https://CLUSTERNAME.azurehdinsight.net`. This address uses a public IP, which may not be reachable if you have used NSGs to restrict incoming traffic from the internet. Additionally, when you deploy the cluster in a virtual network you can access it using the private endpoint `https://CLUSTERNAME-int.azurehdinsight.net`. This endpoint resolves to a private IP inside the virtual network for cluster access.
To connect to Apache Ambari and other web pages through the virtual network, use the following steps:
To connect to Apache Ambari and other web pages through the virtual network, use
## Load balancing
-When you create an HDInsight cluster, a load balancer is created as well. The type of this load balancer is at the [basic SKU level](../load-balancer/skus.md), which has certain constraints. One of these constraints is that if you have two virtual networks in different regions, you cannot connect to basic load balancers. See [virtual networks FAQ: constraints on global vnet peering](../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-virtual-network-peering-and-load-balancers), for more information.
+When you create an HDInsight cluster, several load balancers are created as well. Due to the [retirement of the basic load balancer](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/), the load balancers are at the [standard SKU level](/azure/load-balancer/skus), which has certain constraints. Inbound flows to the standard load balancers are closed unless allowed by a network security group. You may need to associate a network security group with your subnet and configure the network security rules.
-Another constraint is that the HDInsight load balancers should not be deleted or modified. **Any changes to the load balancer rules will get overwritten during certain maintenance events such as certificate renewals.** If the load balancers are modified and it affects the cluster functionality, you may need to recreate the cluster.
+There are [several outbound connectivity methods](/azure/load-balancer/load-balancer-outbound-connections) enabled for the standard load balancer. It's worth noting that default outbound access will be retired soon. If a NAT gateway is adopted to provide outbound network access, the subnet isn't compatible with a basic load balancer. If you intend to associate a NAT gateway with a subnet, the subnet must not contain an existing basic load balancer. With the NAT gateway as the outbound access method, a newly created HDInsight cluster can't share the same subnet with previously created HDInsight clusters that use basic load balancers.
+
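As a rough sketch of that setup, the following Azure CLI commands (all resource names are placeholders) create a NAT gateway and a network security group and associate both with the subnet before the cluster is created; adjust the names and security rules to your environment.

```azurecli
# Placeholder names; create a Standard public IP and a NAT gateway for outbound access.
az network public-ip create --resource-group RESOURCEGROUP --name natgw-pip --sku Standard
az network nat gateway create --resource-group RESOURCEGROUP --name hdi-natgateway \
    --public-ip-addresses natgw-pip

# Create a network security group (add the required HDInsight rules to it), then attach
# both the NAT gateway and the NSG to the subnet that will host the cluster.
az network nsg create --resource-group RESOURCEGROUP --name hdi-nsg
az network vnet subnet update --resource-group RESOURCEGROUP --vnet-name VNETNAME \
    --name SUBNETNAME --nat-gateway hdi-natgateway --network-security-group hdi-nsg
```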
+Another constraint is that the HDInsight load balancers shouldn't be deleted or modified. **Any changes to the load balancer rules will get overwritten during certain maintenance events such as certificate renewals.** If the load balancers are modified and it affects the cluster functionality, you may need to recreate the cluster.
## Next steps
hdinsight Hdinsight Restrict Public Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-public-connectivity.md
description: Learn how to remove access to all outbound public IP addresses.
Previously updated : 01/04/2024 Last updated : 09/19/2024 # Restrict public connectivity in Azure HDInsight
If you want public connectivity between your HDInsight cluster and dependent res
The following diagram shows what a potential HDInsight virtual network architecture might look like when `resourceProviderConnection` is set to *outbound*: > [!NOTE] > Restricting public connectivity is a prerequisite for enabling Private Link and shouldn't be considered the same capability.
hdinsight Hdinsight Virtual Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-virtual-network-architecture.md
Title: Azure HDInsight virtual network architecture
-description: Learn the resources available when you create an HDInsight cluster in an Azure Virtual Network.
+description: Learn the resources available when you create a HDInsight cluster in an Azure Virtual Network.
Previously updated : 12/05/2023 Last updated : 01/09/2024 # Azure HDInsight virtual network architecture
-This article explains the resources that are present when you deploy an HDInsight cluster into a custom Azure Virtual Network. This information helps you to connect on-premises resources to your HDInsight cluster in Azure. For more information on Azure Virtual Networks, see [What is Azure Virtual Network?](../virtual-network/virtual-networks-overview.md).
+This article explains the resources that are present when you deploy a HDInsight cluster into a custom Azure Virtual Network. This information helps you to connect on-premises resources to your HDInsight cluster in Azure. For more information on Azure Virtual Networks, see [What is Azure Virtual Network?](../virtual-network/virtual-networks-overview.md).
-## Resource types in Azure HDInsight clusters
+## Resource types in Azure HDInsight cluster
Azure HDInsight clusters have different types of virtual machines, or nodes. Each node type plays a role in the operation of the system. The following table summarizes these node types and their roles in the cluster.
Azure HDInsight clusters have different types of virtual machines, or nodes. Eac
Use Fully Qualified Domain Names (FQDNs) when addressing nodes in your cluster. You can get the FQDNs for various node types in your cluster using the [Ambari API](hdinsight-hadoop-manage-ambari-rest-api.md).
-These FQDNs will be of the form `<node-type-prefix><instance-number>-<abbreviated-clustername>.<unique-identifier>.cx.internal.cloudapp.net`.
+These FQDNs are of the form `<node-type-prefix><instance-number>-<abbreviated-clustername>.<unique-identifier>.cx.internal.cloudapp.net`.
-The `<node-type-prefix>` will be `hn` for headnodes, `wn` for worker nodes and `zn` for zookeeper nodes.
+The `<node-type-prefix>` is `hn` for headnodes, `wn` for worker nodes and `zn` for zookeeper nodes.
If you need just the host name, use only the first part of the FQDN: `<node-type-prefix><instance-number>-<abbreviated-clustername>`
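As a quick illustration of the Ambari API call mentioned earlier, the following sketch lists the node FQDNs; the cluster name `CLUSTERNAME` and the cluster login account (`admin`) are placeholders.

```bash
# Prompts for the cluster login password and prints the host_name (FQDN) of each node.
curl -u admin -sS "https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME/hosts" \
    | grep '"host_name"'
```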
If you need just the host name, use only the first part of the FQDN: `<node-type
The following diagram shows the placement of HDInsight nodes and network resources in Azure. The default resources in an Azure Virtual Network include the cluster node types mentioned in the previous table. And network devices that support communication between the virtual network and outside networks.
The following network resources present are automatically created inside the vir
| Networking resource | Number present | Details | | | | |
-|Load balancer | three | |
+|Load balancer | two | The load balancers provide inbound network access to the nodes: one for the two head nodes and one for the two gateway nodes. The load balancers use the standard SKU.|
|Network Interfaces | nine | This value is based on a normal cluster, where each node has its own network interface. The nine interfaces are for: two head nodes, three zookeeper nodes, two worker nodes, and two gateway nodes mentioned in the previous table. |
-|Public IP Addresses | two | |
+|Public IP Addresses | two | Two public IP addresses are associated with the load balancers. |
+
+There are several outbound connectivity methods that can be used with the custom virtual network, as described in [Source Network Address Translation (SNAT) for outbound connections - Azure Load Balancer](/azure/load-balancer/load-balancer-outbound-connections).
+
+> [!NOTE]
+> The recommended approach is to associate the subnet with a NAT gateway. This requires creating a NAT gateway and a network security group in the subnet before you create the HDInsight cluster. You can associate a public IP address or a public IP prefix with the NAT gateway. For the NSG rules to create, see [Control network traffic in Azure HDInsight](./control-network-traffic.md#hdinsight-with-network-security-groups).
## Endpoints for connecting to HDInsight
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
Title: Tutorial - Configure Enrollment over Secure Transport Server (EST) for Az
description: This tutorial shows you how to set up an Enrollment over Secure Transport (EST) server for Azure IoT Edge. Previously updated : 06/10/2024 Last updated : 11/07/2024
You can keep the resources and configurations that you created in this tutorial
* To use EST server to issue Edge CA certificates, see [example configuration](https://github.com/Azure/iotedge/blob/main/edgelet/doc/est.md#edge-ca-certificate). * Using username and password to bootstrap authentication to EST server isn't recommended for production. Instead, consider using long-lived *bootstrap certificates* that can be stored onto the device during manufacturing [similar to the recommended approach for DPS](../iot-hub/iot-hub-x509ca-concept.md). To see how to configure bootstrap certificate for EST server, see [Authenticate a Device Using Certificates Issued Dynamically via EST](https://github.com/Azure/iotedge/blob/main/edgelet/doc/est.md). * EST server can be used to issue certificates for all devices in a hierarchy as well. Depending on if you have ISA-95 requirements, it may be necessary to run a chain of EST servers with one at every layer or use the API proxy module to forward the requests. To learn more, see [Kevin's blog](https://kevinsaye.wordpress.com/2021/07/21/deep-dive-creating-hierarchies-of-azure-iot-edge-devices-isa-95-part-3/).
-* For enterprise grade solutions, consider: [GlobalSign IoT Edge Enroll](https://www.globalsign.com/en/iot-edge-enroll) or [DigiCert IoT Device Manager](https://www.digicert.com/iot/iot-device-manager)
+* For enterprise-grade solutions, consider [GlobalSign IoT Edge Enroll](https://www.globalsign.com/en/iot-edge-enroll), [DigiCert IoT Device Manager](https://www.digicert.com/iot/iot-device-manager), or [Keytos EZCA](https://www.keytos.io/docs/azure-pki/azure-iot-hub/how-to-create-azure-iot-est-certificate-authority/).
* To learn more about certificates, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
openshift Howto Infrastructure Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-infrastructure-nodes.md
spec:
userDataSecret: name: worker-user-data vmSize: <Standard_E4s_v5, Standard_E8s_v5, Standard_E16s_v5>
- vnet: aro-vnet
+ vnet: <VNET_NAME>
zone: <ZONE> taints: - key: node-role.kubernetes.io/infra
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TIP feeds, see [Connect threat intelligence platforms to Microsoft
- EclecticIQ Platform integrates with Microsoft Sentinel to enhance threat detection, hunting, and response. Learn more about the [benefits and use cases](https://www.eclecticiq.com/resources/microsoft-sentinel-and-eclecticiq-intelligence-center) of this two-way integration.
+### Filigran OpenCTI
+
+- [Filigran OpenCTI](https://filigran.io/solutions/open-cti/) can send threat intelligence to Microsoft Sentinel either via a [dedicated connector](https://filigran.notion.site/Microsoft-Sentinel-Intel-11c8fce17f2a80209a60e8914e6d1009) that runs in real time, or by acting as a TAXII 2.1 server that Microsoft Sentinel polls regularly. It can also receive structured incidents from Microsoft Sentinel via the [Microsoft Sentinel Incident connector](https://filigran.notion.site/Microsoft-Sentinel-Incidents-11c8fce17f2a80f1b461c6379265d5d3).
++ ### GroupIB Threat Intelligence and Attribution - To connect [GroupIB Threat Intelligence and Attribution](https://www.group-ib.com/products/threat-intelligence/) to Microsoft Sentinel, GroupIB makes use of Logic Apps. See the [specialized instructions](https://techcommunity.microsoft.com/t5/azure-sentinel/group-ib-threat-intelligence-and-attribution-connector-azure/ba-p/2252904) that are necessary to take full advantage of the complete offering.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
The following table summarizes Site Recovery limits.
- The current limit for per virtual machine data churn is 54 MB/s, regardless of size.
-**Storage target** | **Average source disk I/O** |**Average source disk data churn** | **Total source disk data churn per day**
+**Replica disk type** | **Average source disk I/O** | **Average source disk data churn** | **Total source disk data churn per day**
--- | --- | --- | ---
Standard storage | 8 KB | 2 MB/s | 168 GB per disk
-Premium P10 or P15 disk | 8 KB | 2 MB/s | 168 GB per disk
-Premium P10 or P15 disk | 16 KB | 4 MB/s | 336 GB per disk
-Premium P10 or P15 disk | 32 KB or greater | 8 MB/s | 672 GB per disk
-Premium P20 or P30 or P40 or P50 disk | 8 KB | 5 MB/s | 421 GB per disk
-Premium P20 or P30 or P40 or P50 disk | 16 KB or greater |20 MB/s | 1684 GB per disk
+Premium SSD with disk size 128 GiB or more | 8 KB | 2 MB/s | 168 GB per disk
+Premium SSD with disk size 128 GiB or more | 16 KB | 4 MB/s | 336 GB per disk
+Premium SSD with disk size 128 GiB or more | 32 KB or greater | 8 MB/s | 672 GB per disk
+Premium SSD with disk size 512 GiB or more | 8 KB | 5 MB/s | 421 GB per disk
+Premium SSD with disk size 512 GiB or more | 16 KB or greater |20 MB/s | 1684 GB per disk
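For reference, the daily totals in the last column follow from sustaining the average churn for a full day: for example, 2 MB/s × 86,400 seconds ≈ 168 GB per disk, and 5 MB/s × 86,400 seconds ≈ 421 GB per disk (treating MB and GB as binary units).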
>[!Note]
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
If you select the event, you should see the exact disk information:
The following table provides the Azure Site Recovery limits. These limits are based on our tests, but they can't cover all possible application input-output (I/O) combinations. Actual results can vary based on your application I/O mix.
-There are two limits to consider: data churn per disk and data churn per virtual machine. Let's look at the Premium P20 disk in the following table for an example. For a single VM, Site Recovery can handle 5 MB/s of churn per disk with a maximum of five such disks. Site Recovery has a limit of 54 MB/s of total churn per VM.
+There are two limits to consider: data churn per disk and data churn per virtual machine. Review the churn limits in the [Azure-to-Azure support matrix](./azure-to-azure-support-matrix.md#limits-and-data-change-rates).
-**Replication storage target** | **Average I/O size for source disk** |**Average data churn for source disk** | **Total data churn per day for source data disk**
-|||
-Standard storage | 8 KB | 2 MB/s | 168 GB per disk
-Premium P10 or P15 disk | 8 KB | 2 MB/s | 168 GB per disk
-Premium P10 or P15 disk | 16 KB | 4 MB/s | 336 GB per disk
-Premium P10 or P15 disk | 32 KB or greater | 8 MB/s | 672 GB per disk
-Premium P20 or P30 or P40 or P50 disk | 8 KB | 5 MB/s | 421 GB per disk
-Premium P20 or P30 or P40 or P50 disk | 16 KB or greater |20 MB/s | 1684 GB per disk
### Solution
Azure Site Recovery has limits on data change rates, depending on the type of di
A spike in data change rate might come from an occasional data burst. If the data change rate is greater than 10 MB/s (for Premium) or 2 MB/s (for Standard) and comes down, replication will catch up. If the churn is consistently well beyond the supported limit, consider one of these options: - Exclude the disk that's causing a high data-change rate: First, disable the replication. Then you can exclude the disk by using [PowerShell](azure-to-azure-exclude-disks.md).-- Change the tier of the disaster recovery storage disk: This option is possible only if the disk data churn is less than 20 MB/s. For example, a VM with a P10 disk has a data churn of greater than 8 MB/s but less than 10 MB/s. If the customer can use a P30 disk for target storage during protection, the problem can be solved. This solution is only possible for machines that are using Premium-Managed Disks. Follow these steps:
+- Change the disk size of the replica disk. This option is useful only if the disk data churn is less than 20 MB/s per disk, or less than 50 MB/s per disk for [High Churn](./concepts-azure-to-azure-high-churn-support.md). For example, assume you haven't opted for High Churn support and have a VM with a 128-GiB disk and a data churn between 8 MB/s and 10 MB/s. Because a 128-GiB disk has a churn limit of 8 MB/s, you can increase the disk size to 512 GiB to support the higher churn. This solution is only possible for machines that use Premium-Managed Disks. Follow these steps:
1. Go to **Disks** of the affected replicated machine and copy the replica disk name.
1. Go to this replica of the managed disk.
1. You might see a banner in **Overview** that says an SAS URL has been generated. Select this banner and cancel the export. Ignore this step if you don't see the banner.
1. As soon as the SAS URL is revoked, go to **Size + Performance** for the managed disk. Increase the size so that Site Recovery supports the observed churn rate on the source disk.
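If you prefer to script those steps, the following Azure CLI sketch (placeholder names throughout) revokes any active SAS export on the replica managed disk and then increases its size; 512 GiB is only an example target size.

```azurecli
# Placeholder names; cancel the SAS export, then resize the replica disk.
az disk revoke-access --resource-group RESOURCEGROUP --name REPLICA_DISK_NAME
az disk update --resource-group RESOURCEGROUP --name REPLICA_DISK_NAME --size-gb 512
```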
+> [!IMPORTANT]
+> The churn limit supported by Azure Site Recovery depends on the disk size of the replica premium SSD disk. This limit remains the same even if you [change the performance tier](https://learn.microsoft.com/azure/virtual-machines/disks-change-performance) of the replica disk. For example, if you have a premium SSD replica disk with a disk size of 128 GiB, its base performance tier is P10. If you update its performance tier to P50 without changing the disk size, the churn limit won't change.
++ ### Disk tier/SKU change considerations Whenever Disk tier or SKU is changed, all the snapshots (bookmarks) corresponding to the disk are created by the disk resource provider. Thus, you may have recovery points where some of the underlying snapshots don't exist at the end of the disk resource provider.
site-recovery Concepts Azure To Azure High Churn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-azure-to-azure-high-churn-support.md
Azure Site Recovery supports churn (data change rate) up to 100 MB/s per virtual
The following table summarizes Site Recovery limits:
-|Target Disk Type|Avg I/O Size|Avg Churn Supported|
+|Replica Disk type |Avg I/O Size|Avg Churn Supported|
|---|---|---|
-|Standard or P10 or P15 |8 KB|2 MB/s|
-|Standard or P10 or P15|16 KB|4 MB/s|
-|Standard or P10 or P15|24 KB|6 MB/s|
-|Standard or P10 or P15|32 KB and later |10 MB/s|
-|P20|8 KB|10 MB/s|
-|P20 |16 KB|20 MB/s|
-|P20|24 KB and later|30 MB/s|
-|P30 and later|8 KB|20 MB/s|
-|P30 and later|16 KB|35 MB/s|
-|P30 and later|24 KB and later|50 MB/s|
+|Standard |8 KB|2 MB/s|
+|Standard |16 KB|4 MB/s|
+|Standard |24 KB|6 MB/s|
+|Standard |32 KB and later |8 MB/s|
+|Premium SSD with disk size 128 GiB or more |8 KB|10 MB/s|
+|Premium SSD with disk size 128 GiB or more |16 KB|20 MB/s|
+|Premium SSD with disk size 128 GiB or more |24 KB and later |30 MB/s|
+|Premium SSD with disk size 512 GiB or more |8 KB|10 MB/s|
+|Premium SSD with disk size 512 GiB or more |16 KB|20 MB/s|
+|Premium SSD with disk size 512 GiB or more |24 KB and later |30 MB/s|
+|Premium SSD with disk size 1 TiB or more |8 KB|20 MB/s|
+|Premium SSD with disk size 1 TiB or more |16 KB|35 MB/s|
+|Premium SSD with disk size 1 TiB or more |24 KB and later|50 MB/s|
+ ## How to enable High Churn support
storage Storage Files Identity Ad Ds Update Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-update-password.md
description: Learn how to update the password of the Active Directory Domain Ser
Previously updated : 05/09/2024 Last updated : 11/08/2024 recommendations: false
$NewPassword = ConvertTo-SecureString -String $KerbKey -AsPlainText -Force
Set-ADAccountPassword -Identity <domain-object-identity> -Reset -NewPassword $NewPassword ```+
+## Test that the AD DS account password matches a Kerberos key
+
+Now that you've updated the AD DS account password, you can verify that it matches one of the storage account's Kerberos keys by using the following PowerShell command.
+
+```powershell
+ Test-AzStorageAccountADObjectPasswordIsKerbKey -ResourceGroupName "<your-resource-group-name>" -Name "<your-storage-account-name>" -Verbose
+```
+
storsimple Storsimple Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md
ms.assetid: 7144d218-db21-4495-88fb-e3b24bbe45d1 Previously updated : 07/10/2023 Last updated : 11/07/2024
The following resources are available to help you migrate backup files or to cop
|Resource |Description | ||-| |[Azure StorSimple 8000 Series Copy Utility](https://aka.ms/storsimple-copy-utility) |Microsoft is providing a read-only data copy utility to recover and migrate your backup files from StorSimple cloud snapshots. The StorSimple 8000 Series Copy Utility is designed to run in your environment. You can install and configure the Utility, and then use your Service Encryption Key to authenticate and download your metadata from the cloud.|
-|Azure StorSimple 8000 Series Copy Utility documentation |Instructions for use of the Copy Utility. |
-|StorSimple archived documentation |Archived StorSimple articles from Microsoft technical documentation. |
+|[Azure StorSimple 8000 Series Copy Utility documentation](https://aka.ms/storsimple-copy-utility-docs) |Instructions for use of the Copy Utility. |
+|[StorSimple archived documentation](https://aka.ms/storsimple-archive-docs) |Archived StorSimple articles from Microsoft technical documentation. |
## Copy data and then decommission your appliance
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
|Azure Synapse Workspace|[Known issue incorporating square brackets [] in the value of Tags](#known-issue-incorporating-square-brackets--in-the-value-of-tags)|Has workaround| |Azure Synapse Workspace|[Deployment Failures in Synapse Workspace using Synapse-workspace-deployment v1.8.0 in GitHub actions with ARM templates](#deployment-failures-in-synapse-workspace-using-synapse-workspace-deployment-v180-in-github-actions-with-arm-templates)|Has workaround| |Azure Synapse Workspace|[No `GET` API operation dedicated to the `Microsoft.Synapse/workspaces/trustedServiceBypassEnabled` setting](#no-get-api-operation-dedicated-to-the-microsoftsynapseworkspacestrustedservicebypassenabled-setting)|Has workaround|
-|Azure Synapse Apache Spark pool|[Query failure with a LIKE clause using Synapse Dedicated SQL Pool Connector in Spark 3.4 runtime](#query-failure-with-a-like-clause-using-synapse-dedicated-sql-pool-connector-in-spark-34-runtime)|Has Workaround|
- ## Azure Synapse Analytics dedicated SQL pool active known issues summary
When you query the view for which the underlying schema has changed after the vi
**Workaround**: Manually adjust the view definition.
-## Azure Synapse Analytics Apache Spark pool active known issues summary
-
-The following are known issues with the Synapse Spark.
-
-### Query failure with a LIKE clause using Synapse Dedicated SQL Pool Connector in Spark 3.4 runtime
-
-The open source Apache Spark 3.4 has introduced an [issue](https://issues.apache.org/jir), it can generate an invalid SQL query for Synapse SQL and the Synapse Spark notebook or batch job would throw an error similar to:
-
-`com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: com.microsoft.sqlserver.jdbc.SQLServerException: Parse error at line: 1, column: XXX: Incorrect syntax near ''%test%''`
-
-**Workaround**: The engineering team is currently aware of this behavior and working on a fix. If you encountered a similar error, please engage Microsoft Support Team for assistance and to provide a temporary workaround.
- ## Recently closed known issues
The open source Apache Spark 3.4 has introduced an [issue](https://issues.apache
|Azure Synapse serverless SQL pool|[Query failures while reading Cosmos DB data using OPENROWSET](#query-failures-while-reading-azure-cosmos-db-data-using-openrowset)|Resolved|March 2023| |Azure Synapse Apache Spark pool|[Failed to write to SQL Dedicated Pool from Synapse Spark using Azure Synapse dedicated SQL pool Connector for Apache Spark when using notebooks in pipelines](#failed-to-write-to-sql-dedicated-pool-from-synapse-spark-using-azure-synapse-dedicated-sql-pool-connector-for-apache-spark-when-using-notebooks-in-pipelines)|Resolved|June 2023| |Azure Synapse Apache Spark pool|[Certain spark job or task fails too early with Error Code 503 due to storage account throttling](#certain-spark-job-or-task-fails-too-early-with-error-code-503-due-to-storage-account-throttling)|Resolved|November 2023|
+|Azure Synapse Apache Spark pool|[Query failure with a LIKE clause using Synapse Dedicated SQL Pool Connector in Spark 3.4 runtime](#query-failure-with-a-like-clause-using-synapse-dedicated-sql-pool-connector-in-spark-34-runtime)|Resolved|October 2024|
## Azure Synapse Analytics serverless SQL pool recently closed known issues summary
Between October 3, 2023 and November 16, 2023, few Azure Synapse Analytics Apach
**Status**: Resolved
+### Query failure with a LIKE clause using Synapse Dedicated SQL Pool Connector in Spark 3.4 runtime
+
+The open source Apache Spark 3.4 introduced an [issue](https://issues.apache.org/jir) that can generate an invalid SQL query for Synapse SQL, causing the Synapse Spark notebook or batch job to throw an error similar to:
+
+`com.microsoft.spark.sqlanalytics.SQLAnalyticsConnectorException: com.microsoft.sqlserver.jdbc.SQLServerException: Parse error at line: 1, column: XXX: Incorrect syntax near ''%test%''`
+
+**Status**: Resolved
+ ## Related content - [Synapse Studio troubleshooting](troubleshoot/troubleshoot-synapse-studio.md)
update-manager Workflow Update Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workflow-update-manager.md
Update Manager assesses and applies updates to all Azure machines and Azure Arc-enabled servers for both Windows and Linux.
-![Diagram that shows the Update Manager workflow.](./media/overview/update-management-center-overview.png)
++ ## Update Manager VM extensions
virtual-desktop Add Session Hosts Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/add-session-hosts-host-pool.md
For a general idea of what's required, such as supported operating systems, virt
- Your Azure subscription registered with the respective Azure Extended Zone. For more information, see [Request access to an Azure Extended Zone](../extended-zones/request-access.md).
- - An existing [Azure load balancer](../load-balancer/load-balancer-outbound-connections.md) on the virtual network to which you're deploying the session hosts.
+ - An [Azure load balancer](../load-balancer/load-balancer-outbound-connections.md) with an outbound rule on the virtual network to which you're deploying session hosts. You can use an existing load balancer or create a new one when adding session hosts (a rough Azure CLI sketch is shown after this list).
- If you want to use the Azure CLI or Azure PowerShell locally, see [Use the Azure CLI and Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [desktopvirtualization](/cli/azure/desktopvirtualization) Azure CLI extension or the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) Azure PowerShell module installed. Alternatively, use [Azure Cloud Shell](../cloud-shell/overview.md).
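If you don't already have a suitable load balancer, the following Azure CLI sketch (all names are placeholders) creates a Standard load balancer with a backend pool and an outbound rule that session hosts can use for outbound access; adapt it to your own network design.

```azurecli
# Placeholder names; create a Standard public IP and a public Standard load balancer.
az network public-ip create --resource-group RESOURCEGROUP --name lb-outbound-pip --sku Standard
az network lb create --resource-group RESOURCEGROUP --name avd-lb --sku Standard \
    --public-ip-address lb-outbound-pip --frontend-ip-name outboundFrontend \
    --backend-pool-name sessionHostsPool

# Add an outbound rule so members of the backend pool get outbound connectivity.
az network lb outbound-rule create --resource-group RESOURCEGROUP --lb-name avd-lb \
    --name outboundRule --protocol All --frontend-ip-configs outboundFrontend \
    --address-pool sessionHostsPool
```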
Here's how to create session hosts and register them to a host pool by using the
| **Confirm password** | Reenter the password. | | **Custom configuration** | | | **Custom configuration script URL** | If you want to run a PowerShell script during deployment, you can enter the URL here. |+ </details> <details>
Here's how to create session hosts and register them to a host pool by using the
| **Username** | Enter a name to use as the local administrator account for the new session hosts. | | **Password** | Enter a password for the local administrator account. | | **Confirm password** | Reenter the password. |+ </details> <details>
Here's how to create session hosts and register them to a host pool by using the
| **Resource group** | This value defaults to the resource group that you chose to contain your host pool on the **Basics** tab, but you can select an alternative. | | **Name prefix** | Enter a name prefix for your session hosts, such as **hp01-sh**.<br /><br />Each session host has a suffix of a hyphen and then a sequential number added to the end, such as **hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. | | **Virtual machine type** | Select **Azure virtual machine**. |
- | **Virtual machine location** | Select the Azure region where you want to deploy your session hosts. It must be the same region that contains your virtual network. Then select **Deploy to an Azure Extended Zone**. |
- | **Azure Extended Zones** | |
- | **Azure Extended Zone** | Select **Los Angeles**. |
- | **Place the session host(s) behind an existing load balancing solution?** | Select the box. This action shows options for selecting a load balancer and a back-end pool.|
- | **Select a load balancer** | Select an existing load balancer on the virtual network to which you're deploying the session hosts. |
- | **Select a backend pool** | Select a back-end pool on the load balancer in which you want to place the sessions hosts. |
- | **Availability options** | Select from [availability zones](/azure/reliability/availability-zones-overview), [availability set](/azure/virtual-machines/availability-set-overview), or **No infrastructure dependency required**. If you select **availability zones** or **availability set**, complete the extra parameters that appear. |
- | **Security type** | Select from **Standard**, [Trusted launch virtual machines](/azure/virtual-machines/trusted-launch), or [Confidential virtual machines](/azure/confidential-computing/confidential-vm-overview).<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. |
+ | **Virtual machine location** | Select **Deploy to an Azure Extended Zone**. |
+ | **Azure Extended Zone** | Select the Extended Zone you require. |
+ | **Network and security** | |
+ | **Select a load balancer** | Select an existing Azure load balancer on the same virtual network you want to use for your session hosts, or select **Create a load balancer** to create a new load balancer.|
+ | **Select a backend pool** | Select a backend pool on the load balancer you want to use for your session hosts. If you're creating a new load balancer, select **Create new** to create a new backend pool for the new load balancer. |
+ | **Add outbound rule** | If you're creating a new load balancer, select **Create new** to create a new outbound rule for it. |
+ </details> After you complete this tab, select **Next: Tags**.
virtual-desktop Deploy Azure Virtual Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-virtual-desktop.md
In addition to the general prerequisites, you need:
- Your Azure subscription registered with the respective Azure Extended Zone. For more information, see [Request access to an Azure Extended Zone](../extended-zones/request-access.md).
- - An existing [Azure load balancer](../load-balancer/load-balancer-outbound-connections.md) on the virtual network where you're deploying the session hosts.
+ - An [Azure load balancer](../load-balancer/load-balancer-outbound-connections.md) with an outbound rule on the virtual network to which you're deploying session hosts. You can use an existing load balancer or create a new one when adding session hosts.
# [Azure PowerShell](#tab/powershell-standard)
Here's how to create a host pool by using the Azure portal:
| **Resource group** | This value defaults to the resource group that you chose to contain your host pool on the **Basics** tab, but you can select an alternative. | | **Name prefix** | Enter a name prefix for your session hosts, such as **hp01-sh**.<br /><br />Each session host has a suffix of a hyphen and then a sequential number added to the end, such as **hp01-sh-0**.<br /><br />This name prefix can be a maximum of 11 characters and is used in the computer name in the operating system. The prefix and the suffix combined can be a maximum of 15 characters. Session host names must be unique. | | **Virtual machine type** | Select **Azure virtual machine**. |
- | **Virtual machine location** | Select the Azure region where you want to deploy your session hosts. This value must be the same region that contains your virtual network. Then select **Deploy to an Azure Extended Zone**. |
- | **Azure Extended Zones** | |
- | **Azure Extended Zone** | Select **Los Angeles**. |
- | **Place the session host(s) behind an existing load balancing solution?** | Select the box. This action shows options for selecting a load balancer and a back-end pool.|
- | **Select a load balancer** | Select an existing load balancer on the virtual network where you're deploying the session hosts. |
- | **Select a backend pool** | Select a back-end pool on the load balancer where you want to place the session hosts. |
- | **Availability options** | Select from [availability zones](/azure/reliability/availability-zones-overview), [availability set](/azure/virtual-machines/availability-set-overview), or **No infrastructure dependency required**. If you select **availability zones** or **availability set**, complete the extra parameters that appear. |
- | **Security type** | Select from **Standard**, [Trusted launch virtual machines](/azure/virtual-machines/trusted-launch), or [Confidential virtual machines](/azure/confidential-computing/confidential-vm-overview).<br /><br />- If you select **Trusted launch virtual machines**, options for **secure boot** and **vTPM** are automatically selected.<br /><br />- If you select **Confidential virtual machines**, options for **secure boot**, **vTPM**, and **integrity monitoring** are automatically selected. You can't opt out of vTPM when using a confidential VM. |
+ | **Virtual machine location** | Select **Deploy to an Azure Extended Zone**. |
+ | **Azure Extended Zone** | Select the Extended Zone you require. |
+ | **Network and security** | |
+ | **Select a load balancer** | Select an existing Azure load balancer on the same virtual network you want to use for your session hosts, or select **Create a load balancer** to create a new load balancer.|
+ | **Select a backend pool** | Select a backend pool on the load balancer you want to use for your session hosts. If you're creating a new load balancer, select **Create new** to create a new backend pool for the new load balancer. |
+ | **Add outbound rule** | If you're creating a new load balancer, select **Create new** to create a new outbound rule for it. |
</details>
vpn-gateway Point To Site Certificate Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-certificate-gateway.md
Title: 'Configure P2S server configuration - certificate authentication: Azure portal'
+ Title: 'Configure VPN gateway for P2S certificate authentication: Azure portal'
-description: Learn how to configure VPN Gateway server settings for P2S configurations - certificate authentication.
+description: Learn how to configure VPN Gateway server settings for point-to-site configurations - certificate authentication.
- Previously updated : 09/06/2024 Last updated : 11/07/2024 # Configure server settings for P2S VPN Gateway certificate authentication
-This article helps you configure the necessary VPN Gateway point-to-site (P2S) server settings to let you securely connect individual clients running Windows, Linux, or macOS to an Azure virtual network (VNet). P2S VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network (VNet).
+This article helps you configure the necessary VPN Gateway point-to-site (P2S) server settings to let you securely connect individual clients running Windows, Linux, or macOS to an Azure virtual network (VNet). P2S VPN connections are useful when you want to connect to your virtual network from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network.
P2S connections don't require a VPN device or a public-facing IP address. There are various different configuration options available for P2S. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md).
-The steps in this article create a P2S configuration that uses **certificate authentication** and the Azure portal. To create this configuration using the Azure PowerShell, see the [Configure P2S - Certificate - PowerShell](vpn-gateway-howto-point-to-site-rm-ps.md) article. For RADIUS authentication, see the [P2S RADIUS](point-to-site-how-to-radius-ps.md) article. For Microsoft Entra authentication, see the [P2S Microsoft Entra ID](openvpn-azure-ad-tenant.md) article.
+The steps in this article use the Azure portal to configure your Azure VPN gateway for point-to-site **certificate authentication**.
[!INCLUDE [P2S basic architecture](../../includes/vpn-gateway-p2s-architecture.md)] ## Prerequisites
-Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+This article assumes the following prerequisites:
-### <a name="example"></a>Example values
+* An Azure virtual network.
+* A route-based VPN gateway that's compatible with the P2S configuration that you want to create and the connecting VPN clients. To help determine the P2S configuration that you need, see the [VPN client table](#type). If your gateway uses the Basic SKU, understand that the Basic SKU has P2S limitations and doesn't support IKEv2 or RADIUS authentication. For more information, see [About gateway SKUs](about-gateway-skus.md).
-You can use the following values to create a test environment, or refer to these values to better understand the examples in this article:
-
-**VNet**
-
-* **VNet Name:** VNet1
-* **Address space:** 10.1.0.0/16<br>For this example, we use only one address space. You can have more than one address space for your VNet.
-* **Subnet name:** FrontEnd
-* **Subnet address range:** 10.1.0.0/24
-* **Subscription:** If you have more than one subscription, verify that you're using the correct one.
-* **Resource Group:** TestRG1
-* **Location:** East US
-
-**Virtual network gateway**
-
-* **Virtual network gateway name:** VNet1GW
-* **Gateway type:** VPN
-* **VPN type:** Route-based (required for P2S)
-* **SKU:** VpnGw2
-* **Generation:** Generation2
-* **Gateway subnet address range:** 10.1.255.0/27
-* **Public IP address name:** VNet1GWpip
-* **Public IP address name 2:** VNet1GWpip2 - for active-active mode gateways.
-
-**Connection type and client address pool**
-
-* **Connection type:** Point-to-site
-* **Client address pool:** 172.16.201.0/24<br>VPN clients that connect to the VNet using this point-to-site connection receive an IP address from the client address pool.
-
-## <a name="createvnet"></a>Create a VNet
-
-In this section, you create a VNet. Refer to the [Example values](#example) section for the suggested values to use for this configuration.
---
-## Create a gateway subnet
-
-The virtual network gateway requires a specific subnet named **GatewaySubnet**. The gateway subnet is part of the IP address range for your virtual network and contains the IP addresses that the virtual network gateway resources and services use. Specify a gateway subnet that's /27 or larger.
--
-## <a name="creategw"></a>Create the VPN gateway
-
-In this step, you create the virtual network gateway for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
-
-> [!NOTE]
-> The Basic gateway SKU does not support IKEv2 or RADIUS authentication. If you plan on having Mac clients connect to your VNet, do not use the Basic SKU.
---
-You can see the deployment status on the **Overview** page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the VNet in the portal. The gateway appears as a connected device.
-
+If you don't yet have a functioning VPN gateway that's compatible with the P2S configuration that you want to create, see [Create and manage a VPN gateway](tutorial-create-gateway-portal.md). Create a compatible VPN gateway, then return to this article to configure P2S settings.
## <a name="generatecert"></a>Generate certificates
-Certificates are used by Azure to authenticate clients connecting to a VNet over a point-to-site VPN connection. Once you obtain a root certificate, you [upload](#uploadfile) the public key information to Azure. The root certificate is then considered 'trusted' by Azure for connection over P2S to the VNet.
+Certificates are used by Azure to authenticate clients connecting to a virtual network over a point-to-site VPN connection. Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then considered 'trusted' by Azure for connection over P2S to the virtual network.
-You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
+You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the virtual network.
The root certificate must be generated and extracted before you configure the point-to-site gateway settings.
The root certificate must be generated and extracted before you configure the po
[!INCLUDE [generate-client-cert](../../includes/vpn-gateway-p2s-clientcert-include.md)]
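The includes above cover the supported PowerShell workflow. As a rough alternative sketch for Linux environments, self-signed root and client certificates can also be produced with OpenSSL; the subject names, file names, and validity periods below are placeholders, and depending on the tunnel type and VPN client you might need additional certificate extensions (such as a client-authentication EKU).

```bash
# Placeholder subject names; create a self-signed root certificate.
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 3650 \
    -subj "/CN=P2SRootCert" -out rootCA.crt

# Create a client certificate signed by the root, and bundle it for import on the client.
openssl genrsa -out client.key 2048
openssl req -new -key client.key -subj "/CN=P2SChildCert" -out client.csr
openssl x509 -req -in client.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial \
    -days 365 -sha256 -out client.crt
openssl pkcs12 -export -in client.crt -inkey client.key -certfile rootCA.crt -out client.p12

# Export the root certificate's Base64-encoded public data, which is what gets uploaded to Azure.
openssl x509 -in rootCA.crt -outform der | base64 -w0 > rootCA.base64.cer
```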
-## <a name="addresspool"></a>Add the address pool
-
-The **Point-to-site configuration** page contains the configuration information that's needed for the P2S VPN. Once all the P2S settings have been configured and the gateway has been updated, the Point-to-site configuration page is used to view or change P2S VPN settings.
+## <a name="addresspool"></a>Add the VPN client address pool
-1. Go to the gateway you created in the previous section.
-1. In the left pane, select **Point-to-site configuration**.
-1. Click **Configure now** to open the configuration page.
-The client address pool is a range of private IP addresses that you specify. The clients that connect over a point-to-site VPN dynamically receive an IP address from this range. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or the VNet that you want to connect to. If you configure multiple protocols and SSTP is one of the protocols, then the configured address pool is split between the configured protocols equally.
+## <a name="type"></a>Specify the tunnel and authentication type
- :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configuration-address-pool.png" alt-text="Screenshot of Point-to-site configuration page - address pool." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/configuration-address-pool.png":::
+In this section, you specify the tunnel type and the authentication type. These settings can become complex. You can select options that contain multiple tunnel types from the dropdown, such as *IKEv2 and OpenVPN(SSL)* or *IKEv2 and SSTP (SSL)*. Only certain combinations of tunnel types and authentication types are available.
-1. On the **Point-to-site configuration** page, in the **Address pool** box, add the private IP address range that you want to use. VPN clients dynamically receive an IP address from the range that you specify. The minimum subnet mask is 29 bit for active/passive and 28 bit for active/active configuration.
+The tunnel type and the authentication type must correspond to the VPN client software you want to use to connect to Azure. When you have various VPN clients connecting from different operating systems, planning the tunnel type and authentication type is important. The following table shows available tunnel types and authentication types as they relate to VPN client software.
-If your VPN gateway is configured with an availability zone SKU (AZ) and is in active-active mode, point-to-site VPN configurations require three public IP addresses. You can use the example value **VNet1GWpip3**.
+**VPN client table**
-## <a name="type"></a>Specify tunnel and authentication type
> [!NOTE] > If you don't see tunnel type or authentication type on the **Point-to-site configuration** page, your gateway is using the Basic SKU. The Basic SKU doesn't support IKEv2 or RADIUS authentication. If you want to use these settings, you need to delete and re-create the gateway using a different gateway SKU.
->
-
-In this section, you specify the tunnel type and the authentication type. These settings can become complex, depending on the tunnel type you require and the VPN client software that will be used to make the connection from the user's operating system. The steps in this article walk you through basic configuration settings and choices.
-
-You can select options that contain multiple tunnel types from the dropdown - such as *IKEv2 and OpenVPN(SSL)* or *IKEv2 and SSTP (SSL)*, however, only certain combinations of tunnel types and authentication types are supported. For example, Microsoft Entra authentication can only be used when you select *OpenVPN (SSL)* from the tunnel type dropdown, and not *IKEv2 and OpenVPN(SSL)*.
-
-Additionally, the tunnel type and the authentication type correspond to the VPN client software that can be used to connect to Azure. For example, one VPN client software application might be only able to connect via IKEv2, while another can only connect via OpenVPN. And some client software, while it supports a certain tunnel type, might not support the authentication type you choose.
-
-As you can tell, planning the tunnel type and authentication type is important when you have various VPN clients connecting from different operating systems. Consider the following criteria when you choose your tunnel type in combination with **Azure certificate** authentication. Other authentication types have different considerations.
-
-* **Windows**:
-
- * Windows computers connecting via the native VPN client already installed in the operating system try IKEv2 first and, if that doesn't connect, they fall back to SSTP (if you selected both IKEv2 and SSTP from the tunnel type dropdown).
- * If you select the OpenVPN tunnel type, you can connect using an OpenVPN Client or the Azure VPN Client.
- * The Azure VPN Client can support [optional configuration settings](azure-vpn-client-optional-configurations.md) such as custom routes and forced tunneling.
-
-* **macOS and iOS**:
-
- * The native VPN client for iOS and macOS can only use the IKEv2 tunnel type to connect to Azure.
- * The Azure VPN Client isn't supported for certificate authentication at this time, even if you select the OpenVPN tunnel type.
- * If you want to use the OpenVPN tunnel type with certificate authentication, you can use an OpenVPN client.
- * For macOS, you can use the Azure VPN Client with the OpenVPN tunnel type and Microsoft Entra ID authentication (not certificate authentication).
-
-* **Linux**:
-
- * The Azure VPN Client for Linux supports the OpenVPN tunnel type.
- * The strongSwan client on Android and Linux can use only the IKEv2 tunnel type to connect.
-
-### Tunnel and authentication type
- 1. For **Tunnel type**, select the tunnel type that you want to use. For this exercise, from the dropdown, select **IKEv2 and OpenVPN(SSL)**.
-1. For **Authentication type**, select the authentication type that you want to use. For this exercise, from the dropdown, select **Azure certificate**. If you're interested in other authentication types, see the articles for [Microsoft Entra ID](openvpn-azure-ad-tenant.md) and [RADIUS](point-to-site-how-to-radius-ps.md).
+1. For **Authentication type**, from the dropdown, select **Azure certificate**.
+
+ :::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/authentication.png" alt-text="Screenshot of Point-to-site configuration page - authentication type." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/authentication.png":::
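
If you script the gateway configuration with PowerShell instead of the portal, the following is a minimal sketch of the equivalent tunnel and authentication settings. The gateway name **VNet1GW** and resource group **TestRG1** are assumed values; substitute your own.

```azurepowershell
# Get the existing virtual network gateway (assumed names: TestRG1, VNet1GW).
$gw = Get-AzVirtualNetworkGateway -ResourceGroupName "TestRG1" -Name "VNet1GW"

# Apply the point-to-site tunnel types and authentication type.
# "IkeV2","OpenVPN" mirrors the IKEv2 and OpenVPN(SSL) selection in the portal dropdown.
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw `
    -VpnClientProtocol "IkeV2","OpenVPN" `
    -VpnAuthenticationType "Certificate"
```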
-## <a name="publicip3"></a>Additional IP address
+## <a name="publicip3"></a>Add another public IP address
-If you have an active-active mode gateway that uses an availability zone SKU (AZ SKU), you need a third public IP address. If this setting doesn't apply to your gateway, you don't need to add an additional IP address.
+If you have an active-active mode gateway, you need to specify a third public IP address to configure point-to-site. In this example, the third public IP address is created using the example value **VNet1GWpip3**. If your gateway isn't in active-active mode, you don't need to add another public IP address.
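
If you prefer PowerShell, the following is a minimal sketch that creates the third public IP address. The resource group **TestRG1** and the **EastUS** location are assumed values; use the values that match your gateway.

```azurepowershell
# Create the third public IP address for an active-active gateway (assumed names and region).
# Standard SKU public IP addresses use static allocation.
New-AzPublicIpAddress -ResourceGroupName "TestRG1" -Name "VNet1GWpip3" `
    -Location "EastUS" -AllocationMethod Static -Sku Standard
```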
:::image type="content" source="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/public-ip.png" alt-text="Screenshot of Point-to-site configuration page - public IP address." lightbox="./media/vpn-gateway-howto-point-to-site-resource-manager-portal/public-ip.png":::

## <a name="uploadfile"></a>Upload root certificate public key information
-In this section, you upload public root certificate data to Azure. Once the public certificate data is uploaded, Azure can use it to authenticate clients that have installed a client certificate generated from the trusted root certificate.
+In this section, you upload public root certificate data to Azure. Once the public certificate data is uploaded, Azure uses it to authenticate connecting clients. Connecting clients must have a client certificate installed that was generated from the trusted root certificate.
1. Make sure that you exported the root certificate as a **Base-64 encoded X.509 (.CER)** file in the previous steps. You need to export the certificate in this format so you can open the certificate with a text editor. You don't need to export the private key.
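
As an alternative to pasting the certificate data in the portal, the following PowerShell sketch uploads the root certificate public data. The file path, gateway name **VNet1GW**, resource group **TestRG1**, and certificate name **P2SRootCert** are assumed values.

```azurepowershell
# Load the exported .cer file and convert its raw data to a Base-64 string (assumed file path).
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\P2SRootCert.cer")
$certBase64 = [System.Convert]::ToBase64String($cert.RawData)

# Upload the root certificate public data to the gateway (assumed names).
Add-AzVpnClientRootCertificate -ResourceGroupName "TestRG1" `
    -VirtualNetworkGatewayName "VNet1GW" `
    -VpnClientRootCertificateName "P2SRootCert" `
    -PublicCertData $certBase64
```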
In this section, you upload public root certificate data to Azure. Once the publ
## <a name="profile-files"></a>Generate VPN client profile configuration files
-All the necessary configuration settings for the VPN clients are contained in a VPN client profile configuration zip file. VPN client profile configuration files are specific to the P2S VPN gateway configuration for the VNet. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client profile configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md).
+All the necessary configuration settings for the VPN clients are contained in a VPN client profile configuration zip file. VPN client profile configuration files are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client profile configuration files and apply the new configuration to all of the VPN clients that you want to connect. For more information about P2S connections, see [About point-to-site VPN](point-to-site-about.md).
You can generate client profile configuration files using PowerShell, or by using the Azure portal. The following examples show both methods. Either method returns the same zip file.
You can generate client profile configuration files using PowerShell, or by usin
[!INCLUDE [Generate profile configuration files - Azure portal](../../includes/vpn-gateway-generate-profile-portal.md)]
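
As a sketch of the PowerShell method, the following commands generate the profile package and return a download URL for the zip file. The gateway name **VNet1GW** and resource group **TestRG1** are assumed values.

```azurepowershell
# Generate the VPN client profile configuration package (assumed names).
$clientProfile = New-AzVpnClientConfiguration -ResourceGroupName "TestRG1" `
    -Name "VNet1GW" -AuthenticationMethod "EapTls"

# The command returns a short-lived URL; download the profile zip file from it.
$clientProfile.VPNProfileSASUrl
```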
-### PowerShell
--
## <a name="clientconfig"></a>Configure VPN clients and connect to Azure
-For steps to configure your VPN clients and connect to Azure, see the following articles:
--
-## <a name="verify"></a>Verify your connection
-
-These instructions apply to Windows clients.
-
-1. To verify that your VPN connection is active, open an elevated command prompt, and run *ipconfig/all*.
-1. View the results. Notice that the IP address you received is one of the addresses within the point-to-site VPN Client Address Pool that you specified in your configuration. The results are similar to this example:
-
- ```
- PPP adapter VNet1:
- Connection-specific DNS Suffix .:
- Description.....................: VNet1
- Physical Address................:
- DHCP Enabled....................: No
- Autoconfiguration Enabled.......: Yes
- IPv4 Address....................: 172.16.201.3(Preferred)
- Subnet Mask.....................: 255.255.255.255
- Default Gateway.................:
- NetBIOS over Tcpip..............: Enabled
- ```
-
-## <a name="connectVM"></a>Connect to a virtual machine
-
-These instructions apply to Windows clients.
--
-* Verify that the VPN client configuration package was generated after the DNS server IP addresses were specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client configuration package.
-
-* Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you're connecting. If the IP address is within the address range of the VNet that you're connecting to, or within the address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your address space overlaps in this way, the network traffic doesn't reach Azure, it stays on the local network.
+For steps to configure your VPN clients and connect to Azure, see the **VPN client table** in the [Specify the tunnel and authentication type](#type) section. The table contains links to articles that provide detailed steps to configure the VPN client software.
## <a name="add"></a>Add or remove trusted root certificates
-You can add and remove trusted root certificates from Azure. When you remove a root certificate, clients that have a certificate generated from that root won't be able to authenticate, and thus won't be able to connect. If you want a client to authenticate and connect, you need to install a new client certificate generated from a root certificate that is trusted (uploaded) to Azure.
+You can add and remove trusted root certificates from Azure. When you remove a root certificate, clients that have a certificate generated from that root won't be able to authenticate, and as a result, can't connect. If you want a client to authenticate and connect, you need to install a new client certificate generated from a root certificate that is trusted (uploaded) to Azure.
You can add up to 20 trusted root certificate .cer files to Azure. For instructions, see the section [Upload a trusted root certificate](#uploadfile).
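
If you manage certificates with PowerShell, the following sketch removes a trusted root certificate from the gateway. The names and file path are assumed values; removal requires the same public certificate data that you uploaded.

```azurepowershell
# Removing a root certificate requires the same Base-64 public certificate data that was uploaded.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\P2SRootCert.cer")
$certBase64 = [System.Convert]::ToBase64String($cert.RawData)

# Remove the trusted root certificate from the gateway (assumed names).
Remove-AzVpnClientRootCertificate -ResourceGroupName "TestRG1" `
    -VirtualNetworkGatewayName "VNet1GW" `
    -VpnClientRootCertificateName "P2SRootCert" `
    -PublicCertData $certBase64
```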
For frequently asked questions, see the [FAQ](vpn-gateway-vpn-faq.md#P2S).
## Next steps
-Once your connection is complete, you can add virtual machines to your VNets. For more information, see [Virtual Machines](../index.yml). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-network/network-overview.md).
+Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml). To understand more about networking and virtual machines, see [Azure and Linux VM network overview](../virtual-network/network-overview.md).
For P2S troubleshooting information, see [Troubleshooting Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).