Updates from: 01/30/2024 02:09:57
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
Query fields are an add-on capability to extend the schema extracted from any pr
> [!NOTE]
-> Document Intelligence Studio query field extraction is currently available with the Layout and Prebuilt models starting with the `2023-10-31-preview` API and later releases.
+> Document Intelligence Studio query field extraction is currently available with the Layout and Prebuilt models starting with the `2023-10-31-preview` API and later releases, except for the `us.tax.*` models (W2, 1098s, and 1099s models).
### Query field extraction
ai-studio Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md
When opening the support request to increase the total compute limit, provide th
Azure AI Studio provides a pool of shared quota that is available for different users across various regions to use concurrently. Depending upon availability, users can temporarily access quota from the shared pool, and use the quota to perform testing for a limited amount of time. The specific time duration depends on the use case. By temporarily using quota from the quota pool, you no longer need to file a support ticket for a short-term quota increase or wait for your quota request to be approved before you can proceed with your workload.
-Use of the shared quota pool is available for testing inferencing for Llama models from the Model Catalog. You should use the shared quota only for creating temporary test endpoints, not production endpoints. For endpoints in production, you should [request dedicated quota](#view-and-request-quotas-in-the-studio). Billing for shared quota is usage-based, just like billing for dedicated virtual machine families.
+Use of the shared quota pool is available for testing inferencing for Llama-2, Phi, Nemotron, Mistral, Dolly and Deci-DeciLM models from the Model Catalog. You should use the shared quota only for creating temporary test endpoints, not production endpoints. For endpoints in production, you should [request dedicated quota](#view-and-request-quotas-in-the-studio). Billing for shared quota is usage-based, just like billing for dedicated virtual machine families.
## Container Instances
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
Last updated 12/04/2023
-# Set up advanced Ingress configurations with the application routing add-on
+# Set up a custom domain name and SSL certificate with the application routing add-on
An Ingress is an API object that defines rules, which allow external access to services in an Azure Kubernetes Service (AKS) cluster. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster.
This article shows you how to set up an advanced Ingress configuration to encryp
The application routing add-on with nginx delivers the following:
* Easy configuration of managed nginx Ingress controllers.
-* Integration with an external DNS such as [Azure DNS][azure-dns-overview] for public and private zone management
+* Integration with an external DNS such as [Azure DNS][azure-dns-overview] for global and private zone management
* SSL termination with certificates stored in a key vault, such as [Azure Key Vault][azure-key-vault-overview].
## Prerequisites
- An AKS cluster with the [application routing add-on][app-routing-add-on-basic-configuration].
- Azure Key Vault if you want to configure SSL termination and store certificates in the vault hosted in Azure.
-- Azure DNS if you want to configure public and private zone management and host them in Azure.
+- Azure DNS if you want to configure global and private zone management and host them in Azure.
- To attach an Azure Key Vault or Azure DNS Zone, you need the [Owner][rbac-owner], [Azure account administrator][rbac-classic], or [Azure co-administrator][rbac-classic] role on your Azure subscription.
## Connect to your AKS cluster
az aks approuting update -g <ResourceGroupName> -n <ClusterName> --enable-kv --a
To enable support for DNS zones, review the following prerequisite:
-* The app routing add-on can be configured to automatically create records on one or more Azure public and private DNS zones for hosts defined on Ingress resources. All public Azure DNS zones need to be in the same resource group, and all private Azure DNS zones need to be in the same resource group. If you don't have an Azure DNS zone, you can [create one][create-an-azure-dns-zone].
+* The app routing add-on can be configured to automatically create records on one or more Azure global and private DNS zones for hosts defined on Ingress resources. All global Azure DNS zones need to be in the same resource group, and all private Azure DNS zones need to be in the same resource group. If you don't have an Azure DNS zone, you can [create one][create-an-azure-dns-zone].
+ ### Create a global Azure DNS zone
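+
+A sketch of creating a zone with the Azure CLI (the resource group and zone name are placeholders):
+
+```azurecli-interactive
+az network dns zone create --resource-group <ResourceGroupName> --name <ZoneName>
+```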
aks App Routing Nginx Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-nginx-configuration.md
+
+ Title: Advanced ingress and NGINX ingress controller configuration
+description: Understand the advanced configuration options that are supported with the application routing add-on with the NGINX ingress controller for Azure Kubernetes Service.
+Last updated: 11/21/2023
+# Advanced NGINX ingress controller and ingress configurations with the application routing add-on
+
+The application routing add-on supports two ways to configure ingress controllers and ingress objects:
+- [Configuration of the NGINX ingress controller](#configuration-of-the-nginx-ingress-controller) such as creating multiple controllers, configuring private load balancers, and setting static IP addresses.
+- [Configuration per ingress resource](#configuration-per-ingress-resource-through-annotations) through annotations.
+
+## Prerequisites
+
+An AKS cluster with the [application routing add-on][app-routing-add-on-basic-configuration].
+
+## Connect to your AKS cluster
+
+To connect to the Kubernetes cluster from your local computer, you use `kubectl`, the Kubernetes command-line client. You can install it locally using the [az aks install-cli][az-aks-install-cli] command. If you use the Azure Cloud Shell, `kubectl` is already installed.
+
+Configure kubectl to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+```azurecli-interactive
+az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
+```
+
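+As a quick check that the connection works, you can list the cluster's nodes (a minimal sketch; any read-only `kubectl` command serves the same purpose):
+
+```bash
+kubectl get nodes
+```
+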
+## Configuration of the NGINX ingress controller
+
+The application routing add-on uses a Kubernetes [custom resource definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) called [`NginxIngressController`](https://github.com/Azure/aks-app-routing-operator/blob/main/config/crd/bases/approuting.kubernetes.azure.com_nginxingresscontrollers.yaml) to configure NGINX ingress controllers. You can create more ingress controllers or modify existing configuration.
+
+`NginxIngressController` CRD has a `loadBalancerAnnotations` field to control the behavior of the NGINX ingress controller's service by setting [load balancer annotations](load-balancer-standard.md#customizations-via-kubernetes-annotations).
+
+### The default NGINX ingress controller
+
+When you enable the application routing add-on with NGINX, it creates an ingress controller called `default` in the `app-routing-system` namespace, configured with a public facing Azure load balancer. That ingress controller uses an ingress class name of `webapprouting.kubernetes.azure.com`.
+
+You can modify the configuration of the default ingress controller by editing its configuration.
+
+```bash
+kubectl edit nginxingresscontroller default -n app-routing-system
+```
+
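+For example, to review the current configuration without opening an editor (a read-only alternative to `kubectl edit`):
+
+```bash
+kubectl get nginxingresscontroller default -n app-routing-system -o yaml
+```
+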
+### Create another public facing NGINX ingress controller
+
+To create another NGINX ingress controller with a public facing Azure Load Balancer:
+
+1. Copy the following YAML manifest into a new file named **nginx-public-controller.yaml** and save the file to your local computer.
+
+ ```yml
+ apiVersion: approuting.kubernetes.azure.com/v1alpha1
+ kind: NginxIngressController
+ metadata:
+ name: nginx-public
+ spec:
+ ingressClassName: nginx-public
+ controllerNamePrefix: nginx-public
+ ```
+
+1. Create the NGINX ingress controller resources using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f nginx-public-controller.yaml
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
+ nginxingresscontroller.approuting.kubernetes.azure.com/nginx-public created
+ ```
+
+### Create an internal NGINX ingress controller with a private IP address
+
+To create an NGINX ingress controller with an internal facing Azure Load Balancer with a private IP address:
+
+1. Copy the following YAML manifest into a new file named **nginx-internal-controller.yaml** and save the file to your local computer.
+
+ ```yml
+ apiVersion: approuting.kubernetes.azure.com/v1alpha1
+ kind: NginxIngressController
+ metadata:
+ name: nginx-internal
+ spec:
+ ingressClassName: nginx-internal
+ controllerNamePrefix: nginx-internal
+ loadBalancerAnnotations:
+ service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+ ```
+
+1. Create the NGINX ingress controller resources using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f nginx-internal-controller.yaml
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
+ nginxingresscontroller.approuting.kubernetes.azure.com/nginx-internal created
+ ```
+
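+To find the private IP address assigned to the new controller, you can list the services that the add-on manages (a sketch; the service name is derived from the `controllerNamePrefix` you set):
+
+```bash
+kubectl get services -n app-routing-system
+```
+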
+### Create an NGINX ingress controller with a static IP address
+
+To create an NGINX ingress controller with a static IP address on the Azure Load Balancer:
+
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myNetworkResourceGroup --location eastus
+ ```
+
+1. Create a static public IP address using the [`az network public-ip create`][az-network-public-ip-create] command.
+
+ ```azurecli-interactive
+ az network public-ip create \
+ --resource-group myNetworkResourceGroup \
+ --name myIngressPublicIP \
+ --sku Standard \
+ --allocation-method static
+ ```
+
+ > [!NOTE]
+ > If you're using a *Basic* SKU load balancer in your AKS cluster, use *Basic* for the `--sku` parameter when defining a public IP. Only *Basic* SKU IPs work with the *Basic* SKU load balancer and only *Standard* SKU IPs work with *Standard* SKU load balancers.
+
+1. Ensure the cluster identity used by the AKS cluster has delegated permissions to the public IP's resource group using the [`az role assignment create`][az-role-assignment-create] command.
+
+ > [!NOTE]
+ > Update *`<ClusterName>`* and *`<ClusterResourceGroup>`* with your AKS cluster's name and resource group name.
+
+ ```azurecli-interactive
+ CLIENT_ID=$(az aks show --name <ClusterName> --resource-group <ClusterResourceGroup> --query identity.principalId -o tsv)
+ RG_SCOPE=$(az group show --name myNetworkResourceGroup --query id -o tsv)
+ az role assignment create \
+ --assignee ${CLIENT_ID} \
+ --role "Network Contributor" \
+ --scope ${RG_SCOPE}
+ ```
+
+1. Copy the following YAML manifest into a new file named **nginx-staticip-controller.yaml** and save the file to your local computer.
+
+ > [!NOTE]
+ > You can either use `service.beta.kubernetes.io/azure-pip-name` for public IP name, or use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address, as shown in the example YAML. Adding the `service.beta.kubernetes.io/azure-pip-name` annotation ensures the most efficient LoadBalancer creation and is highly recommended to avoid potential throttling.
+
+ ```yml
+ apiVersion: approuting.kubernetes.azure.com/v1alpha1
+ kind: NginxIngressController
+ metadata:
+ name: nginx-static
+ spec:
+ ingressClassName: nginx-static
+ controllerNamePrefix: nginx-static
+ loadBalancerAnnotations:
+ service.beta.kubernetes.io/azure-pip-name: "myIngressPublicIP"
+ service.beta.kubernetes.io/azure-load-balancer-resource-group: "myNetworkResourceGroup"
+ ```
+
+1. Create the NGINX ingress controller resources using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f nginx-staticip-controller.yaml
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
+ nginxingresscontroller.approuting.kubernetes.azure.com/nginx-static created
+ ```
+
+### Verify the ingress controller was created
+
+You can verify the status of the NGINX ingress controller using the [`kubectl get nginxingresscontroller`][kubectl-get] command.
+
+> [!NOTE]
+> Update *`<IngressControllerName>`* with the name you used when creating the `NginxIngressController`.
+
+```bash
+kubectl get nginxingresscontroller <IngressControllerName>
+```
+
+The following example output shows the created resource. It may take a few minutes for the controller to be available:
+
+```output
+NAME INGRESSCLASS CONTROLLERNAMEPREFIX AVAILABLE
+nginx-public nginx-public nginx True
+```
+
+You can also view the conditions to troubleshoot any issues:
+
+```bash
+kubectl get nginxingresscontroller <IngressControllerName> -o jsonpath='{range .status.conditions[*]}{.lastTransitionTime}{"\t"}{.status}{"\t"}{.type}{"\t"}{.message}{"\n"}{end}'
+```
+
+The following example output shows the conditions of a healthy ingress controller:
+
+```output
+2023-11-29T19:59:24Z True IngressClassReady Ingress Class is up-to-date
+2023-11-29T19:59:50Z True Available Controller Deployment has minimum availability and IngressClass is up-to-date
+2023-11-29T19:59:50Z True ControllerAvailable Controller Deployment is available
+2023-11-29T19:59:25Z True Progressing Controller Deployment has successfully progressed
+```
+
+### Use the ingress controller in an ingress
+
+1. Copy the following YAML manifest into a new file named **ingress.yaml** and save the file to your local computer.
+
+ > [!NOTE]
+ > Update *`<Hostname>`* with your DNS host name.
+ > The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+ ```yml
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ ```
+
+1. Create the cluster resources using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f ingress.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
+ ingress.networking.k8s.io/aks-helloworld created
+ ```
+
+### Verify the managed Ingress was created
+
+You can verify the managed Ingress was created using the [`kubectl get ingress`][kubectl-get] command.
+
+```bash
+kubectl get ingress -n hello-web-app-routing
+```
+
+The following example output shows the created managed Ingress. The ingress class, host and IP address may be different:
+
+```output
+NAME CLASS HOSTS ADDRESS PORTS AGE
+aks-helloworld webapprouting.kubernetes.azure.com myapp.contoso.com 20.51.92.19 80, 443 4m
+```
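+Optionally, you can send a test request through the ingress once the address is populated. This sketch assumes the example host and IP address shown above; substitute your own values:
+
+```bash
+# Resolve the example host to the ingress IP for this request only, then fetch the response headers
+curl -I http://myapp.contoso.com --resolve myapp.contoso.com:80:20.51.92.19
+```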
+
+### Clean up of ingress controllers
+
+You can remove the NGINX ingress controller using the [`kubectl delete nginxingresscontroller`][kubectl-delete] command.
+
+> [!NOTE]
+> Update *`<IngressControllerName>`* with the name you used when creating the `NginxIngressController`.
+
+```bash
+kubectl delete nginxingresscontroller <IngressControllerName>
+```
+
+## Configuration per ingress resource through annotations
+
+The NGINX ingress controller supports adding [annotations to specific Ingress objects](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) to customize their behavior.
+
+You can [annotate](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) the ingress object by adding the respective annotation in the `metadata.annotations` field.
+
+> [!NOTE]
+> Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, for example `"true"`, `"false"`, `"100"`.
+
+Here are some example annotations for common configurations. Review the [NGINX ingress annotations documentation](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) for a full list.
+
+### Custom max body size
+
+For NGINX, a 413 error is returned to the client when the size of a request exceeds the maximum allowed size of the client request body. To override the default value, use the annotation:
+
+```yml
+nginx.ingress.kubernetes.io/proxy-body-size: 4m
+```
+
+Here's an example ingress configuration using this annotation:
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name.
+> The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+```yml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ nginx.ingress.kubernetes.io/proxy-body-size: 4m
+spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+```
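+To confirm the limit, you can post a payload larger than 4m and expect an HTTP 413 response. This is a sketch; the payload file is a hypothetical placeholder, and *`<Hostname>`* is your DNS host name:
+
+```bash
+# Create a 5 MB dummy payload, then POST it; NGINX should return 413
+dd if=/dev/zero of=payload.bin bs=1M count=5
+curl -s -o /dev/null -w "%{http_code}\n" -X POST --data-binary @payload.bin http://<Hostname>/
+```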
+
+### Custom connection timeout
+
+You can change how long the NGINX ingress controller waits before closing a connection with your workload. All timeout values are unitless and in seconds. To override the default timeout, use the following annotation to set a valid 120-second proxy read timeout:
+
+```yml
+nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
+```
+
+Review [custom timeouts](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-timeouts) for other configuration options.
+
+Here's an example ingress configuration using this annotation:
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name.
+> The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+```yml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
+spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+```
+
+### Backend protocol
+
+By default, the NGINX ingress controller uses `HTTP` to reach the services. To configure alternative backend protocols, such as `HTTPS` or `GRPC`, use the annotation:
+
+```yml
+nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
+```
+or
+```yml
+nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
+```
+
+Review [backend protocols](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol) for other configuration options.
+
+Here's an example ingress configuration using this annotation:
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name.
+> The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+```yml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
+spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+```
+
+### Cross-Origin Resource Sharing (CORS)
+
+To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, use the annotation:
+
+```yml
+nginx.ingress.kubernetes.io/enable-cors: "true"
+```
+
+Review [enable CORS](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors) for other configuration options.
+
+Here's an example ingress configuration using this annotation:
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name.
+> The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+```yml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ nginx.ingress.kubernetes.io/enable-cors: "true"
+spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+```
+
+### Disable SSL redirect
+
+By default, the controller redirects (308) to HTTPS if TLS is enabled for an ingress. To disable this feature for specific ingress resources, use the annotation:
+
+```yml
+nginx.ingress.kubernetes.io/ssl-redirect: "false"
+```
+
+Review [server-side HTTPS enforcement through redirect](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect) for other configuration options.
+
+Here's an example ingress configuration using this annotation:
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name.
+> The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+```yml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ nginx.ingress.kubernetes.io/ssl-redirect: "false"
+spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+```
+
+### URL rewriting
+
+In some scenarios, the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite, any request returns a 404 error. This is particularly useful with [path based routing](https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/), where you can serve two different web applications under the same domain. You can set the path expected by the service using the annotation:
+
+```yml
+nginx.ingress.kubernetes.io/rewrite-target: /$2
+```
+
+Here's an example ingress configuration using this annotation:
+
+> [!NOTE]
+> Update *`<Hostname>`* with your DNS host name.
+> The *`<IngressClassName>`* is the one you defined when creating the `NginxIngressController`.
+
+```yml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
+ nginx.ingress.kubernetes.io/use-regex: "true"
+spec:
+ ingressClassName: <IngressClassName>
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - path: /app-one(/|$)(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: app-one
+ port:
+ number: 80
+ - path: /app-two(/|$)(.*)
+ pathType: Prefix
+ backend:
+ service:
+ name: app-two
+ port:
+ number: 80
+```
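+In this configuration, `$2` refers to the second capture group `(.*)` in the matched path, so only the remainder after the prefix reaches the backend. Here's a sketch of the resulting behavior, assuming *`<Hostname>`* resolves to the ingress:
+
+```bash
+# The rewrite strips the prefix before the request reaches the service
+curl http://<Hostname>/app-one/status   # app-one receives the request as /status
+curl http://<Hostname>/app-two/         # app-two receives the request as /
+```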
+
+## Next steps
+
+Learn about monitoring the ingress-nginx controller metrics included with the application routing add-on [with Prometheus in Grafana][prometheus-in-grafana] as part of analyzing the performance and usage of your application.
+
+<!-- LINKS - external -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+
+<!-- LINKS - internal -->
+[az-network-public-ip-create]: /cli/azure/network/public-ip#az_network_public_ip_create
+[az-network-public-ip-list]: /cli/azure/network/public-ip#az_network_public_ip_list
+[az-group-create]: /cli/azure/group#az-group-create
+[summary-msi]: use-managed-identity.md#summary-of-managed-identities
+[rbac-owner]: ../role-based-access-control/built-in-roles.md#owner
+[rbac-classic]: ../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles
+[app-routing-add-on-basic-configuration]: app-routing.md
+[csi-secrets-store-autorotation]: csi-secrets-store-configuration-options.md#enable-and-disable-auto-rotation
+[azure-key-vault-overview]: ../key-vault/general/overview.md
+[az-aks-approuting-update]: /cli/azure/aks/approuting#az-aks-approuting-update
+[az-aks-approuting-zone]: /cli/azure/aks/approuting/zone
+[az-network-dns-zone-show]: /cli/azure/network/dns/zone#az-network-dns-zone-show
+[az-network-dns-zone-create]: /cli/azure/network/dns/zone#az-network-dns-zone-create
+[az-keyvault-certificate-import]: /cli/azure/keyvault/certificate#az-keyvault-certificate-import
+[az-keyvault-create]: /cli/azure/keyvault#az-keyvault-create
+[authorization-systems]: ../key-vault/general/rbac-access-policy.md
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[create-and-export-a-self-signed-ssl-certificate]: #create-and-export-a-self-signed-ssl-certificate
+[create-an-azure-dns-zone]: #create-a-global-azure-dns-zone
+[azure-dns-overview]: ../dns/dns-overview.md
+[az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az-keyvault-certificate-show
+[prometheus-in-grafana]: app-routing-nginx-prometheus.md
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
The application routing add-on with nginx delivers the following:
* Integration with [Azure DNS][azure-dns-overview] for public and private zone management
* SSL termination with certificates stored in Azure Key Vault.
-For additional configuration information related to SSL encryption and DNS integration, review the [application routing add-on configuration][custom-ingress-configurations].
+For other configuration information related to SSL encryption and DNS integration, review [DNS and SSL configuration][dns-ssl-configuration] and [application routing add-on configuration][custom-ingress-configurations].
With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the Cloud Native Computing Foundation (CNCF), using the application routing add-on is the default method for all AKS clusters.
With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the
## Limitations
- The application routing add-on supports up to five Azure DNS zones.
-- All public Azure DNS zones integrated with the add-on have to be in the same resource group.
+- All global Azure DNS zones integrated with the add-on have to be in the same resource group.
- All private Azure DNS zones integrated with the add-on have to be in the same resource group.
- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap, isn't supported.
-- Snippet annotations on the Ingress resources through `nginx.ingress.kubernetes.io/configuration-snippet` aren't supported.
## Enable application routing using Azure CLI
When the application routing add-on is disabled, some Kubernetes resources might
## Next steps
-* [Configure custom ingress configurations][custom-ingress-configurations] shows how to create an advanced Ingress configuration to encrypt the traffic and use Azure DNS to manage DNS zones.
+* [Configure custom ingress configurations][custom-ingress-configurations] shows how to create an advanced Ingress configuration and [configure a custom domain using Azure DNS to manage DNS zones and set up a secure ingress][dns-ssl-configuration].
* Learn about monitoring the ingress-nginx controller metrics included with the application routing add-on [with Prometheus in Grafana][prometheus-in-grafana] (preview) as part of analyzing the performance and usage of your application.
When the application routing add-on is disabled, some Kubernetes resources might
[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
[install-azure-cli]: /cli/azure/install-azure-cli
-[custom-ingress-configurations]: app-routing-dns-ssl.md
+[dns-ssl-configuration]: app-routing-dns-ssl.md
+[custom-ingress-configurations]: app-routing-nginx-configuration.md
[az-aks-create]: /cli/azure/aks#az-aks-create
[prometheus-in-grafana]: app-routing-nginx-prometheus.md
aks Azure Nfs Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-nfs-volume.md
description: Learn how to manually create an Ubuntu Linux NFS Server persistent volume for use with pods in Azure Kubernetes Service (AKS)
Previously updated: 06/13/2022
Last updated: 01/24/2024
# Manually create and use a Linux NFS (Network File System) Server with Azure Kubernetes Service (AKS)
-Sharing data between containers is often a necessary component of container-based services and applications. You usually have various pods that need access to the same information on an external persistent volume. While Azure Files is an option, creating an NFS Server on an Azure VM is another form of persistent shared storage.
+Sharing data between containers is often a necessary component of container-based services and applications. You usually have various pods that need access to the same information on an external persistent volume. While [Azure Files][azure-files-overview] is an option, creating an NFS Server on an Azure VM is another form of persistent shared storage.
This article will show you how to create an NFS Server on an Azure Ubuntu virtual machine, and set up your AKS cluster with access to this shared file system as a persistent volume.
## Before you begin
-This article assumes that you have the following components and configuration to support this configuration:
+This article assumes that you have the following to support this configuration:
-* An existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+* An existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
* Your AKS cluster needs to be on the same or peered Azure virtual network (VNet) as the NFS Server. The cluster must be created on an existing VNet, which can be the same VNet as your NFS Server VM. The steps for configuring with an existing VNet are described in the following articles: [creating AKS Cluster in existing VNET][aks-virtual-network] and [connecting virtual networks with VNET peering][peer-virtual-networks].
* An Azure Ubuntu [Linux virtual machine][azure-linux-vm] running version 18.04 or later. To deploy a Linux VM on Azure, see [Create and manage Linux VMs][linux-create].
-If you deploy your AKS cluster first, Azure automatically populates the virtual network settings when deploying your Azure Ubuntu VM, associating the Ubuntu VM on the same VNet. But if you want to work with peered networks instead, consult the documentation above.
+If you deploy your AKS cluster first, Azure automatically populates the virtual network settings when deploying your Azure Ubuntu VM, associating the Ubuntu VM on the same VNet. If you want to work with peered networks instead, consult the documentation above.
## Deploying the NFS Server onto a virtual machine
-1. To deploy an NFS Server on the Azure Ubuntu virtual machine, copy the following Bash script and save it to your local machine. Replace the value for the variable **AKS_SUBNET** with the correct one from your AKS cluster or else the default value specified opens your NFS Server to all ports and connections. In this article, the file is named `nfs-server-setup.sh`.
+1. To deploy an NFS Server on the Azure Ubuntu virtual machine, copy the following Bash script and save it to your local machine. Replace the value for the variable **AKS_SUBNET** with the correct one from your AKS cluster, otherwise the default value specified opens your NFS Server to all ports and connections. In this article, the file is named `nfs-server-setup.sh`.
```bash
#!/bin/bash
If you deploy your AKS cluster first, Azure automatically populates the virtual
## Connecting AKS cluster to NFS Server
-You can connect the NFS Server to your AKS cluster by provisioning a persistent volume and persistent volume claim that specifies how to access the volume. Connecting the two resources in the same or peered virtual networks is necessary. To learn how to set up the cluster in the same VNet, see: [Creating AKS Cluster in existing VNet][aks-virtual-network].
+You can connect to the NFS Server from your AKS cluster by provisioning a persistent volume and persistent volume claim that specifies how to access the volume. Connecting the two resources in the same or peered virtual networks is necessary. To learn how to set up the cluster in the same VNet, see: [Creating AKS Cluster in existing VNet][aks-virtual-network].
-Once both resources are on the same virtual or peered VNet, next provision a persistent volume and a persistent volume claim in your AKS Cluster. The containers can then mount the NFS drive to their local directory.
+Once both resources are on the same virtual or peered VNet, provision a persistent volume and a persistent volume claim in your AKS Cluster. The containers can then mount the NFS drive to their local directory.
-1. Create a *pv-azurefilesnfs.yaml* file with a *PersistentVolume*. For example:
+1. Create a YAML manifest named *pv-azurefilesnfs.yaml* with a *PersistentVolume*. For example:
```yaml
apiVersion: v1
Once both resources are on the same virtual or peered VNet, next provision a per
Replace the values for **NFS_INTERNAL_IP**, **NFS_NAME** and **NFS_EXPORT_FILE_PATH** with the actual settings from your NFS Server.
-2. Create a *pvc-azurefilesnfs.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+2. Create a YAML manifest named *pvc-azurefilesnfs.yaml* with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
>[!IMPORTANT]
>The **storageClassName** value needs to remain an empty string, or the claim won't work.
Once both resources are on the same virtual or peered VNet, next provision a per
If you can't connect to the server from your AKS cluster, the issue might be that the exported directory, or its parent, doesn't have sufficient permissions to access the NFS Server VM.
-Check that both your export directory and its parent directory have 777 permissions.
+Check that both your export directory and its parent directory are granted 777 permissions.
You can check permissions by running the following command; the directories should have *'drwxrwxrwx'* permissions:
ls -l
* To learn more on setting up your NFS Server or to help debug issues, see the following tutorial from the Ubuntu community: [NFS Tutorial][nfs-tutorial].
<!-- LINKS - external -->
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
[nfs-tutorial]: https://help.ubuntu.com/community/SettingUpNFSHowTo#Pre-Installation_Setup
<!-- LINKS - internal -->
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[aks-virtual-network]: ./configure-kubenet.md#create-an-aks-cluster-in-the-virtual-network
[peer-virtual-networks]: ../virtual-network/tutorial-connect-virtual-networks-portal.md
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
[operator-best-practices-storage]: operator-best-practices-storage.md
[azure-linux-vm]: ../virtual-machines/linux/endorsed-distros.md
-[create-nfs-share-linux-vm]: ../storage/files/storage-files-quick-create-use-linux.md
-[require-secure-transfer]: ../storage/common/storage-require-secure-transfer.md
[linux-create]: ../virtual-machines/linux/tutorial-manage-vm.md
+[azure-files-overview]: ../storage/files/storage-files-introduction.md
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
Title: Snapshot Azure Kubernetes Service (AKS) node pools
description: Learn how to snapshot AKS cluster node pools and create clusters and node pools from a snapshot.
Previously updated: 06/05/2023
Last updated: 01/29/2024
The snapshot is an Azure resource that contains the configuration information fr
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
### Limitations
az aks create --name myAKSCluster2 --resource-group myResourceGroup --snapshot-i
- Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[supported-versions]: supported-kubernetes-versions.md
[upgrade-cluster]: upgrade-cluster.md
[node-image-upgrade]: node-image-upgrade.md
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS).
Previously updated: 01/26/2024
Last updated: 01/29/2024
# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster
-Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance - AKS initiated and those that you initiate. Planned Maintenance feature allows you to run both types of maintenance in a cadence of your choice thereby minimizing any workload impact.
+Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance: AKS-initiated maintenance and maintenance that you initiate. The Planned Maintenance feature allows you to run both types in a cadence of your choice, thereby minimizing workload impact.
AKS initiated maintenance refers to the AKS releases. These releases are weekly rounds of fixes and feature and component updates that affect your clusters. The types of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [Node OS automatic security updates][node-image-auto-upgrade].
-There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`:
+This article describes the available maintenance options and how to configure a maintenance schedule for your AKS clusters.
+
+## Overview
+
+There are currently three available maintenance schedule configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`:
- `default` corresponds to a basic configuration that is used to control AKS releases. These releases can take up to two weeks to roll out to all regions from the initial time of shipping due to Azure Safe Deployment Practices (SDP). Choose `default` to schedule these updates in such a way that it's least disruptive for you. You can monitor the status of an ongoing AKS release by region from the [weekly releases tracker][release-tracker].
There are currently three available configuration types: `default`, `aksManagedA
- `aksManagedNodeOSUpgradeSchedule` controls when the node operating system security patches scheduled by your node OS auto-upgrade channel are applied. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on the node OS auto-upgrade channel, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade]
-We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios, while `default` is meant exclusively for the AKS weekly releases. You can port `default` configurations to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
+We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios. The `default` option is meant exclusively for AKS weekly releases. You can switch the `default` configuration to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations using the `az aks maintenanceconfiguration update` command.
## Before you begin
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
Be sure to upgrade Azure CLI to the latest version using [`az upgrade`](/cli/azure/update-azure-cli#manual-update).
## Creating a maintenance window
-To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name causes your maintenance window not to run.
+To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command with the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name prevents your maintenance window from running.
> [!NOTE]
> When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more.
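For example, here's a sketch of creating an `aksManagedAutoUpgradeSchedule` window with the Azure CLI; the schedule values are illustrative placeholders:

```azurecli-interactive
az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name aksManagedAutoUpgradeSchedule --schedule-type Weekly --day-of-week Friday --interval-weeks 1 --start-time 20:00 --duration 8
```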
az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl
- To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade].
<!-- LINKS - Internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[aks-support-policies]: support-policies.md
[aks-faq]: faq.md
[az-extension-add]: /cli/azure/extension#az_extension_add
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
When an Azure VM is in the `Stopped` (deallocated) state, you will not be charge
> In order to preserve any deallocated VMs, you must set Scale-down Mode to Deallocate. That includes VMs that have been deallocated using IaaS APIs (Virtual Machine Scale Set APIs). Setting Scale-down Mode to Delete removes any deallocated VMs.
> Once Deallocate mode is applied and a scale-down operation occurs, those nodes remain registered in the API server and appear in the `NotReady` state.
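For example, here's a sketch of setting Scale-down Mode to Deallocate on an existing node pool; the resource names are placeholders:

```azurecli-interactive
az aks nodepool update --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --scale-down-mode Deallocate
```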
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+This article assumes that you have an existing AKS cluster. If you don't have an AKS cluster, for guidance on designing an enterprise-scale implementation of AKS, see [Plan your AKS design][plan-aks-design].
### Limitations
az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m
- To learn more about the cluster autoscaler, see [Automatically scale a cluster to meet application demands on AKS][cluster-autoscaler].
<!-- LINKS - Internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
+[plan-aks-design]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json
[aks-upgrade]: upgrade-cluster.md
[cluster-autoscaler]: cluster-autoscaler.md
[ephemeral-os]: concepts-storage.md#ephemeral-os-disk
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
Note the following important changes before you upgrade to any of the available
|Kubernetes Version | AKS Managed Addons | AKS Components | OS components | Breaking Changes | Notes |
|--|--|--|--|--|--|
-| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
-| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes |None
-| 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
-| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>0.12.0</br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes|None
+| 1.25 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 18.04 Cgroups V1 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>| Ubuntu 22.04 by default with cgroupv2 and Overlay VPA 0.13.0 |CgroupsV2 - If you deploy Java applications with the JDK, prefer to use JDK 11.0.16 and later or JDK 15 and later, which fully support cgroup v2
+| 1.26 | Azure policy 1.0.1<br>Metrics-Server 0.6.3<br>KEDA 2.9.3<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.5.3<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0| Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes |None
+| 1.27 | Azure policy 1.1.0<br>Metrics-Server 0.6.3<br>KEDA 2.10.0<br>Open Service Mesh 1.2.3<br>Core DNS V1.9.4<br>Overlay VPA 0.11.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.1.1<br>Azure Workload identity v1.0.0<br>MDC Defender 1.0.56<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.7.0<br>KMS 0.5.0|Cilium 1.12.8<br>CNI 1.4.44<br> Cluster Autoscaler 1.8.5.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7 for Linux and 1.6 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|Keda 2.10.0 |Because of Ubuntu 22.04 FIPS certification status, we'll switch AKS FIPS nodes from 18.04 to 20.04 from 1.27 onwards.
+| 1.28 | Azure policy 1.2.1<br>Metrics-Server 0.6.3<br>KEDA 2.11.2<br>Open Service Mesh 1.2.7<br>Core DNS V1.9.4<br>Overlay VPA 0.13.0<br>Azure-Keyvault-SecretsProvider 1.4.1<br>Application Gateway Ingress Controller (AGIC) 1.7.2<br>Image Cleaner v1.2.2<br>Azure Workload identity v1.2.0<br>MDC Defender Security Publisher 1.0.68<br>MDC Defender Old File Cleaner 1.3.68<br>MDC Defender Pod Collector 1.0.78<br>MDC Defender Low Level Collector 1.3.81<br>Azure Active Directory Pod Identity 1.8.13.6<br>GitOps 1.8.1|Cilium 1.13.5<br>CNI v1.4.43.1 (Default)/v1.5.11 (Azure CNI Overlay)<br> Cluster Autoscaler 1.27.3<br> | OS Image Ubuntu 22.04 Cgroups V2 <br>ContainerD 1.7.5 for Linux and 1.7.1 for Windows<br>Azure Linux 2.0<br>Cgroups V1<br>ContainerD 1.6<br>|No breaking changes|None
## Alias minor version
api-center Enable Api Center Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-center-portal.md
+
+ Title: Enable API Center portal - Azure API Center
+description: Enable the API Center portal, an automatically generated website that enables discovery of your API inventory.
+Last updated: 01/26/2024
+# Customer intent: As an API program manager, I want to enable a portal for developers and other API stakeholders in my organization to discover the APIs in my organization's API center.
+
+# Enable your API Center portal
+
+This article shows how to enable your *API Center portal*, an automatically generated website that developers and other stakeholders in your organization can use to discover the APIs in your [API center](overview.md). The portal is hosted by Azure at a unique URL and restricts user access based on Azure role-based access control.
+
+## Prerequisites
+
+* An API center in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md).
+
+* Permissions to create an app registration in a Microsoft Entra tenant associated with your Azure subscription, and permissions to grant access to data in your API center.
+
+## Create Microsoft Entra app registration
+
+First, configure an app registration in your Microsoft Entra ID tenant. The app registration enables the API Center portal to access data from your API center on behalf of a signed-in user.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to **Microsoft Entra ID** > **App registrations**.
+1. Select **+ New registration**.
+1. On the **Register an application** page, set the values as follows:
+
+ * Set **Name** to a meaningful name such as *api-center-portal*
+ * Under **Supported account types**, select **Accounts in this organizational directory (Single tenant)**.
+ * In **Redirect URI**, select **Single-page application (SPA)** and enter the following URI, substituting your API center name and region where indicated:
+
+ `https://<api-center-name>.portal.<region>.azure-apicenter.ms`
+
+ Example: `https://contoso.portal.westeurope.azure-apicenter.ms`
+
+ * Select **Register**.
+1. On the **Overview** page, copy the **Application (client) ID**. You use this value when you configure the identity provider for the portal in your API center.
+
+1. On the **API permissions** page, select **+ Add a permission**.
+ 1. On the **Request API permissions** page, select the **APIs my organization uses** tab. Search for and select **Azure API Center**.
+ 1. On the **Request permissions** page, select **user_impersonation**.
+ 1. Select **Add permissions**.
+
+ The Azure API Center permissions appear under **Configured permissions**.
+
+ :::image type="content" source="media/enable-api-center-portal/configure-app-permissions.png" alt-text="Screenshot of required permissions in Microsoft Entra ID app registration in the portal." lightbox="media/enable-api-center-portal/configure-app-permissions.png":::
+
+## Configure Microsoft Entra ID provider for API Center portal
+
+In your API center, configure the Microsoft Entra ID identity provider for the API Center portal.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API center.
+1. In the left menu, under **API Center portal**, select **Portal settings**.
+1. Select **Identity provider** > **Start set up**.
+1. On the **Set up user sign-in with Microsoft Entra ID** page, in **Client ID**, enter the **Application (client) ID** of the app registration that you created in the previous section.
+
+ :::image type="content" source="media/enable-api-center-portal/set-up-sign-in-portal.png" alt-text="Screenshot of the Microsoft Entra ID provider settings in the API Center portal." lightbox="media/enable-api-center-portal/set-up-sign-in-portal.png":::
+
+1. Select **Save + publish**. The Microsoft Entra ID provider appears on the **Identity provider** page.
+
+1. To view the API Center portal, on the **Portal settings** page, select **View API Center portal**.
+
+The portal is published at the following URL that you can share with developers in your organization: `https://<api-center-name>.<region>.azure-apicenter.ms`.
+
+## Customize portal name
+
+By default, the name that appears on the upper left of the API Center portal is the name of your API center. You can customize this name.
+
+1. In the Azure portal, go to the **Portal settings** > **Site profile** page.
+1. Enter a new name in **Add a website name**.
+1. Select **Save + publish**.
+
+ :::image type="content" source="media/enable-api-center-portal/add-website-name.png" alt-text="Screenshot of adding a custom website name in the Azure portal.":::
+
+ The new name appears after you refresh the API Center portal.
+
+## Enable sign-in to portal by Microsoft Entra users and groups
+
+While the portal URL is publicly accessible, users must sign in to see the APIs in your API center. To enable sign-in, assign the **Azure API Center Data Reader** role to users or groups in your organization, scoped to your API center.
+
+> [!IMPORTANT]
+> By default, you and other administrators of the API center don't have access to APIs in the API Center portal. Be sure to assign the **Azure API Center Data Reader** role to yourself and other administrators.
+
+For detailed prerequisites and steps to assign a role to users and groups, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). Brief steps follow:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API center.
+1. In the left menu, select **Access control (IAM)** > **+ Add role assignment**.
+1. In the **Add role assignment** pane, set the values as follows:
+ * On the **Role** page, search for and select **Azure API Center Data Reader**. Select **Next**.
+ * On the **Members** page, in **Assign access to**, select **User, group, or service principal** > **+ Select members**.
+ * On the **Select members** page, search for and select the users or groups to assign the role to. Click **Select** and then **Next**.
+ * Review the role assignment, and select **Review + assign**.
+
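+If you manage access with the Azure CLI instead, a minimal sketch of the same assignment follows; the assignee and the IDs in the scope are placeholders:
+
+```azurecli
+az role assignment create \
+  --assignee "user@contoso.com" \
+  --role "Azure API Center Data Reader" \
+  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiCenter/services/<api-center-name>"
+```
+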
+> [!NOTE]
+> To streamline access configuration for new users, we recommend that you assign the role to a Microsoft Entra group and configure a dynamic group membership rule. To learn more, see [Create or update a dynamic group in Microsoft Entra ID](/entra/identity/users/groups-create-rule).
+
+After you configure access to the portal, configured users can sign in to the portal and view the APIs in your API center.
+
+> [!NOTE]
+> The first user to sign in to the portal is prompted to consent to the permissions requested by the API Center portal app registration. Thereafter, other configured users aren't prompted to consent.
++
+## Troubleshooting
+
+### Error: "You are not authorized to access this portal"
+
+Under certain conditions, a user might encounter the following error message after signing in to the API Center portal with a configured user account:
+
+`You are not authorized to access this portal. Please contact your portal administrator for assistance.`
+
+First, confirm that the user is assigned the **Azure API Center Data Reader** role in your API center.
+
+If the user is assigned the role, there might be a problem with the registration of the **Microsoft.ApiCenter** resource provider in your subscription, and you might need to re-register the resource provider. To do this, run the following command in the Azure CLI:
+
+```azurecli
+az provider register --namespace Microsoft.ApiCenter
+```
+
+For more information and steps to register the resource provider using other tools, see [Register resource provider](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider).
++
+## Related content
+
+* [Azure CLI reference for API Center](/cli/azure/apic)
+* [What is Azure role-based access control (RBAC)?](../role-based-access-control/overview.md)
+* [Best practices for Azure RBAC](../role-based-access-control/best-practices.md)
api-management Api Management Howto Deploy Multi Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md
description: Learn how to deploy a Premium tier Azure API Management instance to
Previously updated : 01/26/2023 Last updated : 01/26/2024
Under some conditions, you might need to temporarily disable routing to one of t
* To redirect traffic to other regions during a planned disaster recovery drill that simulates an unavailable region, or during a regional failure

To disable routing to a regional gateway in your API Management instance, update the gateway's `disableGateway` property value to `true`. You can set the value using the [Create or update service](/rest/api/apimanagement/current-ga/api-management-service/create-or-update) REST API, or other Azure tools.
+>[!NOTE]
+> You can only disable routing to a regional gateway when you are using API Management's default routing, not a custom routing solution.
To disable a regional gateway using the Azure CLI: 1. Use the [az apim show](/cli/azure/apim#az-apim-show) command to show the locations, gateway status, and regional URLs configured for the API Management instance. ```azurecli
- az apim show --name contoso --resource-group myResourceGroup \
+ az apim show --name contoso --resource-group apim-hello-world-resource \
--query "additionalLocations[].{Location:location,Disabled:disableGateway,Url:gatewayRegionalUrl}" \ --output table ```
To disable a regional gateway using the Azure CLI:
``` 1. Use the [az apim update](/cli/azure/apim#az-apim-update) command to disable the gateway in an available location, such as West US 2. ```azurecli
- az apim update --name contoso --resource-group myResourceGroup \
+ az apim update --name contoso --resource-group apim-hello-world-resource \
--set additionalLocations[location="West US 2"].disableGateway=true ```
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd Previously updated : 10/05/2022 Last updated : 01/29/2024 # Set up Azure App Service access restrictions
By setting up access restrictions, you can define a priority-ordered allow/deny
The access restriction capability works with all Azure App Service-hosted workloads. The workloads can include web apps, API apps, Linux apps, Linux custom containers and Functions.
-When a request is made to your app, the FROM address is evaluated against the rules in your access restriction list. If the FROM address is in a subnet that's configured with service endpoints to Microsoft.Web, the source subnet is compared against the virtual network rules in your access restriction list. If the address isn't allowed access based on the rules in the list, the service replies with an [HTTP 403](https://en.wikipedia.org/wiki/HTTP_403) status code.
+When a request is made to your app, the FROM address is evaluated against the rules in your access restriction list. If the FROM address is in a subnet configured with service endpoints to `Microsoft.Web`, the source subnet is compared against the virtual network rules in your access restriction list. If the address isn't allowed access based on the rules in the list, the service replies with an [HTTP 403](https://en.wikipedia.org/wiki/HTTP_403) status code.
The access restriction capability is implemented in the App Service front-end roles, which are upstream of the worker hosts where your code runs. Therefore, access restrictions are effectively network access-control lists (ACLs).
-The ability to restrict access to your web app from an Azure virtual network is enabled by [service endpoints][serviceendpoints]. With service endpoints, you can restrict access to a multi-tenant service from selected subnets. It doesn't work to restrict traffic to apps that are hosted in an App Service Environment. If you're in an App Service Environment, you can control access to your app by applying IP address rules.
+The ability to restrict access to your web app from an Azure virtual network uses [service endpoints][serviceendpoints]. With service endpoints, you can restrict access to a multitenant service from selected subnets. It doesn't work to restrict traffic to apps that are hosted in an App Service Environment. If you're in an App Service Environment, you can control access to your app by applying IP address rules.
> [!NOTE] > The service endpoints must be enabled both on the networking side and for the Azure service that they're being enabled with. For a list of Azure services that support service endpoints, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
The ability to restrict access to your web app from an Azure virtual network is
## Manage access restriction rules in the portal
-To add an access restriction rule to your app, do the following:
+To add an access restriction rule to your app, do the following steps:
1. Sign in to the Azure portal. 1. Select the app that you want to add access restrictions to.
-1. On the left pane, select **Networking**.
+1. On the left menu, select **Networking**.
-1. On the **Networking** pane, under **Access Restrictions**, select **Configure Access Restrictions**.
+1. On the **Networking** page, under **Inbound traffic configuration**, select the **Public network access** setting.
- :::image type="content" source="media/app-service-ip-restrictions/access-restrictions.png" alt-text="Screenshot of the App Service networking options pane in the Azure portal.":::
+ :::image type="content" source="media/app-service-ip-restrictions/access-restrictions.png" alt-text="Screenshot of the App Service networking options page in the Azure portal.":::
1. On the **Access Restrictions** page, review the list of access restriction rules that are defined for your app. :::image type="content" source="media/app-service-ip-restrictions/access-restrictions-browse.png" alt-text="Screenshot of the Access Restrictions page in the Azure portal, showing the list of access restriction rules defined for the selected app.":::
- The list displays all the current restrictions that are applied to the app. If you have a virtual network restriction on your app, the table shows whether the service endpoints are enabled for Microsoft.Web. If no restrictions are defined on your app, the app is accessible from anywhere.
+ The list displays all the current restrictions that are applied to the app. If you have a virtual network restriction on your app, the table shows whether the service endpoints are enabled for Microsoft.Web. If no restrictions are defined on your app and your unmatched rule isn't set to Deny, the app is accessible from anywhere.
### Permissions
-You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure access restrictions through Azure portal, CLI or when setting the site config properties directly:
+The following Role-based access control permissions on the subnet or at a higher level are required to configure access restrictions through the Azure portal or CLI, or when setting the site config properties directly:
| Action | Description | |-|-|
You must have at least the following Role-based access control permissions on th
**only required when adding a virtual network (service endpoint) rule.*
-***only required if you are updating access restrictions through Azure portal.*
+***only required if you're updating access restrictions through Azure portal.*
-If you're adding a service endpoint-based rule and the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription.
+If you're adding a service endpoint-based rule and the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it's also registered automatically when you create the first web app in a subscription.
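+
+A quick way to check and register the provider from the Azure CLI (the subscription ID is a placeholder):
+
+```azurecli
+# Check the registration state in the virtual network's subscription
+az provider show --namespace Microsoft.Web \
+  --subscription <vnet-subscription-id> --query registrationState
+
+# Register the provider if the state isn't "Registered"
+az provider register --namespace Microsoft.Web --subscription <vnet-subscription-id>
+```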
### Add an access restriction rule
-To add an access restriction rule to your app, on the **Access Restrictions** pane, select **Add rule**. After you add a rule, it becomes effective immediately.
+To add an access restriction rule to your app, on the **Access Restrictions** page, select **Add**. The rule takes effect only after you save it.
-Rules are enforced in priority order, starting from the lowest number in the **Priority** column. An implicit *deny all* is in effect after you add even a single rule.
+Rules are enforced in priority order, starting from the lowest number in the **Priority** column. If you don't configure the unmatched rule, an implicit *deny all* is in effect after you add even a single rule.
On the **Add Access Restriction** pane, when you create a rule, do the following:
-1. Under **Action**, select either **Allow** or **Deny**.
+1. Under **Action**, select either **Allow** or **Deny**.
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-ip-add.png?v2" alt-text="Screenshot of the 'Add Access Restriction' pane."::: 1. Optionally, enter a name and description of the rule. 1. In the **Priority** box, enter a priority value. 1. In the **Type** drop-down list, select the type of rule. The different types of rules are described in the following sections.
-1. After typing in the rule specific input select **Save** to save the changes.
+1. After typing in the rule-specific input, select **Add rule** to add the rule to the list.
+
+Finally, select **Save** back on the **Access Restrictions** page.
> [!NOTE] > - There is a limit of 512 access restriction rules. If you require more than 512 access restriction rules, we suggest that you consider installing a standalone security product, such as Azure Front Door, Azure App Gateway, or an alternative WAF.
On the **Add Access Restriction** pane, when you create a rule, do the following
#### Set an IP address-based rule Follow the procedure as outlined in the preceding section, but with the following addition:
-* For step 4, in the **Type** drop-down list, select **IPv4** or **IPv6**.
+* For step 4, in the **Type** drop-down list, select **IPv4** or **IPv6**.
Specify the **IP Address Block** in Classless Inter-Domain Routing (CIDR) notation for both the IPv4 and IPv6 addresses. To specify an address, you can use something like *1.2.3.4/32*, where the first four octets represent your IP address and */32* is the mask. The IPv4 CIDR notation for all addresses is 0.0.0.0/0. To learn more about CIDR notation, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
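
The same rule can be scripted. A minimal Azure CLI sketch, assuming placeholder resource names:

```azurecli
# Allow a single IPv4 address (a /32 CIDR block) with priority 100
az webapp config access-restriction add --resource-group <resource-group> --name <app-name> \
  --rule-name allow-office --action Allow --ip-address 1.2.3.4/32 --priority 100
```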
Specify the **IP Address Block** in Classless Inter-Domain Routing (CIDR) notati
Specify the **Subscription**, **Virtual Network**, and **Subnet** drop-down lists, matching what you want to restrict access to.
-By using service endpoints, you can restrict access to selected Azure virtual network subnets. If service endpoints aren't already enabled with Microsoft.Web for the subnet that you selected, they'll be automatically enabled unless you select the **Ignore missing Microsoft.Web service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
+By using service endpoints, you can restrict access to selected Azure virtual network subnets. If service endpoints aren't already enabled with `Microsoft.Web` for the subnet that you selected, they're automatically enabled unless you select the **Ignore missing `Microsoft.Web` service endpoints** check box. The scenario where you might want to enable service endpoints on the app but not the subnet depends mainly on whether you have the permissions to enable them on the subnet.
-If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app will be configured for service endpoints in anticipation of having them enabled later on the subnet.
+If you need someone else to enable service endpoints on the subnet, select the **Ignore missing Microsoft.Web service endpoints** check box. Your app is configured for service endpoints in anticipation of having them enabled later on the subnet.
You can't use service endpoints to restrict access to apps that run in an App Service Environment. When your app is in an App Service Environment, you can control access to it by applying IP access rules. With service endpoints, you can configure your app with application gateways or other web application firewall (WAF) devices. You can also configure multi-tier applications with secure back ends. For more information, see [Networking features and App Service](networking-features.md) and [Application Gateway integration with service endpoints](networking/app-gateway-with-service-endpoints.md). > [!NOTE]
-> - Service endpoints aren't currently supported for web apps that use IP-based TLS/SSL bindings with a virtual IP (VIP).
+> - Service endpoints aren't supported for web apps that use IP-based TLS/SSL bindings with a virtual IP (VIP).
> #### Set a service tag-based rule
All available service tags are supported in access restriction rules. Each servi
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-ip-edit.png?v2" alt-text="Screenshot of the 'Edit Access Restriction' pane in the Azure portal, showing the fields for an existing access restriction rule."::: > [!NOTE]
- > When you edit a rule, you can't switch between rule types.
+ > When you edit a rule, you can't switch between rule types.
### Delete a rule
The following sections describe some advanced scenarios using access restriction
### Filter by http header
-As part of any rule, you can add additional http header filters. The following http header names are supported:
+As part of any rule, you can add http header filters. The following http header names are supported:
* X-Forwarded-For * X-Forwarded-Host * X-Azure-FDID
For each header name, you can add up to eight values separated by comma. The htt
### Multi-source rules
-Multi-source rules allow you to combine up to eight IP ranges or eight Service Tags in a single rule. You might use this if you've more than 512 IP ranges or you want to create logical rules where multiple IP ranges are combined with a single http header filter.
+Multi-source rules allow you to combine up to eight IP ranges or eight Service Tags in a single rule. Use multi-source rules if you have more than 512 IP ranges, or if you want to create logical rules, such as combining multiple IP ranges with a single http header filter.
Multi-source rules are defined the same way you define single-source rules, but with each range separated with comma.
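
As an illustrative sketch (names are placeholders), the Azure CLI accepts the comma-separated ranges directly:

```azurecli
# Combine two CIDR ranges in a single multi-source rule
az webapp config access-restriction add --resource-group <resource-group> --name <app-name> \
  --rule-name partner-ranges --action Allow --priority 200 \
  --ip-address "203.0.113.0/24,198.51.100.0/24"
```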
For a scenario where you want to explicitly block a single IP address or a block
### Restrict access to an SCM site
-In addition to being able to control access to your app, you can restrict access to the SCM (Advanced tool) site that's used by your app. The SCM site is both the web deploy endpoint and the Kudu console. You can assign access restrictions to the SCM site from the app separately or use the same set of restrictions for both the app and the SCM site. When you select the **Use main site rules** check box, the rules list will be hidden, and it will use the rules from the main site. If you clear the check box, your SCM site settings will appear again.
+In addition to being able to control access to your app, you can restrict access to the SCM (Advanced tool) site used by your app. The SCM site is both the web deploy endpoint and the Kudu console. You can assign access restrictions to the SCM site from the app separately or use the same set of restrictions for both the app and the SCM site. When you select the **Use main site rules** check box, the rules list is hidden, and it uses the rules from the main site. If you clear the check box, your SCM site settings appear again.
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-advancedtools-browse.png" alt-text="Screenshot of the 'Access Restrictions' page in the Azure portal, showing that no access restrictions are set for the SCM site or the app."::: ### Restrict access to a specific Azure Front Door instance
-Traffic from Azure Front Door to your application originates from a well known set of IP ranges defined in the AzureFrontDoor.Backend service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you'll need to further filter the incoming requests based on the unique http header that Azure Front Door sends.
+Traffic from Azure Front Door to your application originates from a well-known set of IP ranges defined in the `AzureFrontDoor.Backend` service tag. Using a service tag restriction rule, you can restrict traffic to only originate from Azure Front Door. To ensure traffic only originates from your specific instance, you need to further filter the incoming requests based on the unique http header that Azure Front Door sends.
:::image type="content" source="media/app-service-ip-restrictions/access-restrictions-frontdoor.png?v2" alt-text="Screenshot of the 'Access Restrictions' page in the Azure portal, showing how to add Azure Front Door restriction.":::
You can run the following command in the [Cloud Shell](https://shell.azure.com).
-HttpHeader @{'x-azure-fdid'='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'} ```
-### [ARM](#tab/arm)
+### [Azure Resource Manager](#tab/arm)
For ARM templates, modify the `ipSecurityRestrictions` block. A sample ARM template snippet is provided for you.
You can run the following command in the [Cloud Shell](https://shell.azure.com).
-Name "Ip example rule" -Priority 100 -Action Allow -IpAddress 122.133.144.0/24 -TargetScmSite ```
-### [ARM](#tab/arm)
+### [Azure Resource Manager](#tab/arm)
For ARM templates, modify the `scmIpSecurityRestrictions` block. A sample ARM template snippet is provided for you.
You can run the following command in the [Cloud Shell](https://shell.azure.com).
$Resource | Set-AzResource -Force ```
-### [ARM](#tab/arm)
+### [Azure Resource Manager](#tab/arm)
For ARM templates, modify the property `ipSecurityRestrictionsDefaultAction`. Accepted values for `ipSecurityRestrictionsDefaultAction` are `Allow` or `Deny`. A sample ARM template snippet is provided for you.
You can run the following command in the [Cloud Shell](https://shell.azure.com).
$Resource | Set-AzResource -Force ```
-### [ARM](#tab/arm)
+### [Azure Resource Manager](#tab/arm)
For ARM templates, modify the property `scmIpSecurityRestrictionsDefaultAction`. Accepted values for `scmIpSecurityRestrictionsDefaultAction` are `Allow` or `Deny`. A sample ARM template snippet is provided for you.
app-service Overview Access Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md
Title: App Service Access restrictions
description: This article provides an overview of the access restriction features in App Service Previously updated : 01/03/2024 Last updated : 01/25/2024
You have the option of configuring a set of access restriction rules for each si
## App access
-App access allows you to configure if access is available through the default (public) endpoint. If the setting isn't configured, the default behavior is to enable access unless a private endpoint exists which changes the implicit behavior to disable access. You have the ability to explicitly configure this behavior to either enabled or disabled even if private endpoints exist.
+App access allows you to configure if access is available through the default (public) endpoint. You configure this behavior to either be `Disabled` or `Enabled`. When access is enabled, you can add [Site access](#site-access) restriction rules to control access from select virtual networks and IP addresses. If the setting isn't configured, the default behavior is to enable access unless a private endpoint exists which changes the behavior to disable access.
:::image type="content" source="media/overview-access-restrictions/app-access-portal.png" alt-text="Screenshot of app access option in Azure portal.":::
-In the Azure Resource Manager API, app access is called `publicNetworkAccess`. For ILB App Service Environment, the default entry point for apps is always internal to the virtual network. Enabling app access (`publicNetworkAccess`) doesn't grant direct public access to the web application; instead, it allows access from the default entry point, which corresponds to the internal IP address of the App Service Environment. If you disable app access on an ILB App Service Environment, you can only access the apps through private endpoints added to the individual apps.
+In the Azure Resource Manager API, app access is called `publicNetworkAccess`. For ILB App Service Environment, the default entry point for apps is always internal to the virtual network. Enabling app access (`publicNetworkAccess`) doesn't grant direct public access to the apps; instead, it allows access from the default entry point, which corresponds to the internal IP address of the App Service Environment. If you disable app access on an ILB App Service Environment, you can only access the apps through private endpoints added to the individual apps.
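+
+If you set the property outside the portal, one option is a generic update through the Azure CLI; a minimal sketch, assuming placeholder resource names:
+
+```azurecli
+# Disable the public endpoint for an app
+az resource update --resource-group <resource-group> --name <app-name> \
+  --resource-type "Microsoft.Web/sites" --set properties.publicNetworkAccess=Disabled
+```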
## Site access
Some use cases for http header filtering are:
## Diagnostic logging
-App Service can [send various logging categories to Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor). One of those categories is called *IPSecurity Audit logs* and represent the activities in access restrictions. All requests that match a rule (except the unmatched rule), both allow and deny, is logged and can be used to validate configuration of access restrictions. The logging capability is also a powerful tool when troubleshooting rules configuration.
+App Service can [send various logging categories to Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor). One of those categories is called `IPSecurity Audit logs` and represents the activities in access restrictions. All requests that match a rule (except the unmatched rule), both allow and deny, are logged and can be used to validate the configuration of access restrictions. The logging capability is also a powerful tool when troubleshooting rules configuration.
## Advanced use cases
azure-app-configuration Enable Dynamic Configuration Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-python.md
Title: Use dynamic configuration in Python (preview)
+ Title: Use dynamic configuration in Python
description: Learn how to dynamically update configuration data for Python
ms.devlang: python Previously updated : 10/05/2023 Last updated : 01/29/2024 #Customer intent: As a Python developer, I want to dynamically update my app to use the latest configuration data in Azure App Configuration.
-# Tutorial: Use dynamic configuration in Python (preview)
+# Tutorial: Use dynamic configuration in Python
The Azure App Configuration Python provider includes built-in caching and refreshing capabilities. This tutorial shows how to enable dynamic configuration in Python applications.
-> [!NOTE]
-> Requires [azure-appconfiguration-provider](https://pypi.org/project/azure-appconfiguration-provider/1.1.0b3/) package version 1.1.0b3 or later.
- ## Prerequisites - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/).
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
By default, these files are generated in the current CLI directory when `createc
### Kubeconfig
-The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location to the management machine, because it's required for maintaining the appliance VM.
+The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-privilege Kubernetes configuration file that is used to maintain the appliance VM. By default, it's generated in the current CLI directory when the `deploy` command completes. The kubeconfig should be saved in a secure location on the management machine, because it's required for maintaining the appliance VM. If the kubeconfig is lost, it can be retrieved by running the `az arcappliance get-credentials` command.
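+
+For example, a sketch assuming placeholder resource group and appliance names:
+
+```azurecli
+az arcappliance get-credentials --resource-group <resource-group> --name <appliance-name>
+```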
### HCI login configuration file (Azure Stack HCI only)
azure-arc Ssh Arc Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-troubleshoot.md
Resolution:
- Confirm success by running ```az provider show -n Microsoft.HybridConnectivity```, verify that `registrationState` is set to `Registered` - Restart the hybrid agent on the Arc-enabled server
+### Cannot connect after updating CLI tool and Arc agent
+
+This issue occurs when the updated command creates a new service configuration before the Arc agent is updated. It only affects Azure Arc agent versions older than 1.31 that are updated to version 1.31 or newer. Error:
+
+- Connection closed by UNKNOWN port 65535
+
+ Resolution:
+
+ - Delete the existing service configuration and allow it to be re-created by the CLI command at the next connection. Run ```az rest --method delete --uri https://management.azure.com/subscriptions/<SUB_ID>/resourceGroups/<RG_NAME>/providers/Microsoft.HybridCompute/machines/<VM_NAME>/providers/Microsoft.HybridConnectivity/endpoints/default/serviceconfigurations/SSH?api-version=2023-03-15```
## Disable SSH to Arc-enabled servers
azure-functions Durable Functions Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-diagnostics.md
Clients will get the following response:
> [!WARNING] > The custom status payload is limited to 16 KB of UTF-16 JSON text because it needs to be able to fit in an Azure Table Storage column. You can use external storage if you need larger payload.
+## Distributed Tracing
+
+Distributed Tracing tracks requests and shows how different services interact with each other. In Durable Functions, it also correlates orchestrations and activities together. This is helpful for understanding how much time each step of the orchestration takes relative to the entire orchestration. It's also useful for identifying where an application is having an issue or where an exception was thrown. This feature is supported for all languages and storage providers.
+
+> [!NOTE]
+> Distributed Tracing V2 requires [Durable Functions v2.12.0](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask/2.12.0) or greater. Also, Distributed Tracing V2 is in a preview state and therefore some Durable Functions patterns are not instrumented. For example, Durable Entities operations are not instrumented and traces will not show up in Application Insights.
+
+### Setting up Distributed Tracing
+
+To set up distributed tracing, update the host.json file and set up an Application Insights resource.
+
+#### host.json
+```json
+"durableTask": {
+ "tracing": {
+ "distributedTracingEnabled": true,
+ "Version": "V2"
+ }
+}
+```
+
+#### Application Insights
+If the function app isn't configured with an Application Insights resource, configure one by following the instructions [here](../configure-monitoring.md#enable-application-insights-integration).
+
+### Inspecting the traces
+In the Application Insights resource, navigate to **Transaction Search**. In the results, check for `Request` and `Dependency` events that start with Durable Functions-specific prefixes (for example, `orchestration:` and `activity:`). Selecting one of these events opens a Gantt chart that shows the end-to-end distributed trace.
+
+[![Gantt Chart showing Application Insights Distributed Trace.](./media/durable-functions-diagnostics/app-insights-distributed-trace-gantt-chart.png)](./media/durable-functions-diagnostics/app-insights-distributed-trace-gantt-chart.png#lightbox)
+
+### Troubleshooting
+If you don't see the traces in Application Insights, wait about five minutes after running the application to ensure that all of the data is propagated to the Application Insights resource.
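+
+As a quick check, you can also query the telemetry directly. A sketch using the Application Insights CLI extension, where the app ID is a placeholder:
+
+```azurecli
+# List recent Durable Functions orchestration requests
+az monitor app-insights query --app <app-id> \
+  --analytics-query "requests | where name startswith 'orchestration:' | take 10"
+```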
+ ## Debugging Azure Functions supports debugging function code directly, and that same support carries forward to Durable Functions, whether running in Azure or locally. However, there are a few behaviors to be aware of when debugging:
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Nanavati Consulting, Inc.](https://www.nanavaticonsulting.com)| |[Navisite LLC](https://www.navisite.com/)| |[NCI](https://www.nciinc.com/)|
+|[NeoSystems LLC](https://www.neosystemscorp.com/)|
|[NeoTech Solutions Inc.](https://neotechreps.com)| |[Neovera Inc.](https://www.neovera.com)| |[NetData Consulting Services Inc.](https://www.netdatacs.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[rmsource, Inc.](https://www.rmsource.com)| |[RoboTech Science, Inc. (Cyberscend)](https://cyberscend.com)| |[Rollout Systems LLC](http://www.rolloutsys.com/)|
+|[RSM US, LLP](https://rsmus.com)|
|[RV Global Solutions](https://rvglobalsolutions.com/)| |[RyanTech Inc.](https://ryantechinc.com)| |[Saiph Technologies Corporation](http://www.saiphtech.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Leidos](https://www.leidos.com/)| |[LiftOff, LLC](https://www.liftoffonline.com)| |[ManTech](https://www.mantech.com/)|
|[NeoSystems LLC](https://www.neosystemscorp.com/solutions-services/microsoft-licenses/microsoft-365-licenses/)|
|[Nimbus Logic, LLC](https://www.nimbus-logic.com/)| |[Northrop Grumman](https://www.northropgrumman.com/)| |[Novetta](https://www.novetta.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Quiet Professionals, LLC](https://quietprofessionalsllc.com)| |[R3, LLC](https://www.r3-it.com/)| |[Red River](https://www.redriver.com)|
-|[RSMUS, LLC](https://rsmus.com)|
+|[RSM US, LLP](https://rsmus.com)|
|[SAIC](https://www.saic.com)| |[SentinelBlue LLC](https://www.sentinelblue.com/)| |[Smartronix](https://www.smartronix.com)|
azure-monitor Azure Monitor Agent Send Data To Event Hubs And Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md
The Azure Monitor Agent is the new, consolidated telemetry agent for collecting
- Linux: - Syslog – to eventhub and storage - Perf counters – to eventhub and storage
- Custom Logs / Log files – to eventhub and storage
+ - Custom Logs / Log files – to storage
### Operating systems
azure-monitor Troubleshooter Ama Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-linux.md
Title: Use Azure Monitor Agent Troubleshooter for Linux
+ Title: How to use the Linux Operating System (OS) Azure Monitor Agent Troubleshooter
description: Detailed instructions on using the Linux agent troubleshooter tool to diagnose potential issues. -+ Last updated 12/14/2023
# customer-intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
-# Use the Azure Monitor Agent Troubleshooter for Linux
+# How to use the Linux operating system (OS) Azure Monitor Agent Troubleshooter
The Azure Monitor Agent (AMA) Troubleshooter is designed to help identify issues with the agent and perform general health assessments. It can perform various checks to ensure that the agent is properly installed and connected, and can also gather AMA-related logs from the machine being diagnosed. > [!Note] > The AMA Troubleshooter is an executable that is shipped with the agent for all versions newer than **1.25.1** for Linux. ## Prerequisites
-The linux Troubleshooter requires Python 2.6+ or any Python 3 installed on the machine. In addition, the following Python packages are required to run (all should be present on a default install of Python 2 or Python 3):
+
+### Python requirement
+The Linux AMA Troubleshooter requires **Python 2.6+** or any **Python 3** version installed on the machine.
+
+To check if Python is installed on your machine, copy the following commands and run them in Bash as root:
+```Bash
+sudo python -V
+sudo python3 -V
+```
++
+Multiple versions of Python can be installed and aliased – if multiple versions are installed, use:
+
+```Bash
+ls -ls /usr/bin/python*
+```
++
+If your virtual machine is using a distro that doesn't include Python 3 by default, then you must install it. The following sample commands install Python 3 on different distros:
+
+# [Red Hat, CentOS, Oracle](#tab/redhat)
+```Bash
+sudo yum install -y python3
+```
+# [Ubuntu, Debian](#tab/ubuntu)
+```Bash
+sudo apt-get update
+sudo apt-get install -y python3
+```
+# [Suse](#tab/suse)
+```Bash
+sudo zypper install -y python3
+```
+++
+In addition, the following Python packages are required to run (all should be present on a default install of Python 2 or Python 3):
|Python Package|Required for Python 2?|Required for Python 3?| |:|:|:|
The linux Troubleshooter requires Python 2.6+ or any Python 3 installed on the m
|urllib|yes|no| |xml.dom.minidom|yes|yes|
-On the machine to be diagnosed, does this directory exist:
+### Troubleshooter existence check
+Check for the existence of the AMA Troubleshooter directory on the machine to be diagnosed to confirm that the agent troubleshooter is installed:
+ ***/var/lib/waagent/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-{version}***
-To verify the Agent Troubleshooter is present, copy the following command and run in Bash as root:
+To verify the Azure Monitor Agent Troubleshooter is present, copy the following command and run it in Bash as root:
```Bash ls -ltr /var/lib/waagent | grep "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-*"
ls -ltr /var/lib/waagent | grep "Microsoft.Azure.Monitor.AzureMonitorLinuxAgent-
:::image type="content" source="./media/use-azure-monitor-agent-troubleshooter/ama-nix-prerequisites-shell.png" alt-text="Screenshot of the Bash window, which shows the result of ls command for the AMA installation directory." lightbox="media/use-azure-monitor-agent-troubleshooter/ama-nix-prerequisites-shell.png":::
-If not, the directory doesn't exist and the installation failed. In this case, follow [Basic troubleshooting steps](azure-monitor-agent-troubleshoot-linux-vm.md#basic-troubleshooting-steps) instead.
+If the directory doesn't exist, the installation failed; follow [Basic troubleshooting steps](azure-monitor-agent-troubleshoot-linux-vm.md#basic-troubleshooting-steps) instead.
-Yes, the directory exists. Proceed to [Run the Troubleshooter](#run-the-troubleshooter).
+If the directory exists, proceed to [Run the Troubleshooter](#run-the-troubleshooter).
## Run the Troubleshooter
-On the machine to be diagnosed, run the Agent Troubleshooter.
+On the machine to be diagnosed, run the Agent Troubleshooter.
+
+**Log Mode** enables the collection of logs, which can then be compressed into .tgz format for export or review. **Interactive Mode** allows users to actively engage in troubleshooting scenarios and view the output directly within the shell.
# [Log Mode](#tab/GenerateLogs)
It runs a series of scenarios and displays the results.
> [!Note] > The interactive mode will **not** generate log files, but will **only** output results to the screen. Switch to log mode, if you need to generate log files.
It isn't possible to use the Troubleshooter to diagnose an older version of the
## Next Steps - [Troubleshooting guidance for the Azure Monitor agent](../agents/azure-monitor-agent-troubleshoot-linux-vm.md) on Linux virtual machines and scale sets-- [Syslog troubleshooting guide for Azure Monitor Agent](../agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) for Linux
+- [Syslog troubleshooting guide for Azure Monitor Agent](../agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) for Linux
azure-monitor Troubleshooter Ama Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/troubleshooter-ama-windows.md
Title: Use Azure Monitor Agent Troubleshooter for Windows
+ Title: How to use the Windows operating system (OS) Azure Monitor Agent Troubleshooter
description: Detailed instructions on using the Windows agent troubleshooter tool to diagnose potential issues.
# customer-intent: When AMA is experiencing issues, I want to investigate the issues and determine if I can resolve the issue on my own.
-# Use the Azure Monitor Agent Troubleshooter for Windows
+# How to use the Windows operating system (OS) Azure Monitor Agent Troubleshooter
The Azure Monitor Agent (AMA) Troubleshooter is designed to help identify issues with the agent and perform general health assessments. It can perform various checks to ensure that the agent is properly installed and connected, and can also gather AMA-related logs from the machine being diagnosed. > [!Note]
-> The AMA Troubleshooter is a command line executable that is shipped with the agent for all versions newer than **1.12.0.0** for Windows.
+> The Windows AMA Troubleshooter is a command line executable that is shipped with the agent for all versions newer than **1.12.0.0**.
## Prerequisites
-On the machine to be diagnosed, does this directory exist:
+### Troubleshooter existence check
+Check for the existence of the AMA Troubleshooter directory on the machine to be diagnosed to confirm that the agent troubleshooter is installed:
+ ***C:/Packages/Plugins/Microsoft.Azure.Monitor.AzureMonitorWindowsAgent*** # [PowerShell](#tab/WindowsPowerShell)
If the directory exists, the cd command changes directories successfully.
-If not, the directory doesn't exist and the installation failed. In this case, follow [Basic troubleshooting steps](../agents/azure-monitor-agent-troubleshoot-windows-vm.md#basic-troubleshooting-steps-installation-agent-not-running-configuration-issues) instead.
+If the directory doesn't exist, the installation failed; follow [Basic troubleshooting steps](../agents/azure-monitor-agent-troubleshoot-windows-vm.md#basic-troubleshooting-steps-installation-agent-not-running-configuration-issues) instead.
Yes, the directory exists. Proceed to [Run the Troubleshooter](#run-the-troubleshooter).
It isn't possible to use the Troubleshooter to diagnose an older version of the
## Next Steps - [Troubleshooting guidance for the Azure Monitor agent](../agents/azure-monitor-agent-troubleshoot-windows-vm.md) on Windows virtual machines and scale sets-- [Troubleshooting guidance for the Azure Monitor agent](../agents/azure-monitor-agent-troubleshoot-windows-arc.md) on Windows Arc-enabled server
+- [Troubleshooting guidance for the Azure Monitor agent](../agents/azure-monitor-agent-troubleshoot-windows-arc.md) on Windows Arc-enabled server
azure-monitor Alerts Manage Alert Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-instances.md
Title: Manage your alert instances description: The alerts page summarizes all alert instances in all your Azure resources generated in the last 30 days and allows you to manage your alert instances. Previously updated : 07/11/2023 Last updated : 01/21/2024 # Manage your alert instances The **Alerts** page summarizes all alert instances in all your Azure resources generated in the last 30 days. Alerts are stored for 30 days and are deleted after the 30-day retention period.
-For stateful alerts, while the alert itself is deleted after 30 days, and is not viewable on the alerts page, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved. For more information, see [Alerts and state](alerts-overview.md#alerts-and-state).
+For stateful alerts, while the alert itself is deleted after 30 days, and isn't viewable on the alerts page, the alert condition is stored until the alert is resolved, to prevent firing another alert, and so that notifications can be sent when the alert is resolved. For more information, see [Alerts and state](alerts-overview.md#alerts-and-state).
+
+## Access the Alerts page
You can get to the **Alerts** page in a few ways:
You can get to the **Alerts** page in a few ways:
## Alerts summary pane
-The **Alerts** summary pane summarizes the alerts fired in the last 24 hours. You can filter the list of alert instances by **Time range**, **Subscription**, **Alert condition**, **Severity**, and more. If you selected a specific alert severity to open the **Alerts** page, the list is pre-filtered for that severity.
+The **Alerts** summary pane summarizes the alerts fired in the last 24 hours. You can filter the list of alert instances by **Time range**, **Subscription**, **Alert condition**, **Severity**, and more. If you selected a specific alert severity to open the **Alerts** page, the list is prefiltered for that severity.
To see more information about a specific alert instance, select the alert instance to open the **Alert details** page.
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-page.png" alt-text="Screenshot that shows the Alerts summary page in the Azure portal.":::
-
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-page.png" lightbox="media/alerts-managing-alert-instances/alerts-page.png" alt-text="Screenshot that shows the Alerts summary page in the Azure portal.":::
## View alerts as a timeline (preview)
-You can see your alerts in a timeline view. In this view, you can see the number of alerts fired in a specific time range.
+You can view your alerts in a timeline. In this view, you can see the number of alerts fired in a specific time range. The timeline shows you which resource the alerts were fired on to give you context of the alert in your Azure hierarchy. The alerts are grouped by the time they were fired. You can filter the alerts by severity, resource, and more. You can also select a specific time range to see the alerts fired in that time range.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-timeline.png" lightbox="media/alerts-managing-alert-instances/alerts-timeline.png" alt-text="Screenshot that shows the Alerts timeline page in the Azure portal.":::
+
+To see the alerts in a timeline view, select **View as timeline** at the top of the Alerts summary page. You can choose to see the alerts timeline with the severity of the alerts indicated by color, or a simplified view with critical or noncritical alerts.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-view-timeline.png" lightbox="media/alerts-managing-alert-instances/alerts-view-timeline.png" alt-text="Screenshot that shows the view timeline button in the Alerts summary page in the Azure portal.":::
+
+You can drill down into a specific time range. Select one of the cards in the timeline to see the alerts fired in that time range.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-timeline-details.png" lightbox="media/alerts-managing-alert-instances/alerts-timeline-details.png" alt-text="Screenshot that shows the drilldown into a specific time range in the Alerts timeline page in the Azure portal.":::
++
+### Customize the timeline view
+
+You can customize the timeline view to suit your needs by changing the grouping of your alerts.
+
+1. From the timeline view of the alerts page, select the **Edit** icon in the groups box at the top of the page.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-timeline-edit-pencil.png" alt-text="Screenshot that shows the pencil icon to edit the timeline view of the alerts page in the Azure portal.":::
+
+1. In the **Edit group** pane, drag and drop the fields to group by. You can change the order of the groupings, and add new dimensions, tags, labels, and more. Validation runs on the grouping to make sure that it's valid. If you're on the alerts page for a specific resource, the options for grouping are filtered by that resource, and you can only group by items related to the resource.
+
+ For AKS clusters, we provide suggested views based on popular groupings.
+1. Select **Save**.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/alerts-edit-timeline-view.png" lightbox="media/alerts-managing-alert-instances/alerts-edit-timeline-view.png" alt-text="Screenshot that shows the edit group pane in the timeline view of the alerts page in the Azure portal.":::
+1. The timeline displays the alerts grouped by the fields you selected. Alerts that don't logically belong in the grouping you selected are listed in a group called **Other**.
+1. When you have the grouping you want, select **Save view** to save the view.
-To see the alerts in a timeline view, select **View as timeline** at the top of the Alerts summary page.
+### Manage timeline views
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-view-timeline.png" alt-text="Screenshot that shows the view timeline button in the Alerts summary page in the Azure portal.":::
+You can save up to 10 views of the alerts timeline. The **default** view is the Azure default view.
-The timeline shows you which resource the alerts were fired on to give you context of the alert in your Azure hierarchy. The alerts are grouped by the time they were fired. You can filter the alerts by severity, resource, and more. You can also select a specific time range to see the alerts fired in that time range.
+1. From the main **Alerts** page, select **Manage views** to see the list of views you saved.
+1. Select **Save view as** to save a new view.
+1. Mark a view as **Favorite** to see that view every time you come to the **Alerts** page.
+1. Select **Browse all views** to see all the views you saved, select a favorite view, or delete a view. You can only see all of the views from the main alerts page, not from the alerts of an individual resource.
- :::image type="content" source="media/alerts-managing-alert-instances/alerts-timeline.png" alt-text="Screenshot that shows the Alerts timeline page in the Azure portal.":::
## Alert details page The **Alert details** page provides more information about the selected alert:
+ - To change the user response to the alert, select the pencil near **User response**.
+ - To see the details of the alert, expand the **Additional details** section.
- To see all closed alerts, select the **History** tab. ## Manage your alerts programmatically
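
One way to list fired alert instances outside the portal is the Alerts Management REST API. A sketch through `az rest`, where the subscription ID is a placeholder and the API version is the one current at the time of writing:

```azurecli
az rest --method get \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts?api-version=2019-05-05-preview"
```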
azure-monitor Alerts Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot.md
If you can see a fired alert in the portal, but its configured action did not tr
1. **Did your webhook become unresponsive or return errors?**
- The webhook response timeout period is 10 seconds. When the HTTP endpoint does not respond or when the following HTTP status codes are returned, the webhook call is retried up to two times:
-
- - `408`
- - `429`
- - `503`
- - `504`
-
- One retry occurs after 10 seconds and another retry occurs after 100 seconds. If the second retry fails, the endpoint is not called again for 15 minutes for any action group.
+ Webhook action groups generally follow these rules when called:
+ - When a webhook is invoked, if the first call fails, it is retried at least once, and up to 5 times (5 retries) at various delay intervals (5, 20, 5, 40, and 5 seconds).
+ - The delay between 1st and 2nd attempt is 5 seconds
+ - The delay between 2nd and 3rd attempt is 20 seconds
+ - The delay between 3rd and 4th attempt is 5 seconds
+ - The delay between 4th and 5th attempt is 40 seconds
+ - The delay between 5th and 6th attempt is 5 seconds
+ - After all retry attempts to call the webhook fail, no action group calls the endpoint for 15 minutes.
+ - The retry logic assumes that the call can be retried. Status codes 408, 429, 503, and 504, or the exceptions `HttpRequestException`, `WebException`, and `TaskCancellationException`, allow the call to be retried.
## Action or notification happened more than once
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
description: This article explains the different types of Azure Monitor alerts a
Previously updated : 09/14/2022 Last updated : 01/22/2024
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
# Application Insights availability tests
-After you've deployed your web app or website, you can set up recurring tests to monitor availability and responsiveness. [Application Insights](./app-insights-overview.md) sends web requests to your application at regular intervals from points around the world. It can alert you if your application isn't responding or responds too slowly.
+After you deploy your web app or website, you can set up recurring tests to monitor availability and responsiveness. [Application Insights](./app-insights-overview.md) sends web requests to your application at regular intervals from points around the world. It can alert you if your application isn't responding or responds too slowly.
You can set up availability tests for any HTTP or HTTPS endpoint that's accessible from the public internet. You don't have to make any changes to the website you're testing. In fact, it doesn't even have to be a site that you own. You can test the availability of a REST API that your service depends on. ## Types of tests
+> [!IMPORTANT]
+> On September 30, 2026, URL ping tests in Application Insights will be retired. Existing URL ping tests will be removed from your resources. Review the [pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing) for standard tests and [transition](https://aka.ms/availabilitytestmigration) to using them before September 30, 2026 to ensure you can continue to run single-step availability tests in your Application Insights resources.
+ There are four types of availability tests: * [Standard test](availability-standard-tests.md): This single request test is similar to the URL ping test. It includes TLS/SSL certificate validity, proactive lifetime check, HTTP request verb (for example, `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
azure-monitor Availability Test Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md
The following steps walk you through the process of creating [standard tests](av
$dynamicParameters = @{}; if ($pingTestRequest.IgnoreHttpStatusCode -eq [bool]::FalseString) {
-
$dynamicParameters["RuleExpectedHttpStatusCode"] = [convert]::ToInt32($pingTestRequest.ExpectedHttpStatusCode, 10);
-
} if ($pingTestValidationRule -and $pingTestValidationRule.DisplayName -eq "Find Text" `
-
-and $pingTestValidationRule.RuleParameters.RuleParameter[0].Name -eq "FindText" ` -and $pingTestValidationRule.RuleParameters.RuleParameter[0].Value) { $dynamicParameters["ContentMatch"] = $pingTestValidationRule.RuleParameters.RuleParameter[0].Value;
The following steps walk you through the process of creating [standard tests](av
-RequestUrl $pingTestRequest.Url -RequestHttpVerb "GET" -GeoLocation $pingTest.PropertiesLocations -Frequency $pingTest.Frequency ` -Timeout $pingTest.Timeout -RetryEnabled:$pingTest.RetryEnabled -Enabled:$pingTest.Enabled ` -RequestParseDependent:($pingTestRequest.ParseDependentRequests -eq [bool]::TrueString);
-
``` 5. The new standard test doesn't have alert rules by default, so it doesn't create noisy alerts. No changes are made to your URL ping test so you can continue to rely on it for alerts.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based resources:
> [!IMPORTANT] > * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.
-> * When you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](export-telemetry.md#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
+>
+
+>
+> * You can enable [diagnostic settings on classic Application Insights](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md). All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
+>
> * Diagnostic settings export might increase costs. For more information, see [Diagnostic settings-based export](export-telemetry.md#diagnostic-settings-based-export).
+>
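As a minimal sketch of enabling that diagnostic setting before migration — assuming the Az.Monitor module's `New-AzDiagnosticSetting` and `New-AzDiagnosticSettingLogSettingsObject` cmdlets; the resource IDs, setting name, and log category below are illustrative placeholders, not a definitive list:

```powershell
# Hedged sketch: route classic Application Insights telemetry to a Log Analytics
# workspace via a diagnostic setting before migrating. Cmdlets assume the
# Az.Monitor module; the resource IDs and the category name are placeholders.
$appInsightsId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.insights/components/<app-insights-name>"
$workspaceId   = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
$log = New-AzDiagnosticSettingLogSettingsObject -Category "AppRequests" -Enabled $true  # category is illustrative
New-AzDiagnosticSetting -Name "pre-migration-export" -ResourceId $appInsightsId -WorkspaceId $workspaceId -Log $log
```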
## New capabilities
If you don't need to migrate an existing resource, and instead want to create a
> [!NOTE]
> The migration process shouldn't introduce any application downtime or restarts, nor change your existing instrumentation key or connection string.
+
## Prerequisites
- A Log Analytics workspace with the access control mode set to the **Use resource or workspace permissions** setting:
No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-
### What happens with continuous export after migration?
-Continuous export doesn't support workspace-based resources.
+To continue with automated exports, you need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to a workspace-based resource. The diagnostic setting carries over in the migration to workspace-based Application Insights.
+
+### How do I ensure a successful migration of my App Insights resource using Terraform?
+
+If you use Terraform to manage your Azure resources, use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Using an older version of the provider, such as version 3.12, can delete the classic component before the replacement workspace-based Application Insights resource is created. This deletion can cause the loss of previous data and require you to update the configurations in your monitored apps with new connection string and instrumentation key values.
-Switch to [diagnostic settings](../essentials/diagnostic-settings.md#diagnostic-settings-in-azure-monitor).
+To avoid this issue, use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher. It performs the proper migration steps by issuing the appropriate ARM call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values.
## Troubleshooting
If you can't change the access control mode for security reasons for your curren
**Error message:** "Continuous Export needs to be disabled before continuing. After migration, use Diagnostic Settings for export."
-The legacy **Continuous export** functionality isn't supported for workspace-based resources. Prior to migrating, you need to disable continuous export.
+The legacy **Continuous export** functionality isn't supported for workspace-based resources. Prior to migrating, you need to enable diagnostic settings and disable continuous export.
+1. [Enable diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) on your classic Application Insights resource.
1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**. :::image type="content" source="./media/convert-classic-resource/continuous-export.png" lightbox="./media/convert-classic-resource/continuous-export.png" alt-text="Screenshot that shows the Continuous export menu item.":::
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
When the conditions in the rules are met, one or more autoscale actions are trig
Autoscale scales in and out, or horizontally. Scaling horizontally is an increase or decrease of the number of resource instances. For example, for a virtual machine scale set, scaling out means adding more virtual machines. Scaling in means removing virtual machines. Horizontal scaling is flexible in a cloud situation because you can use it to run a large number of VMs to handle load.
-In contrast, scaling up and down, or vertical scaling, keeps the same number of resource instances constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the VM during the scaling process. Autoscale does not support vertical scaling.
+Autoscale doesn't support vertical scaling. In contrast to horizontal scaling, vertical scaling (scaling up and down) keeps the number of resource instances constant but gives them more capacity in terms of memory, CPU speed, disk space, and network. Vertical scaling is limited by the availability of larger hardware, which eventually reaches an upper limit. Hardware size availability varies in Azure by region. Vertical scaling might also require a restart of the VM during the scaling process.
:::image type="content" source="./media/autoscale-overview/vertical-scaling.png" lightbox="./media/autoscale-overview/vertical-scaling.png" alt-text="A diagram that shows scaling up by adding CPU and memory to a virtual machine.":::
To learn more about autoscale, see the following resources:
* [Autoscale CLI reference](/cli/azure/monitor/autoscale)
* [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings)
* [PowerShell Az.Monitor reference](/powershell/module/az.monitor/#monitor)
-* [REST API reference: Autoscale settings](/rest/api/monitor/autoscale-settings)
+* [REST API reference: Autoscale settings](/rest/api/monitor/autoscale-settings)
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Container Insights offers the ability to collect Syslog events from Linux nodes in your [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. This includes the ability to collect logs from control plane components like kubelet. Customers can also use Syslog for monitoring security and health events, typically by ingesting syslog into a SIEM system like [Microsoft Sentinel](https://azure.microsoft.com/products/microsoft-sentinel/#overview).
-> [!IMPORTANT]
-> Syslog collection is now GA. However due to slower rollouts towards the year end, the agent version with the GA changes will not be in all regions until the end of January 2024. Agent versions 3.1.16 and above have Syslog GA changes. Please check agent version before enabling in production.
- ## Prerequisites - You need to have managed identity authentication enabled on your cluster. To enable, see [migrate your AKS cluster to managed identity authentication](container-insights-enable-existing-clusters.md?tabs=azure-cli#migrate-to-managed-identity-authentication). Note: Enabling Managed Identity will create a new Data Collection Rule (DCR) named `MSCI-<WorkspaceRegion>-<ClusterName>`
azure-monitor Prometheus Metrics Scrape Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-scrape-configuration.md
See the [Apply config file](prometheus-metrics-scrape-validate.md#deploy-config-
## Next steps
+[Set up alerts on Prometheus metrics](./container-insights-metric-alerts.md)<br>
+[Query Prometheus metrics](../essentials/prometheus-grafana.md)<br>
[Learn more about collecting Prometheus metrics](../essentials/prometheus-metrics-overview.md)
azure-monitor Cost Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/cost-usage.md
Operation
| render columnchart
```
-(This functionality of reporting the benefits used in the `Operation` table came online in January 2024.)
+(Reporting of the benefits used in the `Operation` table began on January 27, 2024.)
+
+> [!TIP]
+> If you [increase the data retention](logs/data-retention-archive.md) of the [Operation](/azure/azure-monitor/reference/tables/operation) table, you can view these benefit trends over longer periods. A sketch of extending the table's retention follows this tip.
+>
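A minimal sketch of extending that retention with PowerShell, assuming the Az.OperationalInsights module's `Update-AzOperationalInsightsTable` cmdlet; the resource group, workspace name, and retention value are placeholders:

```powershell
# Hedged sketch: extend the interactive retention of the Operation table so the
# benefit trends stay queryable for longer. Cmdlet and parameters assume the
# Az.OperationalInsights module; names and the retention value are placeholders.
Update-AzOperationalInsightsTable `
    -ResourceGroupName "<resource-group>" `
    -WorkspaceName "<workspace-name>" `
    -TableName "Operation" `
    -RetentionInDays 90
```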
## Usage and estimated costs
You can get additional usage details about Log Analytics workspaces and Application Insights resources from the **Usage and Estimated Costs** option for each.
azure-monitor Migrate To Batch Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-batch-api.md
Heavy use of the [metrics API](/rest/api/monitor/metrics/list?tabs=HTTP) can res
## Request format
The metrics:getBatch API request has the following format:
```http
-POST /subscriptions/<subscriptionId>/metrics:getBatch?metricNamespace=<resource type namespace>&api-version=2023-03-01-preview
+POST /subscriptions/<subscriptionId>/metrics:getBatch?metricNamespace=<resource type namespace>&api-version=2023-10-01
Host: <region>.metrics.monitor.azure.com
Content-Type: application/json
Authorization: Bearer <token>
Authorization: Bearer <token>
For example,
```http
-POST /subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?metricNamespace=microsoft.compute/virtualMachines&api-version=2023-03-01-preview
+POST /subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?metricNamespace=microsoft.compute/virtualMachines&api-version=2023-10-01
Host: eastus.metrics.monitor.azure.com
Content-Type: application/json
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhb...TaXzf6tmC4jhog
GET https://management.azure.com/subscriptions/12345678-1234-1234-1234-123456789
to `/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch` 1. The `metricNamespace` query param is required for metrics:getBatch. For Azure standard metrics, the namespace name is usually the resource type of the resources you've specified. To check the namespace value to use, see the [metrics namespaces API](/rest/api/monitor/metric-namespaces/list?tabs=HTTP)
-1. Update the api-version query parameter as follows: `&api-version=2023-03-01-preview`
+1. Switch from using the `timespan` query param to using `starttime` and `endtime`. For example, `?timespan=2023-04-20T12:00:00.000Z/2023-04-22T12:00:00.000Z` becomes `?starttime=2023-04-20T12:00:00.000Z&endtime=2023-04-22T12:00:00.000Z`.
+1. Update the api-version query parameter as follows: `&api-version=2023-10-01`
1. The filter query param isn't prefixed with a `$` in the metrics:getBatch API. Change the query param from `$filter=` to `filter=`. 1. The metrics:getBatch API is a POST call with a body that contains a comma-separated list of resourceIds in the following format: For example:
GET https://management.azure.com/subscriptions/12345678-1234-1234-1234-123456789
The following example shows the converted batch request.
```http
- POST https://westus2.metrics.monitor.azure.com/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?timespan=2023-04-20T12:00:00.000Z/2023-04-22T12:00:00.000Z&interval=PT6H&metricNamespace=microsoft.storage%2Fstorageaccounts&metricnames=Ingress,Egress&aggregation=total,average,minimum,maximum&top=10&orderby=total desc&filter=ApiName eq '*'&api-version=2023-03-01-preview
+ POST https://westus2.metrics.monitor.azure.com/subscriptions/12345678-1234-1234-1234-123456789abc/metrics:getBatch?starttime=2023-04-20T12:00:00.000Z&endtime=2023-04-22T12:00:00.000Z&interval=PT6H&metricNamespace=microsoft.storage%2Fstorageaccounts&metricnames=Ingress,Egress&aggregation=total,average,minimum,maximum&top=10&orderby=total desc&filter=ApiName eq '*'&api-version=2023-10-01
{ "resourceids": [
A `resourceid` property has been added to each resources' metrics list in the me
```http
{
  "cost": 11516,
- "timespan": "2023-04-20T12:00:00Z/2023-04-22T12:00:00Z",
+ "startime": "2023-04-20T12:00:00Z",
+ "endtime": "2023-04-22T12:00:00Z",
"interval": "P1D", "value": [ {
A `resourceid` property has been added to each resources' metrics list in the me
"values": [ { "cost": 11516,
- "timespan": "2023-04-20T12:00:00Z/2023-04-22T12:00:00Z",
+ "starttime": "2023-04-20T12:00:00Z",
+ "endtime": "2023-04-22T12:00:00Z",
"interval": "P1D", "value": [ {
A `resourceid` property has been added to each resources' metrics list in the me
}, { "cost": 11516,
- "timespan": "2023-04-20T12:00:00Z/2023-04-22T12:00:00Z",
+ "starttime": "2023-04-20T12:00:00Z",
+ "endtime": "2023-04-22T12:00:00Z",
"interval": "P1D", "value": [ {
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) | | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallClientMediaStatsTimeSeries](/azure/azure-monitor/reference/tables/ACSCallClientMediaStatsTimeSeries)<br>[ACSCallClientOperations](/azure/azure-monitor/reference/tables/ACSCallClientOperations)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSCallSummary](/azure/azure-monitor/reference/tables/ACSCallSummary)<br>[ACSJobRouterIncomingOperations](/azure/azure-monitor/reference/tables/ACSJobRouterIncomingOperations)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations)<br>[ACSCallClosedCaptionsSummary](/azure/azure-monitor/reference/tables/acscallclosedcaptionssummary) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
+ Cosmos DB | [CDBDataPlaneRequests](/azure/azure-monitor/reference/tables/cdbdataplanerequests)<br>[CDBPartitionKeyStatistics](/azure/azure-monitor/reference/tables/cdbpartitionkeystatistics)<br>[CDBPartitionKeyRUConsumption](/azure/azure-monitor/reference/tables/cdbpartitionkeyruconsumption)<br>[CDBQueryRuntimeStatistics](/azure/azure-monitor/reference/tables/cdbqueryruntimestatistics)<br>[CDBMongoRequests](/azure/azure-monitor/reference/tables/cdbmongorequests)<br>[CDBCassandraRequests](/azure/azure-monitor/reference/tables/cdbcassandrarequests)<br>[CDBGremlinRequests](/azure/azure-monitor/reference/tables/cdbgremlinrequests)<br>[CDBControlPlaneRequests](/azure/azure-monitor/reference/tables/cdbcontrolplanerequests) |
| Cosmos DB for MongoDB (vCore) | [VCoreMongoRequests](/azure/azure-monitor/reference/tables/VCoreMongoRequests) |
| Kubernetes clusters - Azure Arc | [ArcK8sAudit](/azure/azure-monitor/reference/tables/ArcK8sAudit)<br>[ArcK8sAuditAdmin](/azure/azure-monitor/reference/tables/ArcK8sAuditAdmin)<br>[ArcK8sControlPlane](/azure/azure-monitor/reference/tables/ArcK8sControlPlane) |
| Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) |
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
For bug reports and feedback, [open an issue on GitHub](https://github.com/micro
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+### [1.4.6](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.6)
+A point release to address a regression when using .NET 8 applications.
+
+#### Bug fixes
+- Fixed an issue where exceptions thrown from dynamically generated methods (for example, compiled expression trees) in .NET 8 weren't tracked correctly.
+ ### [1.4.5](https://www.nuget.org/packages/Microsoft.ApplicationInsights.SnapshotCollector/1.4.5) A point release to address a user-reported bug.
azure-sql-edge High Availability Sql Edge Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/high-availability-sql-edge-containers.md
To create a container in Kubernetes, see [Deploy a Azure SQL Edge container in K
## Next steps
To deploy Azure SQL Edge containers in Azure Kubernetes Service (AKS), see the following articles:
-- [Deploy a Azure SQL Edge container in Kubernetes](deploy-Kubernetes.md)
+- [Deploy an Azure SQL Edge container in Kubernetes](deploy-Kubernetes.md)
- [Machine Learning and Artificial Intelligence with ONNX in SQL Edge](onnx-overview.md). - [Building an end to end IoT Solution with SQL Edge using IoT Edge](tutorial-deploy-azure-resources.md). - [Data Streaming in Azure SQL Edge](stream-data.md)
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
description: This article provides details about the known issues of Azure VMwar
Previously updated : 11/28/2023 Last updated : 1/29/2024 # Known issues: Azure VMware Solution
Refer to the table to find details about resolution dates or possible workaround
| When I build a VMware HCX Service Mesh with the Enterprise license, the Replication Assisted vMotion Migration option isn't available. | 2023 | The default VMware HCX Compute Profile doesn't have the Replication Assisted vMotion Migration option enabled. From the Azure VMware Solution vSphere Client, select the VMware HCX option and edit the default Compute Profile to enable Replication Assisted vMotion Migration. | 2023 |
| [VMSA-2023-023](https://www.vmware.com/security/advisories/VMSA-2023-0023.html) VMware vCenter Server Out-of-Bounds Write Vulnerability (CVE-2023-34048) publicized in October 2023 | October 2023 | Microsoft is currently working with its security teams and partners to evaluate the risk to Azure VMware Solution and its customers. Initial investigations show that controls in place within Azure VMware Solution reduce the risk of CVE-2023-34048. However, Microsoft is working on a plan to roll out security fixes soon to completely remediate the security vulnerability. | October 2023 |
| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) | Nov 2023 | Use AV36, AV36P, or AV52 SKUs when RAID-6 FTT2 or RAID-1 FTT3 storage policies are needed. | N/A |
+| VMware HCX version 4.8.0 Network Extension (NE) Appliance VMs running in High Availability (HA) mode may experience intermittent Standby to Active failover. For more information, see [HCX - NE appliances in HA mode experience intermittent failover (96352)](https://kb.vmware.com/s/article/96352) | Jan 2024 | Avoid upgrading to VMware HCX 4.8.0 if you are using NE appliances in a HA configuration. | N/A |
In this article, you learned about the current known issues with the Azure VMware Solution.
azure-vmware Deploy Vmware Cloud Director Availability In Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vmware-cloud-director-availability-in-azure-vmware-solution.md
+
+ Title: Deploy VMware Cloud Director Availability in Azure VMware Solution
+description: Learn how to install and configure VMware Cloud Director Availability in Azure VMware Solution
+++ Last updated : 1/22/2024
+
+
+# Deploy VMware Cloud Director Availability in Azure VMware Solution
+
+In this article, learn how to deploy VMware Cloud Director Availability in Azure VMware Solution.
+
+Customers can use [VMware Cloud Director Availability](https://docs.vmware.com/en/VMware-Cloud-Director-Availability/index.html), a Disaster Recovery as a Service (DRaaS) solution, to protect and migrate workloads both to and from the VMware Cloud Director service associated with Azure VMware Solution. The native integration of VMware Cloud Director Availability with VMware Cloud Director and VMware Cloud Director service (CDS) enables providers and their tenants to efficiently manage migration and disaster recovery for workloads through the VMware Cloud Director Availability provider and tenant portals.
+
+## VMware Cloud Director Availability scenarios on Azure VMware Solution
+
+You can use VMware Cloud Director Availability with Azure VMware Solution for the following two scenarios:
+
+- On-Premises to Azure VMware Solution
+
+ VMware Cloud Director Availability provides migration, protection, failover, and reverse failover of VMs, vApps, and templates across on-premises VMware vCenter, VMware Cloud Director, or VMware Cloud Director service (CDS) to VMware CDS on Azure VMware Solution.
+
+- Azure VMware Solution to Azure VMware Solution
+
+ VMware Cloud Director Availability provides a flexible solution for multitenant customers. It enables smooth workload migration between Cloud Director service (CDS) instances hosted on Azure VMware Solution SDDCs, which allows efficient cloud-to-cloud migration at the tenant level when using CDS with Azure VMware Solution.
+
+## Key components of VMware Cloud Director Availability
+
+VMware Cloud Director Availability consists of the following types of appliances.
+
+### Replication Management Appliance
+
+This appliance, also known as the manager, enables communication with VMware Cloud Director. This communication gives VMware Cloud Director Availability the capability to discover resources, such as Organization Virtual Datacenters (OrgVDCs), storage policies, datastores, and networks, that are managed by VMware Cloud Director and used by tenants.
+
+The manager plays a vital role in identifying vApps and virtual machines (VMs) eligible for replication or migration, and suitable destinations for incoming replications and migrations. It also provides user interface (UI) and API interfaces, which serve as a communication bridge for users interacting with VMware Cloud Director Availability.
+
+The manager's responsibility extends to communicating with local and remote replicators and collecting data about each protected or migrated workload.
+
+### Replication appliance instances
+
+The VMware Cloud Director Availability Cloud Replication appliance is responsible for transferring replication data to and from ESXi hosts in the cloud. For outgoing replications or migrations, it communicates with the VMkernel interface of an ESXi host, capturing, encrypting, and optionally compressing the replication data. The data is sent to a remote replicator, whether in the cloud or on-premises.
+
+For incoming replications or migrations, the cloud replicator receives data from a replicator (whether in the cloud or on-premises), decrypts and decompresses it, and then transfers it to ESXi to be written to a datastore. You can deploy more replicators to scale as the number of migrations or protections increases.
+
+### Tunnel appliance
+
+The Tunnel appliance is the single entry point to the VMware Cloud Director Availability instance in the cloud; its role is to manage incoming management and replication traffic. The Tunnel appliance handles both data and management traffic and forwards them to the cloud replicators and the manager, respectively.
+
+### On-premises Cloud Director replication appliance
+
+This appliance is deployed in tenant on-premises datacenters. It creates a pairing relationship with VMware Cloud Director Availability in the cloud and can protect or migrate VMs running locally to the cloud, and vice versa.
+
+VMware Cloud Director Availability installation in the Azure VMware Solution cloud site consists of one Replication Manager, one Tunnel Appliance, and two Replicator Appliances. You can deploy more replicators by using the Azure portal.
+
+The following diagram shows VMware Cloud Director Availability appliances installed in both on-premises and Azure VMware Solution.
++
+## Install and configure VMware Cloud Director Availability on Azure VMware Solution
+
+Verify the following prerequisites to ensure you're ready to install and configure VMware Cloud Director Availability using Run commands.
+
+### Prerequisites
+
+- Verify the Azure VMware Solution private cloud is configured.
+- Verify the VMware-Cloud-Director-Availability-Providerrelease.number.xxxxxxx-build_sha_OVF10.ova version 4.7 is uploaded under the correct datastore.
+- Verify the subnet, DNS zone and records for the VMware Cloud Director Availability appliances are configured.
+- Verify the subnet has outbound Internet connectivity to communicate with the VMware Cloud Director service, remote VMware Cloud Director Availability sites, and the upgrade repository.
+- Verify the DNS zone has a forwarding capability for the public IP addresses that need to be reached.
+
+To use VMware Cloud Director Availability outside of the local network segment, [turn on public IP addresses to an NSX-T Edge node for NSX-T Data Center](https://learn.microsoft.com/azure/azure-vmware/enable-public-ip-nsx-edge).
+
+- Verify the Cloud Director service is associated, and the Transport Proxy is configured with the Azure VMware Solution private cloud SDDC.
+
+## Install and manage VMware Cloud Director Availability using Run commands
+
+Customers can deploy VMware Cloud Director Availability by using Run commands in the Azure portal.
+
+> [!IMPORTANT]
+> Converting from manual installation of VMware Cloud Director Availability to Run command is not supported. Existing customers using VMware Cloud Director Availability can use Run commands and install VMware Cloud Director Availability to fully leverage the classic engine and Disaster Recovery capabilities.
+
+To access Run commands for VCDA:
+1. Navigate to your Azure VMware Solution private cloud.
+1. Under **Operations**, select **Run command**.
+1. Select the **VMware.VCDA.AVS** package.
+
+The Azure VMware Solution private cloud portal provides a range of Run commands for VCDA, as shown in the following screenshot. The commands let you perform various operations, including installation, configuration, uninstallation, scaling, and more.
+
+The Run command **Install-VCDAAVS** installs and configures the VMware Cloud Director Availability instance in Azure VMware Solution. The instance includes the VMware Cloud Director Replication Manager, the Tunnel, and two Replicators. You can add more replicators by using **Install-VCDAReplicator** to scale.
+
+> [!NOTE]
+> Run the **Initialize-AVSSite** command before you run the install command.
+
+You can also use Run commands to perform many other functions, such as starting and stopping VMware Cloud Director Availability VMs, uninstalling VMware Cloud Director Availability, and more.
+
+The following image shows the Run commands that are available under **VMware.VCDA.AVS** for VMware Cloud Director Availability on Azure VMware Solution.
++
+Refer to [VMware Cloud Director Availability in Azure VMware Solution](https://docs.vmware.com/en/VMware-Cloud-Director-Availability/4.7/VMware-Cloud-Director-Availability-in-AVS/GUID-2BF88B54-5775-4414-8213-D3B41BCDE3EB.html) for detailed instructions on using the Run commands to install and manage VMware Cloud Director Availability within your Azure VMware Solution private cloud.
+
+## FAQs
+
+### How do I install and configure VMware Cloud Director Availability in Azure VMware Solution and what are the prerequisites?
+
+Deploy VMware Cloud Director Availability using Run commands to enable classic engines and to access Disaster Recovery functionality. See prerequisites and procedures in [Run command in Azure VMware Solution](https://docs.vmware.com/en/VMware-Cloud-Director-Availability/4.7/VMware-Cloud-Director-Availability-in-AVS/GUID-6D0E6E0B-74BC-4669-9A26-5ACC46B2B296.html).
+
+### How is VMware Cloud Director Availability supported?
+
+VMware Cloud Director Availability is a VMware-owned and VMware-supported product on Azure VMware Solution. For any support queries on VMware Cloud Director Availability, contact VMware support for assistance. Both VMware and Microsoft support teams collaborate as necessary to address and resolve VMware Cloud Director Availability issues within Azure VMware Solution.
+
+### What are Run commands in Azure VMware Solution?
+
+For more information, go to [Run Command in Azure VMware Solution](https://learn.microsoft.com/azure/azure-vmware/concepts-run-command).
+
+### How can I add more Replicators in my existing VMware Cloud Director Availability instance in Azure VMware Solution?
+
+You can use Run Command **Install-VCDAReplicator** to install and configure new VMware Cloud Director Availability replicator virtual machines in Azure VMware Solution.
+
+### How can I upgrade VMware Cloud Director availability?
+
+VMware Cloud Director Availability can be upgraded by following the [appliances upgrade sequence and prerequisites](https://docs.vmware.com/en/VMware-Cloud-Director-Availability/4.7/VMware-Cloud-Director-Availability-Install-Config-Upgrade-Cloud/GUID-51B25D13-8224-43F1-AE54-65EDDA9E5FAD.html).
+
+## Next steps
+
+To learn more about VMware Cloud Director Availability Run commands in Azure VMware Solution, see [VMware Cloud Director Availability](https://docs.vmware.com/en/VMware-Cloud-Director-Availability/index.html).
business-continuity-center Business Continuity Center Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/business-continuity-center-support-matrix.md
Title: Azure Business Continuity center support matrix
+ Title: Azure Business Continuity Center support matrix
description: Provides a summary of support settings and limitations for the Azure Business Continuity center service. Previously updated : 11/15/2023 Last updated : 01/29/2024 - references_regions - ignite-2023
-# Support matrix for Azure Business Continuity center (preview)
+# Support matrix for Azure Business Continuity Center (preview)
This article describes supportable scenarios and limitations.
-You can use [Azure Business Continuity center](business-continuity-center-overview.md), a cloud-native unified business continuity and disaster recovery (BCDR) management platform in Azure to manage your protection estate across solutions and environments. This helps enterprises to govern, monitor, operate, and analyze backups and replication at scale. This article summarizes the solutions and scenarios that ABC center supports for each workload type.
+You can use [Azure Business Continuity Center](business-continuity-center-overview.md), a cloud-native unified business continuity and disaster recovery (BCDR) management platform in Azure, to manage your protection estate across solutions and environments. This platform helps enterprises govern, monitor, operate, and analyze backups and replication at scale. This article summarizes the solutions and scenarios that ABC Center supports for each workload type.
## Supported regions
-Azure Business Continuity center currently supports the following region: West Central US.
-
->[!Note]
->To manage Azure resources using Azure Business Continuity center in other regions, write to us at [ABCRegionExpansion@microsoft.com](mailto:ABCRegionExpansion@microsoft.com).
+Azure Business Continuity Center supports all Azure regions.
## Supported solutions and datasources
Action | Restore. | Only for Azure Backup supported datasources given in the abo
## Unsupported scenarios
-This table lists the solutions and scenarios that are unsupported in Azure Business Continuity center for each workload type:
+This table lists the solutions and scenarios that are unsupported in Azure Business Continuity Center for each workload type:
| Category | Scenario |
| --- | --- |
-| Monitor | Azure Site Recovery replication and failover health are not yet available in Azure Business Continuity center. You can continue to access these views via the individual vault pane. |
+| Monitor | Azure Site Recovery replication and failover health are not yet available in Azure Business Continuity Center. You can continue to access these views via the individual vault pane. |
| Monitor | Metrics view is not yet supported for Azure Backup protected items of Azure Disks and Azure Database for PostgreSQL, and for Azure Site Recovery protected items. |
| Govern | Protectable resources view currently only shows Azure resources. It doesn't show hosted items in Azure resources like SQL databases in Azure Virtual machines, SAP HANA databases in Azure Virtual machines, Blobs and files in Azure Storage accounts. |
| Actions | Undelete action is not available for Azure Backup protected items of Azure Virtual machine, SQL in Azure Virtual machine, SAP in Azure Virtual machine, and Files (Azure Storage account). |
This table lists the solutions and scenarios that are unsupported in Azure Busin
## Next steps
-- [About Azure Business Continuity center (preview)](business-continuity-center-overview.md).
+- [About Azure Business Continuity Center (preview)](business-continuity-center-overview.md).
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
In this quick start, you'll learn about how to enable user engagement tracking f
:::image type="content" source="./media/email-domains-custom-overview.png" alt-text="Screenshot that shows the overview page of the domain." lightbox="media/email-domains-custom-overview-expanded.png":::
-5. The navigation lands in Domain Overview page where you'll able to see User interaction tracking Off by default.
+5. Click turn on to enable engagement tracking.
:::image type="content" source="./media/email-domains-user-engagement.png" alt-text="Screenshot that shows the user engagement turn-on page of the domain." lightbox="media/email-domains-user-engagement-expanded.png":::
-6. Click turn on to enable engagement tracking.
- **Your email domain is now ready to send emails with user engagement tracking. Please be aware that user engagement tracking is applicable to HTML content and will not function if you submit the payload in plaintext.** You can now subscribe to Email User Engagement operational logs - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
If you want to set up Teams Phone Mobile and you didn't select it when you deplo
## Enable Operator Connect or Teams Phone Mobile support
> [!NOTE]
-> If you selected Operator Connect or Teams Phone Mobile when you [deployed Azure Communications Gateway](deploy.md), skip this step and go to [Add the Project Synergy application to your Azure tenancy](#add-the-project-synergy-application-to-your-azure-tenancy).
+> If you selected Operator Connect or Teams Phone Mobile when you [deployed Azure Communications Gateway](deploy.md), skip this step and go to [Add the Project Synergy application to your Azure tenant](#add-the-project-synergy-application-to-your-azure-tenant).
1. Sign in to the [Azure portal](https://azure.microsoft.com/).
1. In the search bar at the top of the page, search for your Communications Gateway resource and select it.
If you want to set up Teams Phone Mobile and you didn't select it when you deplo
> Do not add the numbers for integration testing. You will configure numbers for integration testing when you [carry out integration testing and prepare for live traffic](prepare-for-live-traffic-operator-connect.md). 1. Wait for your resource to be updated. When your resource is ready, the **Provisioning Status** field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field is "Complete." This step might take up to two weeks.
-## Add the Project Synergy application to your Azure tenancy
+## Add the Project Synergy application to your Azure tenant
Before starting this step, check that the **Provisioning Status** field for your resource is "Complete".
The user who sets up Azure Communications Gateway needs to have the Admin user r
1. Select your **Project Synergy** application.
1. Select **Users and groups** from the left-hand menu.
1. Select **Add user/group**.
-1. Specify the user you want to use for setting up Azure Communications Gateway and give them the **Admin** role.
+1. Specify the user who should set up Azure Communications Gateway and give them the **Admin** role.
+ ## Find the Object ID and Application ID for your Azure Communication Gateway resource
Each Azure Communications Gateway resource automatically receives a [system-assi
## Set up application roles for Azure Communications Gateway
-Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application in [Add the Project Synergy application to your Azure tenancy](#add-the-project-synergy-application-to-your-azure-tenancy).
+Azure Communications Gateway contains services that need to access the Operator Connect API on your behalf. To enable this access, you must grant specific application roles to the system-assigned managed identity for Azure Communications Gateway under the Project Synergy Enterprise Application. You created the Project Synergy Enterprise Application in [Add the Project Synergy application to your Azure tenant](#add-the-project-synergy-application-to-your-azure-tenant).
> [!IMPORTANT]
> Granting permissions has two parts: configuring the system-assigned managed identity for Azure Communications Gateway with the appropriate roles (this step) and adding the application ID of the managed identity to the Operator Connect or Teams Phone Mobile environment. You'll add the application ID to the Operator Connect or Teams Phone Mobile environment later, in [Add the Application IDs for Azure Communications Gateway to Operator Connect](#add-the-application-ids-for-azure-communications-gateway-to-operator-connect).
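A hedged sketch of granting such a role with Microsoft Graph PowerShell follows. The cmdlets assume the Microsoft.Graph PowerShell module, and the display-name filter and app role GUID are placeholders you must replace with values from your environment; the real role names and IDs come from the Project Synergy application's manifest or your onboarding team.

```powershell
# Hedged sketch: assign a Project Synergy app role to the Communications Gateway
# managed identity. Cmdlets assume the Microsoft.Graph module; the display name
# and role GUID are placeholders.
Connect-MgGraph -Scopes "AppRoleAssignment.ReadWrite.All", "Application.Read.All"
$projectSynergy  = Get-MgServicePrincipal -Filter "displayName eq 'Project Synergy'"
$managedIdentity = Get-MgServicePrincipal -Filter "displayName eq '<your-communications-gateway-name>'"
New-MgServicePrincipalAppRoleAssignment `
    -ServicePrincipalId $managedIdentity.Id `
    -PrincipalId $managedIdentity.Id `
    -ResourceId $projectSynergy.Id `
    -AppRoleId "<app-role-guid>"   # repeat for each required role
```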
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
Previously updated : 09/01/2023 Last updated : 01/31/2024
communications-gateway Manage Enterprise Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise-operator-connect.md
If you're uploading new numbers for an enterprise customer:
|Information for each number |Notes |
|---|---|
-|Calling profile |One of the Calling Profiles created by Microsoft for you.|
+|Calling profile |One of the `CommsGw` Calling Profiles we created for you.|
|Intended usage | Individuals (calling users), applications or conference calls.|
|Capabilities |Which types of call to allow (for example, inbound calls or outbound calls).|
|Civic address | A physical location for emergency calls. The enterprise must have configured this address in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
Numbers
+441632960004 ``` ## Go to your Communications Gateway resource
communications-gateway Prepare For Live Traffic Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-operator-connect.md
In this article, you learn about the steps that you and your onboarding team mus
|Configuration portal |Required permissions | |||
- |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#add-the-project-synergy-application-to-your-azure-tenancy))|
+ |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#add-the-project-synergy-application-to-your-azure-tenant))|
|[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management|
+
## Methods
In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions.
Integration testing requires setting up your test tenant for Operator Connect or
> [!IMPORTANT] > Do not assign the service verification numbers to test users. Your onboarding team arranges configuration of your service verification numbers.
-1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `commsgw`. This Calling Profile was created for you during the Azure Communications Gateway deployment process.
+1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `CommsGw`. We created this Calling Profile for you during the Azure Communications Gateway deployment process.
1. In your test tenant, request service from your company.
1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant.
1. Select **Voice** > **Operators**.
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
Choose a management region from the following list:
- West Europe - UK South - India Central-- Southeast Asia
+- Canada Central
- Australia East Management regions can be colocated with service regions. We recommend choosing the management region nearest to your service regions.
confidential-computing Choose Confidential Containers Offerings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/choose-confidential-containers-offerings.md
Title: Choose container offerings for confidential computing description: How to choose the right confidential container offerings to meet your security, isolation and developer needs. -++ Last updated 11/01/2021
container-registry Container Registry Tasks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-overview.md
Containers provide new levels of virtualization, isolating application and devel
**ACR Tasks** is a suite of features within Azure Container Registry. It provides cloud-based container image building for [platforms](#image-platforms) including Linux, Windows, and ARM, and can automate [OS and framework patching](#automate-os-and-framework-patching) for your Docker containers. ACR Tasks not only extends your "inner-loop" development cycle to the cloud with on-demand container image builds, but also enables automated builds triggered by source code updates, updates to a container's base image, or timers. For example, with base image update triggers, you can automate your OS and application framework patching workflow, maintaining secure environments while adhering to the principles of immutable containers.
-ACR is temporarily pausing ACR Tasks runs from Azure free credits. This may affect existing Tasks runs. If you encounter problems, open a [support case](../azure-portal/supportability/how-to-create-azure-support-request.md) for our team to provide additional guidance. We'll remove this note when this pause is lifted.
+> [!IMPORTANT]
+> ACR is temporarily pausing ACR Tasks runs from Azure free credits. This may affect existing Tasks runs. If you encounter problems, open a [support case](../azure-portal/supportability/how-to-create-azure-support-request.md) for our team to provide additional guidance. Please note that existing customers will not be affected by this pause. We will update our documentation notice here whenever the pause is lifted.
## Task scenarios
cosmos-db Howto Restore Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-restore-portal.md
Previously updated : 01/21/2024 Last updated : 01/28/2024 # Backup and point-in-time restore of a cluster in Azure Cosmos DB for PostgreSQL
a custom restore point within your retention period.
Enabling geo-redundant backup is possible during cluster creation on the **Scale** screen that can be accessed on the **Basics** tab. Click the **Save** button to apply your selection.
> [!NOTE]
-> Geo-redundant backup can be enabled only during cluster creation.
-> You can't disable geo-redundant backup once cluster is created.
+> Geo-redundant backup can be enabled only during cluster creation or cluster restore.
+> You can't disable geo-redundant backup once the cluster is created.
## Confirm type of backup
To check what type of backup is enabled on a cluster, follow these steps:
earliest existing backup.
1. If cluster has geo-redundant backup enabled, select remote or same region for restore in the **Location** field. On clusters with zone-redundant and locally redundant backup, location field isn't editable.
+1. Select the **Geo-redundant backup** checkbox if you want the backup *of the restored cluster* to be stored [in another Azure region](./resources-regions.md).
1. Select **Next**.
1. (Optional) Make data encryption selections for the restored cluster on the **Encryption** tab.
and time of your choosing.
1. If cluster has geo-redundant backup enabled, select remote or same region for restore in the **Location** field. On clusters with zone-redundant and locally redundant backup, location field isn't editable.
+1. Select the **Geo-redundant backup** checkbox if you want the backup *of the restored cluster* to be stored [in another Azure region](./resources-regions.md).
1. Select **Next**.
1. (Optional) Make data encryption selections for the restored cluster on the **Encryption** tab.
cosmos-db Reference Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-limits.md
Previously updated : 01/21/2024 Last updated : 01/28/2024 # Azure Cosmos DB for PostgreSQL limits and limitations
By default this database is called `citus`. Azure Cosmos DB for PostgreSQL suppo
### Geo-redundant backup and restore
* Geo-redundant backup can be enabled only during cluster creation.
* You can enable geo-redundant backup when you perform a [cluster restore](./howto-restore-portal.md).
- * You can enable geo-redundant backup when you [promote a cluster read-replica to an independent cluster](./howto-read-replicas-portal.md#promote-a-read-replica).
* Geo-redundant backup can't be disabled once the cluster is created.
* Geo-redundant backup can't be enabled on single node clusters with [burstable compute](./concepts-burstable-compute.md).
* [Customer managed key (CMK)](./concepts-customer-managed-keys.md) isn't supported for clusters with geo-redundant backup enabled.
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
To protect AWS resources in Defender for Cloud, set up an AWS connector using a
- Delete/update DB/cluster snapshot with prefix *defenderfordatabases*
- List all KMS keys
- Use all KMS keys only for RDS on source account
- - Full control on all KMS keys with tag prefix *DefenderForDatabases*
+ - Create & full control on all KMS keys with tag prefix *DefenderForDatabases*
- Create alias for KMS keys
+- KMS keys are created once for each region that contains RDS instances. The creation of a KMS key may incur a minimal additional cost, according to AWS KMS pricing.
### Discovering GCP storage buckets
defender-for-cloud Devops Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/devops-support.md
DevOps security currently supports the following DevOps platforms:
DevOps security requires the following permissions:
-| Feature | Permissions |
+| Feature | Permissions |
|-|-|
| Connect DevOps environments to Defender for Cloud | <ul><li>Azure: Subscription Contributor or Security Admin</li><li>Azure DevOps: Project Collection Administrator on target Organization</li><li>GitHub: Organization Owner</li><li>GitLab: Group Owner on target Group</li></ul> |
| Review security insights and findings | Security Reader |
DevOps security requires the following permissions:
The following tables summarize the availability and prerequisites for each feature within the supported DevOps platforms: > [!NOTE]
-> Starting March 7, 2024, [Defender CSPM](concept-cloud-security-posture-management.md) must be enabled to have premium DevOps security capabilities which include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings. See details below to learn more.
-
+> Starting March 7, 2024, [Defender CSPM](concept-cloud-security-posture-management.md) must be enabled on at least one subscription or multicloud connector in the tenant to benefit from premium DevOps security capabilities, which include code-to-cloud contextualization (powering the security explorer and attack paths) and pull request annotations for Infrastructure-as-Code security findings. See the details below to learn more.
### Azure DevOps | Feature | Foundational CSPM | Defender CSPM | Prerequisites |
The following tables summarize the availability and prerequisites for each featu
| [Pull request annotations](review-pull-request-annotations.md) | | ![Yes Icon](./medi) | | [Code to cloud mapping for Containers](container-image-mapping.md) | | ![Yes Icon](./medi#configure-the-microsoft-security-devops-azure-devops-extension-1) | | [Code to cloud mapping for Infrastructure as Code templates](iac-template-mapping.md) | | ![Yes Icon](./medi) |
-| [Attack path analysis](how-to-manage-attack-path.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on the Azure DevOps connector |
-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on the Azure DevOps connector |
+| [Attack path analysis](how-to-manage-attack-path.md) | | ![Yes Icon](./media/icons/yes-icon.png) |Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP Connector in the same tenant as the DevOps Connector |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) |Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector|
### GitHub
The following tables summarize the availability and prerequisites for each featu
| [Security recommendations to fix DevOps environment misconfigurations](concept-devops-posture-management-overview.md) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | N/A | | [Code to cloud mapping for Containers](container-image-mapping.md) | | ![Yes Icon](./medi) | | [Code to cloud mapping for Infrastructure as Code templates](iac-template-mapping.md) | | ![Yes Icon](./medi) |
-| [Attack path analysis](how-to-manage-attack-path.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on the GitHub connector |
-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on the GitHub connector |
+| [Attack path analysis](how-to-manage-attack-path.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector |
### GitLab
The following tables summarize the availability and prerequisites for each featu
| [Security recommendations to discover exposed secrets](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitLab Ultimate](https://about.gitlab.com/pricing/ultimate/) | | [Security recommendations to fix open source vulnerabilities](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitLab Ultimate](https://about.gitlab.com/pricing/ultimate/) | | [Security recommendations to fix infrastructure as code misconfigurations](defender-for-devops-introduction.md#manage-your-devops-environments-in-defender-for-cloud) | ![Yes Icon](./media/icons/yes-icon.png) | ![Yes Icon](./media/icons/yes-icon.png) | [GitLab Ultimate](https://about.gitlab.com/pricing/ultimate/) |
-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on the GitLab connector |
+| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | | ![Yes Icon](./media/icons/yes-icon.png) | Enable Defender CSPM on an Azure Subscription, AWS Connector, or GCP connector in the same tenant as the DevOps Connector |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| [Four new recommendations for Azure Stack HCI resource type](#four-new-recommendations-for-azure-stack-hci-resource-type) | January 11, 2024 | February 2024 |
| [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 |
| [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 |
+| [Enforcement of Defender CSPM for Premium DevOps Security Capabilities](#enforcement-of-defender-cspm-for-premium-devops-security-value) | January 29, 2024 | March 2024 |
| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
| [Upcoming change for the Defender for Cloud's multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 |
| [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 |
For more information about transitioning to our new container vulnerability asse
For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md).
+## Enforcement of Defender CSPM for Premium DevOps Security Value
+
+**Announcement date: January 29, 2024**
+
+**Estimated date for change: March 7, 2024**
+
+Defender for Cloud will begin enforcing the Defender CSPM plan check for premium DevOps security value on **March 7, 2024**. If you have the Defender CSPM plan enabled on a cloud environment (Azure, AWS, GCP) within the same tenant your DevOps connectors are created in, you'll continue to receive premium DevOps capabilities at no additional cost. If you aren't a Defender CSPM customer, you have until **March 7, 2024** to enable Defender CSPM before losing access to these security features. To enable Defender CSPM on a connected cloud environment before March 7, 2024, follow the [enablement documentation](tutorial-enable-cspm-plan.md#enable-the-components-of-the-defender-cspm-plan).
+
+For more information about which DevOps security features are available across both the Foundational CSPM and Defender CSPM plans, see [our documentation outlining feature availability](devops-support.md#feature-availability).
+
+For more information about DevOps Security in Defender for Cloud, see the [overview documentation](defender-for-devops-introduction.md).
+
+For more information on the code to cloud security capabilities in Defender CSPM, see [how to protect your resources with Defender CSPM](tutorial-enable-cspm-plan.md).
+ ## New version of Defender Agent for Defender for Containers **Announcement date: January 4, 2024**
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
To complete these steps, you need:
* To enable the TCP/IP protocol, which is disabled by default with SQL Server Express installation. Enable the TCP/IP protocol by following the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure). * To configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access). * An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
-* A SQL Managed Instance. You can create a SQL Managed Instance by following the detail in the article [Create a ASQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
+* A SQL Managed Instance. You can create a SQL Managed Instance by following the detail in the article [Create an Azure SQL Managed Instance](/azure/azure-sql/managed-instance/instance-create-quickstart).
* To download and install [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595) v3.3 or later. * A Microsoft Azure Virtual Network created using the Azure Resource Manager deployment model, which provides the Azure Database Migration Service with site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). * A completed assessment of your on-premises database and schema migration using Data Migration Assistant, as described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem).
$sourceConnInfo = New-AzDmsConnInfo -ServerType SQL `
-TrustServerCertificate:$true ```
-The next example shows creation of Connection Info for a Azure SQL Managed Instance named ΓÇÿtargetmanagedinstanceΓÇÖ:
+The next example shows creation of Connection Info for an Azure SQL Managed Instance named 'targetmanagedinstance':
```powershell $targetResourceId = (Get-AzSqlInstance -Name "targetmanagedinstance").Id
$backupFileShare = New-AzDmsFileShare -Path $backupFileSharePath -Credential $ba
The next step is to select the source and target databases by using the `New-AzDmsSelectedDB` cmdlet.
-The following example is for migrating a single database from SQL Server to a Azure SQL Managed Instance:
+The following example is for migrating a single database from SQL Server to an Azure SQL Managed Instance:
```powershell $selectedDbs = @()
$selectedDbs += New-AzDmsSelectedDB -MigrateSqlServerSqlDbMi `
-BackupFileShare $backupFileShare ` ```
-If an entire SQL Server instance needs a lift-and-shift into a Azure SQL Managed Instance, then a loop to take all databases from the source is provided below. In the following example, for $Server, $SourceUserName, and $SourcePassword, provide your source SQL Server details.
+If an entire SQL Server instance needs a lift-and-shift into an Azure SQL Managed Instance, then a loop to take all databases from the source is provided below. In the following example, for $Server, $SourceUserName, and $SourcePassword, provide your source SQL Server details.
```powershell $Query = "(select name as Database_Name from master.sys.databases where Database_id>4)";
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
The entitlement service enables three use cases for authorization:
#### Peculiarity of `users.data.root@` group - users.data.root entitlement group is the default member of all data groups when groups are created. If you try to remove users.data.root from any data group, you get an error because this membership is enforced by OSDU.-- users.data.root is the default and permanent owner of all the data records as explained in [OSDU validate owner access API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/DataAuthorizationService.java?ref_type=heads#L66) and [OSDU users data root check API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/EntitlementsAndCacheServiceImpl.java#L98)
+- users.data.root automatically becomes the default and permanent owner of all data records when the records are created in the system, as explained in [OSDU validate owner access API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/DataAuthorizationService.java?ref_type=heads#L66) and [OSDU users data root check API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/EntitlementsAndCacheServiceImpl.java#L98). As a result, irrespective of the user's OSDU group memberships, the system checks whether the user is a "DataManager", that is, a member of the data.root group, to grant access to the data record.
+- By default, the only member of users.data.root is the `app-id` that's used to set up the instance. You can add other users explicitly to this group to give them default access to data records, as shown in the following sketch.
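+
+For illustration, this hedged sketch adds a member to the group with the OSDU Entitlements API. The instance URL, data partition ID, and user email are placeholders, and the group email format assumes an Azure Data Manager for Energy deployment.
+
+```bash
+# Add a user to users.data.root in a given data partition (all values are placeholders)
+curl -X POST "https://<instance>.energy.azure.com/api/entitlements/v2/groups/users.data.root@<partition>.dataservices.energy/members" \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "data-partition-id: <partition>" \
+  -H "Content-Type: application/json" \
+  -d '{ "email": "user@contoso.com", "role": "MEMBER" }'
+```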
As an example in the scenario, - A data_record_1 has 2 ACLs: ACL_1 and ACL_2.
event-grid Mqtt Automotive Connectivity And Data Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-automotive-connectivity-and-data-solution.md
description: 'Describes the use case of automotive messaging'
- ignite-2023 Previously updated : 11/15/2023 Last updated : 01/29/2024
Other contributors:
* [Jeff Beman](https://www.linkedin.com/in/jeff-beman-4730726/) | Principal Program Manager, Mobility CVP * [Frederick Chong](https://www.linkedin.com/in/frederick-chong-5a00224) | Principal PM Manager, MCIGET SDV & Mobility * [Felipe Prezado](https://www.linkedin.com/in/filipe-prezado-9606bb14) | Principal Program Manager, MCIGET SDV & Mobility
-* [Ashita Rastogi](https://www.linkedin.com/in/ashitarastogi/) | Principal Program Manager, Azure Messaging
+* Ashita Rastogi | Lead Principal Program Manager, Azure Messaging
* [Henning Rauch](https://www.linkedin.com/in/henning-rauch-adx) | Principal Program Manager, Azure Data Explorer (Kusto) * [Rajagopal Ravipati](https://www.linkedin.com/in/rajagopal-ravipati-79020a4/) | Partner Software Engineering Manager, Azure Messaging * [Larry Sullivan](https://www.linkedin.com/in/larry-sullivan-1972654/) | Partner Group Software Engineering Manager, Energy & CVP
event-grid Push Delivery Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/push-delivery-overview.md
The following articles provide you with information on how to use Event Grid or
- [Learn about System Topics](system-topics.md) - [Learn about Partner Topics](partner-events-overview.md)-- [Learn bout Event Domains](event-domains.md)
+- [Learn about Event Domains](event-domains.md)
- [Learn about event handlers](event-handlers.md) - [Learn about event filtering](event-filtering.md) - [Publish and subscribe using custom topics](custom-event-quickstart-portal.md).
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md
Last updated 10/18/2021
# Integrate Apache Kafka Connect support on Azure Event Hubs with Debezium for Change Data Capture
-**Change Data Capture (CDC)** is a technique used to track row-level changes in database tables in response to create, update, and delete operations. [Debezium](https://debezium.io/) is a distributed platform that builds on top of Change Data Capture features available in different databases (for example, [logical decoding in PostgreSQL](https://www.postgresql.org/docs/current/static/logicaldecoding-explanation.html)). It provides a set of [Kafka Connect connectors](https://debezium.io/documentation/reference/1.2/connectors/index.html) that tap into row-level changes in database table(s) and convert them into event streams that are then sent to [Apache Kafka](https://kafka.apache.org/).
+**Change Data Capture (CDC)** is a technique used to track row-level changes in database tables in response to create, update, and delete operations. [Debezium](https://debezium.io/) is a distributed platform that builds on top of Change Data Capture features available in different databases (for example, [logical decoding in PostgreSQL](https://www.postgresql.org/docs/current/static/logicaldecoding-explanation.html)). It provides a set of [Kafka Connect connectors](https://debezium.io/documentation/reference/stable/connectors/index.html) that tap into row-level changes in database tables and convert them into event streams that are then sent to [Apache Kafka](https://kafka.apache.org/).
-This tutorial walks you through how to set up a change data capture based system on Azure using [Event Hubs](./event-hubs-about.md?WT.mc_id=devto-blog-abhishgu) (for Kafka), [Azure DB for PostgreSQL](../postgresql/overview.md) and Debezium. It will use the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html) to stream database modifications from PostgreSQL to Kafka topics in Event Hubs
+This tutorial walks you through how to set up a change data capture-based system on Azure using [Event Hubs](./event-hubs-about.md?WT.mc_id=devto-blog-abhishgu) (for Kafka), [Azure Database for PostgreSQL](../postgresql/overview.md), and Debezium. It uses the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/stable/connectors/postgresql.html) to stream database modifications from PostgreSQL to Kafka topics in Event Hubs.
> [!NOTE] > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
In this tutorial, you take the following steps:
> * Test change data capture > * (Optional) Consume change data events with a `FileStreamSink` connector
-## Pre-requisites
-To complete this walk through, you'll require:
+## Prerequisites
+To complete this walkthrough, you need:
- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). - Linux/MacOS
To complete this walk through, you'll require:
An Event Hubs namespace is required to send and receive from any Event Hubs service. See [Creating an event hub](event-hubs-create.md) for instructions to create a namespace and an event hub. Get the Event Hubs connection string and fully qualified domain name (FQDN) for later use. For instructions, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). ## Set up and configure Azure Database for PostgreSQL
-[Azure Database for PostgreSQL](../postgresql/overview.md) is a relational database service based on the community version of open-source PostgreSQL database engine, and is available in three deployment options: Single Server, Flexible Server and Cosmos DB for PostgreSQL. [Follow these instructions](../postgresql/quickstart-create-server-database-portal.md) to create an Azure Database for PostgreSQL server using the Azure portal.
+[Azure Database for PostgreSQL](../postgresql/overview.md) is a relational database service based on the community version of open-source PostgreSQL database engine, and is available in three deployment options: Single Server, Flexible Server, and Cosmos DB for PostgreSQL. [Follow these instructions](../postgresql/quickstart-create-server-database-portal.md) to create an Azure Database for PostgreSQL server using the Azure portal.
## Set up and run Kafka Connect
-This section will cover the following topics:
+This section covers the following topics:
- Debezium connector installation - Configuring Kafka Connect for Event Hubs - Start Kafka Connect cluster with Debezium connector ### Download and set up the Debezium connector
-Follow the latest instructions in the [Debezium documentation](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-deploying-a-connector) to download and set up the connector.
+Follow the latest instructions in the [Debezium documentation](https://debezium.io/documentation/reference/stable/connectors/postgresql.html#postgresql-deployment) to download and set up the connector.
- Download the connector's plug-in archive. For example, to download version `1.2.0` of the connector, use this link - https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.2.0.Final/debezium-connector-postgres-1.2.0.Final-plugin.tar.gz - Extract the JAR files and copy them to the [Kafka Connect plugin.path](https://kafka.apache.org/documentation/#connectconfigs), as shown in the following sketch.
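
A minimal sketch of those two steps, assuming `/kafka/connect` is a directory on your `plugin.path`:

```bash
# Download the Debezium PostgreSQL connector plug-in archive (version 1.2.0 as an example)
wget https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.2.0.Final/debezium-connector-postgres-1.2.0.Final-plugin.tar.gz

# Extract the JARs into a directory listed in the Kafka Connect plugin.path
mkdir -p /kafka/connect
tar -xzf debezium-connector-postgres-1.2.0.Final-plugin.tar.gz -C /kafka/connect
```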
### Configure Kafka Connect for Event Hubs
-Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs:
+Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs:
> [!IMPORTANT] > - Debezium auto-creates a topic per table and several metadata topics. A Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#apache-kafka-and-azure-event-hubs-conceptual-mapping).
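
The following is a hedged sketch of the key Event Hubs settings in that file; the namespace and connection string are placeholders, and a complete sample also configures converters and the internal Connect topics.

```bash
cat > connect-distributed.properties <<'EOF'
bootstrap.servers=<NAMESPACE>.servicebus.windows.net:9093
group.id=connect-cluster-group

# Authenticate against the Kafka endpoint on Event Hubs
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<EVENT_HUBS_CONNECTION_STRING>";

# Directory that contains the Debezium connector JARs
plugin.path=/kafka/connect
EOF
```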
Create a configuration file (`pg-source-connector.json`) for the PostgreSQL sour
``` > [!TIP]
-> `database.server.name` attribute is a logical name that identifies and provides a namespace for the particular PostgreSQL database server/cluster being monitored.. For detailed info, check [Debezium documentation](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html#postgresql-property-database-server-name)
+> The `database.server.name` attribute is a logical name that identifies and provides a namespace for the particular PostgreSQL database server/cluster being monitored.
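+
+A hedged sketch of such a connector configuration file follows; the connection values are placeholders, and the `plugin.name` value is an assumption for Azure Database for PostgreSQL.
+
+```bash
+cat > pg-source-connector.json <<'EOF'
+{
+    "name": "todo-connector",
+    "config": {
+        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
+        "database.hostname": "<POSTGRES_INSTANCE_NAME>.postgres.database.azure.com",
+        "database.port": "5432",
+        "database.user": "<POSTGRES_USER_NAME>",
+        "database.password": "<POSTGRES_PASSWORD>",
+        "database.dbname": "<POSTGRES_DB_NAME>",
+        "database.server.name": "my-server",
+        "plugin.name": "wal2json",
+        "table.whitelist": "public.todos"
+    }
+}
+EOF
+```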
To create an instance of the connector, use the Kafka Connect REST API endpoint:
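The create call is a standard Kafka Connect REST request; this sketch assumes Connect is listening on the default port `8083` used elsewhere in this walkthrough.

```bash
# Register the connector with the Kafka Connect REST API
curl -X POST -H "Content-Type: application/json" \
  --data @pg-source-connector.json \
  http://localhost:8083/connectors
```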
curl -s http://localhost:8083/connectors/todo-connector/status
``` ## Test change data capture
-To see change data capture in action, you'll need to create/update/delete records in the Azure PostgreSQL database.
+To see change data capture in action, you need to create/update/delete records in the Azure PostgreSQL database.
-Start by connecting to your Azure PostgreSQL database (the example below uses [psql](https://www.postgresql.org/docs/12/app-psql.html))
+Start by connecting to your Azure PostgreSQL database (the following example uses [psql](https://www.postgresql.org/docs/12/app-psql.html)).
```bash psql -h <POSTGRES_INSTANCE_NAME>.postgres.database.azure.com -p 5432 -U <POSTGRES_USER_NAME> -W -d <POSTGRES_DB_NAME> --set=sslmode=require
INSERT INTO todos (description, todo_status) VALUES ('configure and install conn
INSERT INTO todos (description, todo_status) VALUES ('start connector', 'pending'); ```
-The connector should now spring into action and send change data events to an Event Hubs topic with the following name `my-server.public.todos`, assuming you have `my-server` as the value for `database.server.name` and `public.todos` is the table whose changes you're tracking (as per `table.whitelist` configuration)
+The connector should now spring into action and send change data events to an Event Hubs topic named `my-server.public.todos`, assuming you have `my-server` as the value for `database.server.name` and `public.todos` is the table whose changes you're tracking (as per the `table.whitelist` configuration).
**Check Event Hubs topic**
-Let's introspect the contents of the topic to make sure everything is working as expected. The below example uses [`kafkacat`](https://github.com/Azure/azure-event-hubs-for-kafk)
+Let's introspect the contents of the topic to make sure everything is working as expected. The following example uses [`kafkacat`](https://github.com/Azure/azure-event-hubs-for-kafka).
Create a file named `kafkacat.conf` with the following contents:
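A hedged sketch that writes typical contents follows; the Event Hubs namespace and connection string are placeholders, and `$ConnectionString` is the literal SASL user name that Event Hubs expects.

```bash
cat > kafkacat.conf <<'EOF'
metadata.broker.list=<NAMESPACE>.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanisms=PLAIN
sasl.username=$ConnectionString
sasl.password=<EVENT_HUBS_CONNECTION_STRING>
EOF

# Point kafkacat at the config file and read the change events from the topic
export KAFKACAT_CONFIG=kafkacat.conf
kafkacat -b <NAMESPACE>.servicebus.windows.net:9093 -t my-server.public.todos -o beginning
```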
UPDATE todos SET todo_status = 'complete' WHERE id = 3;
``` ## (Optional) Install FileStreamSink connector
-Now that all the `todos` table changes are being captured in Event Hubs topic, you'll use the FileStreamSink connector (that is available by default in Kafka Connect) to consume these events.
+Now that all the `todos` table changes are being captured in the Event Hubs topic, you use the `FileStreamSink` connector (available by default in Kafka Connect) to consume these events.
-Create a configuration file (`file-sink-connector.json`) for the connector - replace the `file` attribute as per your file system
+Create a configuration file (`file-sink-connector.json`) for the connector, replacing the `file` attribute with a path that's valid for your file system.
```json {
tail -f /Users/foo/todos-cdc.txt
## Cleanup
-Kafka Connect creates Event Hub topics to store configurations, offsets, and status that persist even after the Connect cluster has been taken down. Unless this persistence is desired, it's recommended that these topics are deleted. You may also want to delete the `my-server.public.todos` Event Hub that were created during this walk through.
+Kafka Connect creates Event Hubs topics to store configurations, offsets, and status that persist even after the Kafka Connect cluster has been taken down. Unless this persistence is desired, we recommend that you delete these topics. You might also want to delete the `my-server.public.todos` event hub that was created during this walkthrough.
## Next steps
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
Title: 'Link a virtual network to an ExpressRoute circuit - Azure portal'
-description: This article shows you how to create a connection to link a virtual network to an Azure ExpressRoute circuit using the Azure portal.
+ Title: 'Link a virtual network to ExpressRoute circuits - Azure portal'
+description: This article shows you how to create a connection to link a virtual network to Azure ExpressRoute circuits using the Azure portal.
Last updated 08/31/2023
+zone_pivot_groups: expressroute-experience
-# Connect a virtual network to an ExpressRoute circuit using the Azure portal
+# Connect a virtual network to ExpressRoute circuits using the Azure portal
> [!div class="op_single_selector"] > * [Azure portal](expressroute-howto-linkvnet-portal-resource-manager.md)
> * [PowerShell (classic)](expressroute-howto-linkvnet-classic.md) >
-This article helps you create a connection to link a virtual network (virtual network) to an Azure ExpressRoute circuit using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or part of another subscription.
+This article helps you create a connection to link a virtual network to Azure ExpressRoute circuits using the Azure portal. The virtual networks that you connect to your Azure ExpressRoute circuit can either be in the same subscription or part of another subscription.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/gateway-circuit.png" alt-text="Diagram showing a virtual network linked to an ExpressRoute circuit.":::
This article helps you create a connection to link a virtual network (virtual ne
### To create a connection
-1. Ensure that your ExpressRoute circuit and Azure private peering have been configured successfully. Follow the instructions in [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-arm.md). Your ExpressRoute circuit should look like the following image:
+
+1. Sign in to the Azure portal with this [preview link](https://aka.ms/expressrouteguidedportal). The link is required to access the new preview experience for creating connections to an ExpressRoute circuit.
++
+2. Ensure that your ExpressRoute circuit and Azure private peering have been configured successfully. Follow the instructions in [Create an ExpressRoute circuit](expressroute-howto-circuit-arm.md) and [Create and modify peering for an ExpressRoute circuit](expressroute-howto-routing-arm.md). Your ExpressRoute circuit should look like the following image:
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/express-route-circuit.png" alt-text="ExpressRoute circuit screenshot":::
-1. You can now start provisioning a connection to link your virtual network gateway to your ExpressRoute circuit. Select **Connection** > **Add** to open the **Add connection** page.
+3. You can now start provisioning a connection to link your virtual network gateway to your ExpressRoute circuit. Select **Connection** > **Add** to open the **Create connection** page.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/add-connection.png" alt-text="Add connection screenshot":::
-1. Enter a name for the connection and then select **Next: Settings >**.
+
+4. Select the **Connection type** as **ExpressRoute** and then select **Next: Settings >**.
+
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-basic-new.png" alt-text="Screenshot of create a connection basic page.":::
+
+5. Select the resiliency type for your connection. You can choose **Maximum resiliency** or **Standard resiliency**.
+
+ **Maximum resiliency** - This option provides the highest level of resiliency to your virtual network. It provides two redundant connections from the virtual network gateway to two different ExpressRoute circuits in different ExpressRoute locations.
+
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/maximum-resiliency.png" alt-text="Diagram of a virtual network gateway connected to two different ExpressRoute circuits.":::
+
+ **Standard resiliency** - This option provides a single redundant connection from the virtual network gateway to a single ExpressRoute circuit.
+
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/standard-resiliency.png" alt-text="Diagram of a virtual network gateway connected to a single ExpressRoute circuit.":::
+
+6. Enter the following information for the respective resiliency type and then select **Review + create**. Then select **Create** after validation completes.
+
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-configuration.png" alt-text="Screenshot of the settings page for maximum resiliency ExpressRoute connections to a virtual network gateway.":::
+
+ **Maximum resiliency**
+
+ | Setting | Value |
+ | | |
+ | Virtual network gateway | Select the virtual network gateway that you want to connect to the ExpressRoute circuit. |
+ | Use existing connection or create new | You can augment resiliency for an ExpressRoute connection you already created by selecting **Use existing**. Then select an existing ExpressRoute connection for the first connection. If you select **Use existing**, you only need to configure the second connection. If you select **Create new**, enter the following information for both connections. |
+ | Name | Enter a name for the connection. |
+ | ExpressRoute circuit | Select the ExpressRoute circuit that you want to connect to. |
+ | Routing weight | Enter a routing weight for the connection. The routing weight is used to determine the primary and secondary connections. The connection with the higher routing weight is the preferred circuit. |
+ | FastPath | Select the checkbox to enable FastPath. For more information, see [About ExpressRoute FastPath](about-fastpath.md). |
+
+ Complete the same information for the second ExpressRoute connection. When selecting an ExpressRoute circuit for the second connection, you are provided with the distance from the first ExpressRoute circuit. This information appears in the diagram and can help you select the second ExpressRoute location.
+
+ > [!NOTE]
+ > To have maximum resiliency, you should select two circuits in different peering locations. You'll see the following warning if you select two circuits in the same peering location.
+ >
+ > :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/same-location-warning.png" alt-text="Screenshot of warning in the Azure portal when selecting two ExpressRoute circuits in the same peering location.":::
+
+ **Standard resiliency**
+
+ For standard resiliency, you only need to enter information for one connection.
+
+7. After your connection has been successfully configured, your connection object will show the information for the connection.
+
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-object.png" alt-text="Screenshot of a created connection resource.":::
+++
+4. Enter a name for the connection and then select **Next: Settings >**.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-basic.png" alt-text="Create connection basic page":::
-1. Select the gateway that belongs to the virtual network that you want to link to the circuit and select **Review + create**. Then select **Create** after validation completes.
+5. Select the gateway that belongs to the virtual network that you want to link to the circuit and select **Review + create**. Then select **Create** after validation completes.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/create-connection-settings.png" alt-text="Create connection settings page":::
-1. After your connection has been successfully configured, your connection object will show the information for the connection.
+6. After your connection has been successfully configured, your connection object will show the information for the connection.
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/connection-object.png" alt-text="Connection object screenshot":::
You can share an ExpressRoute circuit across multiple subscriptions. The followi
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/cross-subscription.png" alt-text="Cross-subscription connectivity":::
-> [!NOTE]
-> Connecting virtual networks between Azure sovereign clouds and Public Azure cloud is not supported. You can only link virtual networks from different subscriptions in the same cloud.
- Each of the smaller clouds within the large cloud is used to represent subscriptions that belong to different departments within an organization. Each of the departments within the organization uses their own subscription for deploying their services--but they can share a single ExpressRoute circuit to connect back to your on-premises network. A single department (in this example: IT) can own the ExpressRoute circuit. Other subscriptions within the organization may use the ExpressRoute circuit.
- > [!NOTE]
- > Connectivity and bandwidth charges for the dedicated circuit will be applied to the ExpressRoute circuit owner. All virtual networks share the same bandwidth.
- >
+> [!NOTE]
+> * Connecting virtual networks between Azure sovereign clouds and Public Azure cloud is not supported. You can only link virtual networks from different subscriptions in the same cloud.
+> * Connectivity and bandwidth charges for the dedicated circuit will be applied to the ExpressRoute circuit owner. All virtual networks share the same bandwidth.
### Administration - About circuit owners and circuit users
You can delete a connection by selecting the **Delete** icon for the authorizati
:::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/delete-authorization-key.png" alt-text="Delete authorization key"::: If you want to delete the connection but retain the authorization key, you can delete the connection from the connection page of the circuit.+ > [!NOTE]
- > Connections redeemed in different subscriptions will not display in the circuit connection page. Navigate to the subscription where the authorization was redeemed and delete the top-level connection resource.
- >
+> Connections redeemed in different subscriptions will not display in the circuit connection page. Navigate to the subscription where the authorization was redeemed and delete the top-level connection resource.
+ :::image type="content" source="./media/expressroute-howto-linkvnet-portal-resource-manager/delete-connection-owning-circuit.png" alt-text="Delete connection owning circuit":::
The circuit user needs the resource ID and an authorization key from the circuit
You can enable [ExpressRoute FastPath](expressroute-about-virtual-network-gateways.md) if your virtual network gateway is Ultra Performance or ErGw3AZ. FastPath improves data path performance such as packets per second and connections per second between your on-premises network and your virtual network. + **Configure FastPath on a new connection** When adding a new connection for your ExpressRoute gateway, select the checkbox for **FastPath**.
firewall Premium Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-features.md
To learn more about TLS inspection, see [Building a POC for TLS inspection in Az
A network intrusion detection and prevention system (IDPS) allows you to monitor your network for malicious activity, log information about this activity, report it, and optionally attempt to block it.
-Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network-level traffic (Layers 3-7). They're fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** preview feature. For more information, see [IDPS Private IP ranges](#idps-private-ip-ranges).
+Azure Firewall Premium provides signature-based IDPS to allow rapid detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. The IDPS signatures are applicable for both application and network-level traffic (Layers 3-7). They're fully managed and continuously updated. IDPS can be applied to inbound, spoke-to-spoke (East-West), and outbound traffic. Spoke-to-spoke (East-West) includes traffic that goes from/to an on-premises network. You can configure your IDPS private IP address ranges using the **Private IP ranges** feature. For more information, see [IDPS Private IP ranges](#idps-private-ip-ranges).
The Azure Firewall signatures/rulesets include: - An emphasis on fingerprinting actual malware, Command and Control, exploit kits, and in the wild malicious activity missed by traditional prevention methods.
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 01/20/2024 Last updated : 01/29/2024
Resource Graph also supports Azure CLI, Azure PowerShell, and REST API. The quer
You can create alert rules by using either Azure Resources Graph queries or integrating Log Analytics with Azure Resources Graph queries through Azure Monitor. Both methods can be used to create alerts for Azure resources. For examples, go to [Quickstart: Create alerts with Azure Resource Graph and Log Analytics](./alerts-query-quickstart.md).
+## Run queries with Power BI connector
+
+> [!NOTE]
+> The Azure Resource Graph Power BI connector is in public preview.
+
+The Azure Resource Graph Power BI connector runs queries at the tenant level, but you can change the scope to subscription or management group. The Power BI connector has an optional setting to return all records if your query results have more than 1,000 records. For more information, go to [Quickstart: Run queries with the Azure Resource Graph Power BI connector](./power-bi-connector-quickstart.md).
+ ## Next steps - Learn more about the [query language](./concepts/query-language.md).
governance Power Bi Connector Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/power-bi-connector-quickstart.md
+
+ Title: Run queries with the Azure Resource Graph Power BI connector
+description: In this quickstart, you learn how to run queries with the Azure Resource Graph Power BI connector.
Last updated : 01/29/2024+++
+# Quickstart: Run queries with the Azure Resource Graph Power BI connector
+
+In this quickstart, you learn how to run queries with the Azure Resource Graph Power BI connector. By default, the Power BI connector runs queries at the tenant level, but you can change the scope to subscription or management group. Resource Graph returns a maximum of 1,000 records by default, but the Power BI connector has an optional setting to return all records if your query results have more than 1,000 records.
+
+> [!NOTE]
+> The Azure Resource Graph Power BI connector is in public preview.
+
+> [!TIP]
+> If you participated in the private preview, delete your _AzureResourceGraph.mez_ preview file. If the file isn't deleted, your custom connector might be used by Power Query instead of the certified connector.
+
+## Prerequisites
+
+- If you don't have an Azure account with an active subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Power BI Desktop](https://powerbi.microsoft.com/desktop/).
+- Azure role-based access control rights with at least _Reader_ role assignment to resources. Learn more about [how to assign roles](../../role-based-access-control/role-assignments-portal.md).
+
+## Connect Resource Graph with Power BI connector
+
+After Power BI Desktop is installed, you can connect Resource Graph with the Power BI connector so that you can run a query. If you don't have a query to run, you can use the following sample that queries for storage accounts.
+
+```kusto
+resources
+| where type == 'microsoft.storage/storageaccounts'
+```
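+
+Optionally, you can sanity-check a query outside Power BI with the Azure CLI. This is a hedged sketch that assumes the `resource-graph` extension and an authenticated session.
+
+```azurecli
+# One-time: install the Resource Graph extension
+az extension add --name resource-graph
+
+# Run the same query and return the first 10 results
+az graph query -q "resources | where type == 'microsoft.storage/storageaccounts'" --first 10
+```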
+
+The following example runs a query with the default settings.
+
+1. Open the Power BI Desktop app on your computer and close any dialog boxes that are displayed.
+1. Go to **Home** > **Get data** > **More** > **Azure** > **Azure Resource Graph** and select **Connect**.
+
+ :::image type="content" source="./media/power-bi-connector-quickstart/power-bi-get-data.png" alt-text="Screenshot of the get data dialog box in Power BI Desktop to select the Azure Resource Graph connector.":::
+
+1. On the **Azure Resource Graph** dialog box, enter your query into the **Query** box.
+
+ :::image type="content" source="./media/power-bi-connector-quickstart/query-dialog-box.png" alt-text="Screenshot of the Azure Resource Graph dialog box to enter a query and use the default settings.":::
+
+1. Select **OK** and, if prompted, enter your credentials.
+1. Select **Connect** to run the query. The results are displayed in Power BI Desktop. 1. Select **Load** or **Transform Data**.
+1. Select **Load** or **Transform Data**.
+
+ - **Load** imports the query results into Power BI Desktop.
+ - **Transform Data** opens the Power Query Editor with your query results.
+
+## Use optional settings
+
+You can select optional values to change the Azure subscription or management group that the query runs against or to get query results of more than 1,000 records.
+
+| Option | Description |
+| - | - |
+| Scope | You can select subscription or management group. Tenant is the default scope when no selection is made. |
+| Subscription ID | Required if you select subscription scope. Specify the Azure subscription ID. Use a comma-separated list to query multiple subscriptions. |
+| Management group ID | Required if you select management group scope. Specify the Azure management group ID. Use a comma-separated list to query multiple management groups. |
+| Advanced options | To get more than 1,000 records, change `$resultTruncated` to `FALSE`. By default, Resource Graph returns a maximum of 1,000 records. |
+
+For example, to run a query for a subscription that returns more than 1,000 records:
+
+- Set the scope to subscription.
+- Enter a subscription ID.
+- Set `$resultTruncated` to `FALSE`.
++
+## Clean up resources
+
+When you're finished, close any Power BI Desktop or Power Query windows and save or discard your queries.
+
+## Related content
+
+For more information about the query language or how to explore resources, go to the following articles.
+
+- [Understanding the Azure Resource Graph query language](./concepts/query-language.md).
+- [Explore your Azure resources with Resource Graph](./concepts/explore-resources.md).
+- Sample queries listed by [table](./samples/samples-by-table.md) or [category](./samples/samples-by-category.md).
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-powershell.md
ms.tool: azure-powershell Previously updated : 09/19/2023 Last updated : 01/29/2024 # Create Linux-based clusters in HDInsight using Azure PowerShell
To create an HDInsight cluster by using Azure PowerShell, you must complete the
The following script demonstrates how to create a new cluster:
-[!code-powershell[main](../../azure_powershell_scripts/hdinsight/create-cluster/create-cluster.ps1?range=5-82)]
+[!code-powershell[main](../../azure_powershell_scripts/hdinsight/create-cluster/create-cluster.ps1?range=5-74)]
The values you specify for the cluster login are used to create the Hadoop user account for the cluster. Use this account to connect to services hosted on the cluster such as web UIs or REST APIs.
healthcare-apis Dicom Services Conformance Statement V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement-v2.md
The following `Accept` header(s) are supported for retrieving instances within a
* `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default) * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `*` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve an Instance
The following `Accept` header(s) are supported for retrieving a specific instanc
* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `*` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve Frames
The following `Accept` headers are supported for retrieving frames:
* `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.4.90` is used as default) * `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90` * `application/octet-stream; transfer-syntax=*` for single frame retrieval-- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/octet-stream`)
+- `*/*` (when transfer-syntax isn't specified, `*` is used as default and mediaType defaults to `application/octet-stream`)
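+
+For example, a single-frame retrieval that relies on the wildcard `Accept` header looks like the following hedged sketch; the service URL and UIDs are placeholders.
+
+```bash
+curl -H "Authorization: Bearer $TOKEN" \
+  -H "Accept: */*" \
+  "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com/v2/studies/<study-uid>/series/<series-uid>/instances/<instance-uid>/frames/1" \
+  --output frame-1.bin
+```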
#### Retrieve transfer syntax
healthcare-apis Dicom Services Conformance Statement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-conformance-statement.md
The following `Accept` header(s) are supported for retrieving instances within a
* `multipart/related; type="application/dicom";` (when transfer-syntax isn't specified, 1.2.840.10008.1.2.1 is used as default) * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `*` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve an Instance
The following `Accept` header(s) are supported for retrieving a specific instanc
* `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.1` * `application/dicom; transfer-syntax=1.2.840.10008.1.2.4.90` * `multipart/related; type="application/dicom"; transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/dicom`)
+- `*/*` (when transfer-syntax isn't specified, `*` is used as default and mediaType defaults to `application/dicom`)
#### Retrieve Frames
The following `Accept` headers are supported for retrieving frames:
* `multipart/related; type="application/octet-stream"; transfer-syntax=1.2.840.10008.1.2.1` * `multipart/related; type="image/jp2";` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.4.90` is used as default) * `multipart/related; type="image/jp2";transfer-syntax=1.2.840.10008.1.2.4.90`-- `*/*` (when transfer-syntax isn't specified, `1.2.840.10008.1.2.1` is used as default and mediaType defaults to `application/octet-stream`)
+- `*/*` (when transfer-syntax isn't specified, `*` is used as default and mediaType defaults to `application/octet-stream`)
#### Retrieve transfer syntax
healthcare-apis Update Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/update-files.md
The [change feed](change-feed-overview.md) records update actions in the same ma
## Supported DICOM modules Any attributes in the [Patient Identification Module](https://dicom.nema.org/dicom/2013/output/chtml/part03/sect_C.2.html#table_C.2-2) and [Patient Demographic Module](https://dicom.nema.org/dicom/2013/output/chtml/part03/sect_C.2.html#table_C.2-3) that aren't sequences can be updated using the bulk update operation. Supported attributes are called out in the tables.
+### Attributes automatically changed in bulk updates
+
+When you perform a bulk update, the DICOM service updates the requested attributes and also two additional metadata fields. The following fields are updated automatically:
+
+| Tag | Attribute name | Description | Value |
| -- | -- | -- | -- |
+| (0002,0012) | Implementation Class UID | Uniquely identifies the implementation that wrote this file and its content. | 1.3.6.1.4.1.311.129 |
+| (0002,0013) | Implementation Version Name | Identifies a version for an Implementation Class UID (0002,0012). | Assembly version of the DICOM service (for example, 0.1.4785) |
+
+Here, the UID `1.3.6.1.4.1.311.129` is registered under the [Microsoft OID arc](https://oidref.com/1.3.6.1.4.1.311) in IANA.
+ #### Patient identification module attributes | Attribute Name | Tag | Description | | - | --| |
iot-operations Howto Deploy Iot Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md
Use the Azure portal to deploy Azure IoT Operations components to your Arc-enabl
| **Subscription** | Select the subscription that contains your Arc-enabled Kubernetes cluster. | | **Azure Key vault** | Choose an existing key vault from the drop-down list or create a new one by selecting **Create new**. |
-1. On the **Automation** tab, the automation commands are populated based on your chosen cluster and key vault. Copy either the **Required** or **Optional** CLI command.
+1. On the **Automation** tab, the automation commands are populated based on your chosen cluster and key vault. Select an automation option:
- :::image type="content" source="../get-started/media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
+ * **Azure CLI enablement + UI deployment -- Visually guided configuration**: Generates an Azure CLI command that configures your cluster. If you choose this option, you'll return to the Azure portal to complete the Azure IoT Operations deployment.
+ * **Azure CLI deployment -- Efficiency unleashed**: Generates an Azure CLI command that configures your cluster and also deploys Azure IoT Operations.
+
+1. After choosing your automation option, copy the generated CLI command.
+
+ <!-- :::image type="content" source="../get-started/media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal."::: -->
1. Sign in to Azure CLI on your development machine. To prevent potential permission issues later, sign in interactively with a browser here even if you've already logged in before.
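   The portal generates the exact command for you; as a rough, hedged illustration only (the cluster, resource group, and key vault values are placeholders), it resembles the following.

   ```azurecli
   # Sign in interactively
   az login

   # Configure the cluster (the deployment option also installs Azure IoT Operations)
   az iot ops init --cluster <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --kv-id <KEY_VAULT_RESOURCE_ID>
   ```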
Use the Azure portal to deploy Azure IoT Operations components to your Arc-enabl
Wait for the command to complete.
- If you copied the **Optional** CLI command, then you're done with the cluster configuration and deployment.
+ If you copied the **Azure CLI deployment** CLI command, then you're done with the cluster configuration and deployment.
-1. If you copied the **Required** CLI command, return to the Azure portal and select **Review + Create**.
+1. If you copied the **Azure CLI enablement + UI deployment** CLI command, return to the Azure portal and select **Review + Create**.
1. Wait for the validation to pass and then select **Create**.
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
az keyvault create --enable-rbac-authorization false --name "<your unique key va
| **Subscription** | Select the subscription that contains your Arc-enabled Kubernetes cluster. | | **Azure Key Vault** | Use the **Select a key vault** drop-down menu to choose the key vault that you set up in the previous section. |
-1. Once you select a key vault, the **Automation** tab populates Azure CLI commands with your deployment information. Copy the **Required** CLI command.
+1. Once you select a key vault, the **Automation** tab populates an Azure CLI command that configures your cluster with your deployment information. Copy the CLI command.
>[!TIP]
- >The **Required** CLI command configures your cluster with the information that it needs to communicate securely with Azure resources but does not deploy Azure IoT Operations. After running the configuration command, you'll deploy Azure IoT Operations on the **Summary** tab. The **Optional** CLI command does the same configuration tasks on your cluster and then also deploys Azure IoT Operations.
+ >Select the **Azure CLI deployment -- Efficiency unleashed** automation option to generate a CLI command that performs the configuration tasks on your cluster and then also deploys Azure IoT Operations.
- :::image type="content" source="./media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal.":::
+ <!-- :::image type="content" source="./media/quickstart-deploy/install-extension-automation.png" alt-text="Screenshot of copying the CLI command from the automation tab for installing the Azure IoT Operations Arc extension in the Azure portal."::: -->
1. Sign in to Azure CLI on your development machine or in your codespace terminal. To prevent potential permission issues later, sign in interactively with a browser here even if you've already logged in before.
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 01/18/2024 Last updated : 01/29/2024
App settings in Azure Logic Apps work similarly to app settings in Azure Functio
| `FUNCTIONS_WORKER_RUNTIME` | `node` | Sets the language worker runtime to use with your logic app resource and workflows. However, this setting is no longer necessary due to automatically enabled multi-language support. <br><br>For more information, see [FUNCTIONS_WORKER_RUNTIME](../azure-functions/functions-app-settings.md#functions_worker_runtime). | | `ServiceProviders.Sftp.FileUploadBufferTimeForTrigger` | `00:00:20` <br>(20 seconds) | Sets the buffer time to ignore files that have a last modified timestamp that's greater than the current time. This setting is useful when large file writes take a long time and avoids fetching data for a partially written file. | | `ServiceProviders.Sftp.OperationTimeout` | `00:02:00` <br>(2 min) | Sets the time to wait before timing out on any operation. |
-| `ServiceProviders.Sftp.ServerAliveInterval` | `00:30:00` <br>(30 min) | Send a "keep alive" message to keep the SSH connection active if no data exchange with the server happens during the specified period. |
+| `ServiceProviders.Sftp.ServerAliveInterval` | `00:30:00` <br>(30 min) | Sends a "keep alive" message to keep the SSH connection active if no data exchange with the server happens during the specified period. |
| `ServiceProviders.Sftp.SftpConnectionPoolSize` | `2` connections | Sets the number of connections that each processor can cache. The total number of connections that you can cache is *ProcessorCount* multiplied by the setting value. | | `ServiceProviders.MaximumAllowedTriggerStateSizeInKB` | `10` KB, which is ~1,000 files | Sets the trigger state entity size in kilobytes, which is proportional to the number of files in the monitored folder and is used to detect files. If the number of files exceeds 1,000, increase this value. | | `ServiceProviders.Sql.QueryTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for SQL service provider operations. |
+| `TARGET_BASED_SCALING_ENABLED` | `1` | Sets Azure Logic Apps to use target-based scaling (`1`) or incremental scaling (`0`). By default, target-based scaling is automatically enabled. For more information, see [Target-based scaling](#scaling). |
| `WEBSITE_LOAD_ROOT_CERTIFICATES` | None | Sets the thumbprints for the root certificates to be trusted. | | `Workflows.Connection.AuthenticationAudience` | None | Sets the audience for authenticating a managed (Azure-hosted) connection. | | `Workflows.CustomHostName` | None | Sets the host name to use for workflow and input-output URLs, for example, "logic.contoso.com". For information to configure a custom DNS name, see [Map an existing custom DNS name to Azure App Service](../app-service/app-service-web-tutorial-custom-domain.md) and [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](../app-service/configure-ssl-bindings.md). |
To add or update an app setting using the Azure CLI, run the command `az logicap
```azurecli az logicapp config appsettings set --name MyLogicApp --resource-group MyResourceGroup --settings CUSTOM_LOGIC_APP_SETTING=12345 ```- <a name="reference-host-json"></a>
The following example shows the syntax for these settings where each workflow ID
"Jobs.SuspendedJobPartitionPrefixes": "<workflow-ID-1>:; <workflow-ID-2>:" ```
+<a name="scaling"></a>
+
+### Target-based scaling
+
+Single-tenant Azure Logic Apps gives you the option to select your preferred compute resources and set up your logic app resources to dynamically scale based on varying workload demands. The target-based scaling model used by Azure Logic Apps includes settings that you can use to fine-tune the model's underlying dynamic scaling mechanism, which can result in faster scale-out and scale-in times. For more information about the target-based scaling model, see the following articles:
+
+- [Target-based scaling support in single-tenant Azure Logic Apps](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/announcement-target-based-scaling-support-in-azure-logic-apps/ba-p/3998712)
+- [Single-tenant Azure Logic Apps target-based scaling performance benchmark - Burst workloads](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/logic-apps-standard-target-based-scaling-performance-benchmark/ba-p/3998807)
+
+#### Considerations
+
+- Target-based scaling isn't available or supported for Standard workflows running on an App Service Environment or Consumption plan.
+
+- If you have scale-in requests without any scale-out requests, Azure Logic Apps uses the maximum scale-in value. Target-based scaling can scale down unused worker instances faster, resulting in more efficient resource usage.
+
+#### Requirements
+
+- Your logic apps must use [Azure Functions runtime version 4.3.0 or later](../azure-functions/set-runtime-version.md).
+
+- Your logic app workflows must use single-tenant Azure Logic Apps runtime version 1.55.1 or later.
+
+#### Target-based scaling settings in host.json
+
+| Setting | Default value | Description |
+|||-|
+| `Runtime.TargetScaler.TargetConcurrency` | `null` | The number of target executions per worker instance. By default, the value is `null`. If you leave this value unchanged, your logic app defaults to using dynamic concurrency. You can set a targeted maximum value for concurrent job polling by using this setting. For an example, see the section following this table. |
+| `Runtime.TargetScaler.TargetScalingCPU` | `70` | The maximum percentage of CPU usage that you expect at target concurrency. You can change this default percentage for each logic app by using this setting. For an example, see the section following this table. |
+| `Runtime.TargetScaler.TargetScalingFactor` | `0.3` | A numerical value from `0.05` to `1.0` that determines the degree of scaling intensity. A higher target scaling factor results in more aggressive scaling. A lower target scaling factor results in more conservative scaling. You can fine-tune the target scaling factor for each logic app by using this setting. For an example, see the section following this table. |
+
+##### TargetConcurrency example
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
+ "version": "[1.*, 2.0.0)"
+ },
+ "extensions": {
+ "workflow": {
+ "Settings": {
+ "Runtime.TargetScaler.TargetConcurrency": "280"
+ }
+ }
+ }
+}
+```
+
+##### TargetScalingCPU example
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
+ "version": "[1.*, 2.0.0)"
+ },
+ "extensions": {
+ "workflow": {
+ "Settings": {
+ "Runtime.TargetScaler.TargetScalingCPU": "76"
+ }
+ }
+ }
+}
+```
+
+##### TargetScalingFactor example
+
+```json
+{
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
+ "version": "[1.*, 2.0.0)"
+ },
+ "extensions": {
+ "workflow": {
+ "Settings": {
+ "Runtime.TargetScaler.TargetScalingFactor": "0.62"
+ }
+ }
+ }
+}
+```
+
+#### Disable target-based scaling
+
+By default, target-based scaling is automatically enabled. To opt out of target-based scaling and revert to incremental scaling, add the app setting named **TARGET_BASED_SCALING_ENABLED** and set the value to **0** in your Standard logic app resource using the Azure portal, or in your logic app project's **local.settings.json** file using Visual Studio Code, as shown in the following example. For more information, see [Manage app settings - local.settings.json](#manage-app-settings).
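+
+For example, using the same Azure CLI command shown earlier for app settings:
+
+```azurecli
+az logicapp config appsettings set --name MyLogicApp --resource-group MyResourceGroup --settings TARGET_BASED_SCALING_ENABLED=0
+```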
+ <a name="recurrence-triggers"></a> ### Recurrence-based triggers
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
ms.suite: integration Previously updated : 10/16/2023 Last updated : 01/29/2024
The Microsoft Authentication Library (MSAL) libraries provide PoP tokens for you
* [A .NET Core daemon console application calling a protected Web API with its own identity](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi)
-* [SignedHttpRequest aka PoP (Proof of Possession)](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki/SignedHttpRequest-aka-PoP-(Proof-of-Possession))
+* [SignedHttpRequest, also known as PoP (Proof of Possession)](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki/SignedHttpRequest-aka-PoP-(Proof-of-Possession))
To use the PoP token with your Consumption logic app, follow the next section to [set up OAuth with Microsoft Entra ID](#enable-azure-ad-inbound).
Based on the target endpoint's capability, outbound calls sent by the [HTTP trig
This list includes information about TLS/SSL self-signed certificates:
-* For Consumption logic app workflows in the multi-tenant Azure Logic Apps environment, HTTP operations don't permit self-signed TLS/SSL certificates. If your logic app makes an HTTP call to a server and presents a TLS/SSL self-signed certificate, the HTTP call fails with a `TrustFailure` error.
+* For Consumption logic app workflows in the multitenant Azure Logic Apps environment, HTTP operations don't permit self-signed TLS/SSL certificates. If your logic app makes an HTTP call to a server and presents a TLS/SSL self-signed certificate, the HTTP call fails with a `TrustFailure` error.
* For Standard logic app workflows in the single-tenant Azure Logic Apps environment, HTTP operations support self-signed TLS/SSL certificates. However, you have to complete a few extra steps for this authentication type. Otherwise, the call fails. For more information, review [TLS/SSL certificate authentication for single-tenant Azure Logic Apps](../connectors/connectors-native-http.md#tlsssl-certificate-authentication).
If the [Client Certificate](../active-directory/authentication/active-directory-
| Property (designer) | Property (JSON) | Required | Value | Description | ||--|-|-|-|
-| **Authentication** | `type` | Yes | **Client Certificate** <br>or <br>`ClientCertificate` | The authentication type to use. You can manage certificates with [Azure API Management](../api-management/api-management-howto-mutual-certificates.md). <p></p>**Note**: Custom connectors don't support certificate-based authentication for both inbound and outbound calls. |
-| **Pfx** | `pfx` | Yes | <*encoded-pfx-file-content*> | The base64-encoded content from a Personal Information Exchange (PFX) file <p><p>To convert the PFX file into base64-encoded format, you can use PowerShell 7 by following these steps: <p>1. Save the certificate content into a variable: <p> `$pfx_cert = [System.IO.File]::ReadAllBytes('c:\certificate.pfx')` <p>2. Convert the certificate content by using the `ToBase64String()` function and save that content to a text file: <p> `[System.Convert]::ToBase64String($pfx_cert) | Out-File 'pfx-encoded-bytes.txt'` <p><p>**Troubleshooting**: If you use the `cert mmc/PowerShell` command, you might get this error: <p><p>`Could not load the certificate private key. Please check the authentication certificate password is correct and try again.` <p><p>To resolve this error, try converting the PFX file to a PEM file and back again by using the `openssl` command: <p><p>`openssl pkcs12 -in certificate.pfx -out certificate.pem` <br>`openssl pkcs12 -in certificate.pem -export -out certificate2.pfx` <p><p>Afterwards, when you get the base64-encoded string for the certificate's newly converted PFX file, the string now works in Azure Logic Apps. |
+| **Authentication** | `type` | Yes | **Client Certificate** <br>or <br>`ClientCertificate` | The authentication type to use. You can manage certificates with [Azure API Management](../api-management/api-management-howto-mutual-certificates.md). <br><br>**Note**: Custom connectors don't support certificate-based authentication for both inbound and outbound calls. |
+| **Pfx** | `pfx` | Yes | <*encoded-pfx-file-content*> | The base64-encoded content from a Personal Information Exchange (PFX) file <br><br>To convert the PFX file into base64-encoded format, you can use PowerShell 7 by following these steps: <br><br>1. Save the certificate content into a variable: <br><br> `$pfx_cert = [System.IO.File]::ReadAllBytes('c:\certificate.pfx')` <br><br>2. Convert the certificate content by using the `ToBase64String()` function and save that content to a text file: <br><br> `[System.Convert]::ToBase64String($pfx_cert) | Out-File 'pfx-encoded-bytes.txt'` <br><br>**Troubleshooting**: If you use the `cert mmc/PowerShell` command, you might get this error: <br><br>`Could not load the certificate private key. Please check the authentication certificate password is correct and try again.` <br><br>To resolve this error, try converting the PFX file to a PEM file and back again by using the `openssl` command: <br><br>`openssl pkcs12 -in certificate.pfx -out certificate.pem` <br>`openssl pkcs12 -in certificate.pem -export -out certificate2.pfx` <br><br>Afterwards, when you get the base64-encoded string for the certificate's newly converted PFX file, the string now works in Azure Logic Apps. |
| **Password** | `password`| No | <*password-for-pfx-file*> | The password for accessing the PFX file |
-|||||
+
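+As a convenience, the two PowerShell conversion commands from the table can be run together; a minimal sketch, where the file paths are illustrative:
+
+```powershell
+# Read the PFX file's bytes and write the base64-encoded string to a text file.
+$pfx_cert = [System.IO.File]::ReadAllBytes('c:\certificate.pfx')
+[System.Convert]::ToBase64String($pfx_cert) | Out-File 'pfx-encoded-bytes.txt'
+```
+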
+> [!NOTE]
+>
+> If you try to authenticate with a client certificate using OpenSSL, you might get the following error:
+>
+> `BadRequest: Could not load private key`
+>
+> To resolve this error, follow these steps:
+>
+> 1. Uninstall all OpenSSL instances.
+> 2. Install OpenSSL version 1.1.1t.
+> 3. Resign your certificate using the new update.
+> 4. Add the new certificate to the HTTP operation when using client certificate authentication.
When you use [secured parameters](#secure-action-parameters) to handle and secure sensitive information, for example, in an [Azure Resource Manager template for automating deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), you can use expressions to access these parameter values at runtime. This example HTTP action definition specifies the authentication `type` as `ClientCertificate` and uses the [parameters() function](../logic-apps/workflow-definition-language-functions-reference.md#parameters) to get the parameter values:
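
A minimal sketch of what such an HTTP action definition might look like; the parameter names here are illustrative, not prescribed by the service:

```json
"HTTP": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://www.contoso.com",
    "authentication": {
      "type": "ClientCertificate",
      "pfx": "@parameters('pfxParam')",
      "password": "@parameters('passwordParam')"
    }
  }
}
```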
When you use [secured parameters](#secure-action-parameters) to handle and secur
``` > [!IMPORTANT]
-> If you have a **Logic App (Standard)** resource in single-tenant Azure Logic Apps,
-> and you want to use an HTTP operation with a TSL/SSL certificate, client certificate,
-> or Microsoft Entra ID Open Authentication (Microsoft Entra ID OAuth) with the `Certificate`
-> credential type, make sure to complete the extra setup steps for this authentication type.
-> Otherwise, the call fails. For more information, review
+>
+> If you have a Standard logic app resource in single-tenant Azure Logic Apps, and you want to use an HTTP
+> operation with a TLS/SSL certificate, client certificate, or Microsoft Entra ID Open Authentication
+> (Microsoft Entra ID OAuth) with the `Certificate` credential type, make sure to complete the extra setup
+> steps for this authentication type. Otherwise, the call fails. For more information, review
> [Authentication in single-tenant environment](../connectors/connectors-native-http.md#single-tenant-authentication). For more information about securing services by using client certificate authentication, review these topics:
When you use [secured parameters](#secure-action-parameters) to handle and secur
``` > [!IMPORTANT]
-> If you have a **Logic App (Standard)** resource in single-tenant Azure Logic Apps,
-> and you want to use an HTTP operation with a TSL/SSL certificate, client certificate,
-> or OAuth with Microsoft Entra ID with the `Certificate`
-> credential type, make sure to complete the extra setup steps for this authentication type.
-> Otherwise, the call fails. For more information, review
+>
+> If you have a Standard logic app resource in single-tenant Azure Logic Apps, and you want to use an HTTP
+> operation with a TLS/SSL certificate, client certificate, or Microsoft Entra ID Open Authentication
+> (Microsoft Entra ID OAuth) with the `Certificate` credential type, make sure to complete the extra setup
+> steps for this authentication type. Otherwise, the call fails. For more information, review
> [Authentication in single-tenant environment](../connectors/connectors-native-http.md#single-tenant-authentication). <a name="raw-authentication"></a>
When you use [secured parameters](#secure-action-parameters) to handle and secur
When the [managed identity](../active-directory/managed-identities-azure-resources/overview.md) option is available on the [trigger or action that supports managed identity authentication](#authentication-types-supported-triggers-actions), your logic app can use this identity for authenticating access to Azure resources that are protected by Microsoft Entra ID, rather than credentials, secrets, or Microsoft Entra tokens. Azure manages this identity for you and helps you secure your credentials because you don't have to manage secrets or directly use Microsoft Entra tokens. Learn more about [Azure services that support managed identities for Microsoft Entra authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).
-* The **Logic App (Consumption)** resource type can use the system-assigned identity or a *single* manually created user-assigned identity.
+* A Consumption logic app resource can use the system-assigned identity or a *single* manually created user-assigned identity.
-* The **Logic App (Standard)** resource type supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you still can only select one identity to use at any time.
+* A Standard logic app resource supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you still can only select one identity to use at any time.
> [!NOTE] > By default, the system-assigned identity is already enabled to authenticate connections at run time.
When the [managed identity](../active-directory/managed-identities-azure-resourc
||--|-|-|-| | **Authentication** | `type` | Yes | **Managed Identity** <br>or <br>`ManagedServiceIdentity` | The authentication type to use | | **Managed Identity** | `identity` | No | <*user-assigned-identity-ID*> | The user-assigned managed identity to use. **Note**: Don't include this property when using the system-assigned managed identity. |
- | **Audience** | `audience` | Yes | <*target-resource-ID*> | The resource ID for the target resource that you want to access. <p>For example, `https://storage.azure.com/` makes the [access tokens](../active-directory/develop/access-tokens.md) for authentication valid for all storage accounts. However, you can also specify a root service URL, such as `https://fabrikamstorageaccount.blob.core.windows.net` for a specific storage account. <p>**Note**: The **Audience** property might be hidden in some triggers or actions. To make this property visible, in the trigger or action, open the **Add new parameter** list, and select **Audience**. <p><p>**Important**: Make sure that this target resource ID *exactly matches* the value that Microsoft Entra ID expects, including any required trailing slashes. So, the `https://storage.azure.com/` resource ID for all Azure Blob Storage accounts requires a trailing slash. However, the resource ID for a specific storage account doesn't require a trailing slash. To find these resource IDs, review [Azure services that support Microsoft Entra ID](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication). |
+ | **Audience** | `audience` | Yes | <*target-resource-ID*> | The resource ID for the target resource that you want to access. <br><br>For example, `https://storage.azure.com/` makes the [access tokens](../active-directory/develop/access-tokens.md) for authentication valid for all storage accounts. However, you can also specify a root service URL, such as `https://fabrikamstorageaccount.blob.core.windows.net` for a specific storage account. <br><br>**Note**: The **Audience** property might be hidden in some triggers or actions. To make this property visible, in the trigger or action, open the **Add new parameter** list, and select **Audience**. <br><br>**Important**: Make sure that this target resource ID *exactly matches* the value that Microsoft Entra ID expects, including any required trailing slashes. So, the `https://storage.azure.com/` resource ID for all Azure Blob Storage accounts requires a trailing slash. However, the resource ID for a specific storage account doesn't require a trailing slash. To find these resource IDs, review [Azure services that support Microsoft Entra ID](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication). |
When you use [secured parameters](#secure-action-parameters) to handle and secure sensitive information, for example, in an [Azure Resource Manager template for automating deployment](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md), you can use expressions to access these parameter values at runtime. For example, this HTTP action definition specifies the authentication `type` as `ManagedServiceIdentity` and uses the [parameters() function](../logic-apps/workflow-definition-language-functions-reference.md#parameters) to get the parameter values:
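
A minimal sketch of such an action definition; the parameter name is illustrative, and the audience value follows the table above:

```json
"HTTP": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "@parameters('endpointUrlParam')",
    "authentication": {
      "type": "ManagedServiceIdentity",
      "audience": "https://storage.azure.com/"
    }
  }
}
```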
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Literal inputs are only supported in pipeline component deployments. See [Create
### Data outputs
-Data outputs refer to the location where the results of a batch job should be placed. Outputs are identified by name, and Azure Machine Learning automatically assigns a unique path to each named output. However, you can specify another path if required. Batch endpoints only support writing outputs in blob Azure Machine Learning data stores.
+Data outputs refer to the location where the results of a batch job should be placed. Outputs are identified by name, and Azure Machine Learning automatically assigns a unique path to each named output. However, you can specify another path if required.
+
+> [!IMPORTANT]
+> Batch endpoints only support writing outputs in Azure Blob Storage datastores.
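+
+For example, when invoking a batch endpoint with the Azure CLI, you can redirect a named output to a specific path in a blob datastore. This is a sketch only; the endpoint, data asset, datastore, and path names are illustrative, and flag support can vary by CLI version:
+
+```azurecli
+# Assumes default resource group and workspace are already configured.
+az ml batch-endpoint invoke --name my-endpoint \
+    --input azureml:my-unlabeled-data@latest \
+    --output-path azureml://datastores/workspaceblobstore/paths/my-results
+```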
## Create jobs with data inputs
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
This article explains how to manage access (authorization) to Azure Machine Lear
## Default roles
-Azure Machine Learning workspaces have five built-in roles that are available by default. When adding users to a workspace, they can be assigned one of the following roles.
+Azure Machine Learning workspaces have built-in roles that are available by default. When you add users to a workspace, you can assign them one of the following roles.
| Role | Access level | | | |
The following table is a summary of Azure Machine Learning activities and the pe
| Scoring against a deployed AKS endpoint | Not required | Not required | Owner, contributor, or custom role allowing: `/workspaces/services/aks/score/action`, `/workspaces/services/aks/listkeys/action` (when you don't use Microsoft Entra auth) OR `/workspaces/read` (when you use token auth) | | Accessing storage using interactive notebooks | Not required | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/read`, `/workspaces/notebooks/samples/read`, `/workspaces/notebooks/storage/*`, `/workspaces/listStorageAccountKeys/action`, `/workspaces/listNotebookAccessToken/read`| | Create new custom role | Owner, contributor, or custom role allowing `Microsoft.Authorization/roleDefinitions/write` | Not required | Owner, contributor, or custom role allowing: `/workspaces/computes/write` |
-| Create/manage online endpoints and deployments | Not required | Not required | Owner, contributor, or custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. If you use studio to create/manage online endpoints/deployments, you need an additional permission `Microsoft.Resources/deployments/write` from the resource group owner. |
+| Create/manage online endpoints and deployments | Not required | To deploy by using studio, `Microsoft.Resources/deployments/write` | Owner, contributor, or custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/*`. |
| Retrieve authentication credentials for online endpoints | Not required | Not required | Owner, contributor, or custom role allowing `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/token/action` and `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/listkeys/action` | 1. If you receive a failure when trying to create a workspace for the first time, make sure that your role allows `Microsoft.MachineLearningServices/register/action`. This action allows you to register the Azure Machine Learning resource provider with your Azure subscription. 2. When attaching an AKS cluster, you also need to have the [Azure Kubernetes Service Cluster Admin Role](/azure/role-based-access-control/built-in-roles#azure-kubernetes-service-cluster-admin-role) on the cluster.
+### Deploy into a virtual network or subnet
++ ### Differences between actions for V1 and V2 APIs There are certain differences between actions for V1 APIs and V2 APIs.
Here are a few things to be aware of while you use Azure RBAC:
- When there are two role assignments to the same Microsoft Entra user with conflicting sections of Actions/NotActions, your operations listed in NotActions from one role might not take effect if they're also listed as Actions in another role. To learn more about how Azure parses role assignments, read [How Azure RBAC determines if a user has access to a resource](/azure/role-based-access-control/overview#how-azure-rbac-determines-if-a-user-has-access-to-a-resource) - - It can sometimes take up to one hour for your new role assignments to take effect over cached permissions across the stack. ## Next steps
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-batch-endpoint.md
In this case, we want to execute a batch endpoint using a service principal alre
``` > [!IMPORTANT]
- > Notice that the resource scope for invoking a batch endpoints (`https://ml.azure.com1) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning.
+ > Notice that the resource scope for invoking batch endpoints (`https://ml.azure.com`) is different from the resource scope used to manage them. All management APIs in Azure use the resource scope `https://management.azure.com`, including Azure Machine Learning.
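+ >
+ > For example, you can use the Azure CLI to request a token for the invocation scope (assuming you're already signed in):
+ >
+ > ```azurecli
+ > az account get-access-token --resource https://ml.azure.com --query accessToken --output tsv
+ > ```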
3. Once authenticated, use the query to run a batch deployment job:
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
The following diagram shows a managed VNet configured to __allow only approved o
:::image type="content" source="./media/how-to-managed-network/only-approved-outbound.svg" alt-text="Diagram of managed VNet isolation configured for allow only approved outbound." lightbox="./media/how-to-managed-network/only-approved-outbound.svg":::
+> [!NOTE]
+> Once a managed VNet workspace is configured to __allow only approved outbound__, it can't be reconfigured to __allow internet outbound__. Keep this in mind when you configure the managed VNet for your workspace.
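+
+If you define the workspace through the Azure CLI (v2) YAML schema, the isolation mode is controlled by the `managed_network` property. A minimal sketch, assuming the v2 workspace YAML schema:
+
+```yaml
+# workspace.yml (fragment). Choose the isolation mode deliberately:
+# allow_only_approved_outbound can't later be relaxed to internet outbound.
+name: my-workspace
+managed_network:
+  isolation_mode: allow_only_approved_outbound
+```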
++ ### Azure Machine Learning studio If you want to use the integrated notebook or create datasets in the default storage account from studio, your client needs access to the default storage account. Create a _private endpoint_ or _service endpoint_ for the default storage account in the Azure Virtual Network that the clients use.
machine-learning How To Setup Mlops Github Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-github-azure-ml.md
Previously updated : 11/29/2022 Last updated : 01/29/2024
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you see a list of your recent automated ML experiments, including th
| Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric). Enable ensemble stacking | Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. [Learn more about ensemble models](concept-automated-ml.md#ensemble).
- Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
+ Blocked models| Select models you want to exclude from the training job. <br><br> Allowing models is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
Explain best model| Automatically shows explainability on the best model created by Automated ML.
+ Positive class label| Label that Automated ML will use to calculate binary metrics.
1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization).
- ![Screenshot shows the Select task type dialog box with View featurization settings called out.](media/how-to-use-automated-ml-for-ml-models/view-featurization-settings.png)
+ ![Screenshot shows the Select task type dialog box with View featurization settings called out.](media/how-to-use-automated-ml-for-ml-models/view-featurization.png)
1. The **[Optional] Limits** form allows you to do the following.
b. Provide a test dataset (preview) to evaluate the recommended model that autom
* The test dataset shouldn't be the same as the training dataset or the validation dataset. * Forecasting jobs don't support train/test split.
-![Screenshot shows the form where to select validation data and test data](media/how-to-use-automated-ml-for-ml-models/validate-test-form.png)
+![Screenshot shows the form where to select validation data and test data](media/how-to-use-automated-ml-for-ml-models/validate-and-test.png)
## Customize featurization
The following table summarizes the customizations currently available via the st
Column| Customization |
-Included | Specifies which columns to include for training.
Feature type| Change the value type for the selected column. Impute with| Select what value to impute missing values with in your data.
-![Azure Machine Learning studio custom featurization](media/how-to-use-automated-ml-for-ml-models/custom-featurization.png)
+![Screenshot showing Azure Machine Learning studio custom featurization.](media/how-to-use-automated-ml-for-ml-models/updated-featurization.png)
## Run experiment and view results
The **Job Detail** screen opens to the **Details** tab. This screen shows you a
The **Models** tab contains a list of the models created ordered by the metric score. By default, the model that scores the highest based on the chosen metric is at the top of the list. As the training job tries out more models, they're added to the list. Use this to get a quick comparison of the metrics for the models produced so far.
-![Job detail](./media/how-to-use-automated-ml-for-ml-models/explore-models.gif)
- ### View training job details
-Drill down on any of the completed models to see training job details. In the **Model** tab, you can view details like a model summary and the hyperparameters used for the selected model.
-
-[![Hyperparameter details](media/how-to-use-automated-ml-for-ml-models/hyperparameter-button.png)](media/how-to-use-automated-ml-for-ml-models/hyperparameter-details.png#lightbox)
-
- You can also see model specific performance metric charts on the **Metrics** tab. [Learn more about charts](how-to-understand-automated-ml.md).
+Drill down on any of the completed models to see training job details.
-![Iteration details](media/how-to-use-automated-ml-for-ml-models/iteration-details-expanded.png)
-
-On the Data transformation tab, you can see a diagram of what data preprocessing, feature engineering, scaling techniques and the machine learning algorithm that were applied to generate this model.
-
->[!IMPORTANT]
-> The Data transformation tab is in preview. This capability should be considered [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) and may change at any time.
+You can see model-specific performance metric charts on the **Metrics** tab. [Learn more about charts](how-to-understand-automated-ml.md).
-![Data transformation](./media/how-to-use-automated-ml-for-ml-models/data-transformation.png)
+This is also where you can find details on all the model's properties, along with associated code, child jobs, and images.
## View remote test job results (preview)
To generate a Responsible AI dashboard for a particular model,
- ![Select Explain best model from the Automated ML job configuration page](media/how-to-use-automated-ml-for-ml-models/best-model-selection.png)
+ ![Screenshot showing the Automated ML job configuration page with Explain best model selected.](media/how-to-use-automated-ml-for-ml-models/best-model-selection-updated.png)
3. Proceed to the **Compute** page of the setup form and choose the **Serverless** option for your compute.
machine-learning How To Use Batch Pipeline From Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-from-job.md
- how-to - devplatv2 - ignite-2023
+ - update-code
Last updated 11/15/2023
machine-learning How To Use Batch Scoring Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-scoring-pipeline.md
- devplatv2 - event-tier1-build-2023 - ignite-2023
- - update-code
+ - update-code2
# How to deploy a pipeline to perform batch scoring with preprocessing
machine-learning How To Use Batch Training Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-training-pipeline.md
- devplatv2 - event-tier1-build-2023 - ignite-2023
- - update-code
+ - update-code2
# How to operationalize a training pipeline with batch endpoints
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Previously updated : 09/22/2022 Last updated : 01/29/2024 - event-tier1-build-2022 - ignite-2022
Anyone on an ML team can use their preferred tools to get the job done. Whether
* [Azure Machine Learning studio](https://ml.azure.com) * [Python SDK (v2)](https://aka.ms/sdk-v2-install)
-* [Azure CLI (v2)](how-to-configure-cli.md))
+* [Azure CLI (v2)](how-to-configure-cli.md)
* [Azure Resource Manager REST APIs](/rest/api/azureml/) As you're refining the model and collaborating with others throughout the rest of the Machine Learning development cycle, you can share and find assets, resources, and metrics for your projects on the Machine Learning studio UI.
Other integrations with Azure services support an ML project from end to end. Th
* [Microsoft Purview, which allows you to discover and catalog data assets across your organization](../purview/register-scan-azure-machine-learning.md). > [!Important]
-> Machine Learning doesn't store or process your data outside of the region where you deploy.
+> Azure Machine Learning doesn't store or process your data outside of the region where you deploy.
## Machine learning project workflow
You can deploy models to the managed inferencing solution, for both real-time an
## Train models
-In Machine Learning, you can run your training script in the cloud or build a model from scratch. Customers often bring models they've built and trained in open-source frameworks so that they can operationalize them in the cloud.
+In Azure Machine Learning, you can run your training script in the cloud or build a model from scratch. Customers often bring models they've built and trained in open-source frameworks so that they can operationalize them in the cloud.
### Open and interoperable
-Data scientists can use models in Machine Learning that they've created in common Python frameworks, such as:
+Data scientists can use models in Azure Machine Learning that they've created in common Python frameworks, such as:
* PyTorch * TensorFlow
Scaling an ML project might require scaling embarrassingly parallel model traini
## Deploy models
-To bring a model into production, it's deployed. The Machine Learning managed endpoints abstract the required infrastructure for both batch or real-time (online) model scoring (inferencing).
+To bring a model into production, you deploy the model. The Azure Machine Learning managed endpoints abstract the required infrastructure for both batch or real-time (online) model scoring (inferencing).
### Real-time and batch scoring (inferencing)
If you use Apache Airflow, the [airflow-provider-azure-machinelearning](https://
Start using Azure Machine Learning: -- [Set up an Azure Machine Learning workspace](quickstart-create-resources.md)-- [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md)-- [Run training jobs](how-to-train-model.md)
+* [Set up an Azure Machine Learning workspace](quickstart-create-resources.md)
+* [Tutorial: Build a first machine learning project](tutorial-1st-experiment-hello-world.md)
+* [Run training jobs](how-to-train-model.md)
machine-learning Concept Model Monitoring Generative Ai Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-model-monitoring-generative-ai-evaluation-metrics.md
Title: Monitoring evaluation metrics descriptions and use cases (preview)
description: Understand the metrics used when monitoring the performance of generative AI models deployed to production on Azure Machine Learning. --++
machine-learning Reference Yaml Deployment Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md
- cliv2 - event-tier1-build-2022 - ignite-2023
+ - update-code
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Last updated 11/28/2023 -+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Network Isolation For Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-network-isolation-for-feature-store.md
Last updated 09/13/2023 -+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
--++ Last updated 10/20/2023
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
Previously updated : 11/13/2023 Last updated : 01/29/2024 # Azure Machine Learning Python SDK release notes
In this article, learn about Azure Machine Learning Python SDK releases. For th
__RSS feed__: Get notified when this page is updated by copying and pasting the following URL into your feed reader: `https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2024-01-29
+### Azure Machine Learning SDK for Python v1.55.0
+ + **azureml-core**
+ + Enable Application Insights re-mapping for new region China East 3, since it doesn't support classic resource mode. Also fixed the missing update for China North 3.
+ + **azureml-defaults**
+ + Bumped azureml-inference-server-http pin to 1.0.0 in azureml-defaults.
+ + **azureml-interpret**
+ + updated azureml-interpret package to interpret-community 0.31.*
+ + **azureml-responsibleai**
+ + updated common environment and azureml-responsibleai package to raiwidgets and responsibleai 0.33.0
+ + Increase responsibleai and fairlearn dependency versions
+ ## 2023-11-13 + **azureml-automl-core, azureml-automl-runtime, azureml-contrib-automl-dnn-forecasting, azureml-train-automl-client, azureml-train-automl-runtime, azureml-training-tabular** + statsmodels, pandas and scipy were upgraded to versions 1.13, 1.3.5 and 1.10.1 - fbprophet 0.7.1 was replaced by prophet 1.1.4 When loading a model in a local environment, the versions of these packages should match what the model was trained on.
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-datasets.md
--++ Last updated 08/08/2023
managed-grafana How To Connect Azure Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-connect-azure-data-explorer.md
Enter Azure Data Explorer configuration settings.
When you configure an Azure Data Explorer data source with the Current User authentication method, Grafana queries Azure Data Explorer using the user's credentials.
- > [!NOTE]
- > Rollout of the user-based authentication in Azure Managed Grafana is in progress.
- > [!CAUTION] > User-based authentication in Grafana data sources is experimental.
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Create an **azuredeploy.json** file with the following content to create an Azur
"defaultValue": "Burstable", "allowedValues": [ "Burstable",
- "Generalpurpose",
+ "GeneralPurpose",
"MemoryOptimized" ], "metadata": {
Create an **azuredeploy.json** file with the following content to create an Azur
"defaultValue": "Burstable", "allowedValues": [ "Burstable",
- "Generalpurpose",
+ "GeneralPurpose",
"MemoryOptimized" ], "metadata": {
networking Nva Accelerated Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/nva-accelerated-connections.md
This list will be updated as more regions become available. The following region
* West US * East US 2 * Central US
+* UK South
+* West Europe
## Supported SKUs
openshift Howto Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-monitor-alerts.md
+
+ Title: Configure Azure Resource Health alerts for Azure Red Hat OpenShift (ARO) clusters
+description: Learn how to configure Azure Monitor alerts for Azure Red Hat OpenShift (ARO) clusters.
+++
+keywords: openshift, red hat, monitor, cluster, alerts
+ Last updated : 01/25/2024+++
+# Configure Azure Resource Health alerts for Azure Red Hat OpenShift (ARO) clusters
+
+[Azure Resource Health](/azure/service-health/resource-health-overview?WT.mc_id=Portal-Microsoft_Azure_Health) is a component of Azure Monitor that can be configured to generate alerts based on signals from Azure Red Hat OpenShift clusters. These alerts help you prepare for events such as planned and unplanned maintenance.
+
+Resource Health alert signals for ARO clusters include the following:
+
+- **Cluster maintenance operation pending:** This signal indicates that your Azure Red Hat OpenShift cluster will undergo a maintenance operation within the next two weeks. This may cause rolling reboots of nodes, resulting in workload pod restarts.
+- **Cluster maintenance operation in progress:** This signal indicates one of the following operation types:
+ - **Planned:** A planned maintenance operation has started on your Azure Red Hat OpenShift cluster. This may cause rolling reboots of nodes, resulting in workload pod restarts.
+ - **Unplanned:** An unplanned maintenance operation has started on your Azure Red Hat OpenShift cluster. This may cause rolling reboots of nodes, resulting in workload pod restarts.
+
+- **Action needed to complete maintenance operation:** This signal indicates that action is needed to complete an ongoing maintenance operation on your Azure Red Hat OpenShift cluster. Contact Azure Support to complete the operation.
+
+- **Cluster API server is unreachable:** This signal indicates that the Azure Red Hat OpenShift service Resource Provider is unable to reach your cluster's API server. As a result, your cluster can't be monitored or managed.
+
+Once the underlying condition causing an alert signal is remediated, the signal is cleared and the alert condition ends.
+
+## Creating alert rules
+
+Configuring Resource Health alerts for an ARO cluster requires an alert rule. Alert rules define the conditions under which alert signals are generated. You can create a rule in the Azure portal as follows, or script one using the Azure CLI sketch after these steps.
+
+1. In the [Azure portal](https://ms.portal.azure.com/), go to the ARO cluster for which you want to configure alerts.
+
+1. Select **Resource health**, then select **Add resource health alert**.
+
+1. Enter all applicable parameters for the alert rule in the various tabs of the window, including an **Alert rule name** in the **Details** tab.
+
+1. Select **Review + Create**.
+
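+As referenced above, you can also script the rule with the Azure CLI, since Resource Health alerts are implemented as activity-log alert rules. A sketch only; the rule name, resource group, and cluster ID are illustrative, and you can attach action groups separately:
+
+```azurecli
+# Create an activity-log alert rule that fires on Resource Health events
+# for a single ARO cluster.
+az monitor activity-log alert create \
+    --name aro-resource-health-alert \
+    --resource-group myResourceGroup \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.RedHatOpenShift/openShiftClusters/myCluster" \
+    --condition "category=ResourceHealth"
+```
+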
+## Cluster alert notifications
+
+When Azure Monitor detects a signal related to the cluster, it generates an alert. For detailed instructions on using and creating alert rules, see [What are Azure Monitor alerts?](/azure/azure-monitor/alerts/alerts-overview)
+
+## View cluster alerts in Azure portal
+
+You can view the status of the cluster at any time from the Azure portal. Go to the applicable cluster in the portal and select **Resource health** to see whether the cluster is available or unavailable, along with any associated events. For more information, see [Resource Health overview](/azure/service-health/resource-health-overview).
+
+You can also view the alert rule you created for the cluster. Viewing the alert rule in the portal allows you to see any alerts fired against the rule.
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: January 2024 * General availability of [Server logs](./how-to-server-logs-portal.md) including Portal and CLI support.
+* General availability of UAE Central region.
+* General availability of Israel Central region.
## Release: December 2023 * Public preview of [Server logs](./how-to-server-logs-portal.md).
This page provides latest news and updates regarding feature additions, engine v
* General availability of PostgreSQL 16 for Azure Database for PostgreSQL flexible server. * General availability of [near-zero downtime scaling](./concepts-scaling-resources.md). * General availability of [Pgvector 0.5.1](concepts-extensions.md) extension.
-* Public preview of Italy North region.
+* General availability of Italy North region.
* Public preview of [premium SSD v2](concepts-compute-storage.md). * Public preview of [decoupling storage and IOPS](concepts-compute-storage.md). * Public preview of [private endpoints](concepts-networking-private-link.md).
postgresql Connect Rust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-rust.md
ms.devlang: rust Previously updated : 06/24/2022 Last updated : 01/29/2024 # Quickstart: Use Rust to interact with Azure Database for PostgreSQL - Single Server
For this quickstart, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). - A recent version of [Rust](https://www.rust-lang.org/tools/install) installed.-- An Azure Database for PostgreSQL single server. Create one using [Azure portal](./quickstart-create-server-database-portal.md) <br/> or [Azure CLI](./quickstart-create-server-database-azure-cli.md).
+- An Azure Database for PostgreSQL single server. Create one using [Azure CLI](./quickstart-create-server-database-azure-cli.md).
- Based on whether you are using public or private access, complete **ONE** of the actions below to enable connectivity. |Action| Connectivity method|How-to guide|
fn delete(pg_client: &mut postgres::Client) {
deleted item info: id = 4, name = item-4, quantity = 16 ```
-4. To confirm, you can also connect to Azure Database for PostgreSQL [using psql](./quickstart-create-server-database-portal.md#connect-to-the-server-with-psql) and run queries against the database, for example:
+4. To confirm, you can also connect to Azure Database for PostgreSQL [using psql](./quickstart-create-server-database-azure-cli.md#connect-to-the-azure-database-for-postgresql-server-by-using-psql) and run queries against the database, for example:
```sql select * from inventory;
postgresql Quickstart Create Server Database Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-portal.md
Previously updated : 06/24/2022 Last updated : 01/29/2024 # Quickstart: Create an Azure Database for PostgreSQL server by using the Azure portal
Last updated 06/24/2022
[!INCLUDE [azure-database-for-postgresql-single-server-deprecation](../includes/azure-database-for-postgresql-single-server-deprecation.md)]
-Azure Database for PostgreSQL is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. This quickstart shows you how to create a single Azure Database for PostgreSQL server and connect to it.
-
-## Prerequisites
-
-An Azure subscription is required. If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-
-## Create an Azure Database for PostgreSQL server
-
-Go to the [Azure portal](https://portal.azure.com/) to create an Azure Database for PostgreSQL Single Server database. Search for and select *Azure Database for PostgreSQL servers*.
-
->[!div class="mx-imgBorder"]
-> :::image type="content" source="./media/quickstart-create-database-portal/search-postgres.png" alt-text="Find Azure Database for PostgreSQL.":::
-
-1. Select **+ Create**.
-
-2. On the Create an Azure Database for PostgreSQL page, select **Single server**.
-
- >[!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-database-portal/select-single-server.png" alt-text="Select single server":::
-
-3. Now enter the **Basics** form with the following information.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-database-portal/create-basics.png" alt-text="Screenshot that shows the Basics tab for creating a single server.":::
-
- |Setting|Suggested value|Description|
- |:|:|:|
- |Subscription|your subscription name|select the desired Azure Subscription.|
- |Resource group|*myresourcegroup*| A new or an existing resource group from your subscription.|
- |Server name |*mydemoserver*|A unique name that identifies your Azure Database for PostgreSQL server. The domain name *postgres.database.azure.com* is appended to the server name that you provide. The server can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain 3 to 63 characters.|
- |Data source | None | Select **None** to create a new server from scratch. Select **Backup** only if you were restoring from a geo-backup of an existing server.|
- |Admin username |*myadmin*| Enter your server admin username. It can't start with **pg_** and these values are not allowed: **azure_superuser**, **azure_pg_admin**, **admin**, **administrator**, **root**, **guest**, or **public**.|
- |Password |your password| A new password for the server admin user. It must contain 8 to 128 characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, !, $, #, %).|
- |Location|your desired location| Select a location from the dropdown list.|
- |Version|The latest major version| The latest PostgreSQL major version, unless you have specific requirements otherwise.|
- |Compute + storage | *use the defaults*| The default pricing tier is **General Purpose** with **4 vCores** and **100 GB** storage. Backup retention is set to **7 days** with **Geographically Redundant** backup option.<br/>Learn about the [pricing](https://azure.microsoft.com/pricing/details/postgresql/server/) and update the defaults if needed.|
-
- > [!NOTE]
- > Consider using the Basic pricing tier if light compute and I/O are adequate for your workload. Note that servers created in the Basic pricing tier can't later be scaled to General Purpose or Memory Optimized.
-
-5. Select **Review + create** to review your selections. Select **Create** to provision the server. This operation might take a few minutes.
- > [!NOTE]
- > An empty database, **postgres**, is created. You'll also find an **azure_maintenance** database that's used to separate the managed service processes from user actions. You can't access the **azure_maintenance** database.
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/quickstart-create-database-portal/deployment-success.png" alt-text="success deployment.":::
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Configure a firewall rule
-
-By default, the server that you create is not publicly accessible. You need to give permissions to your IP address. Go to your server resource in the Azure portal and select **Connection security** from left-side menu for your server resource. If you're not sure how to find your resource, see [Open resources](../../azure-resource-manager/management/manage-resources-portal.md#open-resources).
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="./media/quickstart-create-database-portal/add-current-ip-firewall.png" alt-text="Screenshot that shows firewall rules for connection security.":::
-
-Select **Add current client IP address**, and then select **Save**. You can add more IP addresses or provide an IP range to connect to your server from those IP addresses. For more information, see [Firewall rules in Azure Database for PostgreSQL](./concepts-firewall-rules.md).
- > [!NOTE]
-> To avoid connectivity issues, check if your network allows outbound traffic over port 5432. Azure Database for PostgreSQL uses that port.
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Connect to the server with psql
-
-You can use [psql](http://postgresguide.com/utilities/psql.html) or [pgAdmin](https://www.pgadmin.org/docs/pgadmin4/latest/connecting.html), which are popular PostgreSQL clients. For this quickstart, we'll connect by using psql in [Azure Cloud Shell](../../cloud-shell/overview.md) within the Azure portal.
-
-1. Make a note of your server name, server admin login name, password, and subscription ID for your newly created server from the **Overview** section of your server.
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="./media/quickstart-create-database-portal/overview-new.png" alt-text="get connection information.":::
-
-2. Open Azure Cloud Shell in the portal by selecting the icon on the upper-left side.
-
- > [!NOTE]
- > If you're opening Cloud Shell for the first time, you'll see a prompt to create a resource group and a storage account. This is a one-time step and will be automatically attached for all sessions.
-
- > [!div class="mx-imgBorder"]
- > :::image type="content" source="media/quickstart-create-database-portal/use-in-cloud-shell.png" alt-text="Screenshot that shows server information and the icon for opening Azure Cloud Shell.":::
-
-3. Run the following command in the Azure Cloud Shell terminal. Replace values with your actual server name and admin user login name. Use the empty database **postgres** with admin user in this format: `<admin-username>@<servername>`.
-
- ```azurecli-interactive
- psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- ```
-
- Here's how the experience looks in the Cloud Shell terminal:
-
- ```bash
- Requesting a Cloud Shell.Succeeded.
- Connecting terminal...
-
- Welcome to Azure Cloud Shell
-
- Type "az" to use Azure CLI
- Type "help" to learn about Cloud Shell
-
- user@Azure:~$psql --host=mydemoserver.postgres.database.azure.com --port=5432 --username=myadmin@mydemoserver --dbname=postgres
- Password for user myadmin@mydemoserver.postgres.database.azure.com:
- psql (12.2 (Ubuntu 12.2-2.pgdg16.04+1), server 11.6)
- SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
- Type "help" for help.
-
- postgres=>
- ```
-4. In the same Azure Cloud Shell terminal, create a database called **guest**.
-
- ```bash
- postgres=> CREATE DATABASE guest;
- ```
-
-5. Switch connections to the newly created **guest** database.
-
- ```bash
- \c guest
- ```
-6. Type `\q`, and then select the Enter key to close psql.
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Clean up resources
-
-You've successfully created an Azure Database for PostgreSQL server in a resource group. If you don't expect to need these resources in the future, you can delete them by deleting either the resource group or the PostgreSQL server.
-
-To delete the resource group:
-
-1. In the Azure portal, search for and select **Resource groups**.
-2. In the resource group list, choose the name of your resource group.
-3. On the **Overview** page of your resource group, select **Delete resource group**.
-4. In the confirmation dialog box, enter the name of your resource group, and then select **Delete**.
-
-To delete the server, select the **Delete** button on the **Overview** page of your server:
-
-> [!div class="mx-imgBorder"]
-> :::image type="content" source="media/quickstart-create-database-portal/12-delete.png" alt-text="Screenshot that shows the button for deleting a server.":::
-
-[Having issues? Let us know.](https://aka.ms/postgres-doc-feedback)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Migrate your database using export and import](./how-to-migrate-using-export-and-import.md) <br/>
+> As part of the ongoing retirement process for Azure Database for PostgreSQL - Single Server, the option to create new instances via the Azure portal is no longer available.
+> While portal-based creation is discontinued, you can continue to create Single Server instances using methods such as the [Azure CLI](quickstart-create-server-database-azure-cli.md), the [Azure CLI up command](quickstart-create-server-up-azure-cli.md), or an [ARM template](quickstart-create-postgresql-server-database-using-arm-template.md). However, as of March 2025, these methods will also no longer be available for creating new instances.
-> [!div class="nextstepaction"]
-> [Design a database](./tutorial-design-database-using-azure-portal.md#create-tables-in-the-database)
+
-[Cannot find what you are looking for? Let us know.](https://aka.ms/postgres-doc-feedback)
private-5g-core Configure Sim Policy Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-sim-policy-azure-portal.md
*SIM policies* allow you to define different sets of policies and interoperability settings that can each be assigned to a group of SIMs. The SIM policy also defines the default Quality of Service settings for any services that policy uses. You'll need to assign a SIM policy to a SIM before the user equipment (UE) using that SIM can access the private mobile network. In this how-to-guide, you'll learn how to configure a SIM policy.
+A SIM policy takes effect on a UE when it attaches or re-attaches to the network. Therefore, changes to the policy are not dynamically implemented on existing UE sessions. However, if a SIM policy is removed from a UE's SIM, then Azure Private 5G Core will perform a network-initiated detach, disconnecting the UE from the network.
+ ## Prerequisites - Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope.
private-5g-core Policy Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/policy-control.md
Each SIM policy includes:
You can create multiple SIM policies to offer different QoS policy settings to separate groups of SIMs on the same data network. For example, you may want to create SIM policies with differing sets of services.
+A SIM policy takes effect on a UE when it attaches or re-attaches to the network. Therefore, changes to the policy are not dynamically implemented on existing UE sessions. However, if a SIM policy is removed from a UE's SIM, then Azure Private 5G Core will perform a network-initiated detach, disconnecting the UE from the network.
+ ## Network slicing Network slicing allows you to host multiple independent logical networks in the same Azure Private 5G Core deployment by segmenting a common shared physical network into multiple virtual *network slices*. Slices play an important role in Azure Private 5G Core's flexible traffic handling by letting you apply different policies, QoS characteristics, priorities, and/or network connections to your UEs.
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
The following regions currently support availability zones:
| West US 3 | Switzerland North | | | | ||Poland Central |||| + \* To learn more about availability zones and available services support in these regions, contact your Microsoft sales or customer representative. For the upcoming regions that will support availability zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/). ## Azure services with availability zone support
Azure offerings are grouped into three categories that reflect their _regional_
>[!IMPORTANT] >Some services, although they are zone-redundant, may have limited support for availability zones. For example, some may only support availability zones for certain tiers, regions, or SKUs. To get more information on service limitations for availability zone support, select that service in the table below.
-### ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational services
+### ![An icon that signifies this service is foundational.](media/icon-foundational.svg) Foundational services
| **Products** | **Resiliency** | | | |
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure Monitor](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Monitor: Application Insights](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Monitor: Log Analytics](../azure-monitor/logs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure NAT Gateway](../nat-gateway/nat-availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure NAT Gateway](../nat-gateway/nat-availability-zones.md) | ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Network Watcher](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Network Watcher: Traffic Analytics](../network-watcher/frequently-asked-questions.yml) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Notification Hubs](../notification-hubs/availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
remote-rendering Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/conversion/blob-storage.md
A SAS URI can be generated using one of:
An example of using Shared Access Signatures in asset conversion is shown in Conversion.ps1 of the [PowerShell Example Scripts](../../samples/powershell-example-scripts.md#script-conversionps1).
+> [!IMPORTANT]
+> When configuring the storage account, do **not** specify an allowed IP address range, even if the range allow-lists all IP addresses:
+>
+> ![Screenshot of blob storage settings in Azure portal that show how to configure an allowed IP address range.](./media/blob-storage-ip-allowlist.png)
+>
+> If any IP range is specified, the SAS token might not work with ARR, and model loading might fail.
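For reference, the following is a minimal Azure PowerShell sketch of generating a container SAS URI with no IP restriction. The account name, container name, permissions, and expiry are placeholder assumptions, not values from this article.

```powershell
# Requires the Az.Storage module; the names and key variable are hypothetical.
$ctx = New-AzStorageContext -StorageAccountName "arrstorage" -StorageAccountKey $storageKey

# Omitting the -IPAddressOrRange parameter keeps the SAS token free of any
# IP restriction, per the note above.
$sasUri = New-AzStorageContainerSASToken -Context $ctx -Name "arrinput" `
    -Permission rl -ExpiryTime (Get-Date).AddHours(24) -FullUri
```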
+ ## Upload an input model To start converting a model, you need to upload it, using one of the following options:
sap Deploy S4hana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/deploy-s4hana.md
# Deploy S/4HANA infrastructure with Azure Center for SAP solutions

In this how-to guide, you'll learn how to deploy S/4HANA infrastructure in *Azure Center for SAP solutions*. There are [three deployment options](#deployment-types): distributed with High Availability (HA), distributed non-HA, and single server.

## Prerequisites
The following operating system (OS) software versions are compatible with these
1. For **Managed identity name**, enter a name for a new identity you want to create, or select an existing identity from the drop-down menu. If you select an existing identity, it should have **Contributor** role access on the subscription or on the resource groups related to the SAP system you're deploying. That is, it requires Contributor access to the SAP application resource group, the virtual network resource group, and the resource group that contains the existing SSH key. If you plan to later install the SAP system using Azure Center for SAP solutions, we also recommend assigning the **Storage Blob Data Reader** and **Reader and Data Access** roles on the storage account that has the SAP software media.
+1. Under **Managed resource settings**, choose the network settings for the managed storage account deployed into your subscription. This storage account is required for Azure Center for SAP solutions (ACSS) to orchestrate the deployment of the new SAP system and to power the SAP management capabilities.
+
+ 1. For **Storage account network access**, select **Enable access from specific virtual network** for enhanced network security on the managed storage account. This option ensures that the storage account is accessible only from the virtual network in which the SAP system exists.
++
+ > [!IMPORTANT]
+ > To use the secure network access option, you must enable the Microsoft.Storage [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) on the Application and Database subnets; a PowerShell sketch follows these steps. For more information, see [storage account network security](../../storage/common/storage-network-security.md). A private endpoint on the managed storage account isn't currently supported in this scenario.
+
+
+ When you choose to limit network access to specific virtual networks, Azure Center for SAP solutions service accesses this storage account using [**trusted access**](../../storage/common/storage-network-security.md?tabs=azure-portal#grant-access-to-trusted-azure-services) based on the managed identity associated with the VIS resource.
+ 1. Select **Next: Virtual machines**.
1. In the **Virtual machines** tab, generate SKU size and total VM count recommendations for each SAP instance from Azure Center for SAP solutions.
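As referenced in the note above, the following is a hedged PowerShell sketch of enabling the `Microsoft.Storage` service endpoint on one subnet. The virtual network, subnet, and address prefix values are assumptions; repeat the step for the Database subnet.

```powershell
# Requires the Az.Network module; all names and the prefix are hypothetical.
$vnet = Get-AzVirtualNetwork -Name "sap-vnet" -ResourceGroupName "sap-rg"

# Add the Microsoft.Storage service endpoint to the application subnet and
# write the updated configuration back to the virtual network.
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "app-subnet" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage" |
    Set-AzVirtualNetwork
```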
sap Get Alerts Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/monitor/get-alerts-portal.md
Last updated 10/19/2022
# Configure alerts in Azure Monitor for SAP solutions in Azure portal
-In this how-to guide, you'll learn how to configure alerts in Azure Monitor for SAP solutions. You can configure alerts and notifications from the [Azure portal](https://azure.microsoft.com/features/azure-portal) using its browser-based interface.
+In this how-to guide, you learn how to configure alerts in Azure Monitor for SAP solutions. You can configure alerts and notifications from the [Azure portal](https://azure.microsoft.com/features/azure-portal) using its browser-based interface.
## Prerequisites
In this how-to guide, you'll learn how to configure alerts in Azure Monitor for
:::image type="content" source="./media/get-alerts-portal/ams-alert-5.png" alt-text="Screenshot showing result of alert configuration." lightbox="./media/get-alerts-portal/ams-alert-5.png":::
+## View and manage alerts in a centralized experience (Preview)
+This enhanced view streamlines alert management by providing a unified view of all alerts and alert rules across various providers. This consolidated approach enables you to efficiently manage and monitor alerts, improving your overall experience with Azure Monitor for SAP solutions.
+
+- **Centralized Alert Management**:
+Gain a holistic view of all alerts fired across different providers within a single, intuitive interface. With the new Alerts experience, you can easily track and manage alerts from various sources in one place, providing a comprehensive overview of your SAP landscape's health.
+
+- **Unified Alert Rules**:
+Simplify your alert configuration by centralizing all alert rules across different providers. This streamlined approach ensures consistency in rule management, making it easier to define, update, and maintain alert rules for your SAP solutions.
+
+- **Grid View for Rule Status and Bulk Operations**:
+Efficiently manage your alert rules using the grid view, allowing you to see the status of all rules and make bulk changes with ease. Enable or disable multiple rules simultaneously, providing a seamless experience for maintaining the health of your SAP environment.
+
+- **Alert Action Group Management**:
+Take control of your alert action groups directly from the new Alerts experience. Manage and configure alert action groups effortlessly, ensuring that the right stakeholders are notified promptly when critical alerts are triggered.
+
+- **Alert Processing Rules for Maintenance Periods**:
+Enable alert processing rules, a powerful feature that allows you to take specific actions or suppress alerts during maintenance periods. Customize the behavior of alerts to align with your maintenance schedule, minimizing unnecessary notifications and disruptions.
+
+- **Export to CSV**:
+Facilitate data analysis and reporting by exporting fired alerts and alert rules to CSV format. This feature empowers you to share, analyze, and archive alert data seamlessly, supporting your organization's reporting and compliance requirements.
+
+To access the new Alerts experience in Azure Monitor for SAP Solutions:
+
+1. Navigate to the Azure portal.
+1. Select your Azure Monitor for SAP Solutions instance.
+ :::image type="content" source="./media/get-alerts-portal/new-alerts-view.png" alt-text="Screenshot showing central alerts view." lightbox="./media/get-alerts-portal/new-alerts-view.png":::
+1. Select the **Alerts** tab to explore the enhanced alert management capabilities.
+ ## Next steps Learn more about Azure Monitor for SAP solutions.
search Search Get Started Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-retrieval-augmented-generation.md
In this quickstart:
1. Provide an index name that's unique in your search service.
-1. Check **Add vector search to this search index.**
+1. Check **Add vector search to this search index.** This option tokenizes your content and generates embeddings.
-1. Select **Azure OpenaI - text-embedding-ada-002**.
+1. Select **Azure OpenAI - text-embedding-ada-002**. This embedding model accepts a maximum of 8192 tokens for each chunk. Data chunking is internal and nonconfigurable.
1. Check the acknowledgment that Azure AI Search is a billable service. If you're using an existing search service, there's no extra charge for vector store unless you add semantic ranking. If you're creating a new service, Azure AI Search becomes billable upon service creation. 1. Select **Next**.
-1. In Upload files, select the four files and then select **Upload**.
+1. In **Upload files**, select the four files and then select **Upload**. The file size limit is 16 MB.
1. Select **Next**.
In this quickstart:
## Chat with your data
-1. Review advanced settings that determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are provided to the model to generate its response.
+The playground gives you options for configuring and monitoring chat. On the right, model configuration determines which model formulates an answer using the search results from Azure AI Search. The input token progress indicator keeps track of the token count of the question you submit.
- Strictness determines whether the model supplements the query with its own information. Level of 5 is no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of inferring the intent of the query.
+Advanced settings on the left determine how much flexibility the chat model has in supplementing the grounding data, and how many chunks are provided to the model to generate its response.
- Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
++ Strictness level 5 means no supplementation. Only your grounding data is used, which means the search engine plays a large role in the quality of the response. Semantic ranking can be helpful in this scenario because the ranking models do a better job of inferring the intent of the query. Lower levels of strictness produce more verbose answers, but might also include information that isn't in your index.
- :::image type="content" source="media/search-get-started-rag/azure-openai-studio-advanced-settings.png" alt-text="Screenshot of the advanced settings.":::
++ Retrieved documents are the number of matching search results used to answer the question. It's capped at 20 to minimize latency and to stay under the model input limits.
-1. Start with these settings:
+ :::image type="content" source="media/search-get-started-rag/azure-openai-studio-advanced-settings.png" alt-text="Screenshot of the advanced settings.":::
+
+1. Start with these advanced settings:
+ Verify the **Limit responses to your data content** option is selected. + Strictness set to 3 or 4.
- + Retrieved documents set to 20. Given chunk sizes of 1024 tokens, a setting of 20 gives you roughly 20,000 tokens to use for generating responses. The tradeoff is query latency, but you can experiment with chat replay to find the right balance.
+ + Retrieved documents set to 20. Maximum documents give the model more information to work with when generating responses. The tradeoff for maximum documents is increased query latency, but you can experiment with chat replay to find the right balance.
1. Send your first query. The chat models perform best in question and answer exercises. For example, "who gave the Gettysburg speech" or "when was the Gettysburg speech delivered". More complex queries, such as "why was Gettysburg important", perform better if the model has some latitude to answer (lower levels of strictness) or if semantic ranking is enabled.
- Queries that require deeper analysis or language understanding, such as "how many speeches are in the vector store" or "what's in this vector store", will probably fail to return a response. In RAG pattern chat scenarios, information retrieval is keyword and similarity search against the query string, where the search engine looks for chunks having exact or similar terms, phrases, or construction. The return payload might be insufficient for handling an open-ended question.
+ Queries that require deeper analysis or language understanding, such as "how many speeches are in the vector store", will probably fail. Remember that the search engine looks for chunks having exact or similar terms, phrases, or construction to the query. And while the model might understand the question, if search results are chunks from speeches, it's not the right information to answer that kind of question.
- Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store, which means adopting an approach that allows more than 20 chunks. You could modify the generated code (assuming you [deploy the solution](/azure/ai-services/openai/use-your-data-quickstart#deploy-your-model)) to allow for [exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) on your queries.
+ Finally, chats are constrained by the number of documents (chunks) returned in the response (limited to 3-20 in Azure OpenAI Studio playground). As you can imagine, posing a question about "all of the titles" requires a full scan of the entire vector store. You could modify the generated code (assuming you [deploy the solution](/azure/ai-services/openai/use-your-data-quickstart#deploy-your-model)) to allow for [service-side exhaustive search](vector-search-how-to-create-index.md#add-a-vector-search-configuration) on your queries.
:::image type="content" source="media/search-get-started-rag/chat-results.png" lightbox="media/search-get-started-rag/chat-results.png" alt-text="Screenshot of a chat session.":::
In this quickstart:
In the playground, it's easy to start over with different data and configurations and compare the results. If you didn't try **Hybrid + semantic** the first time, perhaps try again with [semantic ranking enabled](semantic-how-to-enable-disable.md).
-We also provide code samples that demonstrate the full range of APIs for RAG applications. Samples are available in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
+If you need customization and tuning that the playground can't provide, take a look at code samples that demonstrate the full range of APIs for RAG applications based on Azure AI Search. Samples are available in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python), [C#](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-dotnet), and [JavaScript](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-javascript).
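As a taste of that API-level approach, here's a rough PowerShell sketch of calling the Azure OpenAI "on your data" chat completions REST API with an Azure AI Search data source. The endpoint, deployment, index, and key values are placeholders, and the parameter names assume a recent preview API version, so verify them against your version before use.

```powershell
# A sketch only; every <placeholder> below must be replaced with your values.
$body = @{
    messages     = @(@{ role = "user"; content = "Who gave the Gettysburg speech?" })
    data_sources = @(@{
        type       = "azure_search"
        parameters = @{
            endpoint        = "https://<search-service>.search.windows.net"
            index_name      = "<index-name>"
            authentication  = @{ type = "api_key"; key = "<search-api-key>" }
            strictness      = 3     # the playground's strictness slider
            top_n_documents = 20    # the playground's "retrieved documents"
        }
    })
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://<aoai-resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-02-15-preview" `
    -Headers @{ "api-key" = "<azure-openai-key>"; "Content-Type" = "application/json" }
```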
## Clean up
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
We recommend the following actions to ensure your general security posture:
- **Ensure that your organization has extended detection and response (XDR) and security information and event management (SIEM) solutions in place**, such as [Microsoft Defender XDR for Endpoint](/microsoft-365/security/defender/microsoft-365-defender), [Microsoft Sentinel](../../sentinel/overview.md), and [Microsoft Defender for IoT](../../defender-for-iot/organizations/index.yml). -- **Review Microsoft's Enterprise access model**.
+- **Review [Microsoft's Enterprise access model](/security/privileged-access-workstations/privileged-access-access-model)**.
### Improve identity security posture
This section provides possible methods and steps to consider when building your
> [!IMPORTANT] > The exact steps required in your organization will depend on what persistence you've discovered in your investigation, and how confident you are that your investigation was complete and has discovered all possible entry and persistence methods. >
-> Ensure that any actions taken are performed from a trusted device, built from a clean source. For example, use a fresh, privileged access workstation.
+> Ensure that any actions taken are performed from a trusted device, built from a [clean source](/security/privileged-access-workstations/privileged-access-access-model). For example, use a fresh, [privileged access workstation](/security/privileged-access-workstations/privileged-access-deployment).
> The following sections include the following types of recommendations for remediating and retaining administrative control:
In addition to the recommendations listed earlier in this article, we also recom
|Activity |Description | ||| |**Rebuild affected systems** | Rebuild systems that were identified as compromised by the attacker during your investigation. |
-|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see Securing Privileged Access. |
+|**Remove unnecessary admin users** | Remove unnecessary members from Domain Admins, Backup Operators, and Enterprise Admin groups. For more information, see [Securing Privileged Access](/security/privileged-access-workstations/overview). |
|**Reset passwords to privileged accounts** | Reset passwords of all privileged accounts in the environment; a sketch follows this table. <br><br>**Note**: Privileged accounts are not limited to built-in groups, but can also be groups that are delegated access to server administration, workstation administration, or other areas of your environment. |
|**Reset the krbtgt account** | Reset the **krbtgt** account twice using the [New-KrbtgtKeys](https://github.com/microsoft/New-KrbtgtKeys.ps1/blob/master/New-KrbtgtKeys.ps1) script. <br><br>**Note**: If you are using Read-Only Domain Controllers, you will need to run the script separately for Read-Write Domain Controllers and for Read-Only Domain Controllers. |
|**Schedule a system restart** | After you validate that no persistence mechanisms created by the attacker exist or remain on your system, schedule a system restart to assist with removing memory-resident malware. |
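To illustrate the password-reset row above, here's a sketch that assumes the on-premises ActiveDirectory (RSAT) module on a trusted admin workstation; the group list mirrors the table, and everything else is hypothetical.

```powershell
# Assumes the ActiveDirectory (RSAT) module; run from a trusted, clean device.
Import-Module ActiveDirectory

$groups  = "Domain Admins", "Enterprise Admins", "Backup Operators"
$members = $groups | ForEach-Object { Get-ADGroupMember -Identity $_ -Recursive } |
    Where-Object { $_.objectClass -eq "user" } |
    Sort-Object -Property distinguishedName -Unique

foreach ($member in $members) {
    # Build a random 24-character password from printable ASCII characters.
    $newPassword = -join ((33..126) | Get-Random -Count 24 | ForEach-Object { [char]$_ })
    Set-ADAccountPassword -Identity $member.distinguishedName -Reset `
        -NewPassword (ConvertTo-SecureString $newPassword -AsPlainText -Force)
}
```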
service-connector Quickstart Cli Functions Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-functions-connection.md
This quickstart shows you how to connect Azure Functions to other Cloud resource
- This quickstart requires version 2.30.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - This quickstart assumes that you already have an Azure Function. If you don't have one yet, [create an Azure Function](../azure-functions/create-first-function-cli-python.md).-- This quickstart assumes that you already have an Azure Storage account. If you don't have one yet, [create a Azure Storage account](../storage/common/storage-account-create.md).
+- This quickstart assumes that you already have an Azure Storage account. If you don't have one yet, [create an Azure Storage account](../storage/common/storage-account-create.md).
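For orientation, here's a hedged sketch of what creating such a connection with the Service Connector CLI can look like; both resource IDs and the blob-service suffix are assumptions to verify against the current CLI reference.

```powershell
# A sketch only; the resource IDs below are placeholders for your resources.
$functionAppId = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<function-app>"
$storageId     = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default"

# Connect the function app to Blob Storage with a system-assigned identity.
az functionapp connection create storage-blob `
    --source-id $functionAppId `
    --target-id $storageId `
    --system-identity
```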
## Initial set-up
service-fabric Service Fabric Tutorial Create Vnet And Linux Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-tutorial-create-vnet-and-linux-cluster.md
The following is a template snippet for the Service Fabric Linux extension:
"durabilityLevel": "Silver", "enableParallelJobs": true, "nicPrefixOverride": "[variables('subnet0Prefix')]",
- "dataPath": "D:\\\\SvcFab",
"certificate": { "commonNames": [ "[parameters('certificateCommonName')]"
spring-apps Concept Manage Monitor App Spring Boot Actuator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-manage-monitor-app-spring-boot-actuator.md
To observe the configuration and configurable environment, we need to enable `en
You can now go back to the app overview pane and wait until the Provisioning Status changes to "Succeeded". There will be more than one running instance.
-> [!Note]
+> [!NOTE]
> Once you expose the app to the public, these actuator endpoints are exposed to the public as well. You can hide all endpoints by deleting the environment variable `management.endpoints.web.exposure.include` and setting `management.endpoints.web.exposure.exclude=*`.

## View the actuator endpoint to view application information
spring-apps How To Appdynamics Java Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-appdynamics-java-agent-monitor.md
To understand the limitations of the AppDynamics Agent, see [Monitor Azure Sprin
## Next steps
-* [Use Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md)
+[Use Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md)
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dynatrace-one-agent-monitor.md
For information about limitations when deploying Dynatrace OneAgent in applicati
## Next steps
-* [Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md)
+[Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md)
spring-apps How To Launch From Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-launch-from-source.md
az spring app show-deploy-log --name <app-name>
## Next steps
-* [Quickstart: Monitoring Azure Spring Apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md)
+[Quickstart: Monitoring Azure Spring Apps with logs, metrics, and tracing](quickstart-logs-metrics-tracing.md)
More samples are available on GitHub: [Azure Spring Apps Samples](https://github.com/Azure-Samples/azure-spring-apps-samples).
spring-apps How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-new-relic-monitor.md
For a vnet injection instance of Azure Spring Apps, you need to make sure the ou
## Next steps
-* [Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md)
+[Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md)
spring-apps How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-prepare-app-deployment.md
This article shows how to prepare an existing Steeltoe application for deploymen
This article explains the dependencies, configuration, and code that are required to run a .NET Core Steeltoe app in Azure Spring Apps. For information about how to deploy an application to Azure Spring Apps, see [Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md).
->[!Note]
+> [!NOTE]
> Steeltoe support for Azure Spring Apps is currently offered as a public preview. Public preview offerings allow customers to experiment with new features prior to their official release. Public preview features and services are not meant for production use. For more information about support during previews, see the [FAQ](https://azure.microsoft.com/support/faq/) or file a [Support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Supported versions
storage Elastic San Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-performance.md
The performance of an individual volume is determined by its capacity. The maxim
## Example configuration
-Each of the example scenarios in this article uses the following configuration for the VMs and the Elastic SAN:
+Each of the example scenarios in this article uses the following configuration for the Elastic SAN:
-### VM SKUs
-
-- Standard_D2_v5 (AKS)
-- Standard_D4s_v5 (workload 1)
-- Standard_D32_v5 (workload 2)
-- Standard_D48_v5 (workload 3)
-
-### Elastic SAN limits
|Resource |Capacity |IOPS |
||||
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 01/28/2024 Last updated : 01/29/2024
Azure Files and Azure File Sync are updated regularly to offer new features and
#### Snapshot support for NFS Azure premium file shares is generally available
-Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this feature can perform share-level Snapshot management operations via REST API, PowerShell, and Azure CLI. This feature is now available in all Azure public cloud regions. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
+Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this feature can perform share-level snapshot management operations via the Azure portal, REST API, Azure PowerShell, and Azure CLI. This feature is now available in all Azure public cloud regions except West US 2. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
#### Sync upload performance improvements for Azure File Sync
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
If your mount failed, it's possible that your private endpoint wasn't set up cor
## NFS file share snapshots
-Customers using NFS Azure file shares can create, list, and delete NFS Azure file share snapshots. This capability allows users to roll back entire file systems or recover files that were accidentally deleted or corrupted. This feature is now available in all Azure public cloud regions.
+Customers using NFS Azure file shares can create, list, and delete NFS Azure file share snapshots. This capability allows users to roll back entire file systems or recover files that were accidentally deleted or corrupted.
> [!IMPORTANT] > You should mount your file share before creating snapshots. If you create a new NFS file share and take snapshots before mounting the share, attempting to list the snapshots for the share will return an empty list. We recommend deleting any snapshots taken before the first mount and re-creating them after you've mounted the share.
Azure Backup isn't currently supported for NFS file shares.
AzCopy isn't currently supported for NFS file shares. To copy data from an NFS Azure file share or share snapshot, use file system copy tools such as rsync or fpsync.
+NFS Azure file share snapshots are available in all Azure public cloud regions except West US 2.
+ ### Create a snapshot You can create a snapshot of an NFS Azure file share using the Azure portal, Azure PowerShell, or Azure CLI. A share can support the creation of up to 200 share snapshots.
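For example, here's a minimal sketch using the Azure CLI from PowerShell; the account name, share name, and key variable are placeholders.

```powershell
# Assumes the Azure CLI is installed; the names and key variable are hypothetical.
# Creates a point-in-time snapshot of the NFS Azure file share.
az storage share snapshot `
    --name "nfsshare" `
    --account-name "mypremiumaccount" `
    --account-key $env:STORAGE_KEY
```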
storage Storage Snapshots Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-snapshots-files.md
Share snapshots provide only file-level protection. Share snapshots don't preven
- [Azure PowerShell](/powershell/module/az.storage/new-azrmstorageshare) - [Azure CLI](/cli/azure/storage/share#az-storage-share-snapshot) - [Windows](storage-how-to-use-files-windows.md#accessing-share-snapshots-from-windows)
+ - [NFS file share snapshots](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots)
- [Share snapshot FAQ](storage-files-faq.md#share-snapshots)
stream-analytics Write To Delta Table Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/write-to-delta-table-adls-gen2.md
Title: Write to a Delta Table in ADLS Gen2 (Azure Stream Analytics)
+ Title: Write to a Delta Table in Data Lake Storage Gen2 (Azure Stream Analytics)
description: This article shows how to create an ASA job writing to a delta table stored in ADLS Gen2. - Previously updated : 10/12/2022 Last updated : 01/29/2024 # Tutorial: Write to a Delta Table stored in Azure Data Lake Storage Gen2 (Public Preview)
Last updated 10/12/2022
This tutorial shows how you can create a Stream Analytics job to write to a Delta table in Azure Data Lake Storage Gen2. In this tutorial, you learn how to: > [!div class="checklist"]
-> * Deploy an event generator that sends data to your event hub
+> * Deploy an event generator that sends sample data to your event hub
> * Create a Stream Analytics job
-> * Configure Azure Data Lake Storage Gen2 to which the Delta table will be stored in
+> * Configure Azure Data Lake Storage Gen2 with a delta table
> * Run the Stream Analytics job ## Prerequisites
-Before you start, make sure you've completed the following steps:
+Before you start, complete the following steps:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Deploy the TollApp event generator to Azure, use this link to [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1. And use a new resource group for this step.
+* Deploy the TollApp event generator to Azure by using this link: [Deploy TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json). Set the 'interval' parameter to 1. Create and use a new resource group for this step.
* Create a [Data Lake Storage Gen2 account](../storage/blobs/create-data-lake-storage-account.md). ## Create a Stream Analytics job 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Select **Create a resource** in the upper left-hand corner of the Azure portal.
-3. Select **Analytics** > **Stream Analytics job** from the results list.
-4. On the **New Stream Analytics job** page, follow these steps:
+1. Select **All services** on the left menu.
+1. Move the mouse over **Stream Analytics jobs** in the **Analytics** section, and select **+ (plus)**.
+
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/all-services-stream-analytics.png" alt-text="Screenshot that shows the selection of Stream Analytics jobs in the All services page.":::
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+1. Select **Analytics** > **Stream Analytics job** from the results list.
+1. On the **New Stream Analytics job** page, follow these steps:
1. For **Subscription**, select your Azure subscription. 2. For **Resource group**, select the same resource that you used earlier in the TollApp deployment. 3. For **Name**, enter a name for the job. Stream Analytics job name can contain alphanumeric characters, hyphens, and underscores only and it must be between 3 and 63 characters long. 4. For **Hosting environment**, confirm that **Cloud** is selected. 5. For **Stream units**, select **1**. Streaming units represent the computing resources that are required to execute a job. To learn about scaling streaming units, refer to [understanding and adjusting streaming units](stream-analytics-streaming-unit-consumption.md) article.
- 6. Select **Review + create** at the bottom of the page.
-
-
-5. On the **Review + create** page, review settings, and select **Create** to create a Stream Analytics page.
-6. On the deployment page, select **Go to resource** to navigate to the **Stream Analytics job** page.
+
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/create-job.png" alt-text="Screenshot that shows the Create Stream Analytics job page.":::
+1. Select **Review + create** at the bottom of the page.
+1. On the **Review + create** page, review settings, and select **Create** to create a Stream Analytics page.
+1. On the deployment page, select **Go to resource** to navigate to the **Stream Analytics job** page.
## Configure job input The next step is to define an input source for the job to read data using the event hub created in the TollApp deployment. 1. Find the Stream Analytics job created in the previous section.
2. In the **Job Topology** section of the Stream Analytics job, select **Inputs**.
+3. Select **+ Add input** and **Event hub**.
-3. Select **+ Add stream input** and **Event hub**.
-
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/add-input.png" alt-text="Screenshot that shows the Inputs page.":::
4. Fill out the input form with the following values created through [TollApp Azure Template](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-stream-analytics%2Fmaster%2FSamples%2FTollApp%2FVSProjects%2FTollAppDeployment%2Fazuredeploy.json): 1. For **Input alias**, enter **entrystream**.
The next step is to define an input source for the job to read data using the ev
4. For **Event Hub namespace**, select the event hub namespace you created in the previous section. 5. Use default options on the remaining settings and select **Save**.
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/select-event-hub.png" alt-text="Screenshot that shows the selection of the input event hub.":::
+ ## Configure job output The next step is to define an output sink where the job can write data to. In this tutorial, you write output to a Delta table in Azure Data Lake Storage Gen2. 1. In the **Job Topology** section of the Stream Analytics job, select the **Outputs** option.
+2. Select **+ Add output** > **Blob storage/ADLS Gen2**.
-2. Select **+ Add** > **Blob storage/ADLS Gen2**.
-
-3. Fill the output form with the following details and select **Save**:
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/output-type.png" alt-text="Screenshot that shows the Outputs page.":::
+1. Fill the output form with the following details and select **Save**:
1. For **Output alias**, enter **DeltaOutput**. 2. Choose **Select Blob storage/ADLS Gen2 from your subscriptions**. 3. For **Subscription**, select your Azure subscription.
- 4. For **Storage account**, choose the ADLS Gen2 account you created.
- 5. For **container**, provide a unique container name.
- 6. For **Event Serialization Format**, select **Delta Lake**. Although Delta lake is listed as one of the options here, it isn't a data format. Delta Lake uses versioned Parquet files to store your data. To learn more about [Delta lake](write-to-delta-lake.md).
+ 4. For **Storage account**, choose the ADLS Gen2 account (the one that starts with **tollapp**) you created.
+ 5. For **container**, select **Create new** and provide a unique **container name**.
+ 6. For **Event Serialization Format**, select **Delta Lake (Preview)**. Although Delta Lake is listed as one of the options here, it isn't a data format. Delta Lake uses versioned Parquet files to store your data. To learn more, see [Delta Lake](write-to-delta-lake.md).
7. For **Delta table path**, enter **tutorial folder/delta table**. 8. Use default options on the remaining settings and select **Save**.
-
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/configure-output.png" alt-text="Screenshot that shows configuration of the output.":::
+ ## Create queries At this point, you have a Stream Analytics job set up to read an incoming data stream. The next step is to create a query that analyzes the data in real time. The queries use a SQL-like language that has some extensions specific to Stream Analytics.
At this point, you have a Stream Analytics job set up to read an incoming data s
   INTO DeltaOutput
   FROM EntryStream TIMESTAMP BY EntryTime
   ```
3. Select **Save query** on the toolbar.
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/configure-query.png" alt-text="Screenshot that shows query for the job.":::
++ ## Start the Stream Analytics job and check the output 1. Return to the job overview page in the Azure portal, and select **Start**.
-2. On the **Start job** page, confirm that **Now** is selected for Job output start time, and then select **Start** at the bottom of the page.
-3. After few minutes, in the portal, find the storage account & the container that you've configured as output for the job. You can now see the delta table in the folder specified in the container. The job takes a few minutes to start for the first time, after it's started, it will continue to run as the data arrives.
+
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/start-job-menu.png" alt-text="Screenshot that shows the selection of Start job button on the Overview page.":::
+1. On the **Start job** page, confirm that **Now** is selected for Job output start time, and then select **Start** at the bottom of the page.
+
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/start-job-page.png" alt-text="Screenshot that shows the selection of Start job page.":::
+1. After a few minutes, in the portal, find the storage account and the container that you configured as output for the job. You can now see the delta table in the folder specified in the container. The job takes a few minutes to start the first time. After it starts, it continues to run as the data arrives.
+
+ :::image type="content" source="./media/write-to-delta-table-data-lake-storage/output-data.png" alt-text="Screenshot that shows output data files in the container." lightbox="./media/write-to-delta-table-data-lake-storage/output-data.png":::
+ ## Clean up resources
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
To use a smart card to authenticate to Microsoft Entra ID, you must first [confi
## Session host authentication
-If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports.
-
+If you haven't already enabled [single sign-on](#single-sign-on-sso) or saved your credentials locally, you'll also need to authenticate to the session host when launching a connection. The following list describes which types of authentication each Azure Virtual Desktop client currently supports. Some clients might require a specific version to be used, which you can find in the link for each authentication type.
|Client |Supported authentication type(s) | ||| |Windows Desktop client | Username and password <br>Smart card <br>[Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) <br>[Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) <br>[Microsoft Entra authentication](configure-single-sign-on.md) | |Azure Virtual Desktop Store app | Username and password <br>Smart card <br>[Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) <br>[Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) <br>[Microsoft Entra authentication](configure-single-sign-on.md) | |Remote Desktop app | Username and password |
-|Web client | Username and password |
-|Android client | Username and password |
-|iOS client | Username and password |
-|macOS client | Username and password <br>Smart card: support for smart card-based sign in using smart card redirection at the Winlogon prompt when NLA is not negotiated. |
+|Web client | Username and password<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
+|Android client | Username and password<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
+|iOS client | Username and password<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
+|macOS client | Username and password <br>Smart card: support for smart card-based sign in using smart card redirection at the Winlogon prompt when NLA is not negotiated.<br>[Microsoft Entra authentication](configure-single-sign-on.md) |
>[!IMPORTANT] >In order for authentication to work properly, your local machine must also be able to access the [required URLs for Remote Desktop clients](safe-url-list.md#remote-desktop-clients).
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
There are different classifications of data for Azure Virtual Desktop, such as c
Azure Virtual Desktop for Azure Stack HCI has the following limitations: -- Session hosts running on Azure Stack HCI don't support some Azure Virtual Desktop features, such as:
+- You can't use some Azure Virtual Desktop features when your session hosts are running on Azure Stack HCI, such as:
- [Azure Virtual Desktop Insights](insights.md) - [Autoscale](autoscale-scaling-plan.md) - [Session host scaling with Azure Automation](set-up-scaling-script.md) - [Start VM On Connect](start-virtual-machine-connect.md)
- - [Per-user access pricing](./remote-app-streaming/licensing.md)
+ - [Per-user access pricing](licensing.md)
- Each host pool must only contain session hosts on Azure or on Azure Stack HCI. You can't mix session hosts on Azure and on Azure Stack HCI in the same host pool. -- Session hosts on Azure Stack HCI don't support certain cloud-only Azure services.- - Azure Stack HCI supports many types of hardware and on-premises networking capabilities, so performance and user density might vary compared to session hosts running on Azure. Azure Virtual Desktop's [virtual machine sizing guidelines](/windows-server/remote/remote-desktop-services/virtual-machine-recs) are broad, so you should use them for initial performance estimates and monitor after deployment. -- Templates may show failures in certain cases at the domain-joining step. To proceed, you can manually join the session hosts to the domain. For more information, see [VM provisioning through Azure portal on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).- ## Next steps To learn how to deploy Azure Virtual Desktop for Azure Stack HCI, see [Deploy Azure Virtual Desktop](deploy-azure-virtual-desktop.md).
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
To use Active Directory accounts for the share permissions of your file share, y
Join-AzStorageAccount `
    -ResourceGroupName $ResourceGroupName `
    -StorageAccountName $StorageAccountName `
- -DomainAccountType "ComputerAccount" `
- -EncryptionType "AES256"
+ -DomainAccountType "ComputerAccount"
```
- You can also specify the encryption algorithm used for Kerberos authentication in the previous command to `RC4` if you need to. Using AES256 is recommended.
- 1. To verify the storage account has joined your domain, run the commands below and review the output, replacing the values for `$resourceGroupName` and `$storageAccountName` with your values: ```powershell
virtual-desktop Install Windows Client Per User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-windows-client-per-user.md
# Install the Remote Desktop client for Windows on a per-user basis with Intune or Configuration Manager
-You can install the [Remote Desktop client for Windows](./users/connect-windows.md) on either a per-system or per-user basis. Installing it on a per-system basis installs the client on the machines for all users by default, and administrators control updates. Per-user installation installs the application to a subfolder within the local AppData folder of each user's profile, enabling users to install updates with needing administrative rights.
+You can install the [Remote Desktop client for Windows](./users/connect-windows.md) on either a per-system or per-user basis. Installing it on a per-system basis installs the client on the machines for all users by default, and administrators control updates. Per-user installation installs the application to a subfolder within the local AppData folder of each user's profile, enabling users to install updates without needing administrative rights.
When you install the client using `msiexec.exe`, per-system is the default method of client installation. You can use the parameters `ALLUSERS=2 MSIINSTALLPERUSER=1` with `msiexec` to install the client per-user, however if you're deploying the client with Intune or Configuration Manager, using `msiexec` directly to install the client causes it to be installed per-system, regardless of the parameters used. Wrapping the `msiexec` command in a PowerShell script enables the client to be successfully installed per-user.
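Here's a minimal sketch of that wrapper idea; the MSI path and the quiet-install switch are assumptions, while `ALLUSERS=2 MSIINSTALLPERUSER=1` are the properties named above.

```powershell
# A sketch; the MSI file name is a placeholder for the downloaded client MSI.
$msi = Join-Path $PSScriptRoot "RemoteDesktop_x64.msi"

# ALLUSERS=2 combined with MSIINSTALLPERUSER=1 requests a per-user installation.
Start-Process -FilePath "msiexec.exe" -Wait -ArgumentList @(
    "/i", "`"$msi`"",
    "/qn",
    "ALLUSERS=2",
    "MSIINSTALLPERUSER=1"
)
```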
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
When you need encryption to be enabled on a running VM in Azure, Azure Disk Encr
### <a name="bkmk_ADappPSH"></a> Set up a Microsoft Entra app and service principal with Azure PowerShell
-To execute the following commands, get and use the [Azure PowerShell module](/powershell/azure/what-is-azure-powershell).https://learn.microsoft.com/en-us/powershell/azure/what-is-azure-powershell?view=azps-11.2.0
+To execute the following commands, get and use the [Azure PowerShell module](/powershell/azure/what-is-azure-powershell).
1. Use the [New-AzADApplication](/powershell/module/az.resources/new-azadapplication) PowerShell cmdlet to create a Microsoft Entra application. MyApplicationHomePage and the MyApplicationUri can be any values you wish.
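A sketch of this step follows; the display name and URI are placeholder values, and parameter availability varies across Az module versions, so treat this as an outline rather than exact syntax.

```powershell
# Placeholder values; requires the Az.Resources module.
$app = New-AzADApplication -DisplayName "MyADEApp" `
    -HomePage "https://contoso.example/MyADEApp" `
    -IdentifierUris "https://contoso.example/MyADEApp"

# Create the service principal for the application. In older Az versions the
# application ID property is named ApplicationId rather than AppId.
$sp = New-AzADServicePrincipal -ApplicationId $app.AppId
```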
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Previously updated : 07/12/2023 Last updated : 01/25/2024
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks.
+This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks. You can't expand the size of striped volumes.
An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems are partitioned with [master boot record (MBR)](https://wikipedia.org/wiki/Master_boot_record) by default. MBR limits the usable size to 2 TiB. If you need more than 2 TiB, create and attach data disks and use them for data storage. If you need to store data on the OS disk and require the additional space, convert it to GUID Partition Table (GPT).
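For the Azure-side resize itself, before any OS-level steps, here's a minimal Azure PowerShell sketch with placeholder resource names; depending on the disk type and target size, the VM might need to be deallocated or the disk detached first.

```powershell
# Requires the Az.Compute module; the resource names are placeholders.
New-AzDiskUpdateConfig -DiskSizeGB 512 |
    Update-AzDisk -ResourceGroupName "myResourceGroup" -DiskName "myDataDisk"
```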
An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems
## <a id="identifyDisk"></a>Identify Azure data disk object within the operating system ##
-In the case of expanding a data disk when there are several data disks present on the VM, it may be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it will be clearly labeled in the Azure portal as the OS disk.
+When expanding a data disk on a VM that has several data disks attached, it can be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it is clearly labeled in the Azure portal as the OS disk.
Start by identifying the relationship between disk utilization, mount point, and device, with the ```df``` command.
Filesystem Type Size Used Avail Use% Mounted on
/dev/sde1 ext4 32G 49M 30G 1% /opt/db/log ```
-Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` will show the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This will be important later.
+Here we can see, for example, the `/opt/db/data` filesystem is nearly full, and is located on the `/dev/sdd1` partition. The output of `df` shows the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This is important later.
-Now locate the LUN which correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command will show that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
+Now locate the LUN that correlates to `/dev/sdd` by examining the contents of `/dev/disk/azure/scsi1`. The output of the following `ls` command shows that the device known as `/dev/sdd` within the Linux OS is located at LUN1 when looking in the Azure portal.
```bash sudo ls -alF /dev/disk/azure/scsi1/
In the following samples, replace example parameter names such as *myResourceGro
### Detecting a changed disk size
-If a data disk was expanded without downtime using the procedure mentioned previously, the disk size won't be changed until the device is rescanned, which normally only happens during the boot process. This rescan can be called on-demand with the following procedure. In this example we have detected using the methods in this document that the data disk is currently `/dev/sda` and has been resized from 256GB to 512GB.
+If a data disk was expanded without downtime using the procedure mentioned previously, the disk size won't be changed until the device is rescanned, which normally only happens during the boot process. This rescan can be called on-demand with the following procedure. In this example, we used the methods in this document to determine that the data disk is currently `/dev/sda` and that it has been resized from 256 GiB to 512 GiB.
1. Identify the currently recognized size on the first line of output from `fdisk -l /dev/sda`
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
└─rootvg-rootlv 253:6 0 2G 0 lvm /
```
-1. Expand the partition containing this PV using *growpart*, the device name, and partition number. Doing so will expand the specified partition to use all the free contiguous space on the device.
+1. Expand the partition containing this PV using *growpart*, the device name, and partition number. Doing so expands the specified partition to use all the free contiguous space on the device.
```bash growpart /dev/sda 4
virtual-machines Metrics Vm Usage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/metrics-vm-usage-rest.md
Previously updated : 06/13/2018 Last updated : 01/25/2024 + # Get Virtual Machine usage metrics using the REST API
This example shows how to retrieve the CPU usage for a Linux Virtual Machine using the [Azure REST API](/rest/api/azure/).
-Complete reference documentation and additional samples for the REST API are available in the [Azure Monitor REST reference](/rest/api/monitor).
+Complete reference documentation and samples for the REST API are available in the [Azure Monitor REST reference article](/rest/api/monitor).
## Build the request
virtual-machines Expand Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/expand-os-disk.md
Previously updated : 07/12/2023 Last updated : 01/25/2024
An OS disk has a maximum capacity of 4,095 GiB. However, many operating systems
> Shrinking an existing disk isn't supported and may result in data loss. > > After expanding the disks, you need to [Expand the volume in the operating system](#expand-the-volume-in-the-operating-system) to take advantage of the larger disk.
+>
+> You can't expand the size of striped volumes.
## Expand without downtime