Updates from: 09/09/2022 01:09:36
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Fido2 Hardware Vendor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-fido2-hardware-vendor.md
Last updated 08/02/2021
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
# Application and service principal objects in Azure Active Directory
-This article describes application registration, application objects, and service principals in Azure Active Directory (Azure AD): what they're, how they're used, and how they're related to each other. A multi-tenant example scenario is also presented to illustrate the relationship between an application's application object and corresponding service principal objects.
+This article describes application registration, application objects, and service principals in Azure Active Directory (Azure AD): what they are, how they're used, and how they're related to each other. A multi-tenant example scenario is also presented to illustrate the relationship between an application's application object and corresponding service principal objects.
## Application registration
active-directory Configure User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md
Title: Configure how users consent to applications description: Learn how to manage how and when users can consent to applications that will have access to your organization's data. -+ Last updated 08/10/2022-++ #customer intent: As an admin, I want to configure how end-users consent to applications.
active-directory Manage App Consent Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md
Title: Manage app consent policies description: Learn how to manage built-in and custom app consent policies to control when consent can be granted. -+ Last updated 09/02/2021-++ #customer intent: As an admin, I want to manage app consent policies for enterprise applications in Azure AD
active-directory Qs Configure Template Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md
In this section, you assign a user-assigned managed identity to an Azure VM usin
### Assign a user-assigned managed identity to an Azure VM
-To assign a user-assigned identity to a VM, your account needs the [Virtual Machine Contributor](../../role-based-access-control/built-in-roles.md#virtual-machine-contributor) and [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role assignments. No other Azure AD directory role assignments are required.
+To assign a user-assigned identity to a VM, your account needs the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role assignment. No other Azure AD directory role assignments are required.
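As an illustrative sketch only (not part of the template in this article), granting that role with the Azure CLI could look like the following; the account and identity values are placeholders:

```azurecli-interactive
# Grant your deployment account the Managed Identity Operator role over the
# user-assigned identity (placeholder values shown).
az role assignment create \
    --assignee "<your-user-or-service-principal-id>" \
    --role "Managed Identity Operator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
```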
1. Under the `resources` element, add the following entry to assign a user-assigned managed identity to your VM. Be sure to replace `<USERASSIGNEDIDENTITY>` with the name of the user-assigned managed identity you created.
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
Title: Azure Active Directory SLA performance | Microsoft Docs
description: Learn about the Azure AD SLA performance documentationcenter: ''-+ editor: ''
na Previously updated : 08/26/2022- Last updated : 09/08/2022+
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
Previously updated : 05/13/2022 Last updated : 09/08/2022
To complete these steps, you'll need the values you recorded earlier:
set entity-id <Identifier (Entity ID)>
set single-sign-on-url <Reply URL>
set single-logout-url <Logout URL>
- set idp-entity-id <Azure AD Identifier>
+ set idp-entity-id <Azure Login URL>
set idp-single-sign-on-url <Azure AD Identifier>
set idp-single-logout-url <Azure Logout URL>
set idp-cert <Base64 SAML Certificate Name>
active-directory Infrascale Cloud Backup Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/infrascale-cloud-backup-tutorial.md
Previously updated : 06/24/2022 Last updated : 09/08/2022
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** textbox, type the URL: `https://dashboard.managedoffsitebackup.net/Account/AssertionConsumerService`
- c. In the **Sign-on URL** text box, type one of the following URLs:
-
- | **Sign-on URL** |
- ||
- | `https://dashboard.avgonlinebackup.com/Account/SingleSignOn` |
- | `https://dashboard.infrascale.com/Account/SingleSignOn` |
- | `https://dashboard.managedoffsitebackup.net/Account/SingleSignOn` |
- | `https://dashboard.sosonlinebackup.com/Account/SingleSignOn` |
- |`https://dashboard.trustboxbackup.com/Account/SingleSignOn` |
- | `https://radialpoint-dashboard.managedoffsitebackup.net/Account/SingleSignOn` |
- | `https://dashboard-cw.infrascale.com/Account/SingleSignOn` |
- | `https://dashboard.digicelcloudbackup.com/Account/SingleSignOn` |
- | `https://dashboard-cw.sosonlinebackup.com/Account/SingleSignOn` |
- |`https://dashboard.my-data.dk/Account/SingleSignOn` |
- |`https://dashboard.beesafe.nu/Account/SingleSignOn` |
- |`https://dashboard.bekcloud.com/Account/SingleSignOn` |
- | `https://dashboard.alltimesecure.com/Account/SingleSignOn` |
- | `https://dashboard-ec1.sosonlinebackup.com/Account/SingleSignOn` |
- | `https://dashboard.glcsecurecloud.com/Account/SingleSignOninfrascalecloudbackup.com/infrascalecloudbackup.com/` |
+ c. The **Sign-on URL** text box requires a specific URL for your company. The general pattern of the URL is:
+ `https://[OptionalPrefix]dashboard[OptionalSuffix].CompanySpecificString.[com/net,etc]/Account/SingleSignOn`
> [!Note]
- > The Identifier value is not real. Update this value with the actual Identifier URL. Contact [Infrascale Cloud Backup support team](mailto:support@infrascale.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > Do not enter this pattern in the Sign-on URL text box as-is. The value shown is not a real URL, just a general pattern. Update it with the actual URL obtained from Infrascale. Contact the [Infrascale Cloud Backup support team](mailto:support@infrascale.com) to get the value.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
active-directory Decentralized Identifier Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/decentralized-identifier-overview.md
Our digital and physical lives are increasingly linked to the apps, services, and devices we use to access a rich set of experiences. This digital transformation allows us to interact with hundreds of companies and thousands of other users in ways that were previously unimaginable.
-But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity (DID) solution for individuals and organizations.
+But identity data has too often been exposed in security breaches. These breaches affect our social, professional, and financial lives. Microsoft believes that there's a better way. Every person has a right to an identity that they own and control, one that securely stores elements of their digital identity and preserves privacy. This primer explains how we are joining hands with a diverse community to build an open, trustworthy, interoperable, and standards-based Decentralized Identity solution for individuals and organizations.
## Why we need Decentralized Identity
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
+
+ Title: Use ImageCleaner on Azure Kubernetes Service (AKS)
+description: Learn how to use ImageCleaner to clean up stale images on Azure Kubernetes Service (AKS)
++++ Last updated : 08/26/2022++
+# Use ImageCleaner to clean up stale images on your Azure Kubernetes Service cluster (preview)
+
+It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which ImageCleaner can mitigate via automatic image identification and removal.
++
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] and the `aks-preview` CLI extension installed.
+* The `EnableImageCleanerPreview` feature flag registered on your subscription:
+
+### [Azure CLI](#tab/azure-cli)
+
+Register the `EnableImageCleanerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EnableImageCleanerPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+Register the `EnableImageCleanerPreview` feature flag by using the [Register-AzProviderPreviewFeature][register-azproviderpreviewfeature] cmdlet, as shown in the following example:
+
+```azurepowershell-interactive
+Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EnableImageCleanerPreview
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [Get-AzProviderPreviewFeature][get-azproviderpreviewfeature] cmdlet:
+
+```azurepowershell-interactive
+Get-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EnableImageCleanerPreview |
+ Format-Table -Property Name, @{name='State'; expression={$_.Properties.State}}
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [Register-AzResourceProvider][register-azresourceprovider] command:
+
+```azurepowershell-interactive
+Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
+```
+++
+## Limitations
+
+ImageCleaner does not support the following:
+
+* ARM64 node pools. For more information, see [Azure Virtual Machines with ARM-based processors][arm-vms].
+* Windows node pools.
+
+## How ImageCleaner works
+
+When enabled, an `eraser-controller-manager` pod is deployed on each agent node, which will use an `ImageList` CRD to determine unreferenced and vulnerable images. Vulnerability is determined based on a [trivy][trivy] scan, after which images with a `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL` classification are flagged. An updated `ImageList` will be automatically generated by ImageCleaner based on a set time interval, and can also be supplied manually.
+
+Once an `ImageList` is generated, ImageCleaner will remove all the images in the list from node VMs.
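+To see what ImageCleaner has deployed and which images it plans to remove, you can inspect the cluster directly. The following kubectl sketch is illustrative only; it assumes the eraser components run in the `kube-system` namespace and that the `ImageList` resource uses the default plural name `imagelists`:
+
+```bash
+# List the ImageCleaner (eraser) pods; the namespace is an assumption, adjust as needed.
+kubectl get pods -n kube-system | grep eraser
+
+# Inspect the generated ImageList custom resources and their contents.
+kubectl get imagelists.eraser.sh
+kubectl get imagelists.eraser.sh -o yaml
+```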
+++
+## Configuration options
+
+In addition to choosing between manual and automatic mode, there are several options for ImageCleaner:
+
+|Name|Description|Required|
+|-|--|--|
+|--enable-image-cleaner|Enable the ImageCleaner feature for an AKS cluster|Yes, unless disable is specified|
+|--disable-image-cleaner|Disable the ImageCleaner feature for an AKS cluster|Yes, unless enable is specified|
+|--image-cleaner-interval-hours|This parameter determines the interval time (in hours) ImageCleaner will use to run. The default value is one week, the minimum value is 24 hours and the maximum is three months.|No|
+
+## Enable ImageCleaner on your AKS cluster
+
+To create a new AKS cluster using the default interval, use [az aks create][az-aks-create]:
+
+```azurecli-interactive
+az aks create -g MyResourceGroup -n MyManagedCluster \
+ --enable-image-cleaner
+```
+
+To enable on an existing AKS cluster, use [az aks update][az-aks-update]:
+
+```azurecli-interactive
+az aks update -g MyResourceGroup -n MyManagedCluster \
+ --enable-image-cleaner
+```
+
+The `--image-cleaner-interval-hours` parameter can be specified at creation time or for an existing cluster. For example, the following command updates the interval for a cluster with ImageCleaner already enabled:
+
+```azurecli-interactive
+az aks update -g MyResourceGroup -n MyManagedCluster \
+ --image-cleaner-interval-hours 48
+```
+
+Based on your configuration, ImageCleaner will generate an `ImageList` containing non-running and vulnerable images at the desired interval. ImageCleaner will automatically remove these images from cluster nodes.
+
+## Manually remove images
+
+To manually remove images from your cluster using ImageCleaner, first create an `ImageList`. For example, save the following as `image-list.yml`:
+
+```yml
+apiVersion: eraser.sh/v1alpha1
+kind: ImageList
+metadata:
+ name: imagelist
+spec:
+ images:
+ - docker.io/library/alpine:3.7.3 # You can also use "*" to specify all non-running images
+```
+
+And apply it to the cluster:
+
+```bash
+kubectl apply -f image-list.yml
+```
+
+A job is then triggered that causes ImageCleaner to remove the specified images from all nodes.
+
+## Disable ImageCleaner
+
+To stop using ImageCleaner, you can disable it via the `--disable-image-cleaner` flag:
+
+```azurecli-interactive
+az aks update -g MyResourceGroup -n MyManagedCluster \
+ --disable-image-cleaner
+```
+
+## Logging
+
+The deletion logs are stored in the `image-cleaner-kind-worker` pods. You can check these via `kubectl logs` or via the Container Insights pod log table if the [Azure Monitor add-on](./monitor-aks.md) is enabled.
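+For example, here's a hedged sketch for pulling those logs with kubectl; the `kube-system` namespace and exact pod naming are assumptions based on the description above:
+
+```bash
+# Find the image-cleaner worker pods (the namespace is an assumption).
+kubectl get pods -n kube-system | grep image-cleaner
+
+# Stream the deletion logs from one of the worker pods.
+kubectl logs <image-cleaner-kind-worker-pod-name> -n kube-system
+```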
+
+<!-- LINKS -->
+
+[azure-cli-install]: /cli/azure/install-azure-cli
+[azure-powershell-install]: /powershell/azure/install-az-ps
+
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-feature-register]: /cli/azure/feature#az_feature_register
+[register-azproviderpreviewfeature]: /powershell/module/az.resources/register-azproviderpreviewfeature
+[az-feature-list]: /cli/azure/feature#az_feature_list
+[get-azproviderpreviewfeature]: /powershell/module/az.resources/get-azproviderpreviewfeature
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider
+
+[arm-vms]: https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/
+[trivy]: https://github.com/aquasecurity/trivy
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
Title: Abort an Azure Kubernetes Service (AKS) long running operation
description: Learn how to terminate a long running operation on an Azure Kubernetes Service cluster at the node pool or cluster level. Previously updated : 09/06/2022 Last updated : 09/08/2022
This article assumes that you have an existing AKS cluster. If you need an AKS c
## Abort a long running operation
-### [Azure REST API](#tab/azure-rest)
+### [Azure CLI](#tab/azure-cli)
-You can use the Azure REST API [Abort](/rest/api/aks/managed-clusters) operation to stop an operation against the Managed Cluster.
+You can use the [az aks nodepool](/cli/azure/aks/nodepool) command with the `operation-abort` argument to abort an operation on a node pool or a managed cluster.
-The following example terminates a process for a specified agent pool.
+The following example terminates an operation on a node pool in a specified cluster, identified by its name and the resource group that holds the cluster.
-```rest
-/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/agentPools/{agentPoolName}/abort
+```azurecli-interactive
+az aks nodepool operation-abort --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool
```
-The following example terminates a process for a specified managed cluster.
+The following example terminates an operation against a specified managed cluster, identified by its name and the resource group that holds the cluster.
-```rest
-/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/abort
+```azurecli-interactive
+az aks operation-abort --name myAKSCluster --resource-group myResourceGroup
```

In the response, an HTTP status code of 204 is returned.
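As a quick, hedged follow-up that isn't part of the original article: you can check whether the abort took effect by querying the provisioning state of the cluster or node pool. The state values you'll see (for example *Canceling* or *Canceled*) are assumptions to verify in your environment.

```azurecli-interactive
# Check the provisioning state after requesting the abort (placeholder resource names from above).
az aks show --resource-group myResourceGroup --name myAKSCluster --query provisioningState -o tsv
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool --query provisioningState -o tsv
```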
-### [Azure CLI](#tab/azure-cli)
-
-You can use the [az aks nodepool](/cli/azure/aks/nodepool) command with the `operation-abort` argument to abort an operation on a node pool or a managed cluster.
-
-The following example terminates an operation on a node pool on a specified cluster by its name and resource group that holds the cluster.
+### [Azure REST API](#tab/azure-rest)
-```azurecli-interactive
-az aks nodepool operation-abort \
-    --resource-group myResourceGroup \
-    --cluster-name myAKSCluster \
-```
+You can use the Azure REST API [Abort](/rest/api/aks/managed-clusters) operation to stop an operation against the Managed Cluster.
+
+The following example terminates a process for a specified agent pool.
+
+```rest
+/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/agentPools/{agentPoolName}/abort
+```
-The following example terminates an operation against a specified managed cluster its name and resource group that holds the cluster.
+The following example terminates a process for a specified managed cluster.
-```azurecli-interactive
-az aks operation-abort --name myAKSCluster --resource-group myResourceGroup
+```rest
+/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/abort
```

In the response, an HTTP status code of 204 is returned.
aks Quickstart Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-event-grid.md
Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid (Preview)
+ Title: Subscribe to Azure Kubernetes Service events with Azure Event Grid
description: Use Azure Event Grid to subscribe to Azure Kubernetes Service events
Last updated 07/12/2021
-# Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid (Preview)
+# Quickstart: Subscribe to Azure Kubernetes Service (AKS) events with Azure Event Grid
Azure Event Grid is a fully managed event routing service that provides uniform event consumption using a publish-subscribe model. In this quickstart, you'll create an AKS cluster and subscribe to AKS events.

## Prerequisites

* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] installed.
-### Register the `EventgridPreview` preview feature
-
-To use the feature, you must also enable the `EventgridPreview` feature flag on your subscription.
-
-### [Azure CLI](#tab/azure-cli)
-
-Register the `EventgridPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "EventgridPreview"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/EventgridPreview')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
--
-### [Azure PowerShell](#tab/azure-powershell)
-
-Register the `EventgridPreview` feature flag by using the [Register-AzProviderPreviewFeature][register-azproviderpreviewfeature] cmdlet, as shown in the following example:
-
-```azurepowershell-interactive
-Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EventgridPreview
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [Get-AzProviderPreviewFeature][get-azproviderpreviewfeature] cmdlet:
-
-```azurepowershell-interactive
-Get-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EventgridPreview |
- Format-Table -Property Name, @{name='State'; expression={$_.Properties.State}}
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [Register-AzResourceProvider][register-azresourceprovider] command:
-
-```azurepowershell-interactive
-Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService
-```
---

## Create an AKS cluster

### [Azure CLI](#tab/azure-cli)
To learn more about AKS, and walk through a complete code to deployment example,
[new-azeventhub]: /powershell/module/az.eventhub/new-azeventhub
[az-eventgrid-event-subscription-create]: /cli/azure/eventgrid/event-subscription#az-eventgrid-event-subscription-create
[new-azeventgridsubscription]: /powershell/module/az.eventgrid/new-azeventgridsubscription
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[register-azproviderpreviewfeature]: /powershell/module/az.resources/register-azproviderpreviewfeature
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[get-azproviderpreviewfeature]: /powershell/module/az.resources/get-azproviderpreviewfeature
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider
[az-group-delete]: /cli/azure/group#az_group_delete
[sp-delete]: kubernetes-service-principal.md#other-considerations
[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup
aks Upgrade Windows 2019 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-windows-2019-2022.md
+
+ Title: Upgrade Kubernetes workloads from Windows Server 2019 to 2022
+description: Learn how to upgrade the OS version for Windows workloads on AKS
++ Last updated : 8/18/2022++++
+# Upgrade Kubernetes workloads from Windows Server 2019 to 2022
+
+Upgrading the OS version of a running Windows workload on Azure Kubernetes Service (AKS) requires you to deploy a new node pool, because Windows versions must match on each node pool. This article describes the steps to upgrade the OS version for Windows workloads, as well as other important aspects such as node pool permissions and identity configuration.
+
+## Limitations
+
+Windows Server 2019 and Windows Server 2022 cannot co-exist on the same node pool on AKS. A new node pool must be created to host the new OS version. It's important that you match the permissions and access of the previous node pool to the new one.
+
+## Before you begin
+
+- Update the FROM statement on your dockerfile to the new OS version.
+- Check your application and verify that the container app works on the new OS version.
+- Deploy the verified container app on AKS to a development or testing environment.
+- Take note of the new image name or tag. This will be used below to replace the 2019 version of the image on the YAML file to be deployed to AKS.
+
+> [!NOTE]
+> Check out [Dockerfile on Windows](/virtualization/windowscontainers/manage-docker/manage-windows-dockerfile) and [Optimize Windows Dockerfiles](/virtualization/windowscontainers/manage-docker/optimize-windows-dockerfile) to learn more about how to build a dockerfile for Windows workloads.
+
+## Add a Windows Server 2022 node pool to the existing cluster
+
+Windows Server 2019 and 2022 cannot co-exist on the same node pool on AKS. To upgrade your application, you need a separate node pool for Windows Server 2022.
+For more information on how to add a new Windows Server 2022 node pool to an existing AKS cluster, see [Add a Windows Server 2022 node pool](./learn/quick-windows-container-deploy-cli.md).
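+As a rough sketch only (the node pool name and node count below are placeholders, not values from this article), adding such a node pool with the Azure CLI could look like the following; the `--os-sku Windows2022` flag selects the newer OS image and may require a recent CLI or the `aks-preview` extension:
+
+```azurecli-interactive
+# Placeholder names; Windows node pool names are limited to six characters.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name win22 \
+    --os-type Windows \
+    --os-sku Windows2022 \
+    --node-count 1
+```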
+
+## Update your YAML file
+
+Node Selector is the most common and recommended option for placement of Windows pods on Windows nodes. To use Node Selector, make the following annotation to your YAML files:
+
+```yaml
+ nodeSelector:
+ "kubernetes.io/os": windows
+```
+
+The above annotation will find *any* Windows node available and place the pod on that node (of course, following all other scheduling rules). When upgrading from Windows Server 2019 to Windows Server 2022, you need to enforce not only the placement on a Windows node, but also on a node that is running the latest OS version. To accomplish this, one option is to use a different annotation:
+
+```yaml
+ nodeSelector:
+ "kubernetes.azure.com/os-sku": Windows2022
+```
+
+Once you update the nodeSelector in the YAML file, you should also update the container image to be used. You can get this information from the previous step, in which you created a new version of the containerized application by changing the FROM statement in your dockerfile.
+
+> [!NOTE]
+> You should leverage the same YAML file you used to deploy the application in the first place - this will ensure no other configuration is changed, only the nodeSelector and the image to be used.
+
+## Apply the new YAML file to the existing workload
+
+If you have an application deployed already, ensure you follow the steps recommended above to deploy a new node pool with Windows Server 2022 nodes. Once deployed, your environment will show Windows Server 2019 and 2022 nodes, with the workloads running on the 2019 nodes:
+
+```console
+kubectl get nodes -o wide
+```
+The command above will show all nodes on your AKS cluster with additional details in the output:
+
+```output
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+aks-agentpool-18877473-vmss000000 Ready agent 5h40m v1.23.8 10.240.0.4 <none> Ubuntu 18.04.6 LTS 5.4.0-1085-azure containerd://1.5.11+azure-2
+akspoolws000000 Ready agent 3h15m v1.23.8 10.240.0.208 <none> Windows Server 2022 Datacenter 10.0.20348.825 containerd://1.6.6+azure
+akspoolws000001 Ready agent 3h17m v1.23.8 10.240.0.239 <none> Windows Server 2022 Datacenter 10.0.20348.825 containerd://1.6.6+azure
+akspoolws000002 Ready agent 3h17m v1.23.8 10.240.1.14 <none> Windows Server 2022 Datacenter 10.0.20348.825 containerd://1.6.6+azure
+akswspool000000 Ready agent 5h37m v1.23.8 10.240.0.115 <none> Windows Server 2019 Datacenter 10.0.17763.3165 containerd://1.6.6+azure
+akswspool000001 Ready agent 5h37m v1.23.8 10.240.0.146 <none> Windows Server 2019 Datacenter 10.0.17763.3165 containerd://1.6.6+azure
+akswspool000002 Ready agent 5h37m v1.23.8 10.240.0.177 <none> Windows Server 2019 Datacenter 10.0.17763.3165 containerd://1.6.6+azure
+```
+
+With the Windows Server 2022 node pool deployed and the YAML file configured, you can now deploy the new version of the YAML:
+
+```console
+kubectl apply -f <filename>
+```
+
+The command above should return a "configured" status for the deployment:
+
+```output
+deployment.apps/sample configured
+service/sample unchanged
+```
+At this point, AKS will start the process of terminating the existing pods and deploying new pods to the Windows Server 2022 nodes. You can check the status of your deployment by running:
+
+```console
+kubectl get pods -o wide
+```
+The command above returns the status of the pods in the default namespace. You might need to change the command to list the pods in specific namespaces.
+
+```output
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+sample-7794bfcc4c-k62cq 1/1 Running 0 2m49s 10.240.0.238 akspoolws000000 <none> <none>
+sample-7794bfcc4c-rswq9 1/1 Running 0 2m49s 10.240.1.10 akspoolws000001 <none> <none>
+sample-7794bfcc4c-sh78c 1/1 Running 0 2m49s 10.240.0.228 akspoolws000000 <none> <none>
+```
+
+## Active Directory, gMSA and Managed Identity implications
+
+If you are leveraging Group Managed Service Accounts (gMSA) you will need to update the Managed Identity configuration for the new node pool. gMSA uses a secret (user account and password) so the node on which the Windows pod is running can authenticate the container against Active Directory. To access that secret on Azure Key Vault, the node uses a Managed Identity that allows the node to access the resource. Since Managed Identities are configured per node pool, and the pod now resides on a new node pool, you need to update that configuration. Check out [Enable Group Managed Service Accounts (GMSA) for your Windows Server nodes on your Azure Kubernetes Service (AKS) cluster](./use-group-managed-service-accounts.md) for more information.
+
+The same principle applies to Managed Identities used for any other pod/node pool when accessing other Azure resources. Any access provided via Managed Identity needs to be updated to reflect the new node pool. To view update and sign-in activities, see [How to view Managed Identity activity](/azure/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity).
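+For example, if the gMSA credential is stored as a secret in Azure Key Vault, the identity used by the new Windows Server 2022 nodes needs read access to that secret. The following Azure CLI sketch is illustrative only; the vault name and object ID are placeholders, and your cluster may use a different identity or access model:
+
+```azurecli-interactive
+# Placeholder values; grant the new node pool's identity permission to read the gMSA secret.
+az keyvault set-policy \
+    --name MyKeyVault \
+    --object-id <new-node-pool-identity-object-id> \
+    --secret-permissions get
+```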
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
For more information, see [API Management access restriction policies](api-manag
You can also create policy expressions with the [`context` variable](api-management-policy-expressions.md#ContextVariables) to check client certificates. Examples in the following sections show expressions using the `context.Request.Certificate` property and other `context` properties.

> [!IMPORTANT]
-> Starting May 2021, the `context.Request.Certificate` property only requests the certificate when the API Management instance's [`hostnameConfiguration`](/rest/api/apimanagement/current-ga/api-management-service/create-or-update#hostnameconfiguration) sets the `negotiateClientCertificate` property to True. By default, `negotiateClientCertificate` is set to False.
+> * Starting May 2021, the `context.Request.Certificate` property only requests the certificate when the API Management instance's [`hostnameConfiguration`](/rest/api/apimanagement/current-ga/api-management-service/create-or-update#hostnameconfiguration) sets the `negotiateClientCertificate` property to True. By default, `negotiateClientCertificate` is set to False.
+> * If TLS renegotiation is disabled in your client, you may see TLS errors when requesting the certificate using the `context.Request.Certificate` property. If this occurs, enable TLS renegotiation settings in the client.
### Checking the issuer and subject
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
The following lists show supported and unsupported Docker Compose configuration
- networks (ignored) - secrets (ignored) - ports other than 80 and 8080 (ignored)-
+- default environment variables like `$variable` and `${variable}` (unlike in Docker)
#### Syntax Limitations

- "version x.x" always needs to be the first YAML statement in the file
app-service Overview Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md
To access App Service diagnostics, navigate to your App Service web app or App S
For Azure Functions, navigate to your function app, and in the top navigation, click on **Platform features**, and select **Diagnose and solve problems** from the **Resource management** section.
-In the App Service diagnostics homepage, you can peform a search for a symptom with your app, or choose a diagnostic category that best describes the issue with your app. Next, there is a new feature called Risk Alerts that provides an actionable report to improve your App. Finally, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools).
+In the App Service diagnostics homepage, you can perform a search for a symptom with your app, or choose a diagnostic category that best describes the issue with your app. Next, there is a new feature called Risk Alerts that provides an actionable report to improve your App. Finally, this page is where you can find **Diagnostic Tools**. See [Diagnostic tools](#diagnostic-tools).
![App Service Diagnose and solve problems homepage with diagnostic search box, Risk Alerts assessments, and Troubleshooting categories for discovering diagnostics for the selected Azure Resource.](./media/app-service-diagnostics/app-service-diagnostics-homepage-1.png)
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
# Create a WordPress site
-<!--
-Other WP options on Azure:
-- https://docs.microsoft.com/en-us/azure/mysql/flexible-server/tutorial-deploy-wordpress-on-aks-- https://docs.microsoft.com/en-us/azure/virtual-machines/linux/tutorial-lamp-stack#install-wordpress>
-[WordPress](https://www.wordpress.org) is an open source content management system (CMS) that can be used to create websites, blogs, and other applications. Over 40% of the web uses WordPress from blogs to major news websites.
+[WordPress](https://www.wordpress.org) is an open source content management system (CMS) used by over 40% of the web to create websites, blogs, and other applications. WordPress can be run on a few different Azure
-In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org) site to [Azure App Service on Linux](overview.md#app-service-on-linux) using the [WordPress on the Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). It uses the **Basic** tier and [**incurs a cost**](https://azure.microsoft.com/pricing/details/app-service/linux/) for your Azure subscription. The WordPress installation comes with pre-installed plugins for performance improvements, [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
+In this quickstart, you'll learn how to create and deploy your first [WordPress](https://www.wordpress.org/) site to [Azure App Service on Linux](overview.md#app-service-on-linux) using the [WordPress Azure Marketplace item by App Service](https://azuremarketplace.microsoft.com/marketplace/apps/WordPress.WordPress?tab=Overview). It uses the **Basic** tier and [**incurs a cost**](https://azure.microsoft.com/pricing/details/app-service/linux/) for your Azure subscription. The WordPress installation comes with pre-installed plugins for performance improvements, [W3TC](https://wordpress.org/plugins/w3-total-cache/) for caching and [Smush](https://wordpress.org/plugins/wp-smushit/) for image compression.
To complete this quickstart, you need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs).
To complete this quickstart, you need an Azure account with an active subscripti
> - [After November 28, 2022, PHP will only be supported on App Service on Linux.](https://github.com/Azure/app-service-linux-docs/blob/master/Runtime_Support/php_support.md#end-of-life-for-php-74)
> - The MySQL Flexible Server is created behind a private [Virtual Network](/azure/virtual-network/virtual-networks-overview) and can't be accessed directly. To access the database, use phpMyAdmin that's deployed with the WordPress site. It can be found at the URL: https://`<sitename>`.azurewebsites.net/phpmyadmin
>
-> If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
-
+> Additional documentation, including [Migrating to App Service](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_migration_linux_appservices.md), can be found at [WordPress - App Service on Linux](https://github.com/Azure/wordpress-linux-appservice/tree/main/WordPress). If you have feedback to improve this WordPress offering on App Service, submit your ideas at [Web Apps Community](https://feedback.azure.com/d365community/forum/b09330d1-c625-ec11-b6e6-000d3a4f0f1c).
+>
## Create WordPress site using Azure portal

1. To start creating the WordPress site, browse to [https://ms.portal.azure.com/#create/WordPress.WordPress](https://ms.portal.azure.com/#create/WordPress.WordPress).
When no longer needed, you can delete the resource group, App service, and all r
:::image type="content" source="./media/quickstart-wordpress/delete-resource-group.png" alt-text="Delete resource group.":::

## Change MySQL password
-The WordPress configuration is modified to use [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. To change the MySQL database password, see [update admin password](../mysql/single-server/how-to-create-manage-server-portal.md#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
+The WordPress configuration is modified to use [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. To change the MySQL database password, see [update admin password](../mysql/single-server/how-to-create-manage-server-portal.md#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [Changing MySQL database password](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_mysql_database_password.md).
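One way to keep the settings in sync is with the Azure CLI, as in the hedged sketch below; the setting name `DATABASE_PASSWORD` and the resource names are placeholders to confirm against your app's actual configuration:

```azurecli-interactive
# Update the WordPress app's database password setting (placeholder names shown).
az webapp config appsettings set \
    --resource-group <resource-group-name> \
    --name <wordpress-app-name> \
    --settings DATABASE_PASSWORD=<new-mysql-password>
```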
## Change WordPress admin password
-The [Application Settings](reference-app-settings.md#wordpress) for WordPress admin credentials are only for deployment purposes. Modifying these values has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password). The [Application Settings for WordPress admin credentials](reference-app-settings.md#wordpress) begin with the **`WORDPRESS_ADMIN_`** prefix. For more information on updating the WordPress admin password, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
+The [Application Settings](reference-app-settings.md#wordpress) for WordPress admin credentials are only for deployment purposes. Modifying these values has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password). The [Application Settings for WordPress admin credentials](reference-app-settings.md#wordpress) begin with the **`WORDPRESS_ADMIN_`** prefix. For more information on updating the WordPress admin password, see [Changing WordPress Admin Credentials](https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/changing_wordpress_admin_credentials.md).
## Next steps
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Title: Deploy a Node.js web app using MongoDB to Azure description: This article shows you how to deploy a Node.js app using Express.js and a MongoDB database to Azure. Azure App Service is used to host the web application and Azure Cosmos DB to host the database using the 100% compatible MongoDB API built into Cosmos DB. Previously updated : 03/07/2022 Last updated : 09/06/2022 ms.role: developer ms.devlang: javascript
# Deploy a Node.js + MongoDB web app to Azure
-In this tutorial, you'll deploy a sample **Express.js** app using a **MongoDB** database to Azure. The Express.js app will be hosted in Azure App Service, which supports hosting Node.js apps in both Linux (Node versions 12, 14, and 16) and Windows (versions 12 and 14) server environments. The MongoDB database will be hosted in Azure Cosmos DB, a cloud native database offering a [100% MongoDB compatible API](../cosmos-db/mongodb/mongodb-introduction.md).
+[Azure App Service](overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a secure Node.js app in Azure App Service that's connected to a MongoDB database (using [Azure Cosmos DB with MongoDB API](../cosmos-db/mongodb/mongodb-introduction.md)). When you're finished, you'll have an Express.js app running on Azure App Service on Linux.
:::image type="content" source="./media/tutorial-nodejs-mongodb-app/app-diagram.png" alt-text="A diagram showing how the Express.js app will be deployed to Azure App Service and the MongoDB data will be hosted inside of Azure Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/app-diagram-large.png":::
To follow along with this tutorial, clone or download the sample application fro
git clone https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app.git ```
-Follow these steps to run the application locally:
+If you want to run the application locally, do the following:
* Install the package dependencies by running `npm install`.
* Copy the `.env.sample` file to `.env` and populate the DATABASE_URL value with your MongoDB URL (for example *mongodb://localhost:27017/*).
* Start the application using `npm start`.
* To view the app, browse to `http://localhost:3000`.
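Put together, the local run steps look roughly like the following shell sketch; it assumes a MongoDB instance is already listening on `localhost:27017`:

```bash
git clone https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app.git
cd msdocs-nodejs-mongodb-azure-sample-app
npm install                  # install the package dependencies
cp .env.sample .env          # then set DATABASE_URL, for example mongodb://localhost:27017/
npm start                    # the app is served at http://localhost:3000
```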
-## 1 - Create the Azure App Service
+## 1. Create App Service and Cosmos DB
-Azure App Service is used to host the Express.js web app. When setting up the App Service for the application, you'll specify:
+In this step, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Cosmos DB API for MongoDB. For the creation process, you'll specify:
* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
-* The **Runtime** for the app. It's where you select the version of Node to use for your app.
-* The **App Service plan** which defines the compute resources (CPU, memory) available for the application.
+* The **Region** to run the app physically in the world.
+* The **Runtime stack** for the app. It's where you select the version of Node to use for your app.
+* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app.
* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
-Create Azure resources using the [Azure portal](https://portal.azure.com/), VS Code using the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack), or the Azure CLI.
-
-### [Azure portal](#tab/azure-portal)
- Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find App Services in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-1.png"::: |
-| [!INCLUDE [Create app service step 2](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-2-240px.png" alt-text="A screenshot showing the create button on the App Services page used to create a new web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-2.png"::: |
-| [!INCLUDE [Create app service step 3](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-3-240px.png" alt-text="A screenshot showing the form to fill out to create a web app in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-3.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4-240px.png" alt-text="A screenshot of the Spec Picker dialog that lets you select the App Service plan to use for your web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-4.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-5.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-5-240px.png" alt-text="A screenshot of the main web app create page showing the button to select on to create your web app in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-azure-portal-5.png"::: |
-
-### [VS Code](#tab/vscode-aztools)
-
-To create Azure resources in VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-> [!div class="nextstepaction"]
-> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-01.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-01-240px.png" alt-text="A screenshot showing the location of the Azure Tools icon in the left toolbar." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-01.png"::: |
-| [!INCLUDE [Create app service step 2](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-02.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-02-240px.png" alt-text="A screenshot showing the App Service section of Azure Tools showing how to create a new web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-02.png"::: |
-| [!INCLUDE [Create app service step 3](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-03.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-03-240px.png" alt-text="A screenshot showing the dialog box used to enter the name of the web app in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-03.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-04.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-04-240px.png" alt-text="A screenshot of dialog box used to select a resource group or create a new one for the web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-04.png"::: |
-| [!INCLUDE [Create app service step 5](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-05.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-05-240px.png" alt-text="A screenshot of the dialog box in VS Code used enter a name for the resource group." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-05.png"::: |
-| [!INCLUDE [Create app service step 6](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-06.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-06-240px.png" alt-text="A screenshot of the dialog box in VS Code used to select Node 14 LTS as the runtime for the web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-06.png"::: |
-| [!INCLUDE [Create app service step 7](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-07.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-07-240px.png" alt-text="A screenshot of the dialog in VS Code used to select operating system to use for hosting the web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-07.png"::: |
-| [!INCLUDE [Create app service step 8](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-08.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-08-240px.png" alt-text="A screenshot of the dialog in VS Code used to select location of the web app resources." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-08.png"::: |
-| [!INCLUDE [Create app service step 9](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-09.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-09-240px.png" alt-text="A screenshot of the dialog in VS Code used to select an App Service plan or create a new one." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-09.png"::: |
-| [!INCLUDE [Create app service step 10](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-10.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-10-240px.png" alt-text="A screenshot of the dialog in VS Code used to enter the name of the App Service plan." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-10.png"::: |
-| [!INCLUDE [Create app service step 11](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-11.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-11-240px.png" alt-text="A screenshot of the dialog in VS Code used to select the pricing tier of the App Service plan." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-11.png"::: |
-| [!INCLUDE [Create app service step 12](<./includes/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-12.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-12-240px.png" alt-text="A screenshot of the dialog in VS Code asking if you want to create an App Insights resource for your web app." lightbox="./media/tutorial-nodejs-mongodb-app/create-app-service-visual-studio-code-12.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
----
-## 2 - Create an Azure Cosmos DB in MongoDB compatibility mode
-
-Azure Cosmos DB is a fully managed NoSQL database for modern app development. Among its features are a 100% MongoDB compatible API allowing you to use your existing MongoDB tools, packages, and applications with Cosmos DB.
-
-### [Azure portal](#tab/azure-portal)
-
-You must sign in to the [Azure portal](https://portal.azure.com/) to finish these steps to create a Cosmos DB.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create Cosmos DB step 1](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-1.png"::: |
-| [!INCLUDE [Create Cosmos DB step 2](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-2-240px.png" alt-text="A screenshot showing the create button on the Cosmos DB page used to create a database." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-2.png"::: |
-| [!INCLUDE [Create Cosmos DB step 3](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-3-240px.png" alt-text="A screenshot showing the page where you select the MongoDB API for your Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-3.png"::: |
-| [!INCLUDE [Create Cosmos DB step 4](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-4-240px.png" alt-text="A screenshot showing how to fill out the page to create a new Cosmos DB." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-azure-portal-4.png"::: |
-
-### [VS Code](#tab/vscode-aztools)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create Cosmos DB step 1](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1-240px.png" alt-text="A screenshot showing the database component of the Azure Tools VS Code extension and the location of the button to create a new database." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-1.png"::: |
-| [!INCLUDE [Create Cosmos DB step 2](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-2-240px.png" alt-text="A screenshot showing the dialog box used to select the subscription for the new database in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-2.png"::: |
-| [!INCLUDE [Create Cosmos DB step 3](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-3-240px.png" alt-text="A screenshot showing the dialog box used to select the type of database you want to create in Azure." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-3.png"::: |
-| [!INCLUDE [Create Cosmos DB step 4](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-4-240px.png" alt-text="A screenshot of dialog box used to enter the name of the new database in Visual Studio Code." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-4.png"::: |
-| [!INCLUDE [Create Cosmos DB step 5](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-5-240px.png" alt-text="A screenshot of the dialog to select the throughput mode of the database." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-5.png"::: |
-| [!INCLUDE [Create Cosmos DB step 6](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-6.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-6-240px.png" alt-text="A screenshot of the dialog in VS Code used to select resource group to put the new database in." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-6.png"::: |
-| [!INCLUDE [Create Cosmos DB step 7](<./includes/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-7.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-7-240px.png" alt-text="A screenshot of the dialog in VS Code used to select location for the new database." lightbox="./media/tutorial-nodejs-mongodb-app/create-cosmos-db-visual-studio-code-7.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
----
-## 3 - Connect your App Service to your Cosmos DB
-
-To connect to your Cosmos DB database, you need to provide the connection string for the database to your application. It's done in the sample application by reading the `DATABASE_URL` environment variable. When you locally run it, the sample application uses the [dotenv package](https://www.npmjs.com/package/dotenv) to read the connection string value from the `.env` file.
-
-When you run in Azure, configuration values like connection strings can be stored in the *application settings* of the App Service hosting the web app. These values are then made available to your application as environment variables during runtime. In this way, the application uses the connection string from `process.env` the same way whether being run locally or in Azure. Further, it eliminates the need to manage and deploy environment specific config files with your application.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Connection string step 1](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-1-240px.png" alt-text="A screenshot showing the location of the Cosmos DB connection string on the Cosmos DB quick start page." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-1.png"::: |
-| [!INCLUDE [Connection string step 2](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2-240px.png" alt-text="A screenshot showing how to search for and go to the App Service, where the connection string needs to store the connection string." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-2.png"::: |
-| [!INCLUDE [Connection string step 3](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3-240px.png" alt-text="A screenshot showing how to use the Application settings within an App Service." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-3.png"::: |
-| [!INCLUDE [Connection string step 4](<./includes/tutorial-nodejs-mongodb-app/connection-string-azure-portal-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-4-240px.png" alt-text="A screenshot showing the dialog used to set an application setting in Azure App Service." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-azure-portal-4.png"::: |
-
-### [VS Code](#tab/vscode-aztools)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Connection string step 1](<./includes/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-1-240px.png" alt-text="A screenshot showing how to copy the connection string for a Cosmos database to your clipboard in VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-1.png"::: |
-| [!INCLUDE [Connection string step 2](<./includes/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-2-240px.png" alt-text="A screenshot showing how to add a config setting to an App Service in VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-2.png"::: |
-| [!INCLUDE [Connection string step 3](<./includes/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-3-240px.png" alt-text="A screenshot showing the dialog box used to give a name to an app setting in VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-3.png"::: |
-| [!INCLUDE [Connection string step 4](<./includes/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-4-240px.png" alt-text="A screenshot showing the dialog used to set the value of an app setting in VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-4.png"::: |
-| [!INCLUDE [Connection string step 4](<./includes/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-5.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-5-240px.png" alt-text="A screenshot showing how to view an app setting for an App Service in VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/connection-string-visual-studio-code-5.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
----
-## 4 - Deploy application code to Azure
-
-Azure App service supports multiple methods to deploy your application code to Azure including support for GitHub Actions and all major CI/CD tools. This article focuses on how to deploy your code from your local workstation to Azure.
-
-### [Deploy using VS Code](#tab/vscode-deploy)
-
-To deploy your application code directly from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-> [!div class="nextstepaction"]
-> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Deploy from VS Code 1](<./includes/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-1-240px.png" alt-text="A screenshot showing the location of the Azure Tool icon in Visual Studio Code." lightbox="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-1.png"::: |
-| [!INCLUDE [Deploy from VS Code 2](<./includes/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-2-240px.png" alt-text="A screenshot showing how you deploy an application to Azure by right-clicking on a web app in VS Code and selecting deploy from the context menu." lightbox="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-2.png"::: |
-| [!INCLUDE [Deploy from VS Code 3](<./includes/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-3-240px.png" alt-text="A screenshot showing the dialog box used to select the deployment directory in VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-3.png"::: |
-| [!INCLUDE [Deploy from VS Code 3](<./includes/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-4-240px.png" alt-text="A screenshot showing the Output window of VS Code while deploying an application to Azure." lightbox="./media/tutorial-nodejs-mongodb-app/deploy-visual-studio-code-4.png"::: |
--
-### [Deploy using Local Git](#tab/local-git-deploy)
--
-### [Deploy using a ZIP file](#tab/azure-cli-deploy)
----
-## 5 - Browse to the application
-
-The application will have a url of the form `https://<app name>.azurewebsites.net`. Browse to this URL to view the application.
-
-Use the form elements in the application to add and complete tasks.
-
-![A screenshot showing the application running in a browser.](./media/tutorial-nodejs-mongodb-app/sample-app-in-browser.png)
-
-## 6 - Configure and view application logs
+ :::column span="2":::
+ **Step 1.** In the Azure portal:
+ 1. Enter "web app database" in the search bar at the top of the Azure portal.
+ 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
+ You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-create-app-cosmos-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-create-app-cosmos-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the **Create Web App + Database** page, fill out the form as follows.
+ 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-expressjs-mongodb-tutorial**.
+ 1. *Region* &rarr; Any Azure region near you.
+ 1. *Name* &rarr; **msdocs-expressjs-mongodb-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack* &rarr; **Node 16 LTS**.
+ 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. **Cosmos DB API for MongoDB** is selected by default as the database engine. Azure Cosmos DB is a cloud native database offering a 100% MongoDB compatible API. Note the database name that's generated for you (*\<app-name>-database*). You'll need it later.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-create-app-cosmos-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-create-app-cosmos-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Private endpoint** &rarr; Access endpoint for the database resource in the virtual network.
+ - **Network interface** &rarr; Represents a private IP address for the private endpoint.
+ - **Cosmos DB API for MongoDB** &rarr; Accessible only from behind the private endpoint. A database and a user are created for you on the server.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the Cosmos DB server in the virtual network.
+
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-create-app-cosmos-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-create-app-cosmos-3.png":::
+ :::column-end:::
+
+## 2. Set up database connectivity
+
+The creation wizard generated the MongoDB URI for you already, but your app needs a `DATABASE_URL` variable and a `DATABASE_NAME` variable. In this step, you create [app settings](configure-common.md#configure-app-settings) with the format that your app needs.
+
+ :::column span="2":::
+ **Step 1.** In the App Service page, in the left menu, select **Configuration**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the **Application settings** tab of the **Configuration** page, create a `DATABASE_NAME` setting:
+ 1. Select **New application setting**.
+ 1. In the **Name** field, enter *DATABASE_NAME*.
+ 1. In the **Value** field, enter the automatically generated database name from the creation wizard, which looks like *msdocs-expressjs-mongodb-XYZ-database*.
+ 1. Select **OK**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.**
+ 1. Scroll to the bottom of the page and select the connection string **MONGODB_URI**. It was generated by the creation wizard.
+ 1. In the **Value** field, select the **Copy** button and paste the value in a text file for the next step. It's in the [MongoDB connection string URI format](https://www.mongodb.com/docs/manual/reference/connection-string/).
+ 1. Select **Cancel**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-3.png" alt-text="A screenshot showing how to create an app setting." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.**
+ 1. Using the same steps as in **Step 2**, create an app setting named *DATABASE_URL* and set its value to the connection string you copied from `MONGODB_URI` (it begins with `mongodb://`).
+ 1. In the menu bar at the top, select **Save**.
+ 1. When prompted, select **Continue**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-4.png" alt-text="A screenshot showing how to save settings in the configuration page." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-4.png":::
+ :::column-end:::
+
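If you prefer to script this step instead of using the portal, the same app settings can be created with the Azure CLI. The following is a minimal sketch; the app name, database name, and connection string are placeholders for the values from your own deployment:

```azurecli
# List the connection strings generated by the creation wizard (includes MONGODB_URI).
az webapp config connection-string list \
    --name <app-name> \
    --resource-group msdocs-expressjs-mongodb-tutorial

# Create the DATABASE_NAME and DATABASE_URL settings that the sample app reads at runtime.
az webapp config appsettings set \
    --name <app-name> \
    --resource-group msdocs-expressjs-mongodb-tutorial \
    --settings DATABASE_NAME=<database-name> DATABASE_URL='<mongodb-connection-string>'
```
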
+## 3. Deploy sample code
+
+In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action.
+
+ :::column span="2":::
+ **Step 1.** In a new browser window:
+ 1. Sign in to your GitHub account.
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app](https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app).
+ 1. Select **Fork**.
+ 1. Select **Create fork**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** In Visual Studio Code in the browser, open *config/connection.js* in the explorer.
+ In the `getConnectionInfo` function, see that the app settings you created earlier for the MongoDB connection are used (`DATABASE_URL` and `DATABASE_NAME`).
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 5.** In the Deployment Center page:
+ 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
+ 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
+ 1. In **Organization**, select your account.
+ 1. In **Repository**, select **msdocs-nodejs-mongodb-azure-sample-app**.
+ 1. In **Branch**, select **main**.
+ 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-5.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 6.** In the Deployment Center page:
+ 1. Select **Logs**. A deployment run is already started.
+ 1. In the log item for the deployment run, select **Build/Deploy Logs**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-6.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 7.** You're taken to your GitHub repository, where you can see that the GitHub Actions run is in progress. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-7.png":::
+ :::column-end:::
+
+## 4. Browse to the app
+
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** Add a few tasks to the list.
+ Congratulations, you're running a secure data-driven Node.js app in Azure App Service.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Express.js app running in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
+
+## 5. Stream diagnostic logs
Azure App Service captures all messages logged to the console to assist you in diagnosing issues with your application. The sample app outputs console log messages in each of its endpoints to demonstrate this capability. For example, the `get` endpoint outputs a message about the number of tasks retrieved from the database and an error message appears if something goes wrong. :::code language="javascript" source="~/msdocs-nodejs-mongodb-azure-sample-app/routes/index.js" range="7-21" highlight="8,12":::
-The contents of the App Service diagnostic logs can be reviewed in the Azure portal, VS Code, or using the Azure CLI.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream logs from Azure portal 1](<./includes/tutorial-nodejs-mongodb-app/stream-logs-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/stream-logs-azure-portal-1-240px.png" alt-text="A screenshot showing the location of the Azure Tool icon in Visual Studio Code." lightbox="./media/tutorial-nodejs-mongodb-app/stream-logs-azure-portal-1.png"::: |
-| [!INCLUDE [Stream logs from Azure portal 2](<./includes/tutorial-nodejs-mongodb-app/stream-logs-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/stream-logs-azure-portal-2-240px.png" alt-text="A screenshot showing how you deploy an application to Azure by right-clicking on a web app in VS Code and selecting deploy from the context menu." lightbox="./media/tutorial-nodejs-mongodb-app//stream-logs-azure-portal-2.png"::: |
-
-### [VS Code](#tab/vscode-aztools)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Stream logs from VS Code 1](<./includes/tutorial-nodejs-mongodb-app/stream-logs-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/stream-logs-visual-studio-code-1-240px.png" alt-text="A screenshot showing the location of the Azure Tool icon in Visual Studio Code." lightbox="./media/tutorial-nodejs-mongodb-app/stream-logs-visual-studio-code-1.png"::: |
-| [!INCLUDE [Stream logs from VS Code 2](<./includes/tutorial-nodejs-mongodb-app/stream-logs-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/stream-logs-visual-studio-code-2-240px.png" alt-text="A screenshot showing how you deploy an application to Azure by right-clicking on a web app in VS Code and selecting deploy from the context menu." lightbox="./media/tutorial-nodejs-mongodb-app/stream-logs-visual-studio-code-2.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
----
-## 7 - Inspect deployed files using Kudu
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **App Service logs**.
+ 1. Under **Application logging**, select **File System**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-2.png":::
+ :::column-end:::
+
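The same logging setup and log stream are also available from the Azure CLI if you'd rather work from a terminal. A rough sketch, assuming a recent CLI version and the resource names used in this tutorial:

```azurecli
# Turn on application logging to the App Service file system.
az webapp log config \
    --name <app-name> \
    --resource-group msdocs-expressjs-mongodb-tutorial \
    --application-logging filesystem

# Stream the live log output, including the console messages written by the app.
az webapp log tail \
    --name <app-name> \
    --resource-group msdocs-expressjs-mongodb-tutorial
```
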
+## 6. Inspect deployed files using Kudu
Azure App Service provides a web-based diagnostics console named [Kudu](./resources-kudu.md) that lets you examine the server hosting environment for your web app. Using Kudu, you can view the files deployed to Azure, review the deployment history of the application, and even open an SSH session into the hosting environment.
-To access Kudu, go to one of the following URLs. You'll need to sign into the Kudu site with your Azure credentials.
-
-* For apps deployed in Free, Shared, Basic, Standard, and Premium App Service plans - `https://<app-name>.scm.azurewebsites.net`.
-* For apps deployed in Isolated service plans - `https://<app-name>.scm.<ase-name>.p.azurewebsites.net`.
-
-From the main page in Kudu, you can access information about the application hosting environment, app settings, deployments, and browse the files in the wwwroot directory.
-
-![A screenshot of the main page in the Kudu SCM app showing the different information available about the hosting environment.](./media/tutorial-nodejs-mongodb-app/kudu-main-page.png)
-
-Selecting the *Deployments* link under the REST API header will show you a history of deployments of your web app.
-
-![A screenshot of the deployments JSON in the Kudu SCM app showing the history of deployments to this web app.](./media/tutorial-nodejs-mongodb-app/kudu-deployments-list.png)
-
-Selecting the *Site wwwroot* link under the Browse Directory heading lets you browse and view the files on the web server.
-
-![A screenshot of files in the wwwroot directory showing how Kudu lets you to see what has been deployed to Azure.](./media/tutorial-nodejs-mongodb-app/kudu-wwwroot-files.png)
-
-## Clean up resources
-
-When you're finished, you can delete all the resources from Azure by deleting the resource group for the application.
-
-### [Azure portal](#tab/azure-portal)
-
-Follow these steps while you're signed-in to the Azure portal to delete a resource group.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group Azure portal 1](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1-240px.png" alt-text="A screenshot showing how to search for and go to a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-1.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 2](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-2-240px.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-2.png"::: |
-| [!INCLUDE [Remove resource group Azure portal 3](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-3.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-3-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-azure-portal-3.png"::: |
-
-### [VS Code](#tab/vscode-aztools)
+ :::column span="2":::
+ **Step 1.** In the App Service page:
+ 1. From the left menu, select **Advanced Tools**.
+ 1. Select **Go**. You can also navigate directly to `https://<app-name>.scm.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-1.png" alt-text="A screenshot showing how to navigate to the App Service Kudu page." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the Kudu page, select **Deployments**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-2.png" alt-text="A screenshot of the main page in the Kudu SCM app showing the different information available about the hosting environment." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ If you have deployed code to App Service using Git or zip deploy, you'll see a history of deployments of your web app.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-3.png" alt-text="A screenshot showing deployment history of an App Service app in JSON format." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-3.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.** Go back to the Kudu homepage and select **Site wwwroot**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-4.png" alt-text="A screenshot showing site wwwroot selected." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-4.png":::
+ :::column-end:::
+ :::column span="2":::
+ You can see the deployed folder structure and click to browse and view the files.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-5.png" alt-text="A screenshot of deployed files in the wwwroot directory." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-5.png":::
+ :::column-end:::
+
+## 7. Clean up resources
+
+When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
+
+ :::column span="2":::
+ **Step 1.** In the search bar at the top of the Azure portal:
+ 1. Enter the resource group name.
+ 1. Select the resource group.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2.** In the resource group page, select **Delete resource group**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3.**
+ 1. Enter the resource group name to confirm your deletion.
+ 1. Select **Delete**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-3.png"::::
+ :::column-end:::
+
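You can also delete the resource group and everything it contains from the Azure CLI. A one-line sketch, assuming the resource group name created earlier in this tutorial:

```azurecli
# Delete the resource group and all resources inside it. This can't be undone.
az group delete --name msdocs-expressjs-mongodb-tutorial --yes --no-wait
```
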
+## Frequently asked questions
+
+- [How much does this setup cost?](#how-much-does-this-setup-cost)
+- [How do I connect to the Cosmos DB server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-cosmos-db-server-thats-secured-behind-the-virtual-network-with-other-tools)
+- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
+- [Why is the GitHub Actions deployment so slow?](#why-is-the-github-actions-deployment-so-slow)
+
+#### How much does this setup cost?
+
+Pricing for the created resources is as follows:
+
+- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
+- The Cosmos DB server is created in a single region and can be distributed to other regions. See [Azure Cosmos DB pricing](https://azure.microsoft.com/pricing/details/cosmos-db/).
+- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
+- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
+
+#### How do I connect to the Cosmos DB server that's secured behind the virtual network with other tools?
+
+- For basic access from a command-line tool, you can run `mongosh` from the app's SSH terminal. The app's container doesn't come with `mongosh`, so you must [install it manually](https://www.mongodb.com/docs/mongodb-shell/install/). Remember that the installed client doesn't persist across app restarts.
+- To connect from a MongoDB GUI client, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
+- To connect from the Mongo shell from the Cosmos DB management page in the portal, your machine must also be within the virtual network. You could instead open the Cosmos DB server's firewall for your local machine's IP address, but it increases the attack surface for your configuration.
+
+#### How does local app development work with GitHub Actions?
+
+Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push them to GitHub. For example:
+
+```terminal
+git add .
+git commit -m "<some-message>"
+git push origin main
+```
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Remove resource group VS Code 1](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-visual-studio-code-1.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-visual-studio-code-1-240px.png" alt-text="A screenshot showing how to delete a resource group in VS Code using the Azure Tools extention." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-visual-studio-code-1.png"::: |
-| [!INCLUDE [Remove resource group VS Code 2](<./includes/tutorial-nodejs-mongodb-app/remove-resource-group-visual-studio-code-2.md>)] | :::image type="content" source="./media/tutorial-nodejs-mongodb-app/remove-resource-group-visual-studio-code-2-240px.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group from VS Code." lightbox="./media/tutorial-nodejs-mongodb-app/remove-resource-group-visual-studio-code-2.png"::: |
+#### Why is the GitHub Actions deployment so slow?
-### [Azure CLI](#tab/azure-cli)
+The autogenerated workflow file from App Service defines a two-job, build-then-deploy run. Because each job runs in its own clean environment, the workflow file ensures that the `deploy` job has access to the files from the `build` job:
+- At the end of the `build` job, [upload files as artifacts](https://docs.github.com/actions/using-workflows/storing-workflow-data-as-artifacts).
+- At the beginning of the `deploy` job, download the artifacts.
-
+Most of the time taken by the two-job process is spent uploading and downloading artifacts. If you want, you can simplify the workflow file by combining the two jobs into one, which eliminates the need for the upload and download steps.
## Next steps
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
# Start/Stop VMs during off-hours overview > [!NOTE]
-> Start/Stop VM during off-hours, version 1 is currently being deprecated and will be unavailable from the marketplace soon. We recommend that you start using version 2, which is now generally available.
+> Start/Stop VM during off-hours, version 1 is currently being deprecated and will be unavailable from the marketplace soon. We recommend that you start using [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available.
The new version offers all existing capabilities and provides new features, such as multi-subscription support from a single Start/Stop instance. If you have the version 1 solution already deployed, you can still use the feature, and we will provide support until further announcement. - The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.
-> [!NOTE]
-> Before you install version 1, we recommend you to learn about the [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. The newer version offers all existing capabilities along with the support to use it with Azure. This also provides new capabilities, such as multi-subscription support from a single Start/Stop instance.
-
-> Start/Stop VMs during off-hours (v1) will be deprecated soon.
- This feature uses [Start-AzVm](/powershell/module/az.compute/start-azvm) cmdlet to start VMs. It uses [Stop-AzVM](/powershell/module/az.compute/stop-azvm) for stopping VMs. > [!NOTE]
availability-zones Migrate Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-load-balancer.md
+
+ Title: Migrate Load Balancer to availability zone support
+description: Learn how to migrate Load Balancer to availability zone support.
+++ Last updated : 05/09/2022+++
+CustomerIntent: As a cloud architect/engineer, I need general guidance on migrating load balancers to using availability zones.
+
+<!-- CHANGE AUTHOR BEFORE PUBLISH -->
+
+# Migrate Load Balancer to availability zone support
+
+This guide describes how to migrate a Load Balancer from non-availability zone support to availability zone support. We'll take you through the different options for migration.
+
+A Standard load balancer supports additional capabilities in regions where availability zones are available. Availability zone configurations are available for both types of Standard load balancer: public and internal. A zone-redundant frontend survives zone failure by using dedicated infrastructure in all the zones simultaneously. Additionally, you can pin a frontend to a specific zone. A zonal frontend is served by dedicated infrastructure in a single zone. Regardless of the zonal configuration, the backend pool can contain VMs from any zone.
+
+For a Standard zone-redundant Load Balancer, traffic is served by a single IP address. A single frontend IP address survives zone failure. The frontend IP can be used to reach all (non-impacted) backend pool members no matter the zone. One or more availability zones can fail, and the data path survives as long as one zone in the region remains healthy.
+
+You can choose to have a frontend guaranteed to a single zone, which is known as a zonal frontend. This scenario means any inbound or outbound flow is served by a single zone in a region. Your frontend shares fate with the health of the zone. The data path is unaffected by failures in other zones. You can use zonal frontends to expose an IP address per availability zone.
+
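To make the distinction concrete, the following sketch creates one zone-redundant and one zonal Standard public IP address with the Azure CLI; the resource group and names are placeholders:

```azurecli
# Zone-redundant public IP: the frontend is served by all availability zones in the region.
az network public-ip create \
    --resource-group <resource-group> \
    --name myZoneRedundantIP \
    --sku Standard \
    --zone 1 2 3

# Zonal public IP: the frontend is pinned to availability zone 2 only.
az network public-ip create \
    --resource-group <resource-group> \
    --name myZonalIP \
    --sku Standard \
    --zone 2
```
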
+## Prerequisites
+- Availability zones are supported with Standard SKU for both load balancer and Public IP.
+- Basic SKU type isn't supported.
+- To create or move this resource, your account needs the Network Contributor role or higher. A sample role assignment appears after this list.
+
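For example, the role assignment could be granted with the Azure CLI as sketched below; the assignee and scope are placeholders for your own identity and resource group:

```azurecli
# Grant the Network Contributor role at resource group scope.
az role assignment create \
    --assignee <user-or-service-principal-id> \
    --role "Network Contributor" \
    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```
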
+## Downtime requirements
+
+Downtime is required. All migration scenarios require some level of downtime due to the changes to the resources used by the load balancer configurations.
+## Migration option 1: Enable existing Load Balancer to use availability zones (same region)
+
+Let's say you need to enable an existing load balancer to use availability zones within the same Azure region. You can't just switch an existing Azure load balancer from non-AZ to AZ aware. However, you won't have to redeploy a load balancer to take advantage of this migration. To make your load balancer AZ aware, you'll have to recreate your load balancer's frontend IP configuration using a new zonal or zone-redundant IP and re-associate any existing load balancing rules to the new frontend. Note that this migration will incur downtime while rules are re-associated.
+
+> [!NOTE]
+> You don't need a load balancer for each zone. Instead, a single load balancer with multiple frontends (zonal or zone-redundant) associated with their respective backend pools will serve the purpose.
+
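As a rough illustration of that recreation step, the following Azure CLI sketch adds a frontend IP configuration backed by a zone-redundant Standard public IP and re-points an existing load-balancing rule to it. All names are placeholders, and the exact parameters depend on your CLI version and configuration:

```azurecli
# Add a frontend IP configuration that uses a zone-redundant Standard public IP.
az network lb frontend-ip create \
    --resource-group <resource-group> \
    --lb-name <load-balancer-name> \
    --name myZoneRedundantFrontend \
    --public-ip-address <zone-redundant-public-ip-name>

# Re-associate an existing load-balancing rule with the new frontend.
az network lb rule update \
    --resource-group <resource-group> \
    --lb-name <load-balancer-name> \
    --name <rule-name> \
    --frontend-ip-name myZoneRedundantFrontend
```
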
+Because the frontend IP can be either zonal or zone-redundant, you need to decide which option to choose based on your requirements. The following are recommendations for each:
+
+| **Frontend IP configuration** | **Recommendation** |
+| -- | -- |
+| Zonal frontend | We recommend creating a zonal frontend when the backend is concentrated in a particular zone. For example, if backend instances are pinned to zone 2, it makes sense to create the frontend IP configuration in availability zone 2. |
+| Zone-redundant frontend | When the resources (VMs, NICs, IP addresses, and so on) inside a backend pool are distributed across zones, we recommend creating a zone-redundant frontend. A zone-redundant frontend provides high availability and ensures seamless connectivity even if a zone goes down. |
+
+## Migration option 2: Migrate Load Balancer to another region with AZs
+
+Depending on the type of load balancer you have, you'll need to follow different steps. The following sections cover migrating both public and internal load balancers.
+### Migrate an Internal Load Balancer
+
+When you create an internal load balancer, a virtual network is configured as the network for the load balancer. A private IP address in the virtual network is configured as the frontend (named LoadBalancerFrontend by default) for the load balancer. When configuring this frontend IP, you can select the availability zones.
+
+Azure internal load balancers can't be moved from one region to another. Instead, you must associate a new load balancer with resources in the target region. For the migration, you can use an Azure Resource Manager template to export the existing configuration and virtual network of an internal load balancer. You can then stage the resource in another region by exporting the load balancer and virtual network to a template, modifying the parameters to match the destination region, and then deploying the template to the new region.
+
+To migrate an internal load balancer to availability zones across regions, see [moving internal Load Balancer across regions](../load-balancer/move-across-regions-internal-load-balancer-portal.md).
+
+### Migrate a public Load Balancer
+
+Azure public load balancers can't be moved between regions. You need to associate a new load balancer with resources in the target region.
+To redeploy a load balancer with the source configuration to a new zone-resilient region, the most suitable approach is to use an Azure Resource Manager template to export the existing configuration of the public load balancer. You can then stage the resource in another region by exporting the load balancer and public IP to a template, modifying the parameters to match the destination region, and then deploying the template to the new region.
+
+To migrate a public load balancer to availability zones across regions, see [moving public Load Balancer across regions](../load-balancer/move-across-regions-external-load-balancer-portal.md).
+
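For either the internal or the public scenario, the export-and-redeploy flow can be scripted. A rough Azure CLI sketch, assuming you edit the exported template's location and zone settings before deploying it to the target region:

```azurecli
# Export the source resource group (including the load balancer) to an ARM template.
az group export --name <source-resource-group> > load-balancer-template.json

# After editing the template, create the target resource group and deploy the template there.
az group create --name <target-resource-group> --location <target-region>
az deployment group create \
    --resource-group <target-resource-group> \
    --template-file load-balancer-template.json
```
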
+### Limitations
+- Zones can't be changed, updated, or created for the resource after creation.
+- Resources can't be updated from zonal to zone-redundant or vice versa after creation.
+
+## Next steps
+
+ To learn more about load balancers and availability zones, check out [Load Balancer and availability zones](../load-balancer/load-balancer-standard-availability-zones.md).
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
For more information about databases, see [What are Redis databases?](cache-deve
## Redis commands not supported in Azure Cache for Redis
+Because configuration and management of Azure Cache for Redis instances is handled by Microsoft, the following commands are disabled. If you try to invoke them, you receive an error message similar to `"(error) ERR unknown command"`.
+
+- ACL
+- BGREWRITEAOF
+- BGSAVE
+- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
+- CONFIG
+- DEBUG
+- MIGRATE
+- PSYNC
+- REPLICAOF
+- SAVE
+- SHUTDOWN
+- SLAVEOF
+- SYNC
+ > [!IMPORTANT]
-> Because configuration and management of Azure Cache for Redis instances is managed by Microsoft, the following commands are disabled. If you try to invoke them, you receive an error message similar to `"(error) ERR unknown command"`.
->
->- BGREWRITEAOF
->- BGSAVE
->- CONFIG
->- DEBUG
->- MIGRATE
->- SAVE
->- SHUTDOWN
->- SLAVEOF
->- REPLICAOF
->- ACL
->- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.
->
+> Because configuration and management of Azure Cache for Redis instances is handled by Microsoft, the commands listed above are disabled. If you try to invoke them, you receive an error message similar to `"(error) ERR unknown command"`.
For more information about Redis commands, see [https://redis.io/commands](https://redis.io/commands).
azure-functions Durable Functions Create First Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-first-csharp.md
To complete this tutorial:
* Install [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps differ.
-* Verify that you have the [Azure Storage Emulator](../../storage/common/storage-use-emulator.md) installed and running.
+* Verify that you have the [Azurite emulator](../../storage/common/storage-use-azurite.md) installed and running.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
recommendations: false Previously updated : 09/02/2022 Last updated : 09/08/2022 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
For current Azure Government regions and available services, see [Products avail
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative. ## Azure public services by audit scope
-*Last updated: August 2022*
+*Last updated: September 2022*
### Terminology used
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; | | [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | | [Azure Maps](../../azure-maps/index.yml) | &#x2705; | &#x2705; |
+| [Azure Metrics Advisor](https://azure.microsoft.com/services/metrics-advisor/) | &#x2705; | &#x2705; |
| [Azure Monitor](../../azure-monitor/index.yml) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | | [Azure NetApp Files](../../azure-netapp-files/index.yml) | &#x2705; | &#x2705; | | [Azure Policy](../../governance/policy/index.yml) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Cloud Services](../../cloud-services/index.yml) | &#x2705; | &#x2705; | | [Cloud Shell](../../cloud-shell/overview.md) | &#x2705; | &#x2705; | | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | &#x2705; | &#x2705; |
+| [Cognitive
| [Cognitive | [Cognitive | [Cognitive Services Containers](../../cognitive-services/cognitive-services-container-support.md) | &#x2705; | &#x2705; |
azure-monitor Action Groups Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-logic-app.md
Title: Trigger complex actions with Azure Monitor alerts description: Learn how to create a logic app action to process Azure Monitor alerts.--+ Last updated 09/07/2022
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
An action group is a **global** service, so there's no dependency on a specific
| Option | Behavior | | | -- |
- | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](https://azure.microsoft.com/en-in/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](https://docs.microsoft.com/en-us/azure/service-health/alerts-activity-log-service-notifications-portal) are resilient to Azure live-site-incidents. |
- | Regional | The action group is stored within the selected region. The action group is [zone-redundant](https://docs.microsoft.com/en-us/azure/availability-zones/az-region#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](https://azure.microsoft.com/en-in/global-infrastructure/geographies/#overview). |
+ | Global | The action groups service decides where to store the action group. The action group is persisted in at least two regions to ensure regional resiliency. Processing of actions may be done in any [geographic region](/global-infrastructure/geographies/#overview).<br></br>Voice, SMS and email actions performed as the result of [service health alerts](/azure/service-health/alerts-activity-log-service-notifications-portal) are resilient to Azure live-site-incidents. |
+ | Regional | The action group is stored within the selected region. The action group is [zone-redundant](/azure/availability-zones/az-region#highly-available-services). Processing of actions is performed within the region.</br></br>Use this option if you want to ensure that the processing of your action group is performed within a specific [geographic boundary](/global-infrastructure/geographies/#overview). |
The action group is saved in the subscription, region and resource group that you select.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
And then defining these elements for the resulting alert actions using:
1. In the **Details** tab, define the **Project details**. - Select the **Subscription**. - Select the **Resource group**.
- - (Optional) If you want to make sure that the data processing for the alert rule takes place within a specific region, and you're creating a metric alert rule that monitors a custom metric, you can select to process the alert rule in one of these regions.
+ - (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the regions below, and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
- North Europe - West Europe - Sweden Central - Germany West Central
-
+
+ > [!NOTE]
+ > We are continually adding more regions for regional data processing.
1. Define the **Alert rule details**. ### [Metric alert](#tab/metric)
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Title: Troubleshooting Azure metric alerts
+ Title: Frequently asked questions about Azure metric alerts
description: Common issues with Azure Monitor metric alerts and possible solutions.
Last updated 8/31/2022 ms:reviwer: harelbr
-# Troubleshooting problems in Azure Monitor metric alerts
+# Frequently asked questions about Azure Monitor metric alerts
-This article discusses common problems in Azure Monitor [metric alerts](alerts-metric-overview.md) and how to troubleshoot them.
+This article discusses common questions about Azure Monitor [metric alerts](alerts-metric-overview.md) and how to troubleshoot them.
Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them. For more information on alerting, see [Overview of alerts in Microsoft Azure](./alerts-overview.md).
To avoid having the deployment fail when trying to validate the custom metric'
> [!NOTE] > Using the *skipMetricValidation* parameter might also be required when defining an alert rule on an existing custom metric that hasn't been emitted in several days.
+## Process data for a metric alert rule in a specific region
+
+You can make sure that an alert rule is processed in a specified region if your metric alert rule is defined with a scope of that region and if it monitors a custom metric.
+
+These are the currently supported regions for regional processing of metric alert rules:
+- North Europe
+- West Europe
+- Sweden Central
+- Germany West Central
+
+To enable regional data processing in one of these regions, select the specified region in the **Details** section of the [create a new alert rule wizard](./alerts-create-new-alert-rule.md).
+
+> [!NOTE]
+> We are continually adding more regions for regional data processing.
++ ## Export the Azure Resource Manager template of a metric alert rule via the Azure portal Exporting the Resource Manager template of a metric alert rule helps you understand its JSON syntax and properties, and can be used to automate future deployments.
azure-monitor Autoscale Understanding Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-understanding-settings.md
There are three types of Autoscale profiles:
> The Autoscale user interface in the Azure portal enforces end times for recurrence profiles, and begins running the Autoscale setting's default profile in between recurrence profiles. ## Autoscale evaluation
-Given that Autoscale settings can have multiple profiles, and each profile can have multiple metric rules, it is important to understand how an Autoscale setting is evaluated. Each time the Autoscale job runs, it begins by choosing the profile that is applicable. Then Autoscale evaluates the minimum and maximum values, and any metric rules in the profile, and decides if a scale action is necessary.
+Given that Autoscale settings can have multiple profiles, and each profile can have multiple metric rules, it is important to understand how an Autoscale setting is evaluated. The Autoscale job runs every 30 to 60 seconds, depending on the resource type. Each time the Autoscale job runs, it begins by choosing the profile that is applicable. Then Autoscale evaluates the minimum and maximum values, and any metric rules in the profile, and decides if a scale action is necessary.
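To make that evaluation order concrete, here's a simplified, hypothetical sketch in Python. It is not the service's actual implementation; the profile and rule fields are invented for illustration only.

```python
# Simplified illustration of one Autoscale evaluation pass (not the actual service code).
def evaluate_autoscale(profile, current_instance_count, metric_values):
    """Pick a target instance count for the currently applicable profile."""
    target = current_instance_count

    # Evaluate each metric rule in the profile; a rule whose condition is met
    # proposes a scale action (e.g. +1 instance for scale-out, -1 for scale-in).
    for rule in profile["rules"]:
        value = metric_values[rule["metric"]]
        if rule["direction"] == "out" and value > rule["threshold"]:
            target = current_instance_count + rule["change"]
        elif rule["direction"] == "in" and value < rule["threshold"]:
            target = current_instance_count - rule["change"]

    # The result is always clamped to the profile's minimum and maximum.
    return max(profile["minimum"], min(profile["maximum"], target))


profile = {"minimum": 2, "maximum": 10,
           "rules": [{"metric": "cpu", "direction": "out", "threshold": 70, "change": 1},
                     {"metric": "cpu", "direction": "in", "threshold": 30, "change": 1}]}
print(evaluate_autoscale(profile, current_instance_count=3, metric_values={"cpu": 85}))  # 4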
++ ### Which profile will Autoscale pick?
azure-monitor Container Insights Update Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-update-metrics.md
To update a specific cluster in your subscription by using Azure CLI, run the fo
```azurecli az login az account set --subscription "<subscriptionName>"
-az aks show -g <resourceGroupName> -n <clusterName>
+az aks show -g <resourceGroupName> -n <clusterName> --query "servicePrincipalProfile"
+az aks show -g <resourceGroupName> -n <clusterName> --query "addonProfiles.omsagent.identity"
az role assignment create --assignee <clientIdOfSPN> --scope <clusterResourceId> --role "Monitoring Metrics Publisher" ```
To get the value for `clientIdOfSPNOrMsi`, you can run the command `az aks show`
```azurecli az login az account set --subscription "<subscriptionName>"
-az aks show -g <resourceGroupName> -n <clusterName>
+az aks show -g <resourceGroupName> -n <clusterName> --query "servicePrincipalProfile"
+az aks show -g <resourceGroupName> -n <clusterName> --query "addonProfiles.omsagent.identity"
az role assignment create --assignee <clientIdOfSPNOrMsi> --scope <clusterResourceId> --role "Monitoring Metrics Publisher" ```
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
# What's new in Azure Monitor documentation This article lists significant changes to Azure Monitor documentation.
-## July, 2022
+
+## August 2022
++
+### Agents
+
+| Article | Description |
+|||
+|[Log Analytics agent overview](agents/log-analytics-agent.md)|Restructured the Agents section and rewrote the Agents Overview article to reflect that Azure Monitor Agent is the primary agent for collecting monitoring data.|
+|[Dependency analysis in Azure Migrate Discovery and assessment - Azure Migrate](https://docs.microsoft.com/azure/migrate/concepts-dependency-visualization)|Revamped the guidance for migrating from Log Analytics Agent to Azure Monitor Agent.|
++
+### Alerts
+
+| Article | Description |
+|:|:|
+|[Create Azure Monitor alert rules](alerts/alerts-create-new-alert-rule.md)|Added support for data processing in a specified region, for action groups and for metric alert rules that monitor a custom metric.|
+
+### Application-insights
+
+| Article | Description |
+|||
+|[Azure Application Insights Overview Dashboard](app/overview-dashboard.md)|Important information has been added clarifying that moving or renaming resources will break dashboards, with additional instructions on how to resolve this scenario.|
+|[Azure Application Insights override default SDK endpoints](app/custom-endpoints.md)|We've clarified that endpoint modification isn't recommended and to use connection strings instead.|
+|[Continuous export of telemetry from Application Insights](app/export-telemetry.md)|Important information has been added about avoiding duplicates when saving diagnostic logs in a Log Analytics workspace.|
+|[Dependency Tracking in Azure Application Insights with OpenCensus Python](app/opencensus-python-dependency.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Incoming Request Tracking in Azure Application Insights with OpenCensus Python](app/opencensus-python-request.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Monitor Python applications with Azure Monitor](app/opencensus-python.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
+|[Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)|Updated connection string overrides example.|
+|[Application Insights SDK for ASP.NET Core applications](app/tutorial-asp-net-core.md)|A new tutorial with step-by-step instructions to use the Application Insights SDK with .NET Core applications.|
+|[Application Insights SDK support guidance](app/sdk-support-guidance.md)|Our SDK support guidance has been updated and clarified.|
+|[Azure Application Insights - Dependency Auto-Collection](app/auto-collect-dependencies.md)|The latest currently supported node.js modules have been updated.|
+|[Application Insights custom metrics with .NET and .NET Core](app/tutorial-asp-net-custom-metrics.md)|A new tutorial with step-by-step instructions on how to enable custom metrics with .NET applications.|
+|[Migrate an Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md)|A comprehensive FAQ section has been added to assist with migration to workspace-based resources.|
+|[Configuration options - Azure Monitor Application Insights for Java](app/java-standalone-config.md)|This article has been fully updated for 3.4.0-BETA.|
+
+### Autoscale
+
+| Article | Description |
+|||
+|[Autoscale in Microsoft Azure](autoscale/autoscale-overview.md)|Updated conceptual diagrams|
+|[Use predictive autoscale to scale out before load demands in virtual machine scale sets (preview)](autoscale/autoscale-predictive.md)|Predictive autoscale (preview) is now available in all regions|
+
+### Change-analysis
+
+| Article | Description |
+|||
+|[Enable Change Analysis](change/change-analysis-enable.md)| Added note for slot-level enablement|
+|[Tutorial - Track a web app outage using Change Analysis](change/tutorial-outages.md)| Added set up steps to tutorial|
+|[Use Change Analysis in Azure Monitor to find web-app issues](change/change-analysis.md)|Updated limitations|
+|[Observability data in Azure Monitor](observability-data.md)| Added "Changes" section|
+### Containers
+
+| Article | Description |
+|||
+|[Monitor an Azure Kubernetes Service (AKS) cluster deployed](containers/container-insights-enable-existing-clusters.md)|Added section on using private link with Container insights.|
+
+### Essentials
+
+| Article | Description |
+|||
+|[Azure activity log](essentials/activity-log.md)|Added instructions for how to stop collecting activity logs using the legacy collection method.|
+|[Azure activity log insights](essentials/activity-log-insights.md)|Created a separate Activity Log Insights article in the Insights section.|
+
+### Logs
+
+| Article | Description |
+|||
+|[Configure data retention and archive in Azure Monitor Logs (Preview)](logs/data-retention-archive.md)|Clarified how data retention and archiving work in Azure Monitor Logs to address repeated customer inquiries.|
++
+## July 2022
### General | Article | Description |
This article lists significant changes to Azure Monitor documentation.
|[What is VM insights?](vm/vminsights-overview.md)|All VM insights content updated for new support of Azure Monitor agent.
-## June, 2022
+## June 2022
### General
This article lists significant changes to Azure Monitor documentation.
| [Tools for migrating to Azure Monitor Agent from legacy agents](agents/azure-monitor-agent-migration-tools.md) | New article that explains how to install and use tools for migrating from legacy agents to the new Azure Monitor agent (AMA).| ### Visualizations
-Azure Monitor Workbooks documentation previously resided on an external GitHub repository. We have migrated all Azure Workbooks content to the same repo as all other Azure Monitor content.
+Azure Monitor Workbooks documentation previously resided on an external GitHub repository. We've migrated all Azure Workbooks content to the same repo as all other Azure Monitor content.
-## May, 2022
+## May 2022
### General
azure-netapp-files Azacsnap Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md
na Previously updated : 07/29/2022 Last updated : 09/07/2022
AzAcSnap can be installed on the same host as the database (SAP HANA), or it can
AzAcSnap is a lightweight application that is typically executed from an external scheduler. On most Linux systems, this operation is `cron`, which is what the documentation will focus on. But the scheduler could be an alternative tool as long as it can import the `azacsnap` user's shell profile. Importing the user's environment settings ensures file paths and permissions are initialized correctly.
+## Technical articles
+
+This is a list of technical articles where AzAcSnap has been used as part of a data protection strategy.
+
+* [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161)
+* [Manual Recovery Guide for SAP Oracle 19c on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-oracle-19c-on-azure-vms-from-azure/ba-p/3242408)
+* [Manual Recovery Guide for SAP HANA on Azure Large Instance from storage snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-large-instance-from/ba-p/3242347)
+* [Automating SAP system copy operations with Libelle SystemCopy](https://docs.netapp.com/us-en/netapp-solutions-sap/lifecycle/libelle-sc-overview.html)
+ ## Command synopsis The general format of the commands is as follows:
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
na Previously updated : 09/7/2022 Last updated : 09/07/2022 # Resource limits for Azure NetApp Files
The service dynamically adjusts the `maxfiles` limit for a volume based on its p
| > 4 TiB | 100 million | >[!IMPORTANT]
-> To increase the quota for a volume with a quota of at least 4 TiB, you must initiate [a support request](#request-limit-increase).
+> If your volume has a quota of at least 4 TiB and you want to increase the quota, you must initiate [a support request](#request-limit-increase).
For volumes with at least 4 TiB of quota, you can increase the `maxfiles` (inodes) limit beyond 100 million. For every 100 million files you increase (or a fraction thereof), you need to increase the corresponding volume quota by 4 TiB. For example, if you increase the `maxfiles` limit from 100 million files to 200 million files (or any number in between), you need to increase the volume quota from 4 TiB to 8 TiB.
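As a quick sanity check of that arithmetic, the minimum quota for a desired `maxfiles` value above the default can be computed as 4 TiB per started block of 100 million files. A small illustrative Python helper (the function name is hypothetical):

```python
import math

def required_quota_tib(maxfiles: int) -> int:
    """Return the minimum volume quota (TiB) needed for a desired maxfiles limit.

    Every started block of 100 million files requires 4 TiB of quota.
    """
    blocks = math.ceil(maxfiles / 100_000_000)
    return 4 * blocks

print(required_quota_tib(100_000_000))  # 4 TiB
print(required_quota_tib(150_000_000))  # 8 TiB (a fraction of a block still needs the full 4 TiB)
print(required_quota_tib(200_000_000))  # 8 TiB
```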
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 05/20/2022 Last updated : 09/08/2022
In order to upload a video from a URL, change your code to send nu
var uploadRequestResult = await client.PostAsync($"{apiUrl}/{accountInfo.Location}/Accounts/{accountInfo.Id}/Videos?{queryParams}", null); ```
-## August 2022 release updates
+## August 2022
### Update topic inferencing model
azure-vmware Install Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/install-vmware-hcx.md
Last updated 03/29/2022
VMware HCX Advanced and its associated Cloud Manager are no longer pre-deployed in Azure VMware Solution. Instead, you'll install it through the Azure portal as an add-on. You'll still download the HCX Connector OVA and deploy the virtual appliance on your on-premises vCenter Server.
-Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud) in a single HCX manager system. The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. Once the service is generally available, you'll have 30 days to decide on your next steps. You can turn off or opt out of the HCX Enterprise Edition service but keep HCX Advanced as it's part of the node cost.
+Any edition of VMware HCX supports 25 site pairings (on-premises to cloud or cloud to cloud) in a single HCX manager system. The default is HCX Advanced, but you can open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) to have HCX Enterprise Edition enabled. VMware HCX Enterprise edition is available and supported on Azure VMware Solution, at no additional cost.
Downgrading from HCX Enterprise Edition to HCX Advanced is possible without redeploying. First, ensure you've reverted to an HCX Advanced configuration state and not using the Enterprise features. If you plan to downgrade, ensure that no scheduled migrations, features like RAV and [HCX Mobility Optimized Networking (MON)](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-0E254D74-60A9-479C-825D-F373C41F40BC.html) are in use.
After you're finished, follow the recommended next steps at the end to continue
- [Prepare for HCX installations](https://docs.vmware.com/en/VMware-HCX/4.1/hcx-user-guide/GUID-A631101E-8564-4173-8442-1D294B731CEB.html) -- If you plan to use VMware HCX Enterprise, make sure you've enabled the [VMware HCX Enterprise](https://cloud.vmware.com/community/2019/08/08/introducing-hcx-enterprise/) add-on through a [support request](https://portal.azure.com/#create/Microsoft.Support). It's a free 12-month trial in Azure VMware Solution.
+- If you plan to use VMware HCX Enterprise, make sure you've enabled the [VMware HCX Enterprise](https://cloud.vmware.com/community/2019/08/08/introducing-hcx-enterprise/) add-on through a [support request](https://portal.azure.com/#create/Microsoft.Support).
- [VMware blog series - cloud migration](https://blogs.vmware.com/vsphere/2019/10/cloud-migration-series-part-2.html)
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Previously updated : 06/08/2022 Last updated : 09/08/2022
This article describes how to use the pronunciation assessment tool through the
You can explore and try out pronunciation assessment even without signing in. > [!TIP]
-> To assess more than 5 seconds of speech with your own script, sign in with an Azure account and use your Speech or Cognitive Services resource.
+> To assess more than 5 seconds of speech with your own script, sign in with an [Azure account](https://azure.microsoft.com/free/cognitive-services) and use your <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a>.
Follow these steps to assess your pronunciation of the reference text:
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Because prosodic attribute values can vary over a wide range, the speech recogni
| Attribute | Description | Required or optional | | | | -- |
-| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value, expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value, expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.</li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
+| `pitch` | Indicates the baseline pitch for the text. You can express the pitch as:<ul><li>An absolute value: Expressed as a number followed by "Hz" (Hertz). For example, `<prosody pitch="600Hz">some text</prosody>`.</li><li>A relative value:<ul><li>As a relative number: Expressed as a number preceded by "+" or "-" and followed by "Hz" or "st" that specifies an amount to change the pitch. For example: `<prosody pitch="+80Hz">some text</prosody>` or `<prosody pitch="-2st">some text</prosody>`. The "st" indicates the change unit is semitone, which is half of a tone (a half step) on the standard diatonic scale.<li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody pitch="50%">some text</prosody>` or `<prosody pitch="-50%">some text</prosody>`.</li></ul></li><li>A constant value:<ul><li>x-low</li><li>low</li><li>medium</li><li>high</li><li>x-high</li><li>default</li></ul></li></ul> | Optional |
| `contour` | Contour now supports neural voice. Contour represents changes in pitch. These changes are represented as an array of targets at specified time positions in the speech output. Each target is defined by sets of parameter pairs. For example: <br/><br/>`<prosody contour="(0%,+20Hz) (10%,-2st) (40%,+10Hz)">`<br/><br/>The first value in each set of parameters specifies the location of the pitch change as a percentage of the duration of the text. The second value specifies the amount to raise or lower the pitch by using a relative value or an enumeration value for pitch (see `pitch`). | Optional | | `range` | A value that represents the range of pitch for the text. You can express `range` by using the same absolute values, relative values, or enumeration values used to describe `pitch`. | Optional |
-| `rate` | Indicates the speaking rate of the text. You can express `rate` as:<ul><li>A relative value, expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the rate. A value of *0.5* results in a halving of the rate. A value of *3* results in a tripling of the rate.</li><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
-| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value, expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value, expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
+| `rate` | Indicates the speaking rate of the text. You can express `rate` as:<ul><li>A relative value: <ul><li>As a relative number: Expressed as a number that acts as a multiplier of the default. For example, a value of *1* results in no change in the original rate. A value of *0.5* results in a halving of the original rate. A value of *2* results in twice the original rate.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody rate="50%">some text</prosody>` or `<prosody rate="-50%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>x-slow</li><li>slow</li><li>medium</li><li>fast</li><li>x-fast</li><li>default</li></ul></li></ul> | Optional |
+| `volume` | Indicates the volume level of the speaking voice. You can express the volume as:<ul><li>An absolute value: Expressed as a number in the range of 0.0 to 100.0, from *quietest* to *loudest*. An example is 75. The default is 100.0.</li><li>A relative value: <ul><li>As a relative number: Expressed as a number preceded by "+" or "-" that specifies an amount to change the volume. Examples are +10 or -5.5.</li><li>As a percentage: Expressed as a number preceded by "+" (optionally) or "-" and followed by "%", indicating the relative change. For example: `<prosody volume="50%">some text</prosody>` or `<prosody volume="+3%">some text</prosody>`.</li></ul><li>A constant value:<ul><li>silent</li><li>x-soft</li><li>soft</li><li>medium</li><li>loud</li><li>x-loud</li><li>default</li></ul></li></ul> | Optional |
### Change speaking rate
-Speaking rate can be applied at the word or sentence level.
+Speaking rate can be applied at the word or sentence level. The rate changes should be within 0.5 to 2 times the original rate.
**Example**
Speaking rate can be applied at the word or sentence level.
### Change volume
-Volume changes can be applied at the sentence level.
+Volume changes can be applied at the sentence level. The volume changes should be within 0 (silence) to 1.5 times the original volume.
**Example**
Volume changes can be applied at the sentence level.
### Change pitch
-Pitch changes can be applied at the sentence level.
+Pitch changes can be applied at the sentence level. The pitch changes should be within 0.5 to 1.5 times the original pitch.
**Example**
cognitive-services Sentence Alignment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/sentence-alignment.md
For a training to succeed, the table below shows the minimum number of sentences
> - Training will not start and will fail if the 10,000 minimum sentence count for Training is not met. > - Tuning and Testing are optional. If you do not provide them, the system will remove an appropriate percentage from Training to use for validation and testing. > - You can train a model using only dictionary data. Please refer to [What is Dictionary](./what-is-dictionary.md).
-> - If your dictionary contains more than 250,000 sentences, our Document Translator is a better choice. Please refer to [Document Translator](../document-translation/overview.md).
+> - If your dictionary contains more than 250,000 sentences, our Document Translation feature is a better choice. Please refer to [Document Translation](../document-translation/overview.md).
> - Free (F0) subscription training has a maximum limit of 2,000,000 characters. ## Next steps
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
A batch Document Translation request is submitted to your Translator service end
### HTTP headers
-The following headers are included with each Document Translator API request:
+The following headers are included with each Document Translation API request:
|HTTP header|Description| ||--|
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/managed-identity.md
The **Storage Blob Data Contributor** role gives Translator (represented by the
### Headers
-The following headers are included with each Document Translator API request:
+The following headers are included with each Document Translation API request:
|HTTP header|Description| ||--|
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/overview.md
The following document file types are supported by Document Translation:
| File type| File extension|Description| |||--|
-|Adobe PDF|pdf|Portable document file format.|
+|Adobe PDF|pdf|Portable document file format. Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.|
|Comma-Separated Values |csv| A comma-delimited raw-data file used by spreadsheet programs.| |HTML|html, htm|Hyper Text Markup Language.| |Localization Interchange File Format|xlf| A parallel document format, export of Translation Memory systems. The languages used are defined inside the file.|
cognitive-services V3 0 Translate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/reference/v3-0-translate.md
Previously updated : 08/15/2022 Last updated : 09/07/2022
Examples of JSON responses are provided in the [examples](#examples) section.
| Headers | Description | | | |
-| X-RequestId | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
-| X-MT-System | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests |
+| X-requestid | Value generated by the service to identify the request. It's used for troubleshooting purposes. |
+| X-mt-system | Specifies the system type that was used for translation for each 'to' language requested for translation. The value is a comma-separated list of strings. Each string indicates a type: <br><br>* Custom - Request includes a custom system and at least one custom system was used during translation.<br>* Team - All other requests |
+| X-metered-usage |Specifies consumption (the number of characters for which the user will be charged) for the translation job request. For example, if the word "Hello" is translated from English (en) to French (fr), this field will return the value '5'.|
## Response status codes
The response is:
The alignment information starts with `0:2-0:1`, which means that the first three characters in the source text (`The`) map to the first two characters in the translated text (`La`). #### Limitations+ Obtaining alignment information is an experimental feature that we've enabled for prototyping research and experiences with potential phrase mappings. We may choose to stop supporting this feature in the future. Here are some of the notable restrictions where alignments aren't supported: * Alignment isn't available for text in HTML format that is, textType=html * Alignment is only returned for a subset of the language pairs:
- - English to/from any other language except Chinese Traditional, Cantonese (Traditional) or Serbian (Cyrillic).
- - from Japanese to Korean or from Korean to Japanese.
- - from Japanese to Chinese Simplified and Chinese Simplified to Japanese.
- - from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.
+ * English to/from any other language except Chinese Traditional, Cantonese (Traditional) or Serbian (Cyrillic).
+ * from Japanese to Korean or from Korean to Japanese.
+ * from Japanese to Chinese Simplified and Chinese Simplified to Japanese.
+ * from Chinese Simplified to Chinese Traditional and Chinese Traditional to Chinese Simplified.
* You won't receive alignment if the sentence is a canned translation. Example of a canned translation is "This is a test", "I love you" and other high frequency sentences. * Alignment isn't available when you apply any of the approaches to prevent translation as described [here](../prevent-translation.md)
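The alignment value itself is a space-separated list of `srcStart:srcEnd-tgtStart:tgtEnd` character ranges. As an illustration of that format, here's a small hypothetical Python helper that parses a projection string into index pairs:

```python
def parse_alignment(proj: str):
    """Parse a Translator alignment projection string like '0:2-0:1 4:8-3:9'
    into a list of ((src_start, src_end), (tgt_start, tgt_end)) tuples."""
    pairs = []
    for mapping in proj.split():
        src, tgt = mapping.split("-")
        src_start, src_end = (int(i) for i in src.split(":"))
        tgt_start, tgt_end = (int(i) for i in tgt.split(":"))
        pairs.append(((src_start, src_end), (tgt_start, tgt_end)))
    return pairs

print(parse_alignment("0:2-0:1"))  # [((0, 2), (0, 1))] -> 'The' maps to 'La'
```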
curl -X POST "https://api.cognitive.microsofttranslator.com/translate?api-versio
The response is:
-```
+```json
[ { "translations":[
cognitive-services Translator Text Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/translator-text-apis.md
Previously updated : 06/20/2022 Last updated : 09/07/2022 ms.devlang: csharp, golang, java, javascript, python
To call the Translator service via the [REST API](reference/rest-api-guide.md),
|**X-ClientTraceId**|A client-generated GUID to uniquely identify the request. You can omit this header if you include the trace ID in the query string using a query parameter named ClientTraceId.|<ul><li>***Optional***</li></ul> |||
-## Setup your application
+## Set up your application
### [C#](#tab/csharp)
To call the Translator service via the [REST API](reference/rest-api-guide.md),
1. Open the **Program.cs** file.
-1. Delete the pre-existing code, including the line `Console.Writeline("Hello World!")`. You will copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
+1. Delete the pre-existing code, including the line `Console.Writeline("Hello World!")`. You'll copy and paste the code samples into your application's Program.cs file. For each code sample, make sure you update the key and endpoint variables with values from your Azure portal Translator instance.
1. Once you've added a desired code sample to your application, choose the green **start button** next to formRecognizer_quickstart to build and run your program, or press **F5**.
You can use any text editor to write Go applications. We recommend using the lat
1. Create a new GO file named **text-translator.go** from the **translator-text-app** directory.
-1. You will copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
+1. You'll copy and paste the code samples into your **text-translator.go** file. Make sure you update the key variable with the value from your Azure portal Translator instance.
1. Once you've added a code sample to your application, your Go program can be executed in a command or terminal prompt. Make sure your prompt's path is set to the **translator-text-app** folder and use the following command:
You can use any text editor to write Go applications. We recommend using the lat
> > * You can also create a new file in your IDE named `TranslatorText.java` and save it to the `java` directory.
-1. You will copy and paste the code samples `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
+1. You'll copy and paste the code samples `TranslatorText.java` file. **Make sure you update the key with one of the key values from your Azure portal Translator instance**.
1. Once you've added a code sample to your application, navigate back to your main project directory, **translator-text-app**, open a console window, and enter the following commands:
You can use any text editor to write Go applications. We recommend using the lat
> > * You can also create a new file named `index.js` in your IDE and save it to the `translator-text-app` directory.
-1. You will copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
+1. You'll copy and paste the code samples into your `index.js` file. **Make sure you update the key variable with the value from your Azure portal Translator instance**.
1. Once you've added the code sample to your application, run your program:
After a successful call, you should see the following response:
] ```
+You can check the consumption (the number of characters for which you'll be charged) for each request in the [**response headers: x-metered-usage**](reference/v3-0-translate.md#response-headers) field.
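For example, a minimal Python sketch that issues a translate request and prints the `x-metered-usage` response header; the key, region, and text values are placeholders you'd replace with your own:

```python
import requests, uuid

endpoint = "https://api.cognitive.microsofttranslator.com/translate"
params = {"api-version": "3.0", "from": "en", "to": "fr"}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-translator-key>",       # placeholder
    "Ocp-Apim-Subscription-Region": "<your-resource-region>",   # placeholder
    "Content-Type": "application/json",
    "X-ClientTraceId": str(uuid.uuid4()),
}
body = [{"Text": "Hello"}]

response = requests.post(endpoint, params=params, headers=headers, json=body)
print(response.json())
# Consumption for this request, e.g. '5' for the five characters in "Hello".
print(response.headers.get("x-metered-usage"))
```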
+ ## Detect language If you need translation, but don't know the language of the text, you can use the language detection operation. There's more than one way to identify the source text language. In this section, you'll learn how to use language detection using the `translate` endpoint, and the `detect` endpoint.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/whats-new.md
Document Translation .NET and Python client-library SDKs are now generally avail
### [Document Translation support for scanned PDF documents](https://aka.ms/blog_ScannedPdfTranslation)
-* Document Translator uses optical character recognition (OCR) technology to extract and translate text in scanned PDF document while retaining the original layout.
+* Document Translation uses optical character recognition (OCR) technology to extract and translate text in scanned PDF documents while retaining the original layout.
## April 2022
cognitive-services Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/analytics.md
AzureDiagnostics
| project question_, answer_, score_, kbId_ ```
+### Prebuilt question answering inference calls
+
+```kusto
+// Show logs from AzureDiagnostics table
+// Lists the latest logs in AzureDiagnostics table, sorted by time (latest first).
+AzureDiagnostics
+| where OperationName == "CustomQuestionAnswering QueryText"
+| extend answer_ = tostring(parse_json(properties_s).answer)
+| extend question_ = tostring(parse_json(properties_s).question)
+| extend score_ = tostring(parse_json(properties_s).score)
+| extend requestid = tostring(parse_json(properties_s)["apim-request-id"])
+| project TimeGenerated, requestid, question_, answer_, score_
+```
+ ## Next steps > [!div class="nextstepaction"]
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/language-support.md
json
{ "taskName": "analyze 1", "kind": "Healthcare",
- "parameters": {
- "fhirVersion": "4.0.1"
- }
} ] }
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.
+## September 2022
+Text Analytics for Health now [supports additional languages](./text-analytics-for-health/language-support.md) in preview: Spanish, French, German, Italian, Portuguese, and Hebrew. These languages are available when you use a Docker container to deploy the API service.
+ ## August 2022 * [Role-based access control](./concepts/role-based-access-control.md) for the Language service.
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Key features of the Calling SDK:
- **Addressing** - Azure Communication Services is using [Azure Active Directory user identifier](/powershell/module/azuread/get-azureaduser) to address communication endpoints. Clients use Azure Active Directory identities to authenticate to the service and communicate with each other. These identities are used in Calling APIs that provide clients visibility into who is connected to a call (the roster). And are also used in [Microsoft Graph API](/graph/api/user-get). - **Encryption** - The Calling SDK encrypts traffic and prevents tampering on the wire. - **Device Management and Media** - The Calling SDK provides facilities for binding to audio and video devices, encodes content for efficient transmission over the communications data plane, and renders content to output devices and views that you specify. APIs are also provided for screen and application sharing.-- **PSTN** - The Calling SDK can receive and initiate voice calls with the traditional publicly switched telephony system [using phone numbers you acquire in the Teams Admin Portal](/microsoftteams/pstn-connectivity).-- **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video data plane. - **Notifications** - The Calling SDK provides APIs that allow clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call. ## Calling capabilities
The following list presents the set of features that are currently available in
| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | | | Set / update scaling mode | ✔️ | | | Render remote video stream | ✔️ |-
-Support for streaming, timeouts, platforms, and browsers is shared with [Communication Services calling SDK overview](./../voice-video-calling/calling-sdk-features.md).
-
-## Detailed Teams capabilities
-
-The following list presents the set of Teams capabilities, which are currently available in the Azure Communication Services Calling SDK for JavaScript.
-
-|Group of features | Teams capability | JS |
-|-|--||
-| Core Capabilities | Placing a call honors Teams external access configuration | ✔️ |
-| | Placing a call honors Teams guest access configuration | ✔️ |
-| | Joining Teams meeting honors configuration for automatic people admit in the Lobby | ✔️ |
-| | Actions available in the Teams meeting are defined by assigned role | ✔️ |
-| Mid call control | Receive forwarded call | ✔️ |
-| | Receive simultaneous ringing | ✔️ |
-| | Play music on hold | ❌ |
-| | Park a call | ❌ |
-| | Transfer a call to a person | ✔️ |
-| | Transfer a call to a call | ✔️ |
-| | Transfer a call to Voicemail | ❌ |
-| | Merge ongoing calls | ❌ |
-| | Place a call on behalf of the user | ❌ |
-| | Start call recording | ❌ |
-| | Start call transcription | ❌ |
-| | Start live captions | ❌ |
-| | Receive information of call being recorded | ✔️ |
-| PSTN | Make an Emergency call | ✔️ |
-| | Place a call honors location-based routing | ❌ |
-| | Support for survivable branch appliance | ❌ |
-| Phone system | Receive a call from Teams auto attendant | ✔️ |
-| | Transfer a call to Teams auto attendant | ✔️ |
-| | Receive a call from Teams call queue (only conference mode) | ✔️ |
-| | Transfer a call from Teams call queue (only conference mode) | ✔️ |
-| Compliance | Place a call honors information barriers | ✔️ |
-| | Support for compliance recording | ✔️ |
-| Meeting | [Include participant in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ |
--
-## Teams meeting options
-
-Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users:
-
-|Option name|Description| Supported |
-| | | |
-| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ |
-| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable |
-| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
-| [Choose co-organizers](https://support.microsoft.com/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ |
-| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
-|[Manage what attendees see](https://support.microsoft.com/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
-|[Allow mic for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️|
-|[Allow camera for attendees](https://support.microsoft.com/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️|
-|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️|
-|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
-|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌|
-|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
-|[Provide CART Captions](https://support.microsoft.com/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
| | See together mode video stream | ❌ | | | See Large gallery view | ❌ | | | Receive video stream from Teams media bot | ❌ |
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-automation.md
+
+ Title: Call Automation overview
+
+description: Learn about Azure Communication Services Call Automation.
++++ Last updated : 09/06/2022+++
+# Call Automation Overview
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Azure Communication Services Call Automation provides developers the ability to build server-based, intelligent call workflows for voice and PSTN channels. The SDKs, available for .NET and Java, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (like answer, transfer, play audio, etc.) to steer and control calls based on your business logic.
+
+## Common Use Cases
+
+Some of the common use cases that can be built using Call Automation include:
+
+- Program VoIP or PSTN calls for transactional workflows such as click-to-call and appointment reminders to improve customer service.
+- Build interactive workflows that let customers self-serve for use cases like order bookings and updates, using Play (Audio URL) and Recognize (DTMF) actions.
+- Integrate your communication applications with Contact Centers and your private telephony networks using Direct Routing.
+- Protect your customer's identity by building number masking services to connect buyers to sellers or users to partner vendors on your platform.
+- Increase engagement by building automated customer outreach programs for marketing and customer service.
+
+ACS Call Automation can be used to build calling workflows for customer service scenarios, as depicted in the high-level architecture below. You can answer inbound calls or make outbound calls. Execute actions like playing a welcome message and then connecting the customer to a live agent, who uses an ACS Calling SDK client app to answer the incoming call. With support for ACS PSTN or Direct Routing, you can then connect this workflow back to your contact center.
+
+![Diagram of calling flow for a customer service scenario.](./call-automation-architecture.png)
+
+## Capabilities
+
+The following list presents the set of features that are currently available in the Azure Communication Services Call Automation SDKs.
+
+| Feature Area | Capability | .NET | Java |
+| -| -- | | -- |
+| Pre-call scenarios | Answer a one-to-one call | ✔️ | ✔️ |
+| | Answer a group call | ✔️ | ✔️ |
+| | Place new outbound call to one or more endpoints | ✔️ | ✔️ |
+| | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ |
+| | Reject an incoming call | ✔️ | ✔️ |
+| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ |
+| | Play Audio from an audio file | ✔️ | ✔️ |
+| | Remove one or more endpoints from an existing call| ✔️ | ✔️ |
+| | Blind Transfer** a call to another endpoint | ✔️ | ✔️ |
+| | Hang up a call (remove the call leg) | ✔️ | ✔️ |
+| | Terminate a call (remove all participants and end call)| ✔️ | ✔️ |
+| Query scenarios | Get the call state | ✔️ | ✔️ |
+| | Get a participant in a call | ✔️ | ✔️ |
+| | List all participants in a call | ✔️ | ✔️ |
+
+*Redirecting a call to a phone number is currently not supported.
+
+**Transfer of VoIP call to a phone number is currently not supported.
+
+## Architecture
+
+Call Automation uses a REST API interface to receive requests and provide responses to all actions performed within the service. Due to the asynchronous nature of calling, most actions will have corresponding events that are triggered when the action completes successfully or fails.
+
+Event Grid - Azure Communication Services uses Event Grid to deliver the IncomingCall event. This event can be triggered:
+- by an inbound PSTN call to a number you've acquired in the portal,
+- by connecting your telephony infrastructure using an SBC,
+- for one-on-one calls between Communication Service users,
+- when a Communication Services user is added to an existing call (group call),
+- when an existing 1:1 call is transferred to a Communication Services user.
+
+Web hooks - Call Automation SDKs use standard web hook HTTP/S callbacks for call state change events and responses to mid-call actions.
+
+![Screenshot of flow for incoming call and actions.](./action-architecture.png)
++
+## Call Actions
+
+### Pre-call actions
+
+These actions are performed before the destination endpoint listed in the IncomingCall event notification is connected. Web hook callback events are only sent for the "answer" pre-call action, not for the reject or redirect actions.
+
+**Answer** - Using the IncomingCall event from Event Grid and the Call Automation SDK, a call can be answered by your application. This action allows for IVR scenarios where an inbound PSTN call can be answered programmatically by your application. Other scenarios include answering a call on behalf of a user.
+
+**Reject** - To reject a call means your application can receive the IncomingCall event and prevent the call from being connected to the destination endpoint.
+
+**Redirect** - Using the IncomingCall event from Event Grid, a call can be redirected to one or more endpoints, creating a single or simultaneous ringing (sim-ring) scenario. This means the call isn't answered by your application; it's simply 'redirected' to another destination endpoint to be answered.
+
+**Make Call** - Make Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
+
+### Mid-call actions
+
+These actions can be performed on the calls that are answered or placed using Call Automation SDKs. Each mid-call action has a corresponding success or failure web hook callback event.
+
+**Add/Remove participant(s)** - One or more participants can be added in a single request, with each participant being a variation of supported destination endpoints. A web hook callback is sent for every participant successfully added to the call.
+
+**Play** - When your application answers a call or places an outbound call, you can play an audio prompt for the caller. This audio can be looped if needed in scenarios like playing hold music. To learn more, view our [quickstart](../../quickstarts/voice-video-calling/play-action.md).
+
+**Transfer** - When your application answers a call or places an outbound call to an endpoint, that endpoint can be transferred to another destination endpoint. Transferring a 1:1 call will remove your application's ability to control the call using the Call Automation SDKs.
+
+**Hang-up** - When your application has answered a one-to-one call, the hang-up action will remove the call leg and terminate the call with the other endpoint. If there are more than two participants in the call (group call), performing a 'hang-up' action will remove your application's endpoint from the group call.
+
+**Terminate** - Whether your application has answered a one-to-one or group call, or placed an outbound call with one or more participants, this action will remove all participants and end the call. This operation is triggered by setting the `forEveryOne` property to true in the Hang-Up call action.
+
+## Events
+
+The following tables outline the current events emitted by Azure Communication Services: the first shows events delivered through Event Grid, and the second shows events sent by Call Automation as web hook callbacks.
+
+### Event Grid events
+
+Most of the events sent by Event Grid are platform agnostic, meaning they're emitted regardless of the SDK (Calling or Call Automation). While you can create a subscription for any event, we recommend you use the IncomingCall event for all Call Automation use cases where you want to control the call programmatically. Use the other events for reporting and telemetry purposes.
+
+| Event | Description |
+| -- | |
+| IncomingCall | Notification of a call to a communication user or phone number |
+| CallStarted | A call is established (inbound or outbound) |
+| CallEnded | A call is terminated and all participants are removed |
+| ParticipantAdded | A participant has been added to a call |
+| ParticipantRemoved| A participant has been removed from a call |
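To act on these events, your service exposes an HTTPS endpoint that Event Grid calls. The endpoint must first complete the standard Event Grid subscription validation handshake, after which IncomingCall notifications arrive. The following is a minimal, hypothetical Flask sketch of that handshake; it isn't part of the Call Automation SDK, and the route name and payload handling are assumptions for illustration.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/incomingcall", methods=["POST"])
def on_event_grid_event():
    for event in request.get_json():
        # Event Grid first sends a SubscriptionValidationEvent; echo the code back.
        if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": event["data"]["validationCode"]})
        # Afterwards, IncomingCall events arrive with the context needed to answer the call.
        if event["eventType"] == "Microsoft.Communication.IncomingCall":
            incoming_call_context = event["data"]["incomingCallContext"]
            print("Incoming call, context:", incoming_call_context[:40], "...")
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```

From the incoming call context, your application would then perform the answer, reject, or redirect actions described above using the Call Automation SDK.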
+
+### Call Automation webhook events
+
+The Call Automation events are sent to the web hook callback URI specified when you answer or place a new outbound call.
+
+| Event | Description |
+| -- | |
+| CallConnected | Your application's call leg is connected (inbound or outbound) |
+| CallDisconnected | Your application's call leg is disconnected |
+| CallTransferAccepted | Your application's call leg has been transferred to another endpoint |
+| CallTransferFailed | The transfer of your application's call leg failed |
+| AddParticipantSucceeded| Your application added a participant |
+|AddParticipantFailed | Your application was unable to add a participant |
+| RemoveParticipantSucceeded|Your application removed a participant |
+| RemoveParticipantFailed |Your application was unable to remove a participant |
+| ParticipantUpdated | The status of a participant changed while your application's call leg was connected to a call |
+| PlayCompleted| Your application successfully played the audio file provided |
+| PlayFailed| Your application failed to play audio |
+
+## Known Issues
+
+1. Using the incorrect identifier type for endpoints in `Transfer` requests (like using CommunicationUserIdentifier to specify a phone number) returns a 500 error instead of a 400 error code. Solution: Use the correct type, CommunicationUserIdentifier for Communication Users and PhoneNumberIdentifier for phone numbers.
+2. Taking a pre-call action like Answer or Reject on the original call after it has been redirected returns a 200 success instead of failing with 'call not found'.
+
+## Next Steps
+
+> [!div class="nextstepaction"]
+> [Get started with Call Automation](./../../quickstarts/voice-video-calling/Callflows-for-customer-interactions.md)
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/play-action.md
+
+ Title: Play action
+description: Conceptual information about using Play action with Call Automation.
+++ Last updated : 09/06/2022+++
+# Play Action Overview
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+The play action provided through the Call Automation SDK allows you to play audio prompts to participants in a call. This action can be accessed through the server-side implementation of your application, and it lets you provide ACS access to your pre-recorded audio files with support for authentication.
+
+> [!NOTE]
+> ACS currently supports only audio files in WAV format (mono, 16 kHz).
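If your prompt isn't already in that format, you can convert it before uploading. Below is a minimal sketch using the pydub library; it assumes pydub and ffmpeg are installed, and the file names are placeholders.

```python
from pydub import AudioSegment

# Convert a source file to the WAV, mono, 16 kHz format expected by the play action.
audio = AudioSegment.from_file("welcome-prompt.mp3")
audio = audio.set_channels(1).set_frame_rate(16000)
audio.export("welcome-prompt.wav", format="wav")
```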
+
+## Common use cases
+
+The play action can be used in many ways, below are some examples of how developers may wish to use the play action in their applications.
+
+### Announcements
+Your application might want to play an announcement when a participant joins or leaves the call, to notify the other participants.
+
+### Self-serve customers
+
+In scenarios with IVRs and virtual assistants, you can use your application or bots to play audio prompts to callers. This prompt can take the form of a menu that guides the caller through their interaction.
+
+### Hold music
+The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller.
+
+### Playing compliance messages
+As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes."
+
+## How the play action workflow looks
+
+![Screenshot of flow for play action.](./play-action-flow.png)
+
+## Known Issues/Limitations
+- Play action isn't enabled to work with Teams Interoperability.
+- Play doesn't support looping for targeted play (playing to a specific participant).
+
+## What's coming up next for Play action
+As we invest more into this functionality, we recommend developers sign up for our TAP program, which gives you early access to the newest feature releases. Over the coming months, the play action will add new capabilities that use our integration with Azure Cognitive Services to provide AI capabilities such as Text-to-Speech and fine-tuning Text-to-Speech with SSML. With these capabilities, you can improve customer interactions by creating more personalized messages.
+
+## Next Steps
+Check out the [Play action quickstart](../../quickstarts/voice-video-calling/Play-Action.md) to learn more.
communication-services Manage Inbound Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/manage-inbound-calls.md
+
+ Title: Azure Communication Services Call Automation quickstart for PSTN calls
+
+description: Provides a quickstart for managing inbound telephony calls with Call Automation.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Quickstart: Manage inbound telephony calls with Call Automation
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that aren't yet publicly available.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+Get started with Azure Communication Services by using the Call Automation SDKs to build automated calling workflows that listen for and manage inbound calls placed to a phone number or received via Direct Routing.
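+
+As a rough orientation before the language-specific steps, the following sketch shows the core of such a workflow: a server-side handler that receives the Event Grid `Microsoft.Communication.IncomingCall` event for your phone number and answers the call. The type and method names (`CallAutomationClient`, `AnswerCallAsync`) and the callback URL are assumptions for illustration and may differ from the preview SDK you receive.
+
+```csharp
+// Hypothetical sketch: answer an inbound call reported by Event Grid.
+// Identifiers and payload handling are assumptions for illustration only.
+using System;
+using System.Threading.Tasks;
+using Azure.Communication.CallAutomation;
+
+public static class InboundCallHandler
+{
+    private static readonly CallAutomationClient Client =
+        new CallAutomationClient("<ACS_CONNECTION_STRING>");
+
+    // Call this from your webhook after it deserializes the IncomingCall event
+    // and extracts the incoming call context from the event payload.
+    public static async Task HandleIncomingCallAsync(string incomingCallContext)
+    {
+        // Mid-call events for this call (for example, CallConnected) are posted to this URI.
+        var callbackUri = new Uri("https://contoso.example.com/api/callbacks");
+
+        await Client.AnswerCallAsync(incomingCallContext, callbackUri);
+    }
+}
+```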
+++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features.
+- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md).
+- Learn how to build a [call workflow](../voice-video-calling/callflows-for-customer-interactions.md) for a customer support scenario.
communication-services Callflows For Customer Interactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/callflows-for-customer-interactions.md
+
+ Title: Azure Communication Services Call Automation API tutorial for VoIP calls
+
+description: Tutorial on how to use Call Automation to build call flow for customer interactions.
++++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Tutorial: Build call workflows for customer interactions
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that aren't yet publicly available.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+In this tutorial, you'll learn how to build applications that use Azure Communication Services Call Automation to handle common customer support scenarios, such as:
+- receiving notifications for incoming calls to a phone number using Event Grid
+- answering the call and playing an audio file using Call Automation SDK
+- adding a communication user to the call using the Call Automation SDK. This user can be a customer service agent who uses a web application built with the Calling SDKs to connect to Azure Communication Services. A condensed sketch of these steps follows this list.
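+
+Here's that condensed sketch. It's illustrative only: the type and method names (`AnswerCallAsync`, `GetCallMedia().PlayToAllAsync`, `AddParticipantAsync`, `CallInvite`) are assumptions about the preview SDK, and the URLs and identifiers are placeholders.
+
+```csharp
+// Hypothetical end-to-end sketch: answer the call, play a greeting, then add an agent.
+// All identifiers are assumptions about the preview SDK and placeholders for illustration.
+using System;
+using System.Threading.Tasks;
+using Azure.Communication;
+using Azure.Communication.CallAutomation;
+
+public static class CustomerSupportFlow
+{
+    public static async Task RunAsync(CallAutomationClient client, string incomingCallContext)
+    {
+        // 1. Answer the call reported by the Event Grid IncomingCall event.
+        var answerResult = await client.AnswerCallAsync(
+            incomingCallContext, new Uri("https://contoso.example.com/api/callbacks"));
+        CallConnection call = answerResult.Value.CallConnection;
+
+        // 2. Play a pre-recorded greeting (WAV, mono, 16 kHz) to the caller.
+        var greeting = new FileSource(new Uri("https://contoso.example.com/audio/greeting.wav"));
+        await call.GetCallMedia().PlayToAllAsync(greeting);
+
+        // 3. Add a communication user (for example, a customer service agent) to the call.
+        var agent = new CommunicationUserIdentifier("<agent-acs-user-id>");
+        await call.AddParticipantAsync(new CallInvite(agent));
+    }
+}
+```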
++++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md) and its features.
+- Learn how to [manage inbound telephony calls](../telephony/manage-inbound-calls.md) with Call Automation.
+- Learn more about [Play action](../../concepts/voice-video-calling/play-action.md).
communication-services Play Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/play-action.md
+
+ Title: Play Audio
+
+description: Provides a quick start for playing audio to participants as part of a call.
+++ Last updated : 09/06/2022+++
+zone_pivot_groups: acs-csharp-java
++
+# Quickstart: Play action
+
+> [!IMPORTANT]
+> Functionality described in this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that aren't yet publicly available.
+> Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/ACS-EarlyAdopter).
+
+This quickstart helps you get started with playing audio files to participants by using the play action provided through the Azure Communication Services Call Automation SDK.
+++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+- Learn more about [Call Automation](../../concepts/voice-video-calling/call-automation.md)
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
ms.suite: integration Previously updated : 08/25/2022 Last updated : 09/07/2022 # Built-in connectors in Azure Logic Apps Built-in connectors provide ways for you to control your workflow's schedule and structure, run your own code, manage or manipulate data, and complete other tasks in your workflows. Different from managed connectors, some built-in connectors aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in connectors run natively on the Azure Logic Apps runtime. Some don't require that you create a connection before you use them.
-For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type and not the other.
+For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type and not the other.
-For example, a Standard logic app workflow provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption logic app workflow doesn't have the built-in versions. A Consumption logic app workflow provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app workflow doesn't have these built-in connectors.
+For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors.
-Also, in Standard logic app workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
+Also, in Standard workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
-This article provides a general overview about built-in connectors in Consumption logic app workflows versus Standard logic app workflows.
+This article provides a general overview about built-in connectors in Consumption workflows versus Standard workflows.
<a name="built-in-connectors"></a> ## Built-in connectors in Consumption versus Standard
-The following table lists the current and expanding galleries of built-in connectors available for Consumption versus Standard logic app workflows. For Standard workflows, an asterisk (**\***) marks [built-in connectors based on the *service provider* model](#service-provider-interface-implementation), which is described in more detail later.
+The following table lists the current and expanding galleries of built-in connectors available for Consumption versus Standard workflows. For Standard workflows, an asterisk (**\***) marks [built-in connectors based on the *service provider* model](#service-provider-interface-implementation), which is described in more detail later.
| Consumption | Standard | |-|-|
-| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | Azure Blob* <br>Azure Cosmos DB* <br>Azure Functions <br>Azure Queue* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Hubs* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>Liquid operations <br>MQ* <br>Request <br>Schedule <br>Service Bus* <br>SFTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations |
+| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure Automation* <br>Azure Blob* <br>Azure Cosmos DB* <br>Azure File Storage* <br>Azure Functions <br>Azure Queue* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Hubs* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>Key Vault* <br>Liquid operations <br>MQ* <br>Request <br>Schedule <br>Service Bus* <br>SFTP* <br>SMTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations |
||| <a name="service-provider-interface-implementation"></a> ## Service provider-based built-in connectors
-In Standard logic app workflows, a built-in connector that has the following attributes is informally known as a *service provider*:
+In Standard workflows, a built-in connector that has the following attributes is informally known as a *service provider*:
* Is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md).
-* Provides access from a Standard logic app workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
+* Provides access from a Standard workflow to a service, such as Azure Blob Storage, Azure Service Bus, Azure Event Hubs, SFTP, and SQL Server.
Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Azure Active Directory (Azure AD), or a managed identity.
In contrast, a built-in connector that's *not a service provider* has the follow
## Custom built-in connectors
-For Standard logic apps, you can create your own built-in connector with the same [built-in connector extensibility model](../logic-apps/custom-connector-overview.md#built-in-connector-extensibility-model) that's used by service provider-based built-in connectors, such as Azure Blob, Azure Event Hubs, Azure Service Bus, SQL Server, and more. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard logic apps.
+For Standard workflows, you can create your own built-in connector with the same [built-in connector extensibility model](../logic-apps/custom-connector-overview.md#built-in-connector-extensibility-model) that's used by service provider-based built-in connectors, such as Azure Blob, Azure Event Hubs, Azure Service Bus, SQL Server, and more. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard workflows.
-For Consumption logic apps, you can't create your own built-in connectors, but you create your own managed connectors.
+For Consumption workflows, you can't create your own built-in connectors, but you can create your own managed connectors.
For more information, review the following documentation: * [Custom connectors in Azure Logic Apps](../logic-apps/custom-connector-overview.md#custom-connector-standard)
-* [Create custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md)
+* [Create custom built-in connectors for Standard workflows](../logic-apps/create-custom-built-in-connector-standard.md)
<a name="general-built-in"></a>
You can use the following built-in connectors to perform general tasks, for exam
[**Recurrence**][schedule-recurrence-doc]: Trigger a workflow based on the specified recurrence. \ \
- [**Sliding Window**][schedule-sliding-window-doc]<br>(*Consumption logic app only*): <br>Trigger a workflow that needs to handle data in continuous chunks.
+ [**Sliding Window**][schedule-sliding-window-doc]<br>(*Consumption workflow only*): <br>Trigger a workflow that needs to handle data in continuous chunks.
\ \ [**Delay**][schedule-delay-doc]: Pause your workflow for the specified duration.
You can use the following built-in connectors to perform general tasks, for exam
[![Batch icon][batch-icon]][batch-doc] \ \
- [**Batch**][batch-doc]<br>(*Consumption logic app only*)
+ [**Batch**][batch-doc]<br>(*Consumption workflow only*)
\ \ [**Batch messages**][batch-doc]: Trigger a workflow that processes messages in batches.
You can use the following built-in connectors to perform general tasks, for exam
![FTP icon][ftp-icon] \ \
- **FTP**<br>(*Standard logic app only*)
+ **FTP**<br>(*Standard workflow only*)
\ \ Connect to FTP or FTPS servers you can access from the internet so that you can work with your files and folders.
You can use the following built-in connectors to perform general tasks, for exam
![SFTP-SSH icon][sftp-ssh-icon] \ \
- **SFTP-SSH**<br>(*Standard logic app only*)
+ **SFTP-SSH**<br>(*Standard workflow only*)
\ \ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders.
You can use the following built-in connectors to perform general tasks, for exam
## Built-in connectors for specific services and systems
-You can use the following built-in connectors to access specific services and systems. In Standard logic app workflows, some of these built-in connectors are also informally known as *service providers*, which can differ from their managed connector counterparts in some ways.
+You can use the following built-in connectors to access specific services and systems. In Standard workflows, some of these built-in connectors are also informally known as *service providers*, which can differ from their managed connector counterparts in some ways.
:::row::: :::column::: [![Azure API Management icon][azure-api-management-icon]][azure-api-management-doc] \ \
- [**Azure API Management**][azure-api-management-doc]<br>(*Consumption logic app only*)
+ [**Azure API Management**][azure-api-management-doc]<br>(*Consumption workflow only*)
\ \
- Call your own triggers and actions in APIs that you define, manage, and publish using [Azure API Management](../api-management/api-management-key-concepts.md). <p><p>**Note**: Not supported when using [Consumption tier for API Management](../api-management/api-management-features.md).
+ Call your own triggers and actions in APIs that you define, manage, and publish using [Azure API Management](../api-management/api-management-key-concepts.md). <br><br>**Note**: Not supported when using [Consumption tier for API Management](../api-management/api-management-features.md).
:::column-end::: :::column::: [![Azure App Services icon][azure-app-services-icon]][azure-app-services-doc] \ \
- [**Azure App Services**][azure-app-services-doc]<br>(*Consumption logic app only*)
+ [**Azure App Services**][azure-app-services-doc]<br>(*Consumption workflow only*)
\ \ Call apps that you create and host on [Azure App Service](../app-service/overview.md), for example, API Apps and Web Apps.
You can use the following built-in connectors to access specific services and sy
![Azure Blob icon][azure-blob-storage-icon] \ \
- **Azure Blob**<br>(*Standard logic app only*)
+ **Azure Blob**<br>(*Standard workflow only*)
\ \ Connect to your Azure Blob Storage account so you can create and manage blob content.
You can use the following built-in connectors to access specific services and sy
![Azure Cosmos DB icon][azure-cosmos-db-icon] \ \
- **Azure Cosmos DB**<br>(*Standard logic app only*)
+ **Azure Cosmos DB**<br>(*Standard workflow only*)
\ \ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents.
You can use the following built-in connectors to access specific services and sy
![Azure Event Hubs icon][azure-event-hubs-icon] \ \
- **Azure Event Hubs**<br>(*Standard logic app only*)
+ **Azure Event Hubs**<br>(*Standard workflow only*)
\ \
- Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::row-end::: :::row:::
You can use the following built-in connectors to access specific services and sy
[![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \ \
- [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption logic app*) <br><br>-or-<br><br>**Workflow operations**<br>(*Standard logic app*)
+ [**Azure Logic Apps**][nested-logic-app-doc]<br>(*Consumption workflow*) <br><br>-or-<br><br>**Workflow operations**<br>(*Standard workflow*)
\ \ Call other workflows that start with the Request trigger named **When a HTTP request is received**.
You can use the following built-in connectors to access specific services and sy
![Azure Service Bus icon][azure-service-bus-icon] \ \
- **Azure Service Bus**<br>(*Standard logic app only*)
+ **Azure Service Bus**<br>(*Standard workflow only*)
\ \ Manage asynchronous messages, queues, sessions, topics, and topic subscriptions.
You can use the following built-in connectors to access specific services and sy
![Azure Table Storage icon][azure-table-storage-icon] \ \
- **Azure Table Storage**<br>(*Standard logic app only*)
+ **Azure Table Storage**<br>(*Standard workflow only*)
\ \ Connect to your Azure Storage account so that you can create, update, query, and manage tables.
You can use the following built-in connectors to access specific services and sy
![IBM DB2 icon][ibm-db2-icon] \ \
- **DB2**<br>(*Standard logic app only*)
+ **DB2**<br>(*Standard workflow only*)
\ \ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more.
You can use the following built-in connectors to access specific services and sy
![IBM Host File icon][ibm-host-file-icon] \ \
- **IBM Host File**<br>(*Standard logic app only*)
+ **IBM Host File**<br>(*Standard workflow only*)
\ \ Connect to IBM Host File and generate or parse contents.
You can use the following built-in connectors to access specific services and sy
![IBM MQ icon][ibm-mq-icon] \ \
- **IBM MQ**<br>(*Standard logic app only*)
+ **IBM MQ**<br>(*Standard workflow only*)
\ \ Connect to IBM MQ on-premises or in Azure to send and receive messages.
You can use the following built-in connectors to access specific services and sy
[![SQL Server icon][sql-server-icon]][sql-server-doc] \ \
- [**SQL Server**][sql-server-doc]<br>(*Standard logic app only*)
+ [**SQL Server**][sql-server-doc]<br>(*Standard workflow only*)
\ \ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries.
Azure Logic Apps provides the following built-in actions for structuring and con
[**Terminate**][terminate-doc] \ \
- Stop an actively running logic app workflow.
+ Stop an actively running workflow.
:::column-end::: :::column::: [![Until action icon][until-icon]][until-doc]
Azure Logic Apps provides the following built-in actions for working with data o
## Integration account built-in connectors
-Integration account operations specifically support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account built-in actions to encode and decode messages, transform content, and more.
+Integration account operations support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, and others, you can use integration account built-in actions to encode and decode messages, transform content, and more.
-* Consumption logic apps
+* Consumption workflows
- Before you use any integration account operations in a Consumption logic app, you have to [link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+ Before you use any integration account operations in a workflow, [link your logic app resource to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
-* Standard logic apps
+* Standard workflows
- Integration account operations don't require that you link your logic app to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard logic app workflow. Actually, the built-in Liquid operations and XML operations don't even need an integration account. However, you have to upload Liquid maps, XML maps, or XML schemas through the respective operations in the Azure portal or add these files to your Visual Studio Code project's **Artifacts** folder using the respective **Maps** and **Schemas** folders.
+ While most integration account operations don't require that you link your logic app resource to your integration account, linking lets you share artifacts across multiple Standard workflows and their child workflows. Based on the integration account operation that you want to use, complete one of the following steps before you use the operation:
+
+ * For operations that require maps or schemas, you can either:
+
+      * Upload these artifacts to your logic app resource using the Azure portal or Visual Studio Code. You can then use these artifacts across all child workflows in the *same* logic app resource. For more information, review [Add maps to use with workflows in Azure Logic Apps](../logic-apps/logic-apps-enterprise-integration-maps.md?tabs=standard) and [Add schemas to use with workflows in Azure Logic Apps](../logic-apps/logic-apps-enterprise-integration-schemas.md?tabs=standard).
+
+ * [Link your logic app resource to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+
+ * For operations that require a connection to your integration account, create the connection when you add the operation to your workflow.
For more information, review the following documentation:
For more information, review the following documentation:
* [Create and manage integration accounts for B2B workflows](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) :::row:::
+ :::column:::
+ [![AS2 Decode v2 icon][as2-v2-icon]][as2-doc]
+ \
+ \
+ [**AS2 Decode (v2)**][as2-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Decode messages received using the AS2 protocol.
+ :::column-end:::
+ :::column:::
+ [![AS2 Encode (v2) icon][as2-v2-icon]][as2-doc]
+ \
+ \
+ [**AS2 Encode (v2)**][as2-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Encode messages sent using the AS2 protocol.
+ :::column-end:::
:::column::: [![Flat file decoding icon][flat-file-decode-icon]][flat-file-decode-doc] \
For more information, review the following documentation:
\ Decode XML after receiving the content from a trading partner. :::column-end::: :::column::: [![Integration account icon][integration-account-icon]][integration-account-doc] \ \
- [**Integration Account Artifact Lookup**][integration-account-doc]<br>(*Consumption logic app only*)
+ [**Integration Account Artifact Lookup**][integration-account-doc]<br>(*Consumption workflow only*)
\ \ Get custom metadata for artifacts, such as trading partners, agreements, schemas, and so on, in your integration account.
For more information, review the following documentation:
[**Liquid operations**][json-liquid-transform-doc] \ \
- Convert the following formats by using Liquid templates: <p><p>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT
+ Convert the following formats by using Liquid templates: <br><br>- JSON to JSON <br>- JSON to TEXT <br>- XML to JSON <br>- XML to TEXT
:::column-end::: :::column::: [![Transform XML icon][xml-transform-icon]][xml-transform-doc] \
For more information, review the following documentation:
\ Validate XML documents against the specified schema. :::column-end:::
- :::column:::
- :::column-end:::
:::row-end::: ## Next steps
For more information, review the following documentation:
[variables-icon]: ./media/apis-list/variables.png <!--Built-in integration account connector icons -->
+[as2-v2-icon]: ./media/apis-list/as2-v2.png
[flat-file-encode-icon]: ./media/apis-list/flat-file-encoding.png [flat-file-decode-icon]: ./media/apis-list/flat-file-decoding.png [integration-account-icon]: ./media/apis-list/integration-account.png
For more information, review the following documentation:
<!--Built-in doc links--> [azure-api-management-doc]: ../api-management/get-started-create-service-instance.md "Create an Azure API Management service instance for managing and publishing your APIs"
-[azure-app-services-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic apps with App Service API Apps"
+[azure-app-services-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic app workflows with App Service API Apps"
[azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure Blob storage connector" [azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents"
-[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic apps and Event Hubs"
-[azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic apps with Azure Functions"
+[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs"
+[azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic app workflows with Azure Functions"
[azure-service-bus-doc]: ./connectors-create-api-servicebus.md "Manage messages from Service Bus queues, topics, and topic subscriptions" [azure-table-storage-doc]: /connectors/azuretables/ "Connect to your Azure Storage account so that you can create, update, and query tables and more" [batch-doc]: ../logic-apps/logic-apps-batch-process-send-receive-messages.md "Process messages in groups, or as batches"
For more information, review the following documentation:
[data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables" [for-each-doc]: ../logic-apps/logic-apps-control-flow-loops.md#foreach-loop "Perform the same actions on every item in an array" [ftp-doc]: ./connectors-create-api-ftp.md "Connect to an FTP or FTPS server for FTP tasks, like uploading, getting, deleting files, and more"
-[http-doc]: ./connectors-native-http.md "Call HTTP or HTTPS endpoints from your logic apps"
-[http-request-doc]: ./connectors-native-reqres.md "Receive HTTP requests in your logic apps"
-[http-response-doc]: ./connectors-native-reqres.md "Respond to HTTP requests from your logic apps"
-[http-swagger-doc]: ./connectors-native-http-swagger.md "Call REST endpoints from your logic apps"
+[http-doc]: ./connectors-native-http.md "Call HTTP or HTTPS endpoints from your logic app workflows"
+[http-request-doc]: ./connectors-native-reqres.md "Receive HTTP requests in your logic app workflows"
+[http-response-doc]: ./connectors-native-reqres.md "Respond to HTTP requests from your logic app workflows"
+[http-swagger-doc]: ./connectors-native-http-swagger.md "Call REST endpoints from your logic app workflows"
[http-webhook-doc]: ./connectors-native-webhook.md "Wait for specific events from HTTP or HTTPS endpoints" [ibm-db2-doc]: ./connectors-create-api-db2.md "Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more" [ibm-mq-doc]: ./connectors-create-api-mq.md "Connect to IBM MQ on-premises or in Azure to send and receive messages"
-[inline-code-doc]: ../logic-apps/logic-apps-add-run-inline-code.md "Add and run JavaScript code snippets from your logic apps"
-[nested-logic-app-doc]: ../logic-apps/logic-apps-http-endpoint.md "Integrate logic apps with nested workflows"
+[inline-code-doc]: ../logic-apps/logic-apps-add-run-inline-code.md "Add and run JavaScript code snippets from your logic app workflows"
+[nested-logic-app-doc]: ../logic-apps/logic-apps-http-endpoint.md "Integrate logic app workflows with nested workflows"
[query-doc]: ../logic-apps/logic-apps-perform-data-operations.md#filter-array-action "Select and filter arrays with the Query action"
-[schedule-doc]: ../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md "Run logic apps based a schedule"
+[schedule-doc]: ../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md "Run logic app workflows based on a schedule"
[schedule-delay-doc]: ./connectors-native-delay.md "Delay running the next action" [schedule-delay-until-doc]: ./connectors-native-delay.md "Delay running the next action"
-[schedule-recurrence-doc]: ./connectors-native-recurrence.md "Run logic apps on a recurring schedule"
-[schedule-sliding-window-doc]: ./connectors-native-sliding-window.md "Run logic apps that need to handle data in contiguous chunks"
+[schedule-recurrence-doc]: ./connectors-native-recurrence.md "Run logic app workflows on a recurring schedule"
+[schedule-sliding-window-doc]: ./connectors-native-sliding-window.md "Run logic app workflows that need to handle data in contiguous chunks"
[scope-doc]: ../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md "Organize actions into groups, which get their own status after the actions in group finish running" [sftp-ssh-doc]: ./connectors-sftp-ssh.md "Connect to your SFTP account by using SSH. Upload, get, delete files, and more" [sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table" [switch-doc]: ../logic-apps/logic-apps-control-flow-switch-statement.md "Organize actions into cases, which are assigned unique values. Run only the case whose value matches the result from an expression, object, or token. If no matches exist, run the default case"
-[terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow for your logic app"
+[terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow in your logic app"
[until-doc]: ../logic-apps/logic-apps-control-flow-loops.md#until-loop "Repeat actions until the specified condition is true or some state has changed" [variables-doc]: ../logic-apps/logic-apps-create-variables-store-values.md "Perform operations with variables, such as initialize, set, increment, decrement, and append to string or array variable" <!--Built-in integration account doc links-->
-[flat-file-decode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Learn about enterprise integration flat file"
-[flat-file-encode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Learn about enterprise integration flat file"
+[as2-doc]: ../logic-apps/logic-apps-enterprise-integration-as2.md "Encode and decode messages that use the AS2 protocol"
+[flat-file-decode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Decode XML content with a flat file schema"
+[flat-file-encode-doc]:../logic-apps/logic-apps-enterprise-integration-flatfile.md "Encode XML content with a flat file schema"
[integration-account-doc]: ../logic-apps/logic-apps-enterprise-integration-metadata.md "Manage metadata for integration account artifacts"
-[json-liquid-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-liquid-transform.md "Transform JSON with Liquid templates"
-[xml-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-transform.md "Transform XML messages"
-[xml-validate-doc]: ../logic-apps/logic-apps-enterprise-integration-xml-validation.md "Validate XML messages"
+[json-liquid-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-liquid-transform.md "Transform JSON or XML content with Liquid templates"
+[xml-transform-doc]: ../logic-apps/logic-apps-enterprise-integration-transform.md "Transform XML content"
+[xml-validate-doc]: ../logic-apps/logic-apps-enterprise-integration-xml-validation.md "Validate XML content"
connectors Connect Common Data Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connect-common-data-service.md
ms.suite: integration Previously updated : 08/05/2022 Last updated : 09/07/2022 tags: connectors
tags: connectors
> [!IMPORTANT] > > On August 30, 2022, the connector operations for Common Data Service 2.0, also known as Microsoft Dataverse
-> (Legacy), migrate to the current Microsoft Dataverse connector. You can use the current Dataverse connector
-> in any existing or new logic app workflows. For backward compatibility, existing workflows continue to work
+> (Legacy), migrate to the current Microsoft Dataverse connector. Legacy operations bear the "legacy" label,
+> while current operations bear the "preview" label. You can use the current Dataverse connector in any
+> existing or new logic app workflows. For backward compatibility, existing workflows continue to work
> with the legacy Dataverse connector. However, make sure to review these workflows, and update them promptly. > > Starting October 2023, the legacy version becomes unavailable for new workflows. Existing workflows continue
connectors Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/managed.md
ms.suite: integration Previously updated : 08/25/2022 Last updated : 09/07/2022 # Managed connectors in Azure Logic Apps Managed connectors provide ways for you to access other services and systems where built-in connectors aren't available. You can use these triggers and actions to create workflows that integrate data, apps, cloud-based services, and on-premises systems. Different from built-in connectors, managed connectors are usually tied to a specific service or system such as Office 365, SharePoint, Azure Key Vault, Salesforce, Azure Automation, and so on. Managed by Microsoft and hosted in Azure, managed connectors usually require that you first create a connection from your workflow and authenticate your identity.
-For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app that runs in multi-tenant Azure Logic Apps, or a Standard logic app that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app type, and not the other.
+For a smaller number of services, systems and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type, and not the other.
-For example, a Standard logic app provides both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption logic app doesn't have the built-in versions. A Consumption logic app provides built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard logic app doesn't have these built-in connectors. For more information, review the following documentation: [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
+For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server, while a Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors. For more information, review [Built-in connectors in Azure Logic Apps](built-in.md) and [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
-This article provides a general overview about managed connectors and how they're organized in Consumption logic apps versus Standard logic apps with examples. For technical reference information about each managed connector in Azure Logic Apps, review [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+This article provides a general overview about managed connectors and the way they're organized in the Consumption workflow designer versus the Standard workflow designer with examples. For technical reference information about each managed connector in Azure Logic Apps, review [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
## Managed connector categories
-In a *Standard* logic app, all managed connectors are organized into the **Azure** group. In a *Consumption* logic app, managed connectors are organized into the **Standard** group or **Enterprise** group. However, pricing for managed connectors works the same in both Standard and Consumption logic apps. For more information, review [Trigger and action operations in the Consumption model](../logic-apps/logic-apps-pricing.md#consumption-operations) and [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
+For a Consumption logic app workflow, managed connectors appear in the designer under the following labels:
* [Standard connectors](#standard-connectors) provide access to services such as Azure Blob Storage, Office 365, SharePoint, Salesforce, Power BI, OneDrive, and many more. * [Enterprise connectors](#enterprise-connectors) provide access to enterprise systems, such as SAP, IBM MQ, and IBM 3270 for an additional cost.
-Some managed connectors also belong to the following informal groups:
+For a Standard logic app *stateful* workflow, all managed connectors appear in the designer under the **Azure** label, which describes how these connectors are hosted on the Azure platform. A Standard *stateless* workflow can use only the built-in connectors designed to run natively in single-tenant Azure Logic Apps.
+
+Regardless of whether you have a Consumption or Standard workflow, managed connector pricing follows the pricing for Enterprise connectors and Standard connectors, but metering works differently based on the workflow type. For more pricing information, review [Trigger and action operations in the Consumption model](../logic-apps/logic-apps-pricing.md#consumption-operations) and [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
+
+Some managed connectors also fall into the following informal groups:
* [On-premises connectors](#on-premises-connectors) provide access to on-premises systems such as SQL Server, SharePoint Server, SAP, Oracle DB, file shares, and others.
Some managed connectors also belong to the following informal groups:
## Standard connectors
-For a *Consumption* logic app, this section lists *some* of the popular connectors in the **Standard** group. In a *Standard* logic app, all managed connectors are in the **Azure** group, but pricing works the same as Consumption logic apps. For more information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
+In the Consumption workflow designer, managed connectors that follow the Standard connector pricing model appear under the **Standard** label. This section lists *only some* of the popular managed connectors. For more pricing information, review [Trigger and action operations in the Consumption model](../logic-apps/logic-apps-pricing.md#consumption-operations).
+
+In the Standard workflow designer, *all* managed connectors appear under the **Azure** label. Managed connector pricing still follows the pricing for Enterprise connectors and Standard connectors, but metering works differently based on the workflow type. For more pricing information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
:::row::: :::column:::
For a *Consumption* logic app, this section lists *some* of the popular connecto
[**Azure Event Hubs**][azure-event-hubs-doc] \ \
- Consume and publish events through an event hub. For example, get output from your logic app with Event Hubs, and then send that output to a real-time analytics provider.
+ Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider.
:::column-end::: :::column::: [![Azure Queues icon][azure-queues-icon]][azure-queues-doc]
For a *Consumption* logic app, this section lists *some* of the popular connecto
## Enterprise connectors
-For a *Consumption* logic app, this section lists connectors in the **Enterprise** group, which can access enterprise systems for an additional cost. In a *Standard* logic app, all managed connectors are in the **Azure** group, but pricing is the same as for Consumption logic apps. For more information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
+In the Consumption workflow designer, managed connectors that follow the Enterprise connector pricing model appear under the **Enterprise** label. These connectors can access enterprise systems for an additional cost. For more pricing information, review [Trigger and action operations in the Consumption model](../logic-apps/logic-apps-pricing.md#consumption-operations).
+
+In the Standard workflow designer, *all* managed connectors appear under the **Azure** label. Managed connector pricing still follows the pricing for Enterprise connectors and Standard connectors, but metering works differently based on the workflow type. For more pricing information, review [Trigger and action operations in the Standard model](../logic-apps/logic-apps-pricing.md#standard-operations).
:::row::: :::column:::
For a *Consumption* logic app, this section lists connectors in the **Enterprise
Before you can create a connection to an on-premises system, you must first [download, install, and set up an on-premises data gateway][gateway-doc]. This gateway provides a secure communication channel without having to set up the necessary network infrastructure.
-For a *Consumption* logic app, this section lists example [Standard connectors](#standard-connectors) that can access on-premises systems. For the expanded on-premises connectors list, review [Supported data sources](../logic-apps/logic-apps-gateway-connection.md#supported-connections).
+For a Consumption workflow, this section lists example [Standard connectors](#standard-connectors) that can access on-premises systems. For the expanded on-premises connectors list, review [Supported data sources](../logic-apps/logic-apps-gateway-connection.md#supported-connections).
:::row::: :::column:::
For a *Consumption* logic app, this section lists example [Standard connectors](
## Integration account connectors
-Integration account operations specifically support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, maps, and schemas, you can use integration account connectors to encode and decode messages, transform content, and more.
+Integration account operations support business-to-business (B2B) communication scenarios in Azure Logic Apps. After you create an integration account and define your B2B artifacts, such as trading partners, agreements, and others, you can use integration account connectors to encode and decode messages, transform content, and more.
For example, if you use Microsoft BizTalk Server, you can create a connection from your workflow using the [on-premises BizTalk Server connector](/connectors/biztalk/). You can then extend or perform BizTalk-like operations in your workflow by using these integration account connectors.
-* Consumption logic apps
+* Consumption workflows
- Before you use any integration account operations in a Consumption logic app, you have to [link your logic app to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
+ Before you use any integration account operations in a Consumption workflow, you have to [link your logic app resource to your integration account](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md).
-* Standard logic apps
+* Standard workflows
- Integration account operations don't require that you link your logic app to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard logic app workflow.
+ Integration account operations don't require that you link your logic app resource to your integration account. Instead, you create a connection to your integration account when you add the operation to your Standard workflow.
For more information, review the following documentation:
For more information, review the following documentation:
* [Create and manage integration accounts for B2B workflows](../logic-apps/logic-apps-enterprise-integration-create-integration-account.md) :::row:::
+ :::column:::
+ [![AS2 Decode v2 icon][as2-v2-icon]][as2-doc]
+ \
+ \
+ [**AS2 Decode (v2)**][as2-doc]
+ :::column-end:::
+ :::column:::
+ [![AS2 Encode (v2) icon][as2-v2-icon]][as2-doc]
+ \
+ \
+ [**AS2 Encode (v2)**][as2-doc]
+ :::column-end:::
:::column::: [![AS2 decoding icon][as2-icon]][as2-doc] \
For more information, review the following documentation:
\ [**AS2 encoding**][as2-doc] :::column-end::: :::column::: [![EDIFACT decoding icon][edifact-icon]][edifact-decode-doc] \
For more information, review the following documentation:
In an integration service environment (ISE), these managed connectors also have [ISE versions](apis-list.md#ise-and-connectors), which have different capabilities than their multi-tenant versions: > [!NOTE]
-> Logic apps that run in an ISE and their connectors, regardless where those connectors run, follow a fixed pricing plan versus the consumption-based pricing plan. For more information, see [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md) and [Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/).
+>
+> Workflows that run in an ISE and their connectors, regardless where those connectors run, follow a fixed pricing plan versus the Consumption pricing plan. For more information, review [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md) and [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/).
:::row::: :::column:::
In an integration service environment (ISE), these managed connectors also have
For more information, see these topics: * [Access to Azure virtual network resources from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md)
-* [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md)
+* [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md)
* [Connect to Azure virtual networks from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment.md) ## Next steps
For more information, see these topics:
[azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure blob storage connector" [azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents" [azure-event-grid-doc]: ../event-grid/monitor-virtual-machine-changes-event-grid-logic-app.md "Monitor events published by an Event Grid, for example, when Azure resources or third-party resources change"
-[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic apps and Event Hubs"
+[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs"
[azure-file-storage-doc]: /connectors/azurefile/ "Connect to your Azure Storage account so that you can create, update, get, and delete files" [azure-key-vault-doc]: /connectors/keyvault/ "Connect to your Azure Key Vault so that you can manage your secrets and keys" [azure-monitor-logs-doc]: /connectors/azuremonitorlogs/ "Run queries against Azure Monitor Logs across Log Analytics workspaces and Application Insights components"
For more information, see these topics:
[youtube-doc]: ./connectors-create-api-youtube.md "Connect to YouTube. Manage your videos and channels" <!--Integration account connector icons -->
+[as2-v2-icon]: ./media/apis-list/as2-v2.png
[as2-icon]: ./media/apis-list/as2.png [edifact-icon]: ./media/apis-list/edifact.png [x12-icon]: ./media/apis-list/x12.png
For more information, see these topics:
[x12-encode-doc]: ../logic-apps/logic-apps-enterprise-integration-X12-encode.md "Encode messages that use the X12 protocol" <!--Other doc links-->
-[gateway-doc]: ../logic-apps/logic-apps-gateway-connection.md "Connect to data sources on-premises from logic apps with on-premises data gateway"
+[gateway-doc]: ../logic-apps/logic-apps-gateway-connection.md "Connect to data sources on-premises from logic app workflows with on-premises data gateway"
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
The second URL grants access to the log streaming service and the console. If ne
## Ports and IP addresses
-The subnet associated with a Container App Environment must have a CIDR prefix of /23.
+>[!NOTE]
+> The subnet associated with a Container App Environment requires a CIDR prefix of /23.
The following ports are exposed for inbound connections.
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
$vnet = New-AzVirtualNetwork @VnetArgs
+> [!NOTE]
+> The subnet address prefix requires a CIDR range of `/23`.
+ With the VNET established, you can now query for the infrastructure subnet ID. # [Bash](#tab/bash)
cosmos-db Dedicated Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/dedicated-gateway.md
The dedicated gateway is available in the following sizes. The integrated cache
There are many different ways to provision a dedicated gateway: - [Provision a dedicated gateway using the Azure Portal](how-to-configure-integrated-cache.md#provision-the-dedicated-gateway)-- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/service/create)
+- [Use Azure Cosmos DB's REST API](/rest/api/cosmos-db-resource-provider/2022-05-15/service/create#sqldedicatedgatewayservicecreate)
- [Azure CLI](/cli/azure/cosmosdb/service?view=azure-cli-latest&preserve-view=true#az-cosmosdb-service-create) - [ARM template](/azure/templates/microsoft.documentdb/databaseaccounts/services?tabs=bicep) - Note: You cannot deprovision a dedicated gateway using ARM templates
cosmos-db Compression Cost Savings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/compression-cost-savings.md
+
+ Title: Improve performance and optimize costs when upgrading to Azure Cosmos DB API for MongoDB 4.0+
+description: Learn how upgrading your API for MongoDB account to versions 4.0+ saves you money on queries and storage.
+++ Last updated : 09/06/2022+++
+# Improve performance and optimize costs when upgrading to Azure Cosmos DB API for MongoDB 4.0+
+
+Azure Cosmos DB API for MongoDB introduced a new data compression algorithm in versions 4.0+ that saves up to 90% on RU and storage costs. Upgrading your database account to versions 4.0+ and following this guide will help you realize the maximum performance and cost improvements.
+
+## How it works
+The API for MongoDB charges users based on how many [request units](../request-units.md) (RUs) are consumed for each operation. With the new compression format, a reduction in storage size and query size directly results in a reduction in RU usage, saving you money. Performance and costs are coupled in Cosmos DB.
+
+When [upgrading](upgrade-mongodb-version.md) an API for MongoDB database account from version 3.2 or 3.6 to version 4.0 or greater, all new documents (data) written to that account are stored in the improved compression format. Older documents, written before the account was upgraded, remain fully backwards compatible but stay stored in the older compression format.
+
+## Upgrading older documents
+When upgrading your database account to versions 4.0+, it's a good idea to consider upgrading your older documents as well. Doing so provides efficiency improvements on your older data as well as on new data written to the account after the upgrade. The following steps upgrade your older documents to the new compression format (a scripted sketch follows the list):
+
+1. [Upgrade](upgrade-mongodb-version.md) your database account to 4.0 or higher. Any new data that's written to any collection in the account will be written in the new format. All formats are backwards compatible.
+2. Update at least one field in each old document (from before the upgrade) to a new value, or change the document in a different way, such as adding a new field. Don't rewrite the exact same document, since the Cosmos DB optimizer will ignore it.
+3. Repeat step two for each document. When a document is updated, it will be written in the new format.
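The following is a minimal sketch of steps 2 and 3 using `pymongo`. The connection string, database, collection, and the `_compactedAt` marker field are illustrative placeholders; adding a marker field is just one way to change each old document so it gets rewritten in the new format.

```python
# Hedged sketch: touch every old document so it's rewritten in the new compression format.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("<azure-cosmos-db-api-for-mongodb-connection-string>")
collection = client["mydb"]["mycollection"]

# Add a new field to documents that haven't been touched yet. Writing back an
# identical document is ignored, so the update must actually change something.
result = collection.update_many(
    {"_compactedAt": {"$exists": False}},
    {"$set": {"_compactedAt": datetime.now(timezone.utc)}},
)
print(f"Rewrote {result.modified_count} documents in the new format")
```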
++
+## Next steps
+Learn more about upgrading and the API for MongoDB versions:
+* [Introduction to the API for MongoDB](mongodb-introduction.md)
+* [Upgrade guide](upgrade-mongodb-version.md)
+* [Version 4.2](feature-support-42.md)
cost-management-billing Get Small Usage Datasets On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-small-usage-datasets-on-demand.md
description: The article explains how you can use the Cost Details API to get raw, unaggregated cost data that corresponds to your Azure bill. Previously updated : 07/15/2022 Last updated : 09/08/2022
To learn more about the data in cost details (formerly referred to as *usage det
The [Cost Details](/rest/api/cost-management/generate-cost-details-report) report is only available for customers with an Enterprise Agreement or Microsoft Customer Agreement. If you're an MSDN, Pay-As-You-Go or Visual Studio customer, see [Get cost details for a pay-as-you-go subscription](get-usage-details-legacy-customer.md).
+## Permissions
+
+To use the Cost Details API, you need read-only permissions for supported features and scopes. For more information, see the following articles (a sample request sketch follows the list):
+
+- [Azure RBAC scopes - role permissions for feature behavior](../costs/understand-work-scopes.md#feature-behavior-for-each-role)
+- [Enterprise Agreement scopes - role permissions for feature behavior](../costs/understand-work-scopes.md#feature-behavior-for-each-role-1)
+- [Microsoft Customer Agreement scopes - role permissions for feature behavior](../costs/understand-work-scopes.md#feature-behavior-for-each-role-2)
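As a rough illustration of calling the API once permissions are in place, the following Python sketch requests a cost details report for a subscription scope and polls for the result. The `api-version`, request body fields, and polling behavior are assumptions based on the linked REST reference; verify them there.

```python
# Hedged sketch: generate a cost details report and poll until it's ready.
import time
import requests
from azure.identity import DefaultAzureCredential

scope = "subscriptions/<subscription-id>"   # placeholder scope
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

url = (
    f"https://management.azure.com/{scope}"
    "/providers/Microsoft.CostManagement/generateCostDetailsReport"
    "?api-version=2022-05-01"
)
body = {"metric": "ActualCost", "timePeriod": {"start": "2022-08-01", "end": "2022-08-31"}}

response = requests.post(url, json=body, headers=headers)
poll_url = response.headers.get("Location")   # the operation is asynchronous
while poll_url:
    report = requests.get(poll_url, headers=headers)
    if report.status_code == 200:
        print(report.json())                  # includes download links for the report blobs
        break
    time.sleep(30)
```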
+ ## Cost Details API best practices Microsoft recommends the following best practices as you use the Cost Details API.
cost-management-billing Understand Work Scopes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-work-scopes.md
description: This article helps you understand billing and resource management scopes available in Azure and how to use the scopes in Cost Management and APIs. Previously updated : 12/07/2021 Last updated : 09/08/2022
The following table shows how Cost Management features are used by each role. Th
| **Feature/Role** | **Owner** | **Contributor** | **Reader** | **Cost Management Reader** | **Cost Management Contributor** | | | | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only | Read only | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only | Read only | Read only | Read only |
| **Shared views** | Create, Read, Update, Delete | Create, Read, Update, Delete | Read only | Read only | Create, Read, Update, Delete| | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | Read only | Read only | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | Read only | Read only | Read, Update |
The following tables show how Cost Management features can be utilized by each r
| **Feature/Role** | **Enterprise Admin** | **Enterprise Read-Only** | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only |
| **Shared Views** | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update |
The following tables show how Cost Management features can be utilized by each r
| **Feature/Role** | **Enterprise Admin** | **Enterprise Read Only** | **Department Admin (only if "DA view charges" setting is on)** | **Department Read Only (only if "DA view charges" setting is on)** | | | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only | Read only | Read only |
| **Shared Views** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | Read, Update | Read, Update |
The following tables show how Cost Management features can be utilized by each r
| **Feature/Role** | **Enterprise Admin** | **Enterprise Read Only** | **Department Admin (only if "DA view charges" is on)** | **Department Read Only (only if "DA view charges" setting is on)** | **Account Owner (only if "AO view charges" setting is on)** | | | | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only | Read only | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only | Read only | Read only | Read only |
| **Shared Views** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | Read, Update | Read, Update | Read, Update |
The following tables show how Cost Management features can be utilized by each r
| **Feature/Role** | **Owner** | **Contributor** | **Reader** | | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only | Read only |
| **Shared Views** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | Read, Update |
The following tables show how Cost Management features can be utilized by each r
| **Feature/Role** | **Owner** | **Contributor** | **Reader** | **Invoice Manager** | | | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only | Read only | Read only |
| **Shared Views** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | Read, Update | Create, Read, Update, Delete |
The following tables show how Cost Management features can be utilized by each r
| **Feature/Role** | **Owner** | **Contributor** | **Reader** | **Azure Subscription Creator** | | | | | | |
-| **Cost Analysis / Forecast / Query API** | Read only | Read only | Read only | Read only |
+| **Cost Analysis / Forecast / Query / Cost Details API** | Read only | Read only | Read only | Read only |
| **Shared Views** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Budgets** | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | Create, Read, Update, Delete | | **Alerts** | Read, Update | Read, Update | Read, Update | Read, Update |
data-factory Connector Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hive.md
Previously updated : 09/09/2021 Last updated : 08/30/2022
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 06/13/2022 Last updated : 08/30/2022
For a list of data stores that are supported as sources/sinks, see [Supported da
Specifically, this generic REST connector supports: - Copying data from a REST endpoint by using the **GET** or **POST** methods and copying data to a REST endpoint by using the **POST**, **PUT** or **PATCH** methods.-- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **AAD service principal**, and **user-assigned managed identity**.
+- Copying data by using one of the following authentications: **Anonymous**, **Basic**, **Service Principal**, and **user-assigned managed identity**.
- **[Pagination](#pagination-support)** in the REST APIs. - For REST as source, copying the REST JSON response [as-is](#export-json-response-as-is) or parse it by using [schema mapping](copy-activity-schema-and-type-mapping.md#schema-mapping). Only response payload in **JSON** is supported.
The following properties are supported for the REST linked service:
For different authentication types, see the corresponding sections for details. - [Basic authentication](#use-basic-authentication)-- [AAD service principal authentication](#use-aad-service-principal-authentication)
+- [Service Principal authentication](#use-service-principal-authentication)
- [OAuth2 Client Credential authentication](#use-oauth2-client-credential-authentication) - [User-assigned managed identity authentication](#use-user-assigned-managed-identity-authentication) - [Anonymous authentication](#using-authentication-headers)
Set the **authenticationType** property to **Basic**. In addition to the generic
} ```
-### Use AAD service principal authentication
+### Use Service Principal authentication
Set the **authenticationType** property to **AadServicePrincipal**. In addition to the generic properties that are described in the preceding section, specify the following properties:
Set the **authenticationType** property to **AadServicePrincipal**. In addition
| servicePrincipalId | Specify the Azure Active Directory application's client ID. | Yes | | servicePrincipalKey | Specify the Azure Active Directory application's key. Mark this field as a **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes | | tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes |
-| aadResourceId | Specify the AAD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your AAD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
+| aadResourceId | Specify the Microsoft Azure Active Directory (Azure AD) resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
+| azureCloudType | For Service Principal authentication, specify the type of Azure cloud environment to which your Azure AD application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
**Example**
Set the **authenticationType** property to **AadServicePrincipal**. In addition
"type": "SecureString" }, "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
- "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>"
+ "aadResourceId": "<Azure AD resource URL e.g. https://management.core.windows.net>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
Set the **authenticationType** property to **ManagedServiceIdentity**. In additi
| Property | Description | Required | |: |: |: |
-| aadResourceId | Specify the AAD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
+| aadResourceId | Specify the Azure AD resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes |
| credentials | Specify the user-assigned managed identity as the credential object. | Yes |
Set the **authenticationType** property to **ManagedServiceIdentity**. In additi
"typeProperties": { "url": "<REST endpoint e.g. https://www.example.com/>", "authenticationType": "ManagedServiceIdentity",
- "aadResourceId": "<AAD resource URL e.g. https://management.core.windows.net>",
+ "aadResourceId": "<Azure AD resource URL e.g. https://management.core.windows.net>",
"credential": { "referenceName": "credential1", "type": "CredentialReference"
Response 2:
:::image type="content" source="media/connector-rest/pagination-rule-example-4-1.png" alt-text="Screenshot showing the End Condition setting for Example 4.1."::: -- **Example 4.2: The pagination ends when the value of the specific node in response dose not exist**
+- **Example 4.2: The pagination ends when the value of the specific node in response does not exist**
The REST API returns the last response in the following structure: ```json {} ```
- Set the end condition rule as **"EndCondition:$.data": "NonExist"** to end the pagination when the value of the specific node in response dose not exist.
+ Set the end condition rule as **"EndCondition:$.data": "NonExist"** to end the pagination when the value of the specific node in response does not exist.
:::image type="content" source="media/connector-rest/pagination-rule-example-4-2.png" alt-text="Screenshot showing the End Condition setting for Example 4.2.":::
data-factory Connector Troubleshoot Dynamics Dataverse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-dynamics-dataverse.md
This article provides suggestions to troubleshoot common problems with the Dynam
- **Cause**: The virtual column is not supported now. - **Recommendation**: For the Option Set value, follow the options below to get it:
- - You can get the object type code by referring to [How to Find the Object Type Code for Any Entity](https://powerobjects.com/tips-and-tricks/find-object-type-code-entity/) and [Dynamics 365 blog](https://dynamicscrmdotblog.wordpress.com/).
+ - You can get the object type code by referring to [How to Find the Object Type Code for Any Entity](https://powerobjects.com/tips-and-tricks/find-object-type-code-entity/).
- You can link the StringMap entity to your target entity and get the associated values. ## The parallel copy in a Dynamics CRM data store
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
method | REST API method for the target endpoint. | String. <br/><br/>Supported
url | Target endpoint and path | String (or expression with resultType of string). The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint. You can increase this response timeout up to 10 mins by updating the httpRequestTimeout property | Yes httpRequestTimeout | Response timeout duration | hh:mm:ss with the max value as 00:10:00. If not explicitly specified defaults to 00:01:00 | No headers | Headers that are sent to the request. For example, to set the language and type on a request: `"headers" : { "Accept-Language": "en-us", "Content-Type": "application/json" }`. | String (or expression with resultType of string) | No
-body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT/PATCH methods.
+body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT/PATCH methods. Optional for DELETE method.
authentication | Authentication method used for calling the endpoint. Supported Types are "Basic, Client Certificate, System-assigned Managed Identity, User-assigned Managed Identity, Service Principal." For more information, see [Authentication](#authentication) section. If authentication is not required, exclude this property. | String (or expression with resultType of string) | No turnOffAsync | Option to disable invoking HTTP GET on location field in the response header of a HTTP 202 Response. If set true, it stops invoking HTTP GET on http location given in response header. If set false then it continues to invoke HTTP GET call on location given in http response headers. | Allowed values are false (default) and true. | No disableCertValidation | Removes server side certificate validation (not recommended unless you are connecting to a trusted server that does not use a standard CA cert). | Allowed values are false (default) and true. | No
defender-for-cloud Other Threat Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md
Title: Additional threat protections from Microsoft Defender for Cloud description: Learn about the threat protections available from Microsoft Defender for Cloud Previously updated : 07/24/2022 Last updated : 09/07/2022 # Additional threat protections in Microsoft Defender for Cloud
Some network configurations restrict Defender for Cloud from generating alerts o
For a list of the Azure network layer alerts, see the [Reference table of alerts](alerts-reference.md#alerts-azurenetlayer). -
-## Threat protection for Azure Cosmos DB<a name="cosmos-db"></a>
-
-The Azure Cosmos DB alerts are generated by unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts.
-
-For more information, see:
--- [Advanced threat protection for Azure Cosmos DB](../cosmos-db/cosmos-db-advanced-threat-protection.md)-- [The list of threat protection alerts for Azure Cosmos DB](alerts-reference.md#alerts-azurecosmos)-- <a name="azure-mcas"></a> ## Display recommendations in Microsoft Defender for Cloud Apps
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To protect your GCP-based resources, you can connect a GCP project with either:
- **Classic cloud connector** - Requires configuration in your GCP project to create a user that Defender for Cloud can use to connect to your GCP environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors) and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations. ::: zone pivot="env-settings"
Follow the steps below to create your GCP cloud connector.
1. Select the **Google Cloud Platform**.
- :::image type="content" source="media/quickstart-onboard-gcp/google-cloud.png" border="false" alt-text="Screenshot of the location of the Google cloud environment button.":::
+ :::image type="content" source="media/quickstart-onboard-gcp/google-cloud.png" alt-text="Screenshot of the location of the Google cloud environment button." lightbox="media/quickstart-onboard-gcp/google-cloud.png":::
1. Enter all relevant information.
To locate the unique numeric ID in the GCP portal, Navigate to **IAM & Admin** >
1. (**Servers/SQL only**) Select **Azure-Arc for servers onboarding**
- :::image type="content" source="media/quickstart-onboard-gcp/unique-numeric-id.png" alt-text="Screenshot showing the Azure-Arc for servers onboarding section of the screen.":::
+ :::image type="content" source="media/quickstart-onboard-gcp/unique-numeric-id.png" alt-text="Screenshot showing the Azure-Arc for servers onboarding section of the screen." lightbox="media/quickstart-onboard-gcp/unique-numeric-id.png":::
Enter the service account unique ID, which is generated automatically after running the GCP Cloud Shell.
After creating a connector, a scan will start on your GCP environment. New recom
By default, all plans are `On`. You can disable plans that you don't need. ### Configure the servers plan
As shown above, Microsoft Defender for Cloud's security recommendations page dis
To view all the active recommendations for your resources by resource type, use Defender for Cloud's asset inventory page and filter to the GCP resource type in which you're interested: ## FAQ - Connecting GCP projects to Microsoft Defender for Cloud
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
DMS supports cross-region, cross-resource group, and cross-subscription migratio
* Select the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores as in the following table:
-| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
-| - | - |:-:|:-:|
-| Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
-| Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
-| General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
-| General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
-| General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
-| General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
-| General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
-| Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
-| Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
-| Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
-| Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
+ | Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
+ | - | - |:-:|:-:|
+ | Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
+ | Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
+ | General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
+ | Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
+ | Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
+ | Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
+ | Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
\* For the migration, select General Purpose 16 VCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Performing post-migration activities section later in this article.
With these best practices in mind, create your target flexible server and then c
* To ensure faster data loads when using DMS, configure the following server parameters as described. * max_allowed_packet - set to 1073741824 (i.e., 1GB) to prevent any connection issues due to large rows. * slow_query_log - set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads.
 - * query_store_capture_mode - set to NONE to turn off the Query Store. This will eliminate the overhead caused by sampling activities by Query Store.
 - * innodb_buffer_pool_size - Innodb_buffer_pool_size can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
 - * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
 - * innodb_write_io_threads & innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
 + * innodb_buffer_pool_size - can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
 + * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
 + * innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
* Configure the firewall rules and replicas on the target server to match those on the source server. * Replicate the following server management features from the source single server to the target flexible server: * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
With your target flexible server deployed and configured, you next need to set u
To register the Microsoft.DataMigration resource provider, perform the following steps. 1. Before creating your first DMS instance, sign in to the Azure portal, and then search for and select **Subscriptions**.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/1-subscriptions.png" alt-text="Screenshot of a Azure Marketplace.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/1-subscriptions.png" alt-text="Screenshot of an Azure Marketplace.":::
2. Select the subscription that you want to use to create the DMS instance, and then select **Resource providers**.
- :::image type="content" source="media/tutorial-Azure-mysql-single-to-flex-offline/2-resource-provider.png" alt-text="Screenshot of a Screenshot of a Select resource provider.":::
+ :::image type="content" source="media/tutorial-Azure-mysql-single-to-flex-offline/2-resource-provider.png" alt-text="Screenshot of a Select resource provider.":::
3. Search for the term "Migration", and then, for **Microsoft.DataMigration**, select **Register**. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/3-register.png" alt-text="Screenshot of a Select Register.":::
To create a migration project, perform the following steps.
To configure your DMS migration project, perform the following steps. 1. On the **Select source** screen, specify the connection details for the source MySQL instance.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/13-select-source-offline.png" alt-text="Screenshot of a Add source details screen.":::
- When performing an offline migration, it's important to stop incoming traffic on the source when configuring the migration project.
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/13-select-source-offline.png" alt-text="Screenshot of an Add source details screen.":::
2. To proceed with the offline migration, select the **Make Source Server Read Only** check box. Selecting this check box prevents Write/Delete operations on the source server during migration, which ensures the data integrity of the target database as the source is migrated. When you make your source server read only as part of the migration process, all the databases on the source server, regardless of whether they are selected for migration, will be read-only.
Selecting this check box prevents Write/Delete operations on the source server d
:::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/15-select-target.png" alt-text="Screenshot of a Select target."::: 4. Select **Next : Select databases>>**, and then, on the Select databases tab, under [Preview] Select server objects, select the server objects that you want to migrate.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/16-select-db.png" alt-text="Screenshot of a Select databases.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/16-select-db.png" alt-text="Screenshot of a Select database.":::
5. In the **Select databases** section, under **Source Database**, select the database(s) to migrate. The non-table objects in the database(s) you specified will be migrated, while the items you didn't select will be skipped.
Selecting this check box prevents Write/Delete operations on the source server d
10. Select **Start migration**. The migration activity window appears, and the Status of the activity is Initializing. The Status changes to Running when the table migrations start.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/19-running-project-offline.png" alt-text="Screenshot of a Running status.":::
### Monitor the migration
Selecting this check box prevents Write/Delete operations on the source server d
2. To see the status of each table during the migration, select the database name and then select Refresh to update the display.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/20-monitor-migration-offline.png" alt-text="Screenshot of a Monitoring migration.":::
- 3. Select **Refresh** to update the display until the **Status** of the migration shows as **Completed**. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-offline/21-status-complete-offline.png" alt-text="Screenshot of a Status of Migration.":::
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
DMS supports cross-region, cross-resource group, and cross-subscription migratio
* Select the compute size and compute tier for the target flexible server based on the source single server's pricing tier and VCores as in the following table:
-| Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
-| - | - |:-:|:-:|
-| Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
-| Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
-| General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
-| General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
-| General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
-| General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
-| General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
-| Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
-| Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
-| Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
-| Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
+ | Single Server Pricing Tier | Single Server VCores | Flexible Server Compute Size | Flexible Server Compute Tier |
+ | - | - |:-:|:-:|
+ | Basic\* | 1 | General Purpose | Standard_D16ds_v4 |
+ | Basic\* | 2 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose\* | 4 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose\* | 8 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose | 16 | General Purpose | Standard_D16ds_v4 |
+ | General Purpose | 32 | General Purpose | Standard_D32ds_v4 |
+ | General Purpose | 64 | General Purpose | Standard_D64ds_v4 |
+ | Memory Optimized | 4 | Business Critical | Standard_E4ds_v4 |
+ | Memory Optimized | 8 | Business Critical | Standard_E8ds_v4 |
+ | Memory Optimized | 16 | Business Critical | Standard_E16ds_v4 |
+ | Memory Optimized | 32 | Business Critical | Standard_E32ds_v4 |
\* For the migration, select General Purpose 16 VCores compute for the target flexible server for faster migrations. Scale back to the desired compute size for the target server after migration is complete by following the compute size recommendation in the Performing post-migration activities section later in this article.
With these best practices in mind, create your target flexible server and then c
* To ensure faster data loads when using DMS, configure the following server parameters as described. * max_allowed_packet - set to 1073741824 (i.e., 1GB) to prevent any connection issues due to large rows. * slow_query_log - set to OFF to turn off the slow query log. This will eliminate the overhead caused by slow query logging during data loads.
 - * query_store_capture_mode - set to NONE to turn off the Query Store. This will eliminate the overhead caused by sampling activities by Query Store.
 - * innodb_buffer_pool_size - Innodb_buffer_pool_size can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
 + * innodb_buffer_pool_size - can only be increased by scaling up compute for Azure Database for MySQL server. Scale up the server to 64 vCore General Purpose SKU from the Pricing tier of the portal during migration to increase the innodb_buffer_pool_size.
 * innodb_io_capacity & innodb_io_capacity_max - Change to 9000 from the Server parameters in Azure portal to improve the IO utilization to optimize for migration speed.
 - * innodb_write_io_threads & innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
 + * innodb_write_io_threads - Change to 4 from the Server parameters in Azure portal to improve the speed of migration.
* Configure the firewall rules and replicas on the target server to match those on the source server. * Replicate the following server management features from the source single server to the target flexible server: * Role assignments, Roles, Deny Assignments, classic administrators, Access Control (IAM)
To create a migration project, perform the following steps.
To configure your DMS migration project, perform the following steps. 1. On the **Select source** screen, specify the connection details for the source MySQL instance.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/13-select-source-online.png" alt-text="Screenshot of a Add source details screen.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/13-select-source-online.png" alt-text="Screenshot of an Add source details screen.":::
2. Select **Next : Select target>>**, and then, on the **Select target** screen, specify the connection details for the target flexible server. :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/15-select-target.png" alt-text="Screenshot of a Select target."::: 3. Select **Next : Select databases>>**, and then, on the Select databases tab, under [Preview] Select server objects, select the server objects that you want to migrate.
- :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/16-select-db.png" alt-text="Screenshot of a Select databases.":::
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/16-select-db.png" alt-text="Screenshot of a Select database.":::
4. In the **Select databases** section, under **Source Database**, select the database(s) to migrate. The non-table objects in the database(s) you specified will be migrated, while the items you didn't select will be skipped. You can only select the source and target databases whose names match that on the source and target server.
dns Private Dns Privatednszone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-privatednszone.md
Previously updated : 08/15/2022 Last updated : 09/08/2022
To understand how many private DNS zones you can create in a subscription and ho
## Restrictions
-* Single-labeled private DNS zones aren't supported. Your private DNS zone must have two or more labels. For example, contoso.com has two labels separated by a dot. A private DNS zone can have a maximum of 34 labels.
+* Single-label private DNS zones aren't supported. Your private DNS zone must have two or more labels. For example, contoso.com has two labels separated by a dot. A private DNS zone can have a maximum of 34 labels.
* You can't create zone delegations (NS records) in a private DNS zone. If you intend to use a child domain, you can directly create the domain as a private DNS zone. Then you can link it to the virtual network without setting up a nameserver delegation from the parent zone. * Starting the week of August 28th, 2022, specific reserved zone names will be blocked from creation to prevent disruption of services. The following zone names are blocked: | Public | Azure Government | Azure China | | | | |
+ |azclient.ms | azclient.us | azclient.cn
|azure.com | azure.us | azure.cn
+ |azure-api.net | azure-api.us | azure-api.cn
+ |cloudapp.net | usgovcloudapp.net | chinacloudapp.cn
+ |core.windows.net | core.usgovcloudapi.net | core.chinacloudapi.cn
|microsoft.com | microsoft.us | microsoft.cn
+ |msidentity.com | msidentity.us | msidentity.cn
|trafficmanager.net | usgovtrafficmanager.net | trafficmanager.cn
- |cloudapp.net | usgovcloudapp.net | chinacloudapp.cn
- |azclient.ms | azclient.us | azclient.cn
|windows.net| usgovcloudapi.net | chinacloudapi.cn
- |msidentity.com | msidentity.us | msidentity.cn
- |core.windows.net | core.usgovcloudapi.net | core.chinacloudapi.cn
## Next steps * Learn how to create a private zone in Azure DNS by using [Azure PowerShell](./private-dns-getstarted-powershell.md) or [Azure CLI](./private-dns-getstarted-cli.md).- * Read about some common [private zone scenarios](./private-dns-scenarios.md) that can be realized with private zones in Azure DNS.- * For common questions and answers about private zones in Azure DNS, see [Private DNS FAQ](./dns-faq-private.yml).
event-grid Custom Disaster Recovery Client Side https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-disaster-recovery-client-side.md
Title: Build your own client-side disaster recovery for Azure Event Grid topics description: This article describes how you can build your own client-side disaster recovery for Azure Event Grid topics. Previously updated : 06/14/2022 Last updated : 09/07/2022 ms.devlang: csharp
To test your failover configuration, you'll need an endpoint to receive your eve
To simplify testing, deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub. 1. [Deploy the solution](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fazure-event-grid-viewer%2Fmaster%2Fazuredeploy.json) to your subscription. In the Azure portal, provide values for the parameters.
-1. The deployment may take a few minutes to complete. After the deployment has succeeded, view your web app to make sure it's running. In a web browser, navigate to:
+1. The deployment may take a few minutes to complete. After the deployment has succeeded, navigate to the resource group, select the **App Service**, and then select **URL** to navigate to your web app.
`https://<your-site-name>.azurewebsites.net` Make sure to note this URL as you'll need it later.- 1. You see the site but no events have been posted to it yet.
- ![Screenshot showing your web site with no events.](./media/blob-event-quickstart-portal/view-site.png)
+ :::image type="content" source="./media/blob-event-quickstart-portal/view-site.png" alt-text="Screenshot showing the Event Grid Viewer sample web app.":::
[!INCLUDE [event-grid-register-provider-portal.md](../../includes/event-grid-register-provider-portal.md)]
First, create two Event Grid topics. These topics will act as primary and second
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. From the upper left corner of the main Azure menu,
- choose **All services** > search for **Event Grid** > select **Event Grid topics**.
-
- ![Screenshot showing the Event Grid topics menu.](./media/custom-disaster-recovery/select-topics-menu.png)
-
- Select the star next to Event Grid topics to add it to resource menu for easier access in the future.
-
-1. In the Event Grid topics Menu, select **+ADD** to create the primary topic.
-
- * Give the topic a logical name and add "-primary" as a suffix to make it easy to track.
- * This topic's region will be your primary region.
+1. In the search bar at the top, enter **Event Grid topics**, and then select **Event Grid topics** in the results.
- ![Screenshot showing the Create primary topic page.](./media/custom-disaster-recovery/create-primary-topic.png)
+ :::image type="content" source="./media/custom-disaster-recovery/select-topics-menu.png" lightbox="./media/custom-disaster-recovery/select-topics-menu.png" alt-text="Screenshot showing the search bar in the Azure portal.":::
+1. On the **Event Grid topics** page, select **+Create** to create the primary topic.
-1. Once the Topic has been created, navigate to it and copy the **Topic Endpoint**. you'll need the URI later.
+ :::image type="content" source="./media/custom-disaster-recovery/create-primary-topic-menu.png" lightbox="./media/custom-disaster-recovery/create-primary-topic-menu.png" alt-text="Screenshot showing the selection of the Create button on the Event Grid topics page.":::
+1. On the **Create topic** page, follow these steps:
+ 1. Select the **Azure subscription** where you want to create a topic.
+ 1. Select an existing **Azure resource group** or create a resource group.
+ 1. Enter a **name** for the topic. Give the topic a logical name and add "-primary" as a suffix to make it easy to track.
+ 1. Select a **region** for the topic. This topic's region will be your primary region.
+ 1. Select **Review + create** at the bottom of the page.
- ![Screenshot showing the topic endpoint.](./media/custom-disaster-recovery/get-primary-topic-endpoint.png)
+ :::image type="content" source="./media/custom-disaster-recovery/create-primary-topic.png" lightbox="./media/custom-disaster-recovery/create-primary-topic.png" alt-text="Screenshot showing the Create topic page.":::
+ 1. On the **Review + create** page, select **Create** at the bottom of the page.
+1. Once the topic has been created, select **Go to resource** to navigate to it and copy the **topic endpoint**. You'll need the URI later.
+ :::image type="content" source="./media/custom-disaster-recovery/get-primary-topic-endpoint.png" lightbox="./media/custom-disaster-recovery/get-primary-topic-endpoint.png" alt-text="Screenshot showing the Event Grid topic page.":::
1. Get the access key for the topic, which you'll also need later. Click on **Access keys** in the resource menu and copy Key 1.
- ![Screenshot showing the topic's access key.](./media/custom-disaster-recovery/get-primary-access-key.png)
-
-1. In the **Topic** page, click **+Event Subscription** to create a subscription connecting your subscribing the event receiver website you made in the pre-requisites to the tutorial.
-
- * Give the event subscription a logical name and add "-primary" as a suffix to make it easy to track.
- * Select Endpoint Type Web Hook.
- * Set the endpoint to your event receiver's event URL, which should look something like: `https://<your-event-reciever>.azurewebsites.net/api/updates`
-
- ![Screenshot that shows the "Create Event Subscription - Basic" page with the "Name", "Endpoint Type", and "Endpoint" values highlighted.](./media/custom-disaster-recovery/create-primary-es.png)
-
-1. Repeat the same flow to create your secondary topic and subscription. This time, replace the "-primary" suffix with "-secondary" for easier tracking. Finally, make sure you put it in a different Azure Region. While you can put it anywhere you want, it's recommended that you use the [Azure Paired Regions](../availability-zones/cross-region-replication-azure.md). Putting the secondary topic and subscription in a different region ensures that your new events will flow even if the primary region goes down.
+ :::image type="content" source="./media/custom-disaster-recovery/get-primary-access-key.png" lightbox="./media/custom-disaster-recovery/get-primary-access-key.png" alt-text="Screenshot showing the access key of a primary topic.":::
+1. Switch back to the **Overview** page, and click **+Event Subscription** to create a subscription that connects the event receiver website you created in the prerequisites to the topic.
+
+ :::image type="content" source="./media/custom-disaster-recovery/create-event-subscription-link.png" lightbox="./media/custom-disaster-recovery/create-event-subscription-link.png" alt-text="Screenshot showing the selection of the Create event subscription link.":::
+1. On the **Create Event Subscription** page, follow these steps:
+ 1. Give the event subscription a logical **name** and add "-primary" as a suffix to make it easy to track.
+ 1. For **Endpoint Type**, select **Web Hook**.
+
+ :::image type="content" source="./media/custom-disaster-recovery/create-event-subscription-page.png" lightbox="./media/custom-disaster-recovery/create-event-subscription-page.png" alt-text="Screenshot showing the selection of the Create Event Subscription page.":::
+ 1. Click **Select an endpoint**.
+ 1. On the **Select Web Hook** page, set the endpoint to your event receiver's event URL, which should look something like: `https://<your-event-reciever>.azurewebsites.net/api/updates`, and then select **Confirm Selection**. Remember to add `/api/updates` to the URL of the web app.
+
+ :::image type="content" source="./media/custom-disaster-recovery/select-webhook.png" lightbox="./media/custom-disaster-recovery/select-webhook.png" alt-text="Screenshot showing the selection of the Select Web Hook page.":::
+ 1. Now, back on the **Create Event Subscription** page, select **Create** at the bottom of the page.
+1. Repeat the same flow to create your secondary topic and subscription. This time, replace the "-primary" suffix with "-secondary" for easier tracking. Finally, make sure you put it in a **different Azure Region**. While you can put it anywhere you want, it's recommended that you use the [Azure Paired Regions](../availability-zones/cross-region-replication-azure.md). Putting the secondary topic and subscription in a different region ensures that your new events will flow even if the primary region goes down.
You should now have:
Now that you have all of your components in place, you can test out your failove
Try running the event publisher. You should see your test events land in your Event Grid viewer like below.
-![Screenshot showing the Event Grid Viewer app.](./media/custom-disaster-recovery/event-grid-viewer.png)
To make sure your failover is working, you can change a few characters in your primary topic key to make it no longer valid. Try running the publisher again. You should still see new events appear in your Event Grid viewer; however, when you look at your console, you'll see that they're now being published via the secondary topic.
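The tutorial's publisher sample is written in C#; as an illustration of the same client-side failover pattern, the following Python sketch (using the `azure-eventgrid` SDK) tries the primary topic first and falls back to the secondary topic on any failure. The endpoints, keys, and event fields are placeholders.

```python
# Hedged sketch of client-side failover between a primary and secondary topic.
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

primary = EventGridPublisherClient(
    "https://<your-topic-primary>.<region>-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<primary-topic-key>"),
)
secondary = EventGridPublisherClient(
    "https://<your-topic-secondary>.<region>-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<secondary-topic-key>"),
)

event = EventGridEvent(
    event_type="Contoso.Items.ItemReceived",
    subject="Door1",
    data={"itemSku": "Contoso Item #1"},
    data_version="2.0",
)

try:
    primary.send(event)                 # publish via the primary topic
    print("Published via primary topic")
except Exception:
    secondary.send(event)               # fall back to the secondary topic
    print("Primary failed; published via secondary topic")
```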
event-grid Event Schema Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-aks.md
Title: Azure Kubernetes Service as Event Grid source (Preview)
+ Title: Azure Kubernetes Service as Event Grid source
description: This article describes how to use Azure Kubernetes Service as an Event Grid event source. It provides the schema and links to tutorial and how-to articles.
Last updated 10/04/2021
-# Azure Kubernetes Service (AKS) as an Event Grid source (Preview)
+# Azure Kubernetes Service (AKS) as an Event Grid source
This article provides the properties and schema for AKS events. For an introduction to event schemas, see [Azure Event Grid event schema](event-schema.md). It also gives you a list of quick starts and tutorials to use AKS as an event source. - ## Available event types AKS emits the following event types
frontdoor How To Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-add-custom-domain.md
Previously updated : 03/18/2022 Last updated : 09/06/2022 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
When you use Azure Front Door for application delivery, a custom domain is neces
After you create an Azure Front Door Standard/Premium profile, the default frontend host will have a subdomain of `azurefd.net`. This subdomain gets included in the URL when Azure Front Door Standard/Premium delivers content from your backend by default. For example, `https://contoso-frontend.azurefd.net/activeusers.htm`. For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of an Azure Front Door owned domain name. For example, `https://www.contoso.com/photo.png`.
+Azure Front Door supports two types of domains: non-Azure validated domains and Azure pre-validated domains. Azure managed certificates and customer certificates are supported for both domain types. For more information, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
+
+* **Azure pre-validated domains** - are domains validated by another Azure service. This domain type is used when you onboard and validate a domain with an Azure service, and then configure that Azure service behind Azure Front Door. You don't need to validate the domain through Azure Front Door when you use this type of domain.
+
+ > [!NOTE]
+ > Currently, Azure pre-validated domains only support domains validated by Static Web App.
+
+* **Non-Azure validated domains** - are domains that aren't validated by any Azure service. This domain type can be hosted with any DNS service and requires domain ownership validation with Azure Front Door.
+ ## Prerequisites * Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door Standard/Premium](create-front-door-portal.md).
After you create an Azure Front Door Standard/Premium profile, the default front
## Add a new custom domain > [!NOTE]
-> * If a custom domain is validated in one of the Azure Front Door Standard, Premium, classic or classic Microsoft CDN profiles, then it can't be added to another profile.
+> If a custom domain is validated in an Azure Front Door or a Microsoft CDN profile already, then it can't be added to another profile.
-A custom domain is managed by Domains section in the portal. A custom domain can be created and validated before association to an endpoint. A custom domain and its subdomains can be associated with only a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Doors. You can also map custom domains with different subdomains to the same Front Door endpoint.
+A custom domain is configured on the **Domains** page of the Front Door profile. A custom domain can be set up and validated prior to endpoint association. A custom domain and its subdomains can only be associated with a single endpoint at a time. However, you can use different subdomains from the same custom domain for different Front Door profiles. You may also map custom domains with different subdomains to the same Front Door endpoint.
1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** button. :::image type="content" source="../media/how-to-add-custom-domain/add-domain-button.png" alt-text="Screenshot of add domain button on domain landing page.":::
-1. The **Add a domain** page will appear where you can enter information about of the custom domain. You can choose Azure-managed DNS, which is recommended or you can choose to use your own DNS provider. If you choose Azure-managed DNS, select an existing DNS zone and then select a custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Select **Add** to add your custom domain.
+1. On the *Add a domain* page, select the **Domain type**. You can select between a **Non-Azure validated domain** or an **Azure pre-validated domain**.
- > [!NOTE]
- > Azure Front Door supports both Azure managed certificate and customer-managed certificates. If you want to use customer-managed certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
+ * **Non-Azure validated domain** is a domain that requires ownership validation. When you select Non-Azure validated domain, the recommended DNS management option is to use Azure-managed DNS. You may also use your own DNS provider. If you choose Azure-managed DNS, select an existing DNS zone. Then select an existing custom subdomain or create a new one. If you're using another DNS provider, manually enter the custom domain name. Then select **Add** to add your custom domain.
- :::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot of add a domain page.":::
+ :::image type="content" source="../media/how-to-add-custom-domain/add-domain-page.png" alt-text="Screenshot of add a domain page.":::
- A new custom domain is created with a validation state of **Submitting**.
+ * **Azure pre-validated domain** is a domain already validated by another Azure service. When you select this option, domain ownership validation isn't required by Azure Front Door. A dropdown list of domains validated by other Azure services will appear.
+
+ :::image type="content" source="../media/how-to-add-custom-domain/pre-validated-custom-domain.png" alt-text="Screenshot of pre-validated custom domain in add a domain page.":::
+
+ > [!NOTE]
+ > * Azure Front Door supports both Azure managed certificates and Bring Your Own Certificate (BYOC). For a non-Azure validated domain, the Azure managed certificate is issued and managed by Azure Front Door. For an Azure pre-validated domain, the Azure managed certificate is issued and managed by the Azure service that validates the domain. To use your own certificate, see [Configure HTTPS on a custom domain](how-to-configure-https-custom-domain.md).
+ > * Azure Front Door supports Azure pre-validated domains and Azure DNS zones in different subscriptions.
+ > * Currently, Azure pre-validated domains only support domains validated by Static Web App.
+
+ A new custom domain will have a validation state of **Submitting**.
:::image type="content" source="../media/how-to-add-custom-domain/validation-state-submitting.png" alt-text="Screenshot of domain validation state submitting.":::
- Wait until the validation state changes to **Pending**. This operation could take a few minutes.
+ > [!NOTE]
+ > An Azure pre-validated domain will have a validation state of **Pending** and will automatically change to **Approved** after a few minutes. Once validation gets approved, skip to [**Associate the custom domain to your Front Door endpoint**](#associate-the-custom-domain-to-your-front-door-endpoint) and complete the remaining steps.
+
+ The validation state will change to **Pending** after a few minutes.
:::image type="content" source="../media/how-to-add-custom-domain/validation-state-pending.png" alt-text="Screenshot of domain validation state pending.":::
-1. Select the **Pending** validation state. A new page will appear with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`. If you're using Azure DNS-based zone, select the **Add** button and a new TXT record with the displayed record value will be created in the Azure DNS zone. If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
+1. Select the **Pending** validation state. A new page will appear with DNS TXT record information needed to validate the custom domain. The TXT record is in the form of `_dnsauth.<your_subdomain>`. If you're using Azure DNS-based zone, select the **Add** button, and a new TXT record with the displayed record value will be created in the Azure DNS zone. If you're using another DNS provider, manually create a new TXT record of name `_dnsauth.<your_subdomain>` with the record value as shown on the page.
:::image type="content" source="../media/how-to-add-custom-domain/validate-custom-domain.png" alt-text="Screenshot of validate custom domain page.":::
A custom domain is managed in the Domains section in the portal. A custom domain can
### Domain validation state | Domain validation state | Description and actions |
-| -- | -- |
+|--|--|
+| Approved | This status means the domain has been successfully validated. |
+| Internal error | If you see this error, retry validation by selecting the **Refresh** or **Regenerate** button. If you're still experiencing issues, submit a support request to Azure support. |
+| Pending | A domain goes to pending state once the DNS TXT record challenge is generated. Add the DNS TXT record to your DNS provider and wait for the validation to complete. If the status remains **Pending** even after the TXT record has been updated with the DNS provider, select **Regenerate** to refresh the TXT record then add the TXT record to your DNS provider again. |
+| Pending re-validation | This status occurs when the managed certificate is less than 45 days from expiring. If you have a CNAME record already pointing to the Azure Front Door endpoint, no action is required for certificate renewal. If the custom domain is pointed to another CNAME record, select **Pending re-validation**, and then select **Regenerate** on the *Validate the custom domain* page. Lastly, select **Add** if you're using Azure DNS, or manually add the TXT record with your own DNS provider's DNS management. |
+| Refreshing validation token | A domain goes into a *Refreshing Validation Token* state for a brief period after the **Regenerate** button is selected. Once a new TXT record challenge is issued, the state will change to **Pending**. |
+| Rejected | This status occurs when the certificate provider/authority rejects issuance of the managed certificate, for example, when the domain is invalid. Select the **Rejected** link and then select **Regenerate** on the *Validate the custom domain* page, as shown in the screenshots below this table. Then select **Add** to add the TXT record in the DNS provider. |
| Submitting | When a new custom domain is added and being created, the validation state becomes Submitting. |
-| Pending | A domain goes to pending state once the DNS TXT record challenge is generated. Please add the DNS TXT record to your DNS provider and wait for the validation to complete. If it is in 'Pending' even after the TXT record is updated in the DNS provider, please try to click 'Regenerate' to refresh the TXT record and add the TXT record to your DNS provider again. |
-| Rejected | This state is applicable when the certificate provider/authority rejects the issuance for the managed certificate, e.g. when the domain is invalid. Please click on the 'Rejected' link and click 'Regenerate' on the 'Validate the custom domain' page, as shown in the screenshots below this table. Then click on Add to add the TXT record in the DNS provider. |
-| TimeOut | The domain validation state will become from 'Pending' to 'Timeout' if you do not add it to your DNS provider within 7 days or add an invalid DNS TXT record. Please click on the Timeout and hit 'Regenerate' on the 'Validate the custom domain' page, as shown in the screenshots below this table. Then click on Add. Repeat step 3 and 4. |
-| Approved | This means the domain has been successfully validated. |
-| Pending re-validation | This happens when the managed certificate is 45 days or less from expiry. If you have a CNAME record pointing to the AFD endpoint, no action is required for certificate renewal. If the custom domain is pointing to other CNAME records, please click on 'Pending Revalidation' and hit 'Regenerate' on the 'Validate the custom domain' page, as shown in the screenshots below this table. Then click on Add or add the TXT record with your own DNS provider's DNS management. |
-| Refreshing validation token | A domain goes to 'Refreshing Validation Token' stage for a brief period after Regenerate button is clicked. Once a new TXT record challenge is issued, the state changes to Pending. |
-| Internal error | If you see this error, retry by clicking the **Refresh** or **Regenerate** buttons. If you're still experiencing issues, raise a support request. |
+| Timeout | The domain validation state will change from *Pending* to *Timeout* if the TXT record isn't added to your DNS provider within seven days. You'll also see a *Timeout* state if an invalid DNS TXT record has been added. Select the **Timeout** link and then select **Regenerate** on the *Validate the custom domain* page. Then select **Add** to add the TXT record to the DNS provider. |
> [!NOTE] > 1. The default TTL for TXT record is 1 hour. When you need to regenerate the TXT record for re-validation, please pay attention to the TTL for the previous TXT record. If it doesn't expire, the validation will fail until the previous TXT record expires. > 2. If the **Regenerate** button doesn't work, delete and recreate the domain. > 3. If the domain state doesn't reflect as expected, select the **Refresh** button.
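To confirm the TXT record has propagated, and to check its remaining TTL before regenerating the token, you can query it directly. A minimal sketch, assuming the `www.contoso.com` subdomain used in the earlier example:

```bash
# Query the validation TXT record; the answer section shows the current TTL in seconds
dig TXT _dnsauth.www.contoso.com +noall +answer
```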
-## Associate the custom domain with your Front Door Endpoint
+## Associate the custom domain to your Front Door endpoint
-After you've validated your custom domain, you can then add it to your Azure Front Door Standard/Premium endpoint.
+After you validate your custom domain, you can associate it to your Azure Front Door Standard/Premium endpoint.
-1. Once custom domain is validated, you can associate it to an existing Azure Front Door endpoint and route. Select the **Unassociated** link to open the **Associate endpoint and routes** page. Select an endpoint and routes you want to associate with. Then select **Associate**. Close the page once the associate operation completes.
+1. Select the **Unassociated** link to open the **Associate endpoint and routes** page. Select an endpoint and routes you want to associate the domain with. Then select **Associate** to update your configuration.
:::image type="content" source="../media/how-to-add-custom-domain/associate-endpoint-routes.png" alt-text="Screenshot of associate endpoint and routes page.":::
After you've validated your custom domain, you can then add it to your Azure Fro
:::image type="content" source="../media/how-to-add-custom-domain/dns-state-link.png" alt-text="Screenshot of DNS state link.":::
+ > [!NOTE]
+ > For an Azure pre-validated domain, go to the DNS hosting service and manually update the CNAME record for this domain from the other Azure service endpoint to Azure Front Door endpoint. This step is required, regardless of whether the domain is hosted with Azure DNS or with another DNS service. The link to update the CNAME from the DNS State column isn't available for this type of domain.
+ 1. The **Add or update the CNAME record** page will appear and display the CNAME record information that must be provided before traffic can start flowing. If you're using Azure DNS hosted zones, the CNAME records can be created by selecting the **Add** button on the page. If you're using another DNS provider, you must manually enter the CNAME record name and value as shown on the page. :::image type="content" source="../media/how-to-add-custom-domain/add-update-cname-record.png" alt-text="Screenshot of add or update CNAME record.":::
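   If the zone is hosted in Azure DNS and you prefer the command line, the CNAME record can also be created with the Azure CLI. This is a sketch with placeholder names for the resource group, zone, subdomain, and Front Door endpoint host name; use the values shown on the page:

   ```bash
   # Point the custom subdomain at the Front Door endpoint host name shown on the page
   az network dns record-set cname set-record \
     --resource-group MyResourceGroup \
     --zone-name contoso.com \
     --record-set-name www \
     --cname contoso-endpoint-abc123.z01.azurefd.net
   ```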
After you've validated and associated the custom domain, verify that the custom
:::image type="content" source="../media/how-to-add-custom-domain/verify-configuration.png" alt-text="Screenshot of validated and associated custom domain.":::
-Then lastly, validate that your application content is getting served using a browser.
+Lastly, validate that your application content is getting served using a browser.
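You can also do a quick check from the command line; this sketch assumes `www.contoso.com` is the custom domain you onboarded:

```bash
# Request the page over HTTPS and print only the response status line and headers
curl -sSI https://www.contoso.com
```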
## Next steps
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Last updated 06/06/2022-+ #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
# Configure HTTPS on an Azure Front Door custom domain using the Azure portal
-Azure Front Door enables secure TLS delivery to your applications by default when a custom domain is added. By using the HTTPS protocol on your custom domain, you ensure your sensitive data get delivered securely with TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it gets issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+Azure Front Door enables secure TLS delivery to your applications by default when a custom domain is added. By using the HTTPS protocol on your custom domain, you ensure your sensitive data is delivered securely with TLS/SSL encryption when it's sent across the internet. When your web browser connects to a web site via HTTPS, it validates the web site's security certificate and verifies that it was issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
-Azure Front Door supports both Azure managed certificate and customer-managed certificates. Azure Front Door by default automatically enables HTTPS to all your custom domains using Azure managed certificates. No extra steps are required for getting an Azure managed certificate. A certificate is created during the domain validation process. You can also use your own certificate by integrating Azure Front Door Standard/Premium with your Key Vault.
+Azure Front Door supports Azure managed certificate and customer-managed certificates.
+
+* A non-Azure validated domain requires domain ownership validation. The managed certificate (AFD managed) is issued and managed by Azure Front Door. Azure Front Door by default automatically enables HTTPS to all your custom domains using Azure managed certificates. No extra steps are required for getting an AFD managed certificate. A certificate is created during the domain validation process.
+
+* An Azure pre-validated domain doesn't require domain validation because it's already validated by another Azure service. The managed certificate (Azure managed) is issued and managed by that Azure service. No extra steps are required for getting an Azure managed certificate. Azure Front Door doesn't issue a new managed certificate for this scenario and instead reuses the managed certificate issued by the Azure service. For the Azure services that support pre-validated domains, see [custom domain](how-to-add-custom-domain.md).
+
+* For both scenarios, you can bring your own certificate.
## Prerequisites
Azure Front Door supports both Azure managed certificate and customer-managed ce
* If you're using Azure to host your [DNS domains](../../dns/dns-overview.md), you must delegate the domain provider's domain name system (DNS) to an Azure DNS. For more information, see [Delegate a domain to Azure DNS](../../dns/dns-delegate-domain-azure-dns.md). Otherwise, if you're using a domain provider to handle your DNS domain, you must manually validate the domain by entering prompted DNS TXT records.
-## Azure managed certificates
+## AFD managed certificates for Non-Azure pre-validated domain
1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain. :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot of domain configuration landing page.":::
-1. On the **Add a domain** page, for *DNS management* select the **Azure managed DNS** option.
+1. On the **Add a domain** page, enter or select the following information, then select **Add** to onboard the custom domain.
- :::image type="content" source="../media/how-to-configure-https-custom-domain/add-domain-azure-managed.png" alt-text="Screen shot of add a domain page with Azure managed DNS selected.":::
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-domain-azure-managed.png" alt-text="Screenshot of add a domain page with Azure managed DNS selected.":::
+
+ | Setting | Value |
+ |--|--|
+ | Domain type | Select **Non-Azure pre-validated domain** |
+ | DNS management | Select **Azure managed DNS (Recommended)** |
+ | DNS zone | Select the **Azure DNS zone** that hosts the custom domain. |
+ | Custom domain | Select an existing domain or add a new domain. |
+ | HTTPS | Select **AFD managed (Recommended)** |
1. Validate and associate the custom domain to an endpoint by following the steps in enabling [custom domain](how-to-add-custom-domain.md).
-1. Once the custom domain gets associated to endpoint successfully, an Azure managed certificate gets deployed to Front Door. This process may take from several minutes to an hour to complete.
+1. Once the custom domain gets associated to an endpoint successfully, an AFD managed certificate gets deployed to Front Door. This process may take from several minutes to an hour to complete.
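   If you want to track the deployment without refreshing the portal, the domain's state can also be inspected from the Azure CLI. This is a sketch that assumes the `az afd` commands are available in your CLI version and uses placeholder resource names:

   ```bash
   # Show the custom domain resource, including its validation and deployment status
   az afd custom-domain show \
     --resource-group MyResourceGroup \
     --profile-name MyFrontDoorProfile \
     --custom-domain-name www-contoso-com
   ```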
+
+## Azure managed certificates for Azure pre-validated domain
+
+1. Select **Domains** under settings for your Azure Front Door profile and then select **+ Add** to add a new domain.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-new-custom-domain.png" alt-text="Screenshot of domain configuration landing page.":::
+
+1. On the **Add a domain** page, enter or select the following information, then select **Add** to onboard the custom domain.
+
+ :::image type="content" source="../media/how-to-configure-https-custom-domain/add-pre-validated-domain.png" alt-text="Screenshot of add a domain page with pre-validated domain.":::
+
+ | Setting | Value |
+ |--|--|
+ | Domain type | Select **Azure pre-validated domain** |
+ | Pre-validated custom domain | Select a custom domain name from the drop-down list of Azure services. |
+ | HTTPS | Select **Azure managed (Recommended)** |
+
+1. Validate and associate the custom domain to an endpoint by following the steps in enabling [custom domain](how-to-add-custom-domain.md).
+
+1. Once the custom domain is successfully associated to an endpoint, the Azure managed certificate gets deployed to Front Door. This process may take from several minutes to an hour to complete.
## Using your own certificate
Azure Front Door can now access this key vault and the certificates it contains.
## Certificate renewal and changing certificate types
-### Azure-managed certificate
+### AFD managed certificate for Non-Azure pre-validated domain
-Azure-managed certificates are automatically rotated when your custom domain uses a CNAME record that points to an Azure Front Door standard or premium endpoint.
+AFD managed certificates are automatically rotated when your custom domain uses a CNAME record that points to an Azure Front Door Standard or Premium endpoint.
Front Door won't automatically rotate certificates in the following scenarios:
-* The custom domain's CNAME record is pointing to other DNS resources.
-* The custom domain points to Azure Front Door through a long chain. For example, if you put Azure Traffic Manager before Azure Front Door, the CNAME chain is `contoso.com` CNAME in `contoso.trafficmanager.net` CNAME in `contoso.z01.azurefd.net`.
+* The custom domain CNAME record is pointing to other DNS resources.
+* The custom domain points to Azure Front Door through a longer chain. For example, if you put Azure Traffic Manager in front of Azure Front Door, the CNAME chain is `contoso.com` CNAME to `contoso.trafficmanager.net` CNAME to `contoso.z01.azurefd.net`. You can inspect the chain yourself, as shown in the sketch after this list.
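To see which of these situations applies, resolve the custom domain and review each hop in the CNAME chain. A minimal sketch using `dig` with the example names above:

```bash
# Follow the CNAME chain for the custom domain; each hop appears in the answer section
dig contoso.com +noall +answer
```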
The domain validation state will become *Pending Revalidation* 45 days before the managed certificate expires, or *Rejected* if the managed certificate issuance is rejected by the certificate authority. Refer to [Add a custom domain](how-to-add-custom-domain.md#domain-validation-state) for actions for each of the domain states.
+### Azure managed certificate for Azure pre-validated domain
+
+Azure managed certificates are automatically rotated by the Azure service that validates the domain.
+ ### <a name="rotate-own-certificate"></a>Use your own certificate
-In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your key vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be automatically deployed.
+In order for the certificate to automatically be rotated to the latest version when a newer version of the certificate is available in your key vault, set the secret version to 'Latest'. If a specific version is selected, you have to reselect the new version manually for certificate rotation. It takes up to 24 hours for the new version of the certificate/secret to be automatically deployed.
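To see which versions of your certificate exist in the key vault before deciding between 'Latest' and a pinned version, you can list them with the Azure CLI; the vault and certificate names here are placeholders:

```bash
# List all versions of the certificate stored in the key vault
az keyvault certificate list-versions \
  --vault-name MyKeyVault \
  --name MyCertificate
```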
If you want to change the secret version from 'Latest' to a specified version or vice versa, add a new certificate.
frontdoor Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/web-application-firewall.md
An IP addressΓÇôbased access control rule is a custom WAF rule that lets you con
## Rate limiting
-A custom rate limit rule controls access based on matching conditions and the rates of incoming requests. For more information, see [configure rate limit](../web-application-firewall/afds/waf-front-door-rate-limit-powershell.md).
+A custom rate limit rule controls access based on matching conditions and the rates of incoming requests. For more information, see [What is rate limiting for Azure Front Door Service?](../web-application-firewall/afds/waf-front-door-rate-limit.md).
## Tuning
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
Azure Policy for Kubernetes supports the following cluster environments:
- [Azure Kubernetes Service (AKS)](../../../aks/intro-kubernetes.md) - [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md)-- [AKS Engine](https://github.com/Azure/aks-engine/blob/master/docs/README.md) > [!IMPORTANT]
-> The add-ons for AKS Engine and Arc enabled Kubernetes are in **preview**. Azure Policy for
-> Kubernetes only supports Linux node pools and built-in policy definitions (custom policy
-> definitions is a _public preview_ feature). Built-in policy definitions are in the **Kubernetes**
-> category. The limited preview policy definitions with **EnforceOPAConstraint** and
-> **EnforceRegoPolicy** effect and the related **Kubernetes Service** category are _deprecated_.
-> Instead, use the effects _audit_ and _deny_ with Resource Provider mode
-> `Microsoft.Kubernetes.Data`.
+> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on). The Azure Policy Extension for Azure Arc enabled Kubernetes is in _preview_.
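As a rough sketch of the extension-based install that replaces the Helm model, the Azure Policy extension can be created on an Arc-enabled cluster with the `k8s-extension` Azure CLI extension. The cluster and resource group names are placeholders; check the install section linked above for the full prerequisites:

```bash
# Install the Azure Policy extension on an Azure Arc enabled Kubernetes cluster
# (requires the k8s-extension Azure CLI extension)
az k8s-extension create \
  --cluster-type connectedClusters \
  --cluster-name MyArcCluster \
  --resource-group MyResourceGroup \
  --extension-type Microsoft.PolicyInsights \
  --name azurepolicy
```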
## Overview
role-based access control (Azure RBAC) policy assignment operations. The Azure b
**Resource Policy Contributor** and **Owner** have these operations. To learn more, see [Azure RBAC permissions in Azure Policy](../overview.md#azure-rbac-permissions-in-azure-policy).
-> [!NOTE]
-> Custom policy definitions is a _public preview_ feature.
- Find the built-in policy definitions for managing your cluster using the Azure portal with the following steps. If using a custom policy definition, search for it by name or the category that you created it with.
you created it with.
> [!NOTE] > When assigning the Azure Policy for Kubernetes definition, the **Scope** must include the
- > cluster resource. For an AKS Engine cluster, the **Scope** must be the resource group of the
- > cluster.
+ > cluster resource.
1. Give the policy assignment a **Name** and **Description** that you can use to identify it easily.
To remove the Azure Policy Add-on from your AKS cluster, use either the Azure po
az aks disable-addons --addons azure-policy --name MyAKSCluster --resource-group MyResourceGroup ```
+### Remove the add-on from Azure Arc enabled Kubernetes
+ > [!NOTE] > Azure Policy Add-on Helm model is now deprecated. Please opt for the [Azure Policy Extension for Azure Arc enabled Kubernetes](#install-azure-policy-extension-for-azure-arc-enabled-kubernetes) instead.
-### Remove the add-on from Azure Arc enabled Kubernetes
- To remove the Azure Policy Add-on and Gatekeeper from your Azure Arc enabled Kubernetes cluster, run the following Helm command:
the following Helm command:
helm uninstall azure-policy-addon ```
+### Remove the add-on from AKS Engine
+ > [!NOTE] > The AKS Engine product is now deprecated for Azure public cloud customers. Please consider using [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) for managed Kubernetes or [Cluster API Provider Azure](https://github.com/kubernetes-sigs/cluster-api-provider-azure) for self-managed Kubernetes. There are no new features planned; this project will only be updated for CVEs & similar, with Kubernetes 1.24 as the final version to receive updates.
-### Remove the add-on from AKS Engine
- To remove the Azure Policy Add-on and Gatekeeper from your AKS Engine cluster, use the method that aligns with how the add-on was installed:
hdinsight Hdinsight Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-capacity-planning.md
description: Identify key questions for capacity and performance planning of an
Previously updated : 04/27/2022 Last updated : 09/08/2022 # Capacity planning for HDInsight clusters
HDInsight is available in many Azure regions. To find the closest region, see [P
### Location of default storage
-The default storage, either an Azure Storage account or Azure Data Lake Storage, must be in the same location as your cluster. Azure Storage is available at all locations. Data Lake Storage Gen1 is available in some regions - see the current [Data Lake Storage availability](https://azure.microsoft.com/global-infrastructure/services/?products=storage).
-
+The default storage, either an Azure Storage account or Azure Data Lake Storage, must be in the same location as your cluster. Azure Storage is available at all locations. Data Lake Storage is available in some regions - see the current [Data Lake Storage availability](https://azure.microsoft.com/global-infrastructure/services/?products=storage).
### Location of existing data If you want to use an existing storage account or Data Lake Storage as your cluster's default storage, then you must deploy your cluster at that same location. ### Storage size
-On a deployed cluster, you can attach additional Azure Storage accounts or access other Data Lake Storage. All your storage accounts must live in the same location as your cluster. A Data Lake Storage can be in a different location, though great distances may introduce some latency.
-
-Azure Storage has some [capacity limits](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits), while Data Lake Storage Gen1 is almost unlimited.
+On a deployed cluster, you can attach more Azure Storage accounts or access other Data Lake Storage. All your storage accounts must live in the same location as your cluster. A Data Lake Storage can be in a different location, though great distances may introduce some latency.
+Azure Storage has some [capacity limits](../azure-resource-manager/management/azure-subscription-service-limits.md#storage-limits), while Data Lake Storage is almost unlimited.
A cluster can access a combination of different storage accounts. Typical examples include: * When the amount of data is likely to exceed the storage capacity of a single blob storage
For more information on how to choose the right VM family for your workload, see
## Choose the cluster scale
-A cluster's scale is determined by the quantity of its VM nodes. For all cluster types, there are node types that have a specific scale, and node types that support scale-out. For example, a cluster may require exactly three [Apache ZooKeeper](https://zookeeper.apache.org/) nodes or two Head nodes. Worker nodes that do data processing in a distributed fashion benefit from the additional worker nodes.
+A cluster's scale is determined by the quantity of its VM nodes. For all cluster types, there are node types that have a specific scale, and node types that support scale-out. For example, a cluster may require exactly three [Apache ZooKeeper](https://zookeeper.apache.org/) nodes or two Head nodes. Worker nodes that do data processing in a distributed fashion benefit from adding more worker nodes.
-Depending on your cluster type, increasing the number of worker nodes adds additional computational capacity (such as more cores). More nodes will increase the total memory required for the entire cluster to support in-memory storage of data being processed. As with the choice of VM size and type, selecting the right cluster scale is typically reached empirically. Use simulated workloads or canary queries.
+Depending on your cluster type, increasing the number of worker nodes adds more computational capacity (such as more cores). More nodes will increase the total memory required for the entire cluster to support in-memory storage of data being processed. As with the choice of VM size and type, selecting the right cluster scale is typically reached empirically. Use simulated workloads or canary queries.
You can scale out your cluster to meet peak load demands. Then scale it back down when those extra nodes are no longer needed. The [Autoscale feature](hdinsight-autoscale-clusters.md) allows you to automatically scale your cluster based upon predetermined metrics and timings. For more information on scaling your clusters manually, see [Scale HDInsight clusters](hdinsight-scaling-best-practices.md).
For more information on managing subscription quotas, see [Requesting quota incr
* [Set up clusters in HDInsight with Apache Hadoop, Spark, Kafka, and more](hdinsight-hadoop-provision-linux-clusters.md): Learn how to set up and configure clusters in HDInsight. * [Monitor cluster performance](hdinsight-key-scenarios-to-monitor.md): Learn about key scenarios to monitor for your HDInsight cluster that might affect your cluster's capacity.+
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
Each data export definition can send data to one or more destinations. Create th
Use the following request to create or update a destination definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
``` * destinationId - Unique ID for the destination.
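As an illustration, the request above can be sent with curl; this sketch assumes an IoT Central API token (or bearer token) in the `Authorization` header and a `destination.json` file containing the destination definition. The subdomain, destination ID, and token are placeholders:

```bash
# Create or update a data export destination
curl -X PUT \
  "https://myapp.azureiotcentral.com/api/dataExport/destinations/my-destination?api-version=2022-06-30-preview" \
  -H "Authorization: $IOTC_TOKEN" \
  -H "Content-Type: application/json" \
  -d @destination.json
```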
The response to this request looks like the following example:
Use the following request to retrieve details of a destination from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a list of destinations from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/destinations?api-version=2022-06-30-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Patch a destination ```http
-PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
+PATCH https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
``` You can use this to perform an incremental update to an export. The sample request body looks like the following example which updates the `displayName` to a destination:
The response to this request looks like the following example:
Use the following request to delete a destination: ```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
``` ### Create or update an export definition
DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destination
Use the following request to create or update a data export definition: ```http
-PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-06-30-preview
``` The following example shows a request body that creates an export definition for device telemetry:
The response to this request looks like the following example:
Use the following request to retrieve details of an export definition from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports/{exportId}?api-version=2022-06-30-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve a list of export definitions from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=1.2-preview
+GET https://{subdomain}.{baseDomain}/api/dataExport/exports?api-version=2022-06-30-preview
``` The response to this request looks like the following example:
The response to this request looks like the following example:
### Patch an export definition ```http
-PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=1.2-preview
+PATCH https://{subdomain}.{baseDomain}/dataExport/exports/{exportId}?api-version=2022-06-30-preview
``` You can use this to perform an incremental update to an export. The sample request body looks like the following example which updates the `enrichments` to an export:
The response to this request looks like the following example:
Use the following request to delete an export definition: ```http
-DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=1.2-preview
+DELETE https://{subdomain}.{baseDomain}/api/dataExport/destinations/{destinationId}?api-version=2022-06-30-preview
``` ## Next steps
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
The response to this request looks like the following example:
You can use ODATA filters to filter the results returned by the list device templates API.
-> [!NOTE]
-> Currently, ODATA support is only available for `api-version=1.2-preview`.
- ### $top Use the **$top** filter to set the result size. The maximum returned result size is 100, and the default size is 25.
Use the **$top** filter to set the result size. The maximum returned result size
Use the following request to retrieve the top 10 device templates from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$top=10
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-07-31&$top=10
``` The response to this request looks like the following example:
The response to this request looks like the following example:
}, ... ],
- "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/deviceTemplates?api-version=1.2-preview&%24top=1&%24skiptoken=%7B%22token%22%3A%22%2BRID%3A%7EJWYqAKZQKp20qCoAAAAACA%3D%3D%23RT%3A1%23TRC%3A1%23ISV%3A2%23IEO%3A65551%23QCF%3A4%22%2C%22range%22%3A%7B%22min%22%3A%2205C1DFFFFFFFFC%22%2C%22max%22%3A%22FF%22%7D%7D"
+ "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/deviceTemplates?api-version=2022-07-31&%24top=1&%24skiptoken=%7B%22token%22%3A%22%2BRID%3A%7EJWYqAKZQKp20qCoAAAAACA%3D%3D%23RT%3A1%23TRC%3A1%23ISV%3A2%23IEO%3A65551%23QCF%3A4%22%2C%22range%22%3A%7B%22min%22%3A%2205C1DFFFFFFFFC%22%2C%22max%22%3A%22FF%22%7D%7D"
} ```
$filter=contains(displayName, 'template1) eq false
The following example shows how to retrieve all the device templates where the display name contains the string `thermostat`: ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$filter=contains(displayName, 'thermostat')
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')
``` The response to this request looks like the following example:
$orderby=displayName desc
The following example shows how to retrieve all the device templates where the result is sorted by `displayName` : ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$orderby=displayName
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-07-31&$orderby=displayName
``` The response to this request looks like the following example:
You can also combine two or more filters.
The following example shows how to retrieve the top 2 device templates where the display name contains the string `thermostat`. ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$filter=contains(displayName, 'thermostat')&$top=2
+GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')&$top=2
``` The response to this request looks like the following example:
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
The response to this request looks like the following example:
You can use ODATA filters to filter the results returned by the list devices API.
-> [!NOTE]
-> Currently, ODATA support is only available for `api-version=1.2-preview`
- ### $top Use the **$top** to set the result size, the maximum returned result size is 100, the default size is 25.
Use the **$top** to set the result size, the maximum returned result size is 100
Use the following request to retrieve a top 10 device from your application: ```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.2-preview&$top=10
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-07-31&$top=10
``` The response to this request looks like the following example:
The response to this request looks like the following example:
}, ... ],
- "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/devices?api-version=1.2-preview&%24top=1&%24skiptoken=%257B%2522token%2522%253A%2522%252BRID%253A%7EJWYqAOis7THQbBQAAAAAAg%253D%253D%2523RT%253A1%2523TRC%253A1%2523ISV%253A2%2523IEO%253A65551%2523QCF%253A4%2522%252C%2522range%2522%253A%257B%2522min%2522%253A%2522%2522%252C%2522max%2522%253A%252205C1D7F7591D44%2522%257D%257D"
+ "nextLink": "https://custom-12qmyn6sm0x.azureiotcentral.com/api/devices?api-version=2022-07-31&%24top=1&%24skiptoken=%257B%2522token%2522%253A%2522%252BRID%253A%7EJWYqAOis7THQbBQAAAAAAg%253D%253D%2523RT%253A1%2523TRC%253A1%2523ISV%253A2%2523IEO%253A65551%2523QCF%253A4%2522%252C%2522range%2522%253A%257B%2522min%2522%253A%2522%2522%252C%2522max%2522%253A%252205C1D7F7591D44%2522%257D%257D"
} ```
Use **$filter** to create expressions that filter the list of devices. The follo
| -- | | | | Equals | eq | id eq 'device1' and scopes eq 'redmond' | | Not Equals | ne | Enabled ne true |
-| Less than or equals | le | indexof(displayName, 'device1') le -1 |
-| Less than | lt | indexof(displayName, 'device1') lt 0 |
-| Greater than or equals | ge | indexof(displayName, 'device1') ge 0 |
-| Greater than | gt | indexof(displayName, 'device1') gt 0 |
+| Less than or equals | le | contains(displayName, 'device1') le -1 |
+| Less than | lt | contains(displayName, 'device1') lt 0 |
+| Greater than or equals | ge | contains(displayName, 'device1') ge 0 |
+| Greater than | gt | contains(displayName, 'device1') gt 0 |
The following table shows the logic operators you can use in *$filter* expressions:
Currently, *$filter* works with the following device fields:
**$filter supported functions:**
-Currently, the only supported filter function for device lists is the `indexof` function:
+Currently, the only supported filter function for device lists is the `contains` function:
```
-$filter=indexof(displayName, 'device1') ge 0
+$filter=contains(displayName, 'device1')
```
-The following example shows how to retrieve all the devices where the display name has index the string `thermostat`:
+The following example shows how to retrieve all the devices where the display name contains the string `thermostat`:
```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$filter=index(displayName, 'thermostat')
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')
``` The response to this request looks like the following example:
$orderby=displayName desc
The following example shows how to retrieve all the device templates where the result is sorted by `displayName` : ```http
-GET https://{subdomain}.{baseDomain}/api/devices?api-version=1.2-preview&$orderby=displayName
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-07-31&$orderby=displayName
``` The response to this request looks like the following example:
You can also combine two or more filters.
The following example shows how to retrieve the top 2 device where the display name contains the string `thermostat`. ```http
-GET https://{subdomain}.{baseDomain}/api/deviceTemplates?api-version=1.2-preview&$filter=contains(displayName, 'thermostat')&$top=2
+GET https://{subdomain}.{baseDomain}/api/devices?api-version=2022-07-31&$filter=contains(displayName, 'thermostat')&$top=2
``` The response to this request looks like the following example:
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
The IoT Central REST API lets you develop client applications that integrate wit
- List jobs and view job details in your application. - Create jobs in your application. - Stop, resume, and rerun jobs in your application.
+- Schedule jobs and view scheduled job details in your application.
-> [!IMPORTANT]
-> The jobs API is currently in preview. All The REST API calls described in this article should include `?api-version=1.2-preview`.
+Scheduled jobs are created to run at a future time. You can set a start date and time for a scheduled job to run once, daily, or weekly. Non-scheduled jobs run only once.
This article describes how to use the `/jobs/{job_id}` API to control devices in bulk. You can also control devices individually.
The following table describes the fields in the previous JSON snippet:
Use the following request to retrieve the list of the jobs in your application: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/jobs?api-version=1.2-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve an individual job by ID: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004?api-version=1.2-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to retrieve the details of the devices in a job: ```http
-GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004/devices?api-version=1.2-preview
+GET https://{your app subdomain}.azureiotcentral.com/api/jobs/job-004/devices?api-version=2022-07-31
``` The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to create a job: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006?api-version=1.2-preview
+PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006?api-version=2022-07-31
``` The `group` field in the request body identifies a device group in your IoT Central application. A job uses a device group to identify the set of devices the job operates on.
The `group` field in the request body identifies a device group in your IoT Cent
If you don't already have a suitable device group, you can create one with REST API call. The following example creates a device group with `group1` as the group ID: ```http
-PUT https://{subdomain}.{baseDomain}/api/deviceGroups/group1?api-version=1.2-preview
+PUT https://{subdomain}.{baseDomain}/api/deviceGroups/group1?api-version=2022-07-31
``` When you create a device group, you define a `filter` that selects the devices to include in the group. A filter identifies a device template and any properties to match. The following example creates device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" device template where the `provisioned` property is `true`.
The response to this request looks like the following example. The initial job s
Use the following request to stop a running job: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/stop?api-version=1.2-preview
+POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/stop?api-version=2022-07-31
``` If the request succeeds, it returns a `204 - No Content` response.
If the request succeeds, it returns a `204 - No Content` response.
Use the following request to resume a stopped job: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/resume?api-version=1.2-preview
+POST https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/resume?api-version=2022-07-31
``` If the request succeeds, it returns a `204 - No Content` response.
If the request succeeds, it returns a `204 - No Content` response.
Use the following command to rerun an existing job on any failed devices: ```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/rerun/rerun-001?api-version=1.2-preview
+PUT https://{your app subdomain}.azureiotcentral.com/api/jobs/job-006/rerun/rerun-001?api-version=2022-07-31
+```
+
+## Create a scheduled job
+
+The payload for a scheduled job is similar to a standard job but includes the following additional fields:
+
+| Field | Description |
+| -- | -- |
+| schedule/start | The start date and time for the job in ISO 8601 format |
+| schedule/recurrence | One of `daily`, `weekly`, `monthly`, or `yearly` |
+| schedule/end | An optional field that specifies either the number of occurrences for the job or an end date in ISO 8601 format |
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/scheduledJobs/scheduled-Job-001?api-version=2022-07-31
+```
+
+The following example shows a request body that creates a scheduled job.
+
+```json
+{
+ "displayName": "New Scheduled Job",
+ "group": "6fecf96f-a26c-49ed-8076-6960f8efba31",
+ "data": [
+ {
+ "type": "cloudProperty",
+ "target": "dtmi:azurertos:devkit:hlby5jgib2o",
+ "path": "Company",
+ "value": "Contoso"
+ }
+ ],
+ "schedule": {
+ "start": "2022-10-24T22:29:01Z",
+ "recurrence": "daily",
+ "end": {
+ "type": "date",
+ "date": "2022-12-30"
+ }
+ }
+}
+```
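As a sketch, the same request can be sent with curl, assuming the request body above is saved as `scheduled-job.json` and an IoT Central API token (or bearer token) is used for authorization. The subdomain and token are placeholders:

```bash
# Create the scheduled job defined in scheduled-job.json
curl -X PUT \
  "https://myapp.azureiotcentral.com/api/scheduledJobs/scheduled-Job-001?api-version=2022-07-31" \
  -H "Authorization: $IOTC_TOKEN" \
  -H "Content-Type: application/json" \
  -d @scheduled-job.json
```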
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "scheduled-Job-001",
+ "displayName": "New Scheduled Job",
+ "description": "",
+ "group": "6fecf96f-a26c-49ed-8076-6960f8efba31",
+ "data": [
+ {
+ "type": "cloudProperty",
+ "target": "dtmi:azurertos:devkit:hlby5jgib2o",
+ "path": "Company",
+ "value": "Contoso"
+ }
+ ],
+ "schedule": {
+ "start": "2022-10-24T22:29:01Z",
+ "recurrence": "daily",
+ "end": {
+ "type": "date",
+ "date": "2022-12-30"
+ }
+ },
+ "enabled": false,
+ "completed": false,
+ "etag": "\"88003877-0000-0700-0000-631020670000\""
+}
+```
+
+## Get a scheduled job
+
+Use the following request to get a scheduled job:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/scheduledJobs/scheduled-Job-001?api-version=2022-07-31
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "scheduled-Job-001",
+ "displayName": "New Scheduled Job",
+ "description": "",
+ "group": "6fecf96f-a26c-49ed-8076-6960f8efba31",
+ "data": [
+ {
+ "type": "cloudProperty",
+ "target": "dtmi:azurertos:devkit:hlby5jgib2o",
+ "path": "Company",
+ "value": "Contoso"
+ }
+ ],
+ "schedule": {
+ "start": "2022-10-24T22:29:01Z",
+ "recurrence": "daily"
+ },
+ "enabled": false,
+ "completed": false,
+ "etag": "\"88003877-0000-0700-0000-631020670000\""
+}
+```
+
+## List scheduled jobs
+
+Use the following request to get a list of scheduled jobs:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/scheduledJobs?api-version=2022-07-31
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "scheduled-Job-001",
+ "displayName": "New Scheduled Job",
+ "description": "",
+ "group": "6fecf96f-a26c-49ed-8076-6960f8efba31",
+ "data": [
+ {
+ "type": "cloudProperty",
+ "target": "dtmi:azurertos:devkit:hlby5jgib2o",
+ "path": "Company",
+ "value": "Contoso"
+ }
+ ],
+ "schedule": {
+ "start": "2022-10-24T22:29:01Z",
+ "recurrence": "daily"
+ },
+ "enabled": false,
+ "completed": false,
+ "etag": "\"88003877-0000-0700-0000-631020670000\""
+ },
+ {
+ "id": "46480dff-dc22-4542-924e-a5d45bf347aa",
+ "displayName": "test",
+ "description": "",
+ "group": "cdd04344-bb55-425b-a55a-954d68383289",
+ "data": [
+ {
+ "type": "cloudProperty",
+ "target": "dtmi:rigado:evxfmi0xim",
+ "path": "test",
+ "value": 2
+ }
+ ],
+ "schedule": {
+ "start": "2022-09-01T03:00:00.000Z"
+ },
+ "enabled": true,
+ "completed": true,
+ "etag": "\"88000f76-0000-0700-0000-631020310000\""
+ }
+ ]
+}
+```
+
+## Update a scheduled job
+
+Use the following request to update a scheduled job:
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/scheduledJobs/scheduled-Job-001?api-version=2022-07-31
+```
+
+The following example shows a request body that updates a scheduled job.
+
+```json
+{
+ "schedule": {
+ "start": "2022-10-24T22:29:01Z",
+ "recurrence": "weekly"
+ }
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "scheduled-Job-001",
+ "displayName": "New Scheduled Job",
+ "description": "",
+ "group": "6fecf96f-a26c-49ed-8076-6960f8efba31",
+ "data": [
+ {
+ "type": "cloudProperty",
+ "target": "dtmi:azurertos:devkit:hlby5jgib2o",
+ "path": "Company",
+ "value": "Contoso"
+ }
+ ],
+ "schedule": {
+ "start": "2022-10-24T22:29:01Z",
+ "recurrence": "weekly"
+ },
+ "enabled": false,
+ "completed": false,
+ "etag": "\"88003877-0000-0700-0000-631020670000\""
+}
+```
+
+## Delete a scheduled job
+
+Use the following request to delete a scheduled job:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/scheduledJobs/scheduled-Job-001?api-version=2022-07-31
``` ## Next steps
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
To learn how to query devices by using the IoT Central UI, see [How to use data
Use the following request to run a query: ```http
-POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=1.2-preview
+POST https://{your app subdomain}.azureiotcentral.com/api/query?api-version=2022-06-30-preview
``` The query is in the request body and looks like the following example:
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
This article explains the limits and restrictions when using IoT Edge.
## Limits ### Number of children in gateway hierarchy
-Each IoT Edge parent device in gateway hierarchies can have up to 100 connected child devices by default. This limit can be changed by setting the **MaxConnectedClients** environment variable in the parent device's edgeHub module.
+Each IoT Edge parent device in gateway hierarchies can have up to 100 connected child devices by default.
+
+However, it's important to know that each IoT Edge device in a nested topology opens a separate logical connection to the parent EdgeHub (or IoT Hub) on behalf of each connected client (device or module), plus one connection for itself. So the connections at each layer aren't aggregated, they're added.
+
+For example, if there are two IoT Edge child devices in layer L4, and each in turn has 100 clients, then the parent IoT Edge device in the layer above (L5) would have 202 total incoming connections from L4: 2 × 100 client connections plus one connection for each of the two child devices themselves.
+
+This limit can be changed by setting the **MaxConnectedClients** environment variable in the parent device's edgeHub module. But IoT Edge can run into issues with reporting its state in the twin reported properties if the number of clients exceeds a few hundred because of the IoT Hub twin size limit. In general, be careful when increasing the limit by changing this environment variable.
For more information, see [Create a gateway hierarchy](how-to-connect-downstream-iot-edge-device.md#create-a-gateway-hierarchy).
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
Azure IoT Edge is a product built from the open-source IoT Edge project hosted on GitHub. All new releases are made available in the [Azure IoT Edge project](https://github.com/Azure/azure-iotedge). Contributions and bug reports can be made on the [open-source IoT Edge project](https://github.com/Azure/iotedge).
-Azure IoT Edge is governed by Microsoft's [Modern Lifecycle Policy](/lifecycle/policies/modern).
+Azure IoT Edge is governed by Microsoft's [Modern Lifecycle Policy](/lifecycle/products/azure-iot-edge).
## Documented versions
All new releases are made available in the [Azure IoT Edge for Linux on Windows
This table provides recent version history for IoT Edge package releases, and highlights documentation updates made for each version.
+>[!NOTE]
+>Long-term servicing (LTS) releases are serviced for a fixed period. Updates to this release type contain critical security and bug fixes only. All other stable releases are continuously supported and serviced. A stable release may contain feature updates along with critical security fixes. Stable releases are supported only until the next release (stable or LTS) is generally available.
+ | Release notes and assets | Type | Date | Highlights | | | - | - | - | | [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288)
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
In this section, you'll create a virtual network, subnet, and Azure Bastion host
| Resource Group | Select **Create new**. </br> In **Name** enter **CreatePubLBQS-rg**. </br> Select **OK**. | | **Instance details** | | | Name | Enter **myVNet** |
- | Region | Select **West Europe** |
+ | Region | Select **West US** |
4. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
In this section, you'll create a virtual network, subnet, and Azure Bastion host
| Setting | Value | |--|-| | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/27** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
| Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. | 11. Select the **Review + create** tab or select the **Review + create** button.
During the creation of the load balancer, you'll configure:
| Resource group | Select **CreatePubLBQS-rg**. | | **Instance details** | | | Name | Enter **myLoadBalancer** |
- | Region | Select **West Europe**. |
+ | Region | Select **West US**. |
| SKU | Leave the default **Standard**. | | Type | Select **Public**. | | Tier | Leave the default **Regional**. |
During the creation of the load balancer, you'll configure:
6. Enter **myFrontend** in **Name**.
-7. Select **IPv4** or **IPv6** for the **IP version**.
-
- > [!NOTE]
- > IPv6 isn't currently supported with Routing Preference or Cross-region load-balancing (Global Tier).
+7. Select **IPv4** for the **IP version**.
8. Select **IP address** for the **IP type**.
During the creation of the load balancer, you'll configure:
18. Select **myVNet** in **Virtual network**.
-19. Select **NIC** or **IP Address** for **Backend Pool Configuration**.
-
-20. Select **IPv4** or **IPv6** for **IP version**.
+19. Select **IP Address** for **Backend Pool Configuration**.
-21. Select **Add**.
+21. Select **Save**.
22. Select **Next: Inbound rules** at the bottom of the page.
-23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+23. Under **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
24. In **Add load balancing rule**, enter or select the following information:
During the creation of the load balancer, you'll configure:
| - | -- | | Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **myFrontend**. |
+ | Frontend IP address | Select **myFrontend (To be created)**. |
| Backend pool | Select **myBackendPool**. | | Protocol | Select **TCP**. | | Port | Enter **80**. |
During the creation of the load balancer, you'll configure:
27. Select **Create**. > [!NOTE]
- > In this example we'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > In this example we'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md) ## Create NAT gateway
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network. For other options for outbound rules, check out [Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md).
1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
In this section, you'll create a NAT gateway for outbound internet access for re
| Resource group | Select **CreatePubLBQS-rg**. |
| **Instance details** | |
| NAT gateway name | Enter **myNATgateway**. |
- | Region | Select **West Europe**. |
+ | Region | Select **West US**. |
| Availability zone | Select **None**. |
| Idle timeout (minutes) | Enter **15**. |
These VMs are added to the backend pool of the load balancer that was created ea
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
+2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
3. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
These VMs are added to the backend pool of the load balancer that was created ea
| Resource Group | Select **CreatePubLBQS-rg** |
| **Instance details** | |
| Virtual machine name | Enter **myVM1** |
- | Region | Select **(Europe) West Europe** |
 + | Region | Select **(US) West US** |
| Availability Options | Select **Availability zones** |
| Availability zone | Select **Zone 1** |
| Security type | Select **Standard**. |
These VMs are added to the backend pool of the load balancer that was created ea
| Subnet | Select **myBackendSubnet** |
| Public IP | Select **None**. |
| NIC network security group | Select **Advanced** |
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**.|
| Delete NIC when VM is deleted | Leave the default of **unselected**. |
| Accelerated networking | Leave the default of **selected**. |
| **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
- | **Load balancing settings** |
+ | **Load balancing options** |
| Load-balancing options | Select **Azure load balancer** |
| Select a load balancer | Select **myLoadBalancer** |
| Select a backend pool | Select **myBackendPool** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
6. Select **Review + create**.
These VMs are added to the backend pool of the load balancer that was created ea
5. Select **Connect**.
-6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
+6. On the server desktop, navigate to **Start** > **Windows PowerShell** > **Windows PowerShell**.
7. In the PowerShell Window, run the following commands to:
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
### Data retention
-When you run a load test, Azure Load Testing stores both client-side and [server-side metrics](./how-to-monitor-server-side-metrics.md) for the test run. Azure Load Testing has a per-test-run limit on the retention period for this data:
-
-| Resource | Limit |
-|||
-| Server-side metrics | 90 days |
-| Client-side metrics | 365 days |
-
-The test run associated with the load test isn't removed.
+Azure Load Testing captures metrics, test results, and logs for each test run. The following data retention limits apply:
+
+| Resource | Limit | Notes |
+|-|-|-|
+| Server-side metrics | 90 days | Learn how to [configure server-side metrics](./how-to-monitor-server-side-metrics.md). |
+| Client-side metrics | 365 days | |
+| Test results | 6 months | Learn how to [export test results](./how-to-export-test-results.md). |
+| Test log files | 6 months | Learn how to [download the logs for troubleshooting tests](./how-to-find-download-logs.md). |
## Request quota increases
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
The core of a machine learning pipeline is to split a complete machine learning
Machine learning operations (MLOps) automates the process of building machine learning models and taking the model to production. This is a complex process. It usually requires collaboration from different teams with different skills. A well-defined machine learning pipeline can abstract this complex process into a multiple-step workflow, mapping each step to a specific task such that each team can work independently.
-For example, a typical machine learning project includes the steps of data collection, data preparation, model training, model evaluation, and model deployment. Usually, the data engineers concentrate on data steps, data scientists spend most time on model training and evaluation, the machine learning engineers are focus on model deployment and automation of the entire workflow. By leveraging machine learning pipeline, each team only needs to work on building their own steps. The best way of building steps is using [Azure Machine Learning component](concept-component.md), a self-contained piece of code that does one step in a machine learning pipeline. All these steps built by different users are finally integrated into one workflow through the pipeline definition. The pipeline is a collaboration tool for everyone in the project. The process of defining a pipeline and all its steps can be standardized by each company's preferred DevOps practice. The pipeline can be further versioned and automated. If the ML projects are described as a pipeline, then the best MLOps practice is already applied.
+For example, a typical machine learning project includes the steps of data collection, data preparation, model training, model evaluation, and model deployment. Usually, the data engineers concentrate on data steps, data scientists spend most of their time on model training and evaluation, and the machine learning engineers focus on model deployment and automation of the entire workflow. By leveraging a machine learning pipeline, each team only needs to work on building their own steps. The best way of building steps is to use an [Azure Machine Learning component](concept-component.md), a self-contained piece of code that does one step in a machine learning pipeline. All these steps built by different users are finally integrated into one workflow through the pipeline definition. The pipeline is a collaboration tool for everyone in the project. The process of defining a pipeline and all its steps can be standardized by each company's preferred DevOps practice. The pipeline can be further versioned and automated. If the ML projects are described as a pipeline, then the best MLOps practice is already applied.
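To make the component idea concrete, the following is a minimal, illustrative sketch (not from this article) of how two independently owned steps could be composed into one pipeline with the Azure Machine Learning Python SDK v2. The script names, source folders, curated environment, and compute target are placeholders/assumptions.

```python
# Illustrative sketch only: two command steps, each owned by a different team,
# composed into one pipeline. Paths, environment, and compute are assumptions.
from azure.ai.ml import MLClient, command, dsl, Input, Output
from azure.identity import DefaultAzureCredential

ENV = "AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest"  # assumed curated environment

prep_data = command(
    name="prep_data",                                    # owned by the data engineers
    command="python prep.py --raw ${{inputs.raw}} --prepped ${{outputs.prepped}}",
    code="./data_prep_src",
    inputs={"raw": Input(type="uri_folder")},
    outputs={"prepped": Output(type="uri_folder")},
    environment=ENV,
)

train_model = command(
    name="train_model",                                  # owned by the data scientists
    command="python train.py --data ${{inputs.data}} --model ${{outputs.model}}",
    code="./training_src",
    inputs={"data": Input(type="uri_folder")},
    outputs={"model": Output(type="uri_folder")},
    environment=ENV,
)

@dsl.pipeline(compute="cpu-cluster", description="prep + train")
def training_pipeline(raw_data):
    prep = prep_data(raw=raw_data)
    train = train_model(data=prep.outputs.prepped)
    return {"trained_model": train.outputs.model}

pipeline_job = training_pipeline(
    raw_data=Input(type="uri_folder", path="azureml://datastores/workspaceblobstore/paths/raw/")
)

# Submission is the same for everyone on the project.
ml_client = MLClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>", "<RESOURCE_GROUP>", "<WORKSPACE>")
ml_client.jobs.create_or_update(pipeline_job, experiment_name="demo-pipeline")
```

Each team can iterate on its own step independently; the pipeline definition is the only shared contract.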
### Training efficiency and cost reduction
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
These rule collections are described in more detail in [What are some Azure Fire
1. Add __Application rules__ for the following hosts: > [!NOTE]
- > This is not a complete list of the hosts required for all Python resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
+ > This is not a complete list of all the hosts you may need to communicate with; it includes only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
| **Host name** | **Purpose** |
| - | - |
These rule collections are described in more detail in [What are some Azure Fire
| **cloud.r-project.org** | Used when installing CRAN packages for R development. |
| **\*pytorch.org** | Used by some examples based on PyTorch. |
| **\*.tensorflow.org** | Used by some examples based on Tensorflow. |
- | **update.code.visualstudio.com**</br></br>**\*.vo.msecnd.net** | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script.|
+ | **\*vscode.dev**</br>**\*vscode-unpkg.net**</br>**\*vscode-cdn.net**</br>**\*vscodeexperiments.azureedge.net**</br>**default.exp-tas.com** | Required to access vscode.dev (Visual Studio Code for the Web) |
+ | **code.visualstudio.com** | Required to download and install VS Code desktop. This is not required for VS Code Web. |
+ | **update.code.visualstudio.com**</br>**\*.vo.msecnd.net** | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
+ | **marketplace.visualstudio.com**</br>**vscode.blob.core.windows.net**</br>**\*.gallerycdn.vsassets.io** | Required to download and install VS Code extensions. These enable the remote connection to Compute Instances provided by the Azure ML extension for VS Code, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) for more information. |
| **raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/\*** | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |
| **dc.applicationinsights.azure.com** | Used to collect metrics and diagnostics information when working with Microsoft support. |
| **dc.applicationinsights.microsoft.com** | Used to collect metrics and diagnostics information when working with Microsoft support. |
The hosts in this section are used to install Visual Studio Code packages to est
| **Host name** | **Purpose** |
| - | - |
-| **update.code.visualstudio.com**</br></br>**\*.vo.msecnd.net** | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script.|
+| **\*vscode.dev**</br>**\*vscode-unpkg.net**</br>**\*vscode-cdn.net**</br>**\*vscodeexperiments.azureedge.net**</br>**default.exp-tas.com** | Required to access vscode.dev (Visual Studio Code for the Web) |
+| **code.visualstudio.com** | Required to download and install VS Code desktop. This is not required for VS Code Web. |
+| **update.code.visualstudio.com**</br>**\*.vo.msecnd.net** | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
+| **marketplace.visualstudio.com**</br>**vscode.blob.core.windows.net**</br>**\*.gallerycdn.vsassets.io** | Required to download and install VS Code extensions. These enable the remote connection to Compute Instances provided by the Azure ML extension for VS Code, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) for more information. |
| **raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/\*** | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |

## Next steps
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Last updated 05/11/2022 -+ ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more information, see [What is automated machine learning (AutoML)?](concept-automated-ml.md). In this article, you'll learn how to deploy an AutoML-trained machine learning model to online endpoints using:

- Azure Machine Learning studio
-- Azure Machine Learning CLI (v2))
+- Azure Machine Learning CLI v2
+- Azure Machine Learning Python SDK v2
## Prerequisites
You'll need to modify this file to use the files you downloaded from the AutoML
az ml online-deployment create -f automl_deployment.yml ``` -- After you create a deployment, you can score it as described in [Invoke the endpoint to score data by using your model](how-to-deploy-managed-online-endpoints.md#invoke-the-endpoint-to-score-data-by-using-your-model). +
+# [Python](#tab/python)
++
+## Configure the Python SDK
+
+If you haven't installed the Python SDK v2 yet, install it with this command:
+
+```azurecli
+pip install --pre azure-ai-ml
+```
+
+For more information, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
+
+## Put the scoring file in its own directory
+
+Create a directory called `src/` and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.
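+As an illustration only, a small helper like the following creates the directory and copies the file into it; the scoring file name is an example (use the file you actually downloaded):
+
+```python
+# Illustrative sketch: create src/ and copy the downloaded scoring file into it.
+import os
+import shutil
+
+os.makedirs("src", exist_ok=True)
+shutil.copy("scoring_file_v_2_0_0.py", "src/")
+```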
+
+## Connect to Azure Machine Learning workspace
+
+1. Import the required libraries:
+
+ ```python
+ # import required libraries
+ from azure.ai.ml import MLClient
+ from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+ CodeConfiguration,
+ )
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Configure workspace details and get a handle to the workspace:
+
+ ```python
+ # enter details of your AzureML workspace
+ subscription_id = "<SUBSCRIPTION_ID>"
+ resource_group = "<RESOURCE_GROUP>"
+ workspace = "<AZUREML_WORKSPACE_NAME>"
+ ```
+
+ ```python
+ # get a handle to the workspace
+ ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+ )
+ ```
+
+## Create the endpoint and deployment
+
+Next, we'll create the managed online endpoints and deployments.
+
+1. Configure online endpoint:
+
+ > [!TIP]
+ > * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+ > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
++
+ ```python
+ # Creating a unique endpoint name with current datetime to avoid conflicts
+ import datetime
+
+ online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+ # create an online endpoint
+ endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is a sample online endpoint",
+ auth_mode="key",
+ )
+ ```
+
+1. Create the endpoint:
+
+ Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(endpoint)
+ ```
+
+1. Configure online deployment:
+
+ A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
+
+ ```python
+ model = Model(path="./src/model.pkl")
+ env = Environment(
+ conda_file="./src/conda_env_v_1_0_0.yml",
+ image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
+ )
+
+ blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ code_configuration=CodeConfiguration(
+ code="./src", scoring_script="scoring_file_v_2_0_0.py"
+ ),
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+ )
+ ```
+
+ In the above example, we assume the files you downloaded from the AutoML Models page are in the `src` directory. You can modify the parameters in the code to suit your situation.
+
+ | Parameter | Change to |
+ | | |
+ | `model:path` | The path to the `model.pkl` file you downloaded. |
+ | `code_configuration:code:path` | The directory in which you placed the scoring file. |
+ | `code_configuration:scoring_script` | The name of the Python scoring file (`scoring_file_<VERSION>.py`). |
+ | `environment:conda_file` | A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). |
+
+1. Create the deployment:
+
+ Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.
+
+ ```python
+ ml_client.begin_create_or_update(blue_deployment)
+ ```
+
+After you create a deployment, you can score it as described in [Test the endpoint with sample data](how-to-deploy-managed-online-endpoint-sdk-v2.md#test-the-endpoint-with-sample-data).
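+As a quick sketch (assuming you saved a request payload to a local `sample-request.json` that matches your model's input schema), invoking the deployment looks like this:
+
+```python
+# Score the deployment with a sample request file (placeholder file name).
+ml_client.online_endpoints.invoke(
+    endpoint_name=online_endpoint_name,
+    deployment_name="blue",
+    request_file="sample-request.json",
+)
+```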
+
+You can learn to deploy to managed online endpoints with SDK more in [Deploy machine learning models to managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).
+++

## Next steps

- [Troubleshooting online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Last updated 05/11/2022 -+ ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Learn how to deploy a custom container as an online endpoint in Azure Machine Learning.
Custom container deployments can use web servers other than the default Python F
## Prerequisites
-* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
- * You must have an Azure resource group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article. * You must have an Azure Machine Learning workspace. You'll have such a workspace if you configured your ML extension per the above article.
+* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
+
+# [CLI](#tab/CLI)
+
+* Install and configure the Azure CLI and ML extension. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+ * If you've not already set the defaults for Azure CLI, you should save your default settings. To avoid having to repeatedly pass in the values, run:
+
+   ```azurecli
+   az account set --subscription <subscription id>
+   az configure --defaults workspace=<azureml workspace name> group=<resource group>
+   ```
-* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
+# [Python](#tab/python)
+
+* If you haven't installed the Python SDK v2, install it with this command:
+
+ ```azurecli
+ pip install --pre azure-ai-ml
+ ```
+
+ For more information, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
++

## Download source code

To follow along with this tutorial, download the source code below.
+# [CLI](#tab/CLI)
```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/cli
```
+# [Python](#tab/python)
+
+```azurecli
+git clone https://github.com/Azure/azureml-examples --depth 1
+cd azureml-examples/sdk/endpoints/online/custom-container
+```
+++

## Initialize environment variables

Define environment variables:
Now that you've tested locally, stop the image:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-tfserving.sh" id="stop_image":::
-## Create a YAML file for your endpoint and deployment
+## Deploy your online endpoint to Azure
+Next, deploy your online endpoint to Azure.
+
+# [CLI](#tab/CLI)
+
+### Create a YAML file for your endpoint and deployment
You can configure your cloud deployment using YAML. Take a look at the sample YAML for this example:
__tfserving-deployment.yml__
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/custom-container/tfserving-deployment.yml":::
-There are a few important concepts to notice in this YAML:
+# [Python](#tab/python)
+
+### Connect to Azure Machine Learning workspace
+Connect to Azure Machine Learning Workspace, configure workspace details, and get a handle to the workspace as follows:
+
+1. Import the required libraries:
+
+```python
+# import required libraries
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import (
+ ManagedOnlineEndpoint,
+ ManagedOnlineDeployment,
+ Model,
+ Environment,
+ CodeConfiguration,
+)
+from azure.identity import DefaultAzureCredential
+```
+
+2. Configure workspace details and get a handle to the workspace:
+
+```python
+# enter details of your AzureML workspace
+subscription_id = "<SUBSCRIPTION_ID>"
+resource_group = "<RESOURCE_GROUP>"
+workspace = "<AZUREML_WORKSPACE_NAME>"
+
+# get a handle to the workspace
+ml_client = MLClient(
+ DefaultAzureCredential(), subscription_id, resource_group, workspace
+)
+```
+
+For more information, see [Deploy machine learning models to managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).
+
+### Configure online endpoint
+
+> [!TIP]
+> * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
+> * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
+
+Optionally, you can add a description and tags to your endpoint.
+
+```python
+# Creating a unique endpoint name with current datetime to avoid conflicts
+import datetime
+
+online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")
+
+# create an online endpoint
+endpoint = ManagedOnlineEndpoint(
+ name=online_endpoint_name,
+ description="this is a sample online endpoint",
+ auth_mode="key",
+ tags={"foo": "bar"},
+)
+```
+
+### Configure online deployment
+
+A deployment is a set of resources required for hosting the model that does the actual inferencing. We will create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
+
+> [!TIP]
+> - `name` - Name of the deployment.
+> - `endpoint_name` - Name of the endpoint to create the deployment under.
+> - `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
+> - `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
+> - `code_configuration` - the configuration for the source code and scoring script
+> - `path` - Path to the source code directory for scoring the model
+> - `scoring_script` - Relative path to the scoring file in the source code directory
+> - `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
+> - `instance_count` - The number of instances to use for the deployment
+
+```python
+# create a blue deployment
+model = Model(name="tfserving-mounted", version="1", path="half_plus_two")
+
+env = Environment(
+ image="docker.io/tensorflow/serving:latest",
+ inference_config={
+ "liveness_route": {"port": 8501, "path": "/v1/models/half_plus_two"},
+ "readiness_route": {"port": 8501, "path": "/v1/models/half_plus_two"},
+ "scoring_route": {"port": 8501, "path": "/v1/models/half_plus_two:predict"},
+ },
+)
+
+blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ environment_variables={
+ "MODEL_BASE_PATH": "/var/azureml-app/azureml-models/tfserving-mounted/1",
+ "MODEL_NAME": "half_plus_two",
+ },
+ instance_type="Standard_DS2_v2",
+ instance_count=1,
+)
+```
+++
+There are a few important concepts to notice in this YAML/Python example:
-### Readiness route vs. liveness route
+#### Readiness route vs. liveness route
An HTTP server defines paths for both _liveness_ and _readiness_. A liveness route is used to check whether the server is running. A readiness route is used to check whether the server is ready to do work. In machine learning inference, a server could respond 200 OK to a liveness request before loading a model. The server could respond 200 OK to a readiness request only after the model has been loaded into memory.
Review the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure
Notice that this deployment uses the same path for both liveness and readiness, since TF Serving only defines a liveness route.
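To illustrate the distinction outside of TF Serving, here's a minimal, generic sketch (not part of this deployment) of a server whose liveness route always answers while its readiness route only returns 200 after the model has been loaded. The routes, port, and load delay are assumptions for illustration.

```python
# Toy server showing separate liveness and readiness routes.
from flask import Flask
import threading
import time

app = Flask(__name__)
model = None

def load_model():
    global model
    time.sleep(5)          # stand-in for a slow model load
    model = object()       # stand-in for the real model object

threading.Thread(target=load_model, daemon=True).start()

@app.route("/live")
def live():
    return "alive", 200    # liveness: the process is up

@app.route("/ready")
def ready():
    # readiness: only 200 once the model is in memory
    return ("ready", 200) if model is not None else ("loading", 503)

if __name__ == "__main__":
    app.run(port=8501)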
-### Locating the mounted model
+#### Locating the mounted model
When you deploy a model as an online endpoint, Azure Machine Learning _mounts_ your model to your endpoint. Model mounting enables you to deploy new versions of the model without having to create a new Docker image. By default, a model registered with the name *foo* and version *1* would be located at the following path inside of your deployed container: `/var/azureml-app/azureml-models/foo/1`
For example, if you have a directory structure of `/azureml-examples/cli/endpoin
:::image type="content" source="./media/how-to-deploy-custom-container/local-directory-structure.png" alt-text="Diagram showing a tree view of the local directory structure.":::
+# [CLI](#tab/CLI)
+ and `tfserving-deployment.yml` contains: ```yaml
model:
path: ./half_plus_two ```
+# [Python](#tab/python)
+
+and `Model` class contains:
+
+```python
+model = Model(name="tfserving-mounted", version="1", path="half_plus_two")
+```
+++

then your model will be located under `/var/azureml-app/azureml-models/tfserving-deployment/1` in your deployment:

:::image type="content" source="./media/how-to-deploy-custom-container/deployment-location.png" alt-text="Diagram showing a tree view of the deployment directory structure.":::
-You can optionally configure your `model_mount_path`. It enables you to change the path where the model is mounted. For example, you can have `model_mount_path` parameter in your _tfserving-deployment.yml_:
+You can optionally configure your `model_mount_path`. It enables you to change the path where the model is mounted.
> [!IMPORTANT] > The `model_mount_path` must be a valid absolute path in Linux (the OS of the container image).
+# [CLI](#tab/CLI)
+
+For example, you can have `model_mount_path` parameter in your _tfserving-deployment.yml_:
+ ```YAML name: tfserving-deployment endpoint_name: tfserving-endpoint
model_mount_path: /var/tfserving-model-mount
..... ```
+# [Python](#tab/python)
+
+For example, you can have `model_mount_path` parameter in your `ManagedOnlineDeployment` class:
+
+```python
+blue_deployment = ManagedOnlineDeployment(
+ name="blue",
+ endpoint_name=online_endpoint_name,
+ model=model,
+ environment=env,
+ model_mount_path="/var/tfserving-model-mount",
+ ...
+)
+```
+++

then your model will be located at `/var/tfserving-model-mount/tfserving-deployment/1` in your deployment. Note that it is no longer under `azureml-app/azureml-models`, but under the mount path you specified:

:::image type="content" source="./media/how-to-deploy-custom-container/mount-path-deployment-location.png" alt-text="Diagram showing a tree view of the deployment directory structure when using mount_model_path.":::

### Create your endpoint and deployment
+# [CLI](#tab/CLI)
+ Now that you've understood how the YAML was constructed, create your endpoint. ```azurecli
az ml online-endpoint create --name tfserving-endpoint -f endpoints/online/custo
Creating a deployment may take few minutes.
+```azurecli
+az ml online-deployment create --name tfserving-deployment -f endpoints/online/custom-container/tfserving-deployment.yml --all-traffic
+```
-```azurecli
-az ml online-deployment create --name tfserving-deployment -f endpoints/online/custom-container/tfserving-deployment.yml
+
+# [Python](#tab/python)
+
+Using the `MLClient` created earlier, we will now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+
+```python
+ml_client.begin_create_or_update(endpoint)
+```
+
+Create the deployment as well:
+
+```python
+ml_client.begin_create_or_update(blue_deployment)
```

++

### Invoke the endpoint

Once your deployment completes, see if you can make a scoring request to the deployed endpoint.
+# [CLI](#tab/CLI)
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-tfserving.sh" id="invoke_endpoint":::
-### Delete endpoint and model
+# [Python](#tab/python)
+
+Using the `MLClient` created earlier, we will get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+- `endpoint_name` - Name of the endpoint
+- `request_file` - File with request data
+- `deployment_name` - Name of the specific deployment to test in an endpoint
+
+We will send a sample request using a json file. The sample json is in the [example repository](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints/online/custom-container).
+
+```python
+# test the blue deployment with some sample data
+ml_client.online_endpoints.invoke(
+ endpoint_name=online_endpoint_name,
+ deployment_name="blue",
+ request_file="sample-request.json",
+)
+```
+++
+### Delete the endpoint
Now that you've successfully scored with your endpoint, you can delete it:
+# [CLI](#tab/CLI)
```azurecli
az ml online-endpoint delete --name tfserving-endpoint
```
-```azurecli
-az ml model delete -n tfserving-mounted --version 1
+# [Python](#tab/python)
+
+```python
+ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
```

++

## Next steps

- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
The following steps explain how the Azure Machine Learning inference HTTP server
There are two ways to use Visual Studio Code (VSCode) and the [Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python) to debug with the [azureml-inference-server-http](https://pypi.org/project/azureml-inference-server-http/) package.

1. User starts the AzureML Inference Server in a command line and uses VSCode + Python Extension to attach to the process.
-1. User sets up the `launch.json` in the VSCode and start the AzureML Inference Server within VSCode.
+1. User sets up the `launch.json` in the VSCode and starts the AzureML Inference Server within VSCode.
**launch.json** ```json
TypeError: register() takes 3 positional arguments but 4 were given
```
-You have **Flask 2** installed in your python environment but are running a server (< 0.7.0) that does not support Flask 2. To resolve, please upgrade to the latest version of server.
+You have **Flask 2** installed in your python environment but are running a version of `azureml-inference-server-http` that doesn't support Flask 2. Support for Flask 2 is added in `azureml-inference-server-http>=0.7.0`, which is also in `azureml-defaults>=1.44`.
-### 2. I encountered an ``ImportError`` or ``ModuleNotFoundError`` on modules ``opencensus``, ``jinja2``, ``MarkupSafe``, or ``click`` during startup like the following:
+1. If you're not using this package in an AzureML docker image, use the latest version of
+ `azureml-inference-server-http` or `azureml-defaults`.
+
+2. If you're using this package with an AzureML docker image, make sure you're using an image built in or after July,
+ 2022. The image version is available in the container logs. You should be able to find a log similar to below:
+
+ ```
+ 2022-08-22T17:05:02,147738763+00:00 | gunicorn/run | AzureML Container Runtime Information
+ 2022-08-22T17:05:02,161963207+00:00 | gunicorn/run | ###############################################
+ 2022-08-22T17:05:02,168970479+00:00 | gunicorn/run |
+ 2022-08-22T17:05:02,174364834+00:00 | gunicorn/run |
+ 2022-08-22T17:05:02,187280665+00:00 | gunicorn/run | AzureML image information: openmpi4.1.0-ubuntu20.04, Materializaton Build:20220708.v2
+ 2022-08-22T17:05:02,188930082+00:00 | gunicorn/run |
+ 2022-08-22T17:05:02,190557998+00:00 | gunicorn/run |
+ ```
+
+ The build date of the image appears after "Materialization Build", which in the above example is `20220708`, or July 8, 2022. This image is compatible with Flask 2. If you don't see a banner like this in your container log, your image is out-of-date, and should be updated. If you're using a cuda image, and are unable to find a newer image, check if your image is deprecated in [AzureML-Containers](https://github.com/Azure/AzureML-Containers). If it is, you should be able to find replacements.
+
+ If this is an online endpoint, you can also find the logs under "Deployment logs" in the [online endpoint page in Azure Machine Learning studio](https://ml.azure.com/endpoints). If you deploy with SDK v1 and don't explicitly specify an image in your deployment configuration, it will default to using a version of `openmpi4.1.0-ubuntu20.04` that matches your local SDK toolset, which may not be the latest version of the image. For example, SDK 1.43 will default to using `openmpi4.1.0-ubuntu20.04:20220616`, which is incompatible. Make sure you use the latest SDK for your deployment.
+
+ If for some reason you're unable to update the image, you can temporarily avoid the issue by pinning `azureml-defaults==1.43` or `azureml-inference-server-http~=0.4.13`, which will install the older version server with `Flask 1.0.x`.
+
+ See also [Troubleshooting online endpoints deployment](how-to-troubleshoot-online-endpoints.md#error-resourcenotready).
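As a quick sanity check for the versions discussed above, you can print the installed packages from Python in the same environment as the server (a sketch; nothing here is specific to AzureML beyond the package names):

```python
# Print installed versions of Flask and the AzureML inference server packages.
import importlib.metadata as metadata

for package in ("flask", "azureml-inference-server-http", "azureml-defaults"):
    try:
        print(package, metadata.version(package))
    except metadata.PackageNotFoundError:
        print(package, "not installed")
```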
+
+### 2. I encountered an ``ImportError`` or ``ModuleNotFoundError`` on modules ``opencensus``, ``jinja2``, ``MarkupSafe``, or ``click`` during startup like the following message:
```bash
ImportError: cannot import name 'Markup' from 'jinja2'
```
-Older versions (<= 0.4.10) of the server did not pin Flask's dependency to compatible versions. This is fixed in the latest version of the server.
+Older versions (<= 0.4.10) of the server didn't pin Flask's dependency to compatible versions. This problem is fixed in the latest version of the server.
### 3. Do I need to reload the server when changing the score script?
The Azure Machine Learning inference server runs on Windows & Linux based operat
## Next steps * For more information on creating an entry script and deploying models, see [How to deploy a model using Azure Machine Learning](how-to-deploy-managed-online-endpoints.md).
-* Learn about [Prebuilt docker images for inference](concept-prebuilt-docker-images-inference.md)
+* Learn about [Prebuilt docker images for inference](concept-prebuilt-docker-images-inference.md)
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
When you provide a data input/output to a Job, you'll need to specify a `path` p
|A path on a public http(s) server | `https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv` |
|A path on Azure Storage | `https://<account_name>.blob.core.windows.net/<container_name>/path` <br> `abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>` |
|A path on a Datastore | `azureml://datastores/<data_store_name>/paths/<path>` |
+|A path to a Data Asset | `azureml:<my_data>:<version>` |
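For instance, a job input pointing at a datastore path might be declared like this with the SDK v2 (a sketch; the datastore name and path are placeholders):

```python
# Sketch: declare a job input that reads a folder from a workspace datastore.
from azure.ai.ml import Input

input_data = Input(
    type="uri_folder",
    path="azureml://datastores/<data_store_name>/paths/<path>",
)
```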
## Supported modes
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
green_deployment = ManagedOnlineDeployment(
ml_client.begin_create_or_update(green_deployment)
```
-## Test the 'green' deployment
+### Test the new deployment
Though green has 0% of traffic allocated, you can still invoke the endpoint and deployment with a [JSON](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-2/sample-request.json) file.
ml_client.online_endpoints.invoke(
) ```
-1. Test the new deployment with a small percentage of live traffic:
+## Test the deployment with mirrored traffic (preview)
- Once you've tested your green deployment, allocate a small percentage of traffic to it:
- ```python
- endpoint.traffic = {"blue": 90, "green": 10}
- ml_client.begin_create_or_update(endpoint)
- ```
+Once you've tested your `green` deployment, you can copy (or 'mirror') a percentage of the live traffic to it. Mirroring traffic doesn't change results returned to clients. Requests still flow 100% to the blue deployment. The mirrored percentage of the traffic is copied and submitted to the `green` deployment so you can gather metrics and logging without impacting your clients. Mirroring is useful when you want to validate a new deployment without impacting clients. For example, to check if latency is within acceptable bounds and that there are no HTTP errors.
- Now, your green deployment will receive 10% of requests.
+> [!WARNING]
+> Mirroring traffic uses your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS). Your endpoint bandwidth will be throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope).
-1. Send all traffic to your new deployment:
+The following command mirrors 10% of the traffic to the `green` deployment:
- Once you're satisfied that your green deployment is fully satisfactory, switch all traffic to it.
+```python
+endpoint.mirror_traffic = {"green": 10}
+ml_client.begin_create_or_update(endpoint)
+```
- ```python
- endpoint.traffic = {"blue": 0, "green": 100}
- ml_client.begin_create_or_update(endpoint)
- ```
+> [!IMPORTANT]
+> Mirroring has the following limitations:
+> * You can only mirror traffic to one deployment.
+> * A deployment can only be set to live or mirror traffic, not both.
+> * Mirrored traffic is not currently supported with K8s.
+> * The maximum mirrored traffic you can configure is 50%. This limit is to reduce the impact on your endpoint bandwidth quota.
-1. Remove the old deployment:
- ```python
- ml_client.online_deployments.delete(name="blue", endpoint_name=online_endpoint_name)
- ```
+After testing, you can set the mirror traffic to zero to disable mirroring:
+
+```python
+endpoint.mirror_traffic = {"green": 0}
+ml_client.begin_create_or_update(endpoint)
+```
+
+## Test the new deployment with a small percentage of live traffic:
+
+Once you've tested your green deployment, allocate a small percentage of traffic to it:
+
+```python
+endpoint.traffic = {"blue": 90, "green": 10}
+ml_client.begin_create_or_update(endpoint)
+```
+
+Now, your green deployment will receive 10% of requests.
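+To confirm the split, you can read the traffic allocation back from the endpoint (a quick sketch):
+
+```python
+# Fetch the endpoint and print its current live-traffic allocation.
+endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)
+print(endpoint.traffic)  # e.g. {'blue': 90, 'green': 10}
+```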
+
+
+## Send all traffic to your new deployment:
+
+Once you're satisfied that your green deployment is fully satisfactory, switch all traffic to it.
+
+```python
+endpoint.traffic = {"blue": 0, "green": 100}
+ml_client.begin_create_or_update(endpoint)
+```
+
+## Remove the old deployment:
+
+```python
+ml_client.online_deployments.delete(name="blue", endpoint_name=online_endpoint_name)
+```
## Delete endpoint
+If you aren't going to use the endpoint, you should delete it with:
```python
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
```

## Next steps
-
-* Explore online endpoint samples - [https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
+- [Explore online endpoint samples](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
+- [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Monitor managed online endpoints](how-to-monitor-online-endpoints.md)
+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)
+- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+- [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md)
+- [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Last updated 04/12/2022 -+ #Customer intent: As a data scientist, I want to figure out why my online endpoint deployment failed so that I can fix it.
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] +
+> [!IMPORTANT]
+> SDK v2 is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ Learn how to resolve common issues in the deployment and scoring of Azure Machine Learning online endpoints. This document is structured in the way you should approach troubleshooting:
The section [HTTP status codes](#http-status-codes) explains how invocation and
* An **Azure subscription**. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). * The [Azure CLI](/cli/azure/install-azure-cli).
-* The [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+* For Azure Machine Learning CLI v2, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md).
+* For Azure Machine Learning Python SDK v2, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
## Deploy locally
Local deployment is deploying a model to a local Docker environment. Local deplo
> [!TIP] > Use Visual Studio Code to test and debug your endpoints locally. For more information, see [debug online endpoints locally in Visual Studio Code](how-to-debug-managed-online-endpoints-visual-studio-code.md).
-Local deployment supports creation, update, and deletion of a local endpoint. It also allows you to invoke and get logs from the endpoint. To use local deployment, add `--local` to the appropriate CLI command:
+Local deployment supports creation, update, and deletion of a local endpoint. It also allows you to invoke and get logs from the endpoint.
+
+# [CLI](#tab/CLI)
+
+To use local deployment, add `--local` to the appropriate CLI command:
```azurecli
az ml online-deployment create --endpoint-name <endpoint-name> -n <deployment-name> -f <spec_file.yaml> --local
```
+# [Python](#tab/python)
+
+To use local deployment, add `local=True` parameter in the command:
+
+```python
+ml_client.begin_create_or_update(online_deployment, local=True)
+```
+
+* `ml_client` and `online_deployment` are instances of the `MLClient` class and the `ManagedOnlineDeployment` class, respectively.
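+Once the local deployment is up, you can exercise it the same way (a sketch; `sample-request.json` is a placeholder request file, and this assumes the `invoke` operation also accepts the `local` flag):
+
+```python
+# Invoke the locally deployed endpoint with a sample request (placeholder file).
+ml_client.online_endpoints.invoke(
+    endpoint_name="<endpoint-name>",
+    request_file="sample-request.json",
+    local=True,
+)
+```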
+++

As a part of local deployment the following steps take place:

- Docker either builds a new container image or pulls an existing image from the local Docker cache. An existing image is used if there's one that matches the environment part of the specification file.
To debug conda installation problems, try the following:
You can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM. The amount of information depends on the provisioning status of the deployment. If the specified container is up and running you'll see its console output, otherwise you'll get a message to try again later.
+# [CLI](#tab/CLI)
+ To see log output from container, use the following CLI command: ```azurecli
az ml online-deployment get-logs -e <endpoint-name> -n <deployment-name> -l 100
or ```azurecli
- az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --lines 100
+az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --lines 100
```

Add `--resource-group` and `--workspace-name` to the commands above if you have not already set these parameters via `az configure`.
You can also get logs from the storage initializer container by passing `--con
Add `--help` and/or `--debug` to commands to see more information.
+# [Python](#tab/python)
+
+To see log output from container, use the `get_logs` method as follows:
+
+```python
+ml_client.online_deployments.get_logs(
+ name="<deployment-name>", endpoint_name="<endpoint-name>", lines=100
+)
+```
+
+To see information about how to set these parameters, see
+[reference for get-logs](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-get-logs)
+
+By default, the logs are pulled from the inference server. Logs include the console log from the inference server, which contains print/log statements from your `score.py` code.
+
+> [!NOTE]
+> If you use Python logging, ensure you use the correct logging level order for the messages to be published to logs. For example, INFO.
+
+You can also get logs from the storage initializer container by adding `container_type="storage-initializer"` option. These logs contain information on whether code and model data were successfully downloaded to the container.
+
+```python
+ml_client.online_deployments.get_logs(
+ name="<deployment-name>", endpoint_name="<endpoint-name>", lines=100, container_type="storage-initializer"
+)
+```
+++

## Request tracing

There are three supported tracing headers:
Below is a list of common deployment errors that are reported as part of the dep
* [BadArgument](#error-badargument)
* [ResourceNotReady](#error-resourcenotready)
* [ResourceNotFound](#error-resourcenotfound)
-* [OperationCancelled](#error-operationcancelled)
+* [OperationCanceled](#error-operationcanceled)
* [InternalServerError](#error-internalservererror)

### ERROR: ImageBuildFailure
If your container could not start, this means scoring could not happen. It might
To get the exact reason for an error, run:
+# [CLI](#tab/CLI)
+ ```azurecli az ml online-deployment get-logs -e <endpoint-name> -n <deployment-name> -l 100 ```
+# [Python](#tab/python)
+
+```python
+ml_client.online_deployments.get_logs(
+ name="<deployment-name>", endpoint_name="<endpoint-name>", lines=100
+)
+```
+++

### ERROR: OutOfCapacity

The specified VM Size failed to provision due to a lack of Azure Machine Learning capacity. Retry later or try deploying to a different region.
Below is a list of reasons you might run into this error:
* [Startup task failed due to authorization error](#authorization-error)
* [Startup task failed due to incorrect role assignments on resource](#authorization-error)
* [Unable to download user container image](#unable-to-download-user-container-image)
-* [Unable to download user model or code artifacts](#unable-to-download-user-model-or-code-artifacts)
+* [Unable to download user model](#unable-to-download-user-model)
* [azureml-fe for kubernetes online endpoint is not ready](#azureml-fe-not-ready)

#### Resource requests greater than limits
Requests for resources must be less than or equal to limits. If you don't set li
#### Authorization error
-After provisioning the compute resource, during deployment creation, Azure tries to pull the user container image from the workspace private Azure Container Registry (ACR) and mount the user model and code artifacts into the user container from the workspace storage account.
+After you provision the compute resource, during deployment creation, Azure tries to pull the user container image from the workspace private Azure Container Registry (ACR) and mount the user model and code artifacts into the user container from the workspace storage account.
-First, check if there is a permissions issue accessing ACR.
+First, check if there's a permissions issue accessing ACR.
To pull blobs, Azure uses [managed identities](../active-directory/managed-identities-azure-resources/overview.md) to access the storage account.
To pull blobs, Azure uses [managed identities](../active-directory/managed-ident
#### Unable to download user container image
-It is possible that the user container could not be found. Check [container logs](#get-container-logs) to get more details.
+It's possible that the user container couldn't be found. Check [container logs](#get-container-logs) to get more details.
Make sure container image is available in workspace ACR. For example, if image is `testacr.azurecr.io/azureml/azureml_92a029f831ce58d2ed011c3c42d35acb:latest` check the repository with `az acr repository show-tags -n testacr --repository azureml/azureml_92a029f831ce58d2ed011c3c42d35acb --orderby time_desc --output table`.
-#### Unable to download user model or code artifacts
+#### Unable to download user model
-It is possible that the user model or code artifacts can't be found. Check [container logs](#get-container-logs) to get more details.
+It is possible that the user model can't be found. Check [container logs](#get-container-logs) to get more details.
-Make sure model and code artifacts are registered to the same workspace as the deployment. Use the `show` command to show details for a model or code artifact in a workspace.
+Make sure the model is registered to the same workspace as the deployment. Use the `show` command or equivalent Python method to show details for a model in a workspace.
- For example:
+ # [CLI](#tab/CLI)
+ ```azurecli
- az ml model show --name <model-name>
- az ml code show --name <code-name> --version <version>
+ az ml model show --name <model-name> --version <version>
```
+ # [Python](#tab/python)
+
+ ```python
+ ml_client.models.get(name="<model-name>", version=<version>)
+ ```
+
+
+ > [!WARNING]
+ > You must specify either version or label to get the model information.
+ You can also check if the blobs are present in the workspace storage account. - For example, if the blob is `https://foobar.blob.core.windows.net/210212154504-1517266419/WebUpload/210212154504-1517266419/GaussianNB.pkl`, you can use this command to check if it exists:
- ```azurecli
- az storage blob exists --account-name foobar --container-name 210212154504-1517266419 --name WebUpload/210212154504-1517266419/GaussianNB.pkl --subscription <sub-name>`
- ```
+ ```azurecli
+ az storage blob exists --account-name foobar --container-name 210212154504-1517266419 --name WebUpload/210212154504-1517266419/GaussianNB.pkl --subscription <sub-name>`
+ ```
- If the blob is present, you can use this command to obtain the logs from the storage initializer:
+ # [CLI](#tab/CLI)
+ ```azurecli az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> ΓÇô-container storage-initializer` ```
+ # [Python](#tab/python)
+
+ ```python
+ ml_client.online_deployments.get_logs(
+ name="<deployment-name>", endpoint_name="<endpoint-name>", lines=100, container_type="storage-initializer"
+ )
+ ```
+
+
#### azureml-fe not ready

The front-end component (azureml-fe) that routes incoming inference requests to deployed services automatically scales as needed. It's installed during your k8s-extension installation.
-This component should be healthy on cluster, at least one healthy replica. You will get this error message if it's not avaliable when you trigger kubernetes online endpoint and deployment creation/update request.
+This component should be healthy on the cluster, with at least one healthy replica. You'll get this error message if it isn't available when you trigger a Kubernetes online endpoint and deployment creation or update request.
-Please check the pod status and logs to fix this issue, you can also try to update the k8s-extension intalled on the cluster.
+Check the pod status and logs to fix this issue. You can also try to update the k8s-extension installed on the cluster.
### ERROR: ResourceNotReady
Please check the pod status and logs to fix this issue, you can also try to upda
To run the `score.py` provided as part of the deployment, Azure creates a container that includes all the resources that the `score.py` needs, and runs the scoring script on that container. The error in this scenario is that this container is crashing when running, which means scoring can't happen. This error happens when: - There's an error in `score.py`. Use `get-logs` to help diagnose common problems:
- - A package that was imported but is not in the conda environment.
+ - A package that was imported but isn't in the conda environment.
- A syntax error. - A failure in the `init()` method. - If `get-logs` isn't producing any logs, it usually means that the container has failed to start. To debug this issue, try [deploying locally](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/machine-learning/how-to-troubleshoot-online-endpoints.md#deploy-locally) instead.-- Readiness or liveness probes are not set up correctly.
+- Readiness or liveness probes aren't set up correctly.
- There's an error in the environment setup of the container, such as a missing dependency.
+- If you encounter a `TypeError: register() takes 3 positional arguments but 4 were given` error, it may be caused by the dependency between Flask 2 and `azureml-inference-server-http`. See [FAQs for inference HTTP server](how-to-inference-server-http.md#1-i-encountered-the-following-error-during-server-startup) for more details.
### ERROR: ResourceNotFound
Below is a list of reasons you might run into this error:
#### Resource Manager cannot find a resource
-This error occurs when Azure Resource Manager can't find a required resource. For example, you will receive this error if a storage account was referred to but is not able to be found at the specified path. Be sure to double-check the spelling of exact paths or resource names.
+This error occurs when Azure Resource Manager can't find a required resource. For example, you'll receive this error if a storage account was referred to but can't be found at the path on which it was specified. Be sure to double check resources that might have been supplied by exact path or the spelling of their names.
For more information, see [Resolve Resource Not Found Errors](../azure-resource-manager/troubleshooting/error-not-found.md).
This error occurs when an image belonging to a private or otherwise inaccessible
At this time, our APIs can't accept private registry credentials. To mitigate this error, either ensure that the container registry is **not private** or follow these steps:
-1. Grant your private registry's `acrPull` role to the system identity of your online enpdoint.
+1. Grant the `acrPull` role on your private registry to the system identity of your online endpoint.
1. In your environment definition, specify the address of your private image as well as the additional instruction to not modify (build) the image. If the mitigation is successful, the image will not require any building and the final image address will simply be the given image address.
At deployment time, your online endpoint's system identity will pull the image f
For more diagnostic information, see [How To Use the Workspace Diagnostic API](../machine-learning/how-to-workspace-diagnostic-api.md).
-### ERROR: OperationCancelled
+### ERROR: OperationCanceled
Below is a list of reasons you might run into this error:
-* [Operation was cancelled by another operation which has a higher priority](#operation-cancelled-by-another-higher-priority-operation)
-* [Operation was cancelled due to a previous operation waiting for lock confirmation](#operation-cancelled-waiting-for-lock-confirmation)
+* [Operation was canceled by another operation that has a higher priority](#operation-canceled-by-another-higher-priority-operation)
+* [Operation was canceled due to a previous operation waiting for lock confirmation](#operation-canceled-waiting-for-lock-confirmation)
-#### Operation cancelled by another higher priority operation
+#### Operation canceled by another higher priority operation
Azure operations have a certain priority level and are executed from highest to lowest. This error happens when your operation is overridden by another operation that has a higher priority. Retrying the operation might allow it to be performed without cancellation.
-#### Operation cancelled waiting for lock confirmation
+#### Operation canceled waiting for lock confirmation
-Azure operations have a brief waiting period after being submitted during which they retrieve a lock to ensure that we do not run into race conditions. This error happens when the operation you submitted is the same as another operation that is currently still waiting for confirmation that it has received the lock to proceed. It may indicate that you have submitted a very similar request too soon after the initial request.
+Azure operations have a brief waiting period after being submitted during which they retrieve a lock to ensure that we don't run into race conditions. This error happens when the operation you submitted is the same as another operation that is currently still waiting for confirmation that it has received the lock to proceed. It may indicate that you've submitted a very similar request too soon after the initial request.
Retrying the operation after waiting a few seconds up to a minute may allow it to be performed without cancellation.
Although we do our best to provide a stable and reliable service, sometimes thin
## Autoscaling issues
-If you are having trouble with autoscaling, see [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md).
+If you're having trouble with autoscaling, see [Troubleshooting Azure autoscale](../azure-monitor/autoscale/autoscale-troubleshoot.md).
## Bandwidth limit issues
When you access online endpoints with REST requests, the returned status codes a
| 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.| | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. | | 429 | Rate-limiting | You attempted to send more than 100 requests per second to your endpoint. |
-| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow 2 * `max_concurrent_requests_per_instance` * `instance_count` requests at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you are using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. |
+| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow 2 * `max_concurrent_requests_per_instance` * `instance_count` requests at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. |
| 500 | Internal server error | Azure ML-provisioned infrastructure is failing. | ## Common network isolation issues
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
As part of your migration journey to Azure, you discover your on-premises inventory and workloads.
-This tutorial shows you how to discover the servers that are running in your VMware environment by using the Azure Migrate: Discovery and assessment tool, a lightweight Azure Migrate appliance. You deploy the appliance as a server running in your vCenter Server instance, to continuously discover servers and their performance metadata, applications that are running on servers, server dependencies, ASP.NET web apps, and SQL Server instances and databases.
+This tutorial shows you how to discover the servers that are running in your VMware environment by using the Azure Migrate: Discovery and assessment tool, a lightweight Azure Migrate appliance. You deploy the appliance as a server running in your vCenter Server instance, to continuously discover servers and their performance metadata, applications that are running on servers, server dependencies, web apps, and SQL Server instances and databases.
In this tutorial, you learn how to:
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
- Support for pausing and resuming ongoing replications without having to do a complete replication again. You can also retry VM migrations without the need to do a full initial replication again. - Enhanced notifications for test migration and migration completion status.
+- Java web apps discovery on Apache Tomcat running on Linux servers hosted in VMware environment.
+- Enhanced discovery data collection including detection of database connecting strings, application directories, and authentication mechanisms for ASP.NET web apps.
## Update (August 2022) -- SQL discovery and assessment for Microsoft Hyper-V and Physical/Bare-metal environments as well as IaaS services of other public clouds.-- Java web apps discovery on Apache Tomcat running on Linux servers hosted in VMware environment. -- Enhanced discovery data collection including detection of database connecting strings, application directories, and authentication mechanisms for ASP.NET web apps.
+- SQL discovery and assessment for Microsoft Hyper-V and Physical/Bare-metal environments as well as IaaS services of other public clouds.
## Update (June 2022)
purview How To Managed Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-managed-attributes.md
In Microsoft Purview Studio, an organization's managed attributes are managed in
### Expiring managed attributes
-In the managed attribute management experience, managed attributes can't be deleted, only expired. Expired asset can't be applied to any assets and are, by default, hidden in the user experience. By default, expired managed attributes aren't removed from an asset. If an asset has an expired managed attribute applied, it can only be removed, not edited.
+In the managed attribute management experience, managed attributes can't be deleted, only expired. Expired attributes can't be applied to any assets and are, by default, hidden in the user experience. By default, expired managed attributes aren't removed from an asset. If an asset has an expired managed attribute applied, it can only be removed, not edited.
Both attribute groups and individual managed attributes can be expired. To mark an attribute group or managed attribute as expired, select the **Edit** icon.
purview Quickstart Bicep Create Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-bicep-create-azure-purview.md
- Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Bicep'
-description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using Bicep.
-- Previously updated : 07/05/2022-----
-# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Bicep
-
-This quickstart describes the steps to deploy a Microsoft Purview (formerly Azure Purview) account using Bicep.
--
-After you've created the account, you can begin registering your data sources and using the Microsoft Purview governance portal to understand and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data linage. Data consumers are able to discover data across your organization and data administrators are able to audit, secure, and ensure right use of your data.
-
-For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
-
-To deploy a Microsoft Purview account to your subscription, follow the prerequisites guide below.
--
-## Review the Bicep file
-
-The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-share-share-storage-account/).
--
-The following resources are defined in the Bicep file:
-
-* [**Microsoft.Purview/accounts**](/azure/templates/microsoft.purview/accounts)
-
-The Bicep performs the following tasks:
-
-* Creates a Microsoft Purview account in the specified resource group.
-
-## Deploy the Bicep file
-
-1. Save the Bicep file as `main.bicep` to your local computer.
-1. Deploy the Bicep file using Azure CLI or Azure PowerShell.
-
- > [!NOTE]
- > Replace **\<project-name\>** with a project name that will be used to generate resource names. Replace **\<invitation-email\>** with an email address for receiving data share invitations.
-
- # [CLI](#tab/CLI)
-
- ```azurecli-interactive
- az group create --name exampleRG --location eastus
- az deployment group create --resource-group exampleRG --template-file main.bicep --parameters projectName=<project-name> invitationEmail=<invitation-email>
- ```
-
- # [PowerShell](#tab/PowerShell)
-
- ```powershell-interactive
- New-AzResourceGroup -Name exampleRG -Location eastus
- New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -projectName "<project-name>" -invitationEmail "<invitation-email>"
- ```
-
-
-
- When the deployment finishes, you should see a message indicating the deployment succeeded.
-
-## Open Microsoft Purview governance portal
-
-After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open Microsoft Purview governance portal:
-
-* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
- :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
-
-* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.
-
-## Get started with your Purview resource
-
-After deployment, the first activities are usually:
-
-* [Create a collection](quickstart-create-collection.md)
-* [Register a resource](azure-purview-connector-overview.md)
-* [Scan the resource](concept-scans-and-ingestion.md)
-
-At this time, these actions aren't able to be taken through a Bicep file. Follow the guides above to get started!
-
-## Clean up resources
-
-To clean up the resources deployed in this quickstart, delete the resource group, which deletes all resources in the group.
-
-You can delete the resources through the Azure portal, Azure CLI, or Azure PowerShell.
-
-# [CLI](#tab/CLI)
-
-```azurecli-interactive
-az group delete --name exampleRG
-```
-
-# [PowerShell](#tab/PowerShell)
-
-```powershell-interactive
-Remove-AzResourceGroup -Name exampleRG
-```
---
-## Next steps
-
-In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account using Bicep and how to access the Microsoft Purview governance portal.
-
-Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
-
-To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
-
-Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
-
-> [!div class="nextstepaction"]
-> [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
-> [Create a collection](quickstart-create-collection.md)
-> [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Previously updated : 08/10/2022 Last updated : 09/06/2022
The steps below will set permissions for all three.
### Apply permissions to scan the contents of the workspace
-You can set up authentication for an Azure Synapse source in either of two ways. Select your scenario below for steps to apply permissions.
+You can set up authentication for an Azure Synapse source by using any of the following options. Select your scenario below for steps to apply permissions.
- Use a managed identity - Use a service principal
+- Use SQL Authentication
> [!IMPORTANT] > These steps for serverless databases **do not** apply to replicated databases. Currently in Synapse, serverless databases that are replicated from Spark databases are read-only. For more information, go [here](../synapse-analytics/sql/resources-self-help-sql-on-demand.md#operation-isnt-allowed-for-a-replicated-database).
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[scoped_credential] TO [PurviewA
ALTER ROLE db_datareader ADD MEMBER [ServicePrincipalID]; ```
+# [SQL Authentication](#tab/SQLAuth)
+
+#### Use SQL Authentication for dedicated SQL databases
+
+> [!NOTE]
+> You must first set up a new *credential* of type *SQL Authentication* by following the instructions in [Credentials for source authentication in Microsoft Purview](manage-credentials.md).
+
+1. Go to your **Azure Synapse workspace**.
+1. Go to the **Data** section, and then look for one of your dedicated SQL databases.
+1. Select the ellipsis (**...**) next to it, and then start a new SQL script.
+1. Add the **SQL Authentication login name** as **db_datareader** on the dedicated SQL database. You do so by running the following command in your SQL script:
+
+ ```sql
+ CREATE USER [SQLUser] FROM LOGIN [SQLUser];
+ GO
+
+ EXEC sp_addrolemember 'db_datareader', [SQLUser];
+ GO
+ ```
+
+> [!NOTE]
+> Repeat the previous step for all dedicated SQL databases in your Synapse workspace.
+
+#### Use SQL Authentication for serverless SQL databases
+
+1. Go to your Azure Synapse workspace.
+1. Go to the **Data** section, and then look for one of your serverless SQL databases.
+1. Select the ellipsis (**...**) next to it, and then start a new SQL script.
+1. Add the **SQL Authentication login name** on the serverless SQL databases. You do so by running the following command in your SQL script:
+ ```sql
+ CREATE USER [SQLUser] FROM LOGIN [SQLUser];
+ GO
+ ```
+
+1. Add the **SQL Authentication login name** as **db_datareader** on each of the serverless SQL databases you want to scan. You do so by running the following command in your SQL script:
+ ```sql
+ ALTER ROLE db_datareader ADD MEMBER [SQLUser];
+ GO
+ ```
### Set up Azure Synapse workspace firewall access
role-based-access-control Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-template.md
Previously updated : 06/03/2022 Last updated : 09/07/2022 ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [Azure RBAC definition grant access](../../includes/role-based-access-control/definition-grant.md)] In addition to using Azure PowerShell or the Azure CLI, you can assign roles using [Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). Templates can be helpful if you need to deploy resources consistently and repeatedly. This article describes how to assign roles using templates.
+> [!NOTE]
+> Bicep is a new language for defining your Azure resources. It has a simpler authoring experience than JSON, along with other features that help improve the quality of your infrastructure as code. We recommend that anyone new to infrastructure as code on Azure use Bicep instead of JSON.
+>
+> To learn about how to define role assignments by using Bicep, see [Create Azure RBAC resources by using Bicep](../azure-resource-manager/bicep/scenarios-rbac.md). For a quickstart example, see [Quickstart: Assign an Azure role using Bicep](quickstart-role-assignments-bicep.md).
+ ## Prerequisites [!INCLUDE [Azure role assignment prerequisites](../../includes/role-based-access-control/prerequisites-role-assignments.md)]
search Cognitive Search Skill Custom Entity Lookup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-custom-entity-lookup.md
Previously updated : 05/19/2022 Last updated : 09/07/2022
Parameters are case-sensitive.
|--|-| | `entitiesDefinitionUri` | Path to an external JSON or CSV file containing all the target text to match against. This entity definition is read at the beginning of an indexer run; any updates to this file mid-run won't be realized until subsequent runs. This file must be accessible over HTTPS. See [Custom Entity Definition Format](#custom-entity-definition-format) below for expected CSV or JSON schema.| |`inlineEntitiesDefinition` | Inline JSON entity definitions. This parameter supersedes the entitiesDefinitionUri parameter if present. No more than 10 KB of configuration may be provided inline. See [Custom Entity Definition](#custom-entity-definition-format) below for expected JSON schema. |
-|`defaultLanguageCode` | (Optional) Language code of the input text used to tokenize and delineate input text. The following languages are supported: `da, de, en, es, fi, fr, it, ko, pt`. The default is English (`en`). If you pass a `languagecode-countrycode` format, only the `languagecode` part of the format is used. |
+|`defaultLanguageCode` | (Optional) Language code of the input text used to tokenize and delineate input text. The following languages are supported: `da, de, en, es, fi, fr, it, pt`. The default is English (`en`). If you pass a `languagecode-countrycode` format, only the `languagecode` part of the format is used. |
|`globalDefaultCaseSensitive` | (Optional) Default case sensitive value for the skill. If `defaultCaseSensitive` value of an entity isn't specified, this value will become the `defaultCaseSensitive` value for that entity. | |`globalDefaultAccentSensitive` | (Optional) Default accent sensitive value for the skill. If `defaultAccentSensitive` value of an entity isn't specified, this value will become the `defaultAccentSensitive` value for that entity. | |`globalDefaultFuzzyEditDistance` | (Optional) Default fuzzy edit distance value for the skill. If `defaultFuzzyEditDistance` value of an entity isn't specified, this value will become the `defaultFuzzyEditDistance` value for that entity. |
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-file-storage-integration.md
Previously updated : 02/21/2022 Last updated : 09/07/2022 # Index data from Azure Files
The Azure Files indexer can extract text from the following document formats:
[!INCLUDE [search-document-data-sources](../../includes/search-blob-data-sources.md)] +
+## How Azure Files are indexed
+
+By default, most files are indexed as a single search document in the index, including files with structured content, such as JSON or CSV, which are indexed as a single chunk of text.
+
+A compound or embedded document (such as a ZIP archive, a Word document with embedded Outlook email containing attachments, or an .MSG file with attachments) is also indexed as a single document. For example, all images extracted from the attachments of an .MSG file will be returned in the normalized_images field. If you have images, consider adding [AI enrichment](cognitive-search-concept-intro.md) to get more search utility from that content.
+
+Textual content of a document is extracted into a string field named "content". You can also extract standard and user-defined metadata.
++ ## Define the data source The data source definition specifies the data to index, credentials, and policies for identifying changes in the data. A data source is defined as an independent resource so that it can be used by multiple indexers.
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/feature-availability.md
The following table displays the current Defender for Cloud feature availability
| <li> [Microsoft Defender for Key Vault](../../defender-for-cloud/defender-for-key-vault-introduction.md) | GA | Not Available | | <li> [Microsoft Defender for Resource Manager](../../defender-for-cloud/defender-for-resource-manager-introduction.md) | GA | GA | | <li> [Microsoft Defender for Storage](../../defender-for-cloud/defender-for-storage-introduction.md) <sup>[6](#footnote6)</sup> | GA | GA |
-| <li> [Threat protection for Cosmos DB](../../defender-for-cloud/other-threat-protections.md#threat-protection-for-azure-cosmos-db) | GA | Not Available |
+| <li> [Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/defender-for-databases-enable-cosmos-protections.md) | GA | Not Available |
| <li> [Kubernetes workload protection](../../defender-for-cloud/kubernetes-workload-protections.md) | GA | GA | | <li> [Bi-directional alert synchronization with Microsoft Sentinel](../../sentinel/connect-azure-security-center.md) | Public Preview | Public Preview | | **Microsoft Defender for servers features** <sup>[7](#footnote7)</sup> | | |
sentinel Ama Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ama-migrate.md
The following tables show gap analyses for the log types that currently rely on
|**Multi-homing** | Collection only | Collection only | |**Application and service logs** | - | Collection only | |**Sysmon** | Collection only | Collection only |
-|**DNS logs** | - | Collection only |
+|**DNS logs** | [Windows DNS servers via AMA connector](connect-dns-ama.md) (Public preview) | [Windows DNS Server connector](data-connectors-reference.md#windows-dns-server-preview) (Public preview) |
### Linux logs
The following tables show gap analyses for the log types that currently rely on
## Recommended migration plan
-Each organization will have different metrics of success and internal migration processes. This section provides suggested guidance to considered when migrating from the Log Analytics MMA/OMS agent to the AMA, specifically for Microsoft Sentinel.
+Each organization will have different metrics of success and internal migration processes. This section provides suggested guidance to consider when migrating from the Log Analytics MMA/OMS agent to the AMA, specifically for Microsoft Sentinel.
**Include the following steps in your migration process**:
sentinel Connect Azure Windows Microsoft Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-windows-microsoft-services.md
# Connect Microsoft Sentinel to Azure, Windows, Microsoft, and Amazon services - [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] Microsoft Sentinel uses the Azure foundation to provide built-in, service-to-service support for data ingestion from many Azure and Microsoft 365 services, Amazon Web Services, and various Windows Server services. There are a few different methods through which these connections are made, and this article describes how to make these connections.
+This article describes the collection of Windows Security Events. For Windows DNS events, learn about the [Windows DNS Events via AMA connector (Preview)](connect-dns-ama.md).
+
+## Types of connections
+ This article discusses the following types of connectors: - **API-based** connections
You can find and query the data for each resource type using the table name that
## Windows agent-based connections
+> [!NOTE]
+>
+> The [Windows DNS Events via AMA connector (Preview)](connect-dns-ama.md) also uses the Azure Monitor Agent. This connector streams and filters events from Windows Domain Name System (DNS) server logs.
+ # [Azure Monitor Agent](#tab/AMA) > [!IMPORTANT]
sentinel Connect Dns Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-dns-ama.md
+
+ Title: Stream and filter Windows DNS logs with the AMA connector
+description: Use the AMA connector to upload and filter data from your Windows DNS server logs. You can then dive into your logs to protect your DNS servers from threats and attacks.
++ Last updated : 01/05/2022+
+#Customer intent: As a security operator, I want proactively monitor Windows DNS activities so that I can prevent threats and attacks on DNS servers.
++
+# Stream and filter data from Windows DNS servers with the AMA connector
+
+This article describes how to use the Azure Monitor Agent (AMA) connector to stream and filter events from your Windows Domain Name System (DNS) server logs. You can then deeply analyze your data to protect your DNS servers from threats and attacks.
+
+The AMA and its DNS extension are installed on your Windows Server to upload data from your DNS analytical logs to your Microsoft Sentinel workspace. [Learn about the connector](#windows-dns-events-via-ama-connector).
+
+> [!IMPORTANT]
+> The Windows DNS Events via AMA connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Overview
+
+### Why it's important to monitor DNS activity
+
+DNS is a widely used protocol that maps between host names and computer-readable IP addresses. Because DNS wasn't designed with security in mind, the service is highly targeted by malicious activity, making its logging an essential part of security monitoring.
+
+Some well-known threats that target DNS servers include:
+- DDoS attacks targeting DNS servers
+- DNS DDoS Amplification
+- DNS hijacking
+- DNS tunneling
+- DNS poisoning
+- DNS spoofing
+- NXDOMAIN attack
+- Phantom domain attacks
+
+### Windows DNS Events via AMA connector
+
+While some mechanisms were introduced to improve the overall security of this protocol, DNS servers are still a highly targeted service. Organizations can monitor DNS logs to better understand network activity, and to identify suspicious behavior or attacks targeting resources within the network. The **Windows DNS Events via AMA** connector provides this type of visibility.
+
+With the connector, you can:
+- Identify clients that try to resolve malicious domain names.
+- View and monitor request loads on DNS servers.
+- View dynamic DNS registration failures.
+- Identify frequently queried domain names and talkative clients.
+- Identify stale resource records.
+- View all DNS related logs in one place.
+
+### How collection works with the Windows DNS Events via AMA connector
+
+1. The AMA connector uses the installed DNS extension to collect and parse the logs.
+
+ > [!NOTE]
+ > The Windows DNS Events via AMA connector currently supports analytic event activities only.
+
+1. The connector streams the events to the Microsoft Sentinel workspace to be further analyzed.
+1. You can now use advanced filters to filter out specific events or information. With advanced filters, you upload only the valuable data you want to monitor, reducing costs and bandwidth usage.
+
+### Normalization using ASIM
+
+This connector is fully normalized using [Advanced Security Information Model (ASIM) parsers](normalization.md). The connector streams events originated from the analytical logs into the normalized table named `ASimDnsActivityLogs`. This table acts as a translator, using one unified language, shared across all DNS connectors to come.
+
+For a source-agnostic parser that unifies all DNS data and ensures that your analysis runs across all configured sources, use the [ASIM DNS unifying parser](dns-normalization-schema.md#unifying-parsers) `_Im_Dns`.
+
+The ASIM unifying parser complements the native `ASimDnsActivityLogs` table. While the native table is ASIM compliant, the parser is needed to add capabilities, such as aliases, available only at query time, and to combine `ASimDnsActivityLogs` with other DNS data sources.
+
+The [ASIM DNS schema](dns-normalization-schema.md) represents the DNS protocol activity, as logged in the Windows DNS server in the analytical logs. The schema is governed by official parameter lists and RFCs that define fields and values.
+
+See the [list of Windows DNS server fields](dns-ama-fields.md#asim-normalized-dns-schema) translated into the normalized field names.
+
+## Set up the Windows DNS over AMA connector
+
+You can set up the connector in two ways:
+- [Microsoft Sentinel portal](#set-up-the-connector-in-the-microsoft-sentinel-portal-ui). With this setup, you can create, manage, and delete a single Data Collection Rule (DCR) per workspace. Even if you define multiple DCRs via the API, the portal shows only a single DCR.
+- [API](#set-up-the-connector-with-the-api). With this setup, you can create, manage, and delete multiple DCRs.
+
+### Prerequisites
+
+Before you begin, verify that you have:
+
+- The Microsoft Sentinel solution enabled.
+- A defined Microsoft Sentinel workspace.
+- Windows Server 2012 R2 or later, with the auditing hotfix installed.
+- A Windows DNS Server with analytical logs enabled.
+- To collect events from any system that isn't an Azure virtual machine, ensure that [Azure Arc](../azure-monitor/agents/azure-monitor-agent-manage.md) is installed. Install and enable Azure Arc before you enable the Azure Monitor Agent-based connector. This requirement includes:
+ - Windows servers installed on physical machines
+ - Windows servers installed on on-premises virtual machines
+ - Windows servers installed on virtual machines in non-Azure clouds
+
+### Set up the connector in the Microsoft Sentinel portal (UI)
+
+#### Open the connector page and create the DCR
+
+1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
+1. In the **Data connectors** blade, in the search bar, type *DNS*.
+1. Select the **Windows DNS Events via AMA (Preview)** connector.
+1. Below the connector description, select **Open connector page**.
+1. In the **Configuration** area, select **Create data collection rule**. You can create a single DCR per workspace. If you need to create multiple DCRs, [use the API](#set-up-the-connector-with-the-api).
+
+The DCR name, subscription, and resource group are automatically set based on the workspace name, the current subscription, and the resource group the connector was selected from.
++
+#### Define resources (VMs)
+
+1. Select the **Resources** tab and select **Add Resource(s)**.
+1. Select the VMs on which you want to install the connector to collect logs.
+
+ :::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-select-resource.png" alt-text="Screenshot of selecting resources for the Windows D N S over A M A connector.":::
+
+1. Review your changes and select **Save** > **Apply**.
+
+#### Filter out undesired events
+
+When you use filters, you exclude the event that the filter specifies. In other words, Microsoft Sentinel doesn't collect data for the specified event. While this step isn't required, it can help reduce costs and simplify event triage.
+
+To create filters:
+
+1. On the connector page, in the **Configuration** area, select **Add data collection filters**.
+1. Type a name for the filter and select the filter type. The filter type is a parameter that reduces the number of collected events. The parameters are normalized according to the DNS normalized schema. See the list of [available fields for filtering](dns-ama-fields.md#available-fields-for-filtering).
+
+ :::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-create-filter.png" alt-text="Screenshot of creating a filter for the Windows D N S over A M A connector.":::
+
+1. To add complex filters, select **Add field to filter** and add the relevant field.
+
+ :::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-filter-fields.png" alt-text="Screenshot of adding fields to a filter for the Windows D N S over A M A connector.":::
+
+1. To add new filters, select **Add new filters**.
+1. To edit or delete existing filters or fields, select the edit or delete icons in the table under the **Configuration** area. To add fields or filters, select **Add data collection filters** again.
+1. To save and deploy the filters to your connectors, select **Apply changes**.
+
+### Set up the connector with the API
+
+You can create [DCRs](/rest/api/monitor/data-collection-rules) using the API. Use this option if you need to create multiple DCRs.
+
+Use this example as a template to create or update a DCR:
+
+#### Request URL and header
+
+```rest
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
+```
+
+#### Request body
+
+```rest
+
+{
+ "properties": {
+ "dataSources": {
+ "windowsEventLogs": [],
+ "extensions": [
+ {
+ "streams": [
+ "Microsoft-ASimDnsActivityLogs"
+ ],
+ "extensionName": "MicrosoftDnsAgent",
+ "extensionSettings": {
+ "Filters": [
+ {
+ "FilterName": "SampleFilter",
+ "Rules": [
+ {
+ "Field": "EventId",
+ "FieldValues": [
+ "260"
+ ]
+ }
+ ]
+ }
+ ]
+ },
+ "name": "SampleDns"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.OperationalInsights/workspaces/{sentinelWorkspaceName}",
+ "workspaceId": {WorkspaceGuid}",
+ "name": "WorkspaceDestination"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-ASimDnsActivityLogs"
+ ],
+ "destinations": [
+ " WorkspaceDestination "
+ ]
+ }
+ ]
+ },
+ "location": "eastus2",
+ "tags": {},
+ "kind": "Windows",
+ "id":"/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Insights/dataCollectionRules/{workspaceName}-microsoft-sentinel-asimdnsactivitylogs ",
+ "name": " {workspaceName}-microsoft-sentinel-asimdnsactivitylogs ",
+ "type": "Microsoft.Insights/dataCollectionRules",
+}
+```
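If you prefer to script this call rather than issue it by hand, a minimal sketch might look like the following. It assumes the `azure-identity` and `requests` packages, placeholder subscription, resource group, and rule names, and a local `body.json` file holding the request body shown above.

```python
import json
from datetime import datetime

import requests
from azure.identity import DefaultAzureCredential

# Placeholder identifiers.
subscription_id = "<subscriptionId>"
resource_group = "<resourceGroupName>"
rule_name = "<dataCollectionRuleName>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Insights"
    f"/dataCollectionRules/{rule_name}?api-version=2019-11-01-preview"
)

# Acquire an Azure Resource Manager token for the signed-in identity.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# body.json holds the DCR definition shown in the request body above.
with open("body.json") as f:
    body = json.load(f)

response = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=body,
)
response.raise_for_status()
print(f"{datetime.utcnow().isoformat()} - DCR created or updated:", response.json()["name"])
```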
+
+## Use advanced filters
+
+DNS server event logs can contain a huge number of events. You can use advanced filtering to filter out unneeded events before the data is uploaded, saving valuable triage time and costs. The filters remove the unneeded data from the stream of events uploaded to your workspace.
+
+Filters are based on a combination of numerous fields.
+- You can use multiple values for each field using a comma-separated list.
+- To create compound filters, use different fields with an AND relation.
+- To combine different filters, use an OR relation between them.
+
+Review the [available fields for filtering](dns-ama-fields.md#available-fields-for-filtering).
+
+### Use wildcards
+
+You can use wildcards in advanced filters. Review these considerations when using wildcards:
+
+- Add a dot after each asterisk (`*.`).
+- Don't use spaces between the list of domains.
+- Wildcards apply to the domain's subdomains only, including `www.domain.com`, regardless of the protocol. For example, if you use `*.domain.com` in an advanced filter:
+ - The filter applies to `www.domain.com` and `subdomain.domain.com`, regardless of whether the protocol is HTTPS, FTP, and so on.
+ - The filter doesn't apply to `domain.com`. To apply a filter to `domain.com`, specify the domain directly, without using a wildcard.
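To illustrate those rules, the following small sketch mimics the matching behavior described above; it's only an illustration of the documented semantics, not the connector's actual filter logic.

```python
def matches_wildcard_filter(domain: str, pattern: str) -> bool:
    """Mimic the documented behavior: '*.domain.com' matches subdomains
    such as 'www.domain.com', but not the bare 'domain.com'."""
    if pattern.startswith("*."):
        suffix = pattern[2:]  # e.g. 'domain.com'
        return domain != suffix and domain.endswith("." + suffix)
    return domain == pattern  # exact match when no wildcard is used


# '*.domain.com' applies to subdomains only.
print(matches_wildcard_filter("www.domain.com", "*.domain.com"))  # True
print(matches_wildcard_filter("domain.com", "*.domain.com"))      # False
# To match 'domain.com' itself, specify the domain directly.
print(matches_wildcard_filter("domain.com", "domain.com"))        # True
```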
+
+### Advanced filter examples
+
+#### Don't collect specific event IDs
+
+This filter instructs the connector not to collect EventID 256 or EventID 257 or EventID 260 with IPv6 addresses.
+
+**Using the Microsoft Sentinel portal**:
+
+1. Create a filter with the **EventOriginalType** field, using the **Equals** operator, with the values **256**, **257**, and **260**.
+
+ :::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-eventid-filter.png" alt-text="Screenshot of filtering out event IDs for the Windows D N S over A M A connector.":::
+
+1. Create a filter with the **EventOriginalType** field defined above, and using the **And** operator, also including the **DnsQueryTypeName** field set to **AAAA**.
+
+ :::image type="content" source="media/connect-dns-ama/windows-dns-ama-connector-eventid-dnsquery-filter.png" alt-text="Screenshot of filtering out event IDs and IPv6 addresses for the Windows D N S over A M A connector.":::
+
+**Using the API**:
+
+```rest
+"Filters": [
+ {
+ "FilterName": "SampleFilter",
+ "Rules": [
+ {
+ "Field": "EventOriginalType",
+ "FieldValues": [
+ "256", "257", "260"
+ ]
+ },
+ {
+ "Field": "DnsQueryTypeName",
+ "FieldValues": [
+ "AAAA"
+ ]
+ }
+ ]
+ },
+ {
+ "FilterName": "EventResultDetails",
+ "Rules": [
+ {
+ "Field": "EventOriginalType",
+ "FieldValues": [
+ "230"
+ ]
+ },
+ {
+ "Field": "EventResultDetails",
+ "FieldValues": [
+ "BADKEY","NOTZONE"
+ ]
+ }
+ ]
+ }
+]
+```
+
+#### Don't collect events with specific domains
+
+This filter instructs the connector not to collect events from any subdomains of microsoft.com, google.com, amazon.com, or events from facebook.com or center.local.
+
+**Using the Microsoft Sentinel portal**:
+
+Set the **DnsQuery** field using the **Equals** operator, with the list *\*.microsoft.com,\*.google.com,facebook.com,\*.amazon.com,center.local*.
+
+Review these considerations for [using wildcards](#use-wildcards).
++
+To define different values in a single field, use the **OR** operator.
+
+**Using the API**:
+
+Review these considerations for [using wildcards](#use-wildcards).
+
+```rest
+"Filters": [
+    {
+        "FilterName": "SampleFilter",
+        "Rules": [
+            {
+                "Field": "DnsQuery",
+                "FieldValues": [
+                    "*.microsoft.com", "*.google.com", "facebook.com", "*.amazon.com", "center.local"
+                ]
+            }
+        ]
+    }
+]
+```
+
+## Next steps
+In this article, you learned how to set up the Windows DNS events via AMA connector to upload data and filter your Windows DNS logs. To learn more about Microsoft Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
# Find your Microsoft Sentinel data connector - This article describes how to deploy data connectors in Microsoft Sentinel, listing all supported, out-of-the-box data connectors, together with links to generic deployment procedures, and extra steps required for specific connectors. > [!TIP]
This article describes how to deploy data connectors in Microsoft Sentinel, list
| **Azure Functions and the REST API** | [Use Azure Functions to connect Microsoft Sentinel to your data source](connect-azure-functions-template.md) | | **Syslog** | [Collect data from Linux-based sources using Syslog](connect-syslog.md) | | **Custom logs** | [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md) |
- |
> [!NOTE] > The **Azure service-to-service integration** data ingestion method links to three different sections of its article, depending on the connector type. Each connector's section below specifies the section within that article that it links to.
Configure eNcore to stream data via TCP to the Log Analytics Agent. This configu
## DNS (Preview)
-**See [Windows DNS Server (Preview)](#windows-dns-server-preview).**
+**See [Windows DNS Events via AMA (Preview)](#windows-dns-events-via-ama-preview) or [Windows DNS Server (Preview)](#windows-dns-server-preview).**
## Dynamics 365
Follow the instructions to obtain the credentials.
| **Vendor documentation/<br>installation instructions** | Contact [WireX support](https://wirexsystems.com/contact-us/) in order to configure your NFP solution to send Syslog messages in CEF format. | | **Supported by** | [WireX Systems](mailto:support@wirexsystems.com) |
+## Windows DNS Events via AMA (Preview)
+| Connector attribute | Description |
+| | |
+| **Data ingestion method** | **Azure service-to-service integration: <br>[Azure Monitor Agent-based connection](connect-dns-ama.md)** |
+| **Log Analytics table(s)** | ASimDnsActivityLogs |
+| **DCR support** | Standard DCR |
+| **Supported by** | Microsoft |
## Windows DNS Server (Preview)
+This connector uses the legacy agent. We recommend that you use the DNS over AMA connector above.
+ | Connector attribute | Description | | | | | **Data ingestion method** | **Azure service-to-service integration: <br>[Log Analytics agent-based connections](connect-azure-windows-microsoft-services.md?tabs=LAA#windows-agent-based-connections) (Legacy)** |
We recommend installing the [Advanced Security Information Model (ASIM)](normali
| **DCR support** | Standard DCR | | **Supported by** | Microsoft | -
-See also: [**Security events via legacy agent**](#security-events-via-legacy-agent-windows) connector.
+See also:
+- [Windows DNS Events via AMA connector (Preview)](connect-dns-ama.md): Uses the Azure Monitor Agent to stream and filter events from Windows Domain Name System (DNS) server logs.
+- [**Security events via legacy agent**](#security-events-via-legacy-agent-windows) connector.
### Configure the Security events / Windows Security Events connector for anomalous RDP login detection
sentinel Dns Ama Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dns-ama-fields.md
+
+ Title: Microsoft Sentinel DNS over AMA connector reference - available fields and normalization schema
+description: This article lists available fields for filtering DNS data using the Windows DNS Events via AMA connector, and the normalization schema for Windows DNS server fields.
+++ Last updated : 09/01/2022++
+# DNS over AMA connector reference - available fields and normalization schema
+
+Microsoft Sentinel allows you to stream and filter events from your Windows Domain Name System (DNS) server logs to the `ASimDnsActivityLogs` normalized schema table. This article describes the fields used for filtering the data, and the normalization schema for the Windows DNS server fields.
+
+The Azure Monitor Agent (AMA) and its DNS extension are installed on your Windows Server to upload data from your DNS analytical logs to your Microsoft Sentinel workspace. You stream and filter the data using the [Windows DNS Events via AMA connector](connect-dns-ama.md).
+
+## Available fields for filtering
+
+This table shows the available fields. The field names are normalized using the [DNS schema](#asim-normalized-dns-schema).
+
+|Field name |Values |Description |
+||||
+|EventOriginalType |Numbers between 256 and 280 |The Windows DNS eventID, which indicates the type of the DNS protocol event. |
+|EventResultDetails |• NOERROR<br>• FORMERR<br>• SERVFAIL<br>• NXDOMAIN<br>• NOTIMP<br>• REFUSED<br>• YXDOMAIN<br>• YXRRSET<br>• NXRRSET<br>• NOTAUTH<br>• NOTZONE<br>• DSOTYPENI<br>• BADVERS<br>• BADSIG<br>• BADKEY<br>• BADTIME<br>• BADALG<br>• BADTRUNC<br>• BADCOOKIE |The operation's DNS result string as defined by the Internet Assigned Numbers Authority (IANA). |
+|DvcIpAdrr |IP addresses |The IP address of the server reporting the event. This field also includes geo-location and malicious IP information. |
+|DnsQuery |Domain names (FQDN) |The string representing the domain name to be resolved.<br>• Can accept multiple values in a comma-separated list, and wildcards. For example:<br>`*.microsoft.com,google.com,facebook.com`<br>• Review these considerations for [using wildcards](connect-dns-ama.md#use-wildcards). |
+|DnsQueryTypeName |• A<br>• NS<br>• MD<br>• MF<br>• CNAME<br>• SOA<br>• MB<br>• MG<br>• MR<br>• NULL<br>• WKS<br>• PTR<br>• HINFO<br>• MINFO<br>• MX<br>• TXT<br>• RP<br>• AFSDB<br>• X25<br>• ISDN<br>• RT<br>• NSAP<br>• NSAP-PTR<br>• SIG<br>• KEY<br>• PX<br>• GPOS<br>• AAAA<br>• LOC<br>• NXT<br>• EID<br>• NIMLOC<br>• SRV |The requested DNS attribute. The DNS resource record type name as defined by IANA. |
+
+## ASIM normalized DNS schema
+
+This table describes and translates Windows DNS server fields into the normalized field names as they appear in the [DNS normalization schema](dns-normalization-schema.md#schema-details).
+
+|Windows DNS field name |Normalized field name |Type |Description |
+|||||
+|EventID |EventOriginalType |String |The original event type or ID. |
+|RCODE |EventResult |String |The outcome of the event (success, partial, failure, NA). |
+|RCODE parsed |EventResultDetails |String |The DNS response code as defined by IANA. |
+|InterfaceIP |DvcIpAdrr |String |The IP address of the event reporting device or interface. |
+|AA |DnsFlagsAuthoritative |Integer |Indicates whether the response from the server was authoritative. |
+|AD |DnsFlagsAuthenticated |Integer |Indicates that the server verified all of the data in the answer and the authority of the response, according to the server policies. |
+|RQNAME |DnsQuery |String |The domain name to be resolved. |
+|QTYPE |DnsQueryType |Integer |The DNS resource record type as defined by IANA. |
+|Port |SrcPortNumber |Integer |Source port sending the query. |
+|Source |SrcIpAddr |IP address |The IP address of the client sending the DNS request. For a recursive DNS request, this value is typically the reporting device's IP, in most cases, `127.0.0.1`. |
+|ElapsedTime |DnsNetworkDuration |Integer |The time it took to complete the DNS request. |
+|GUID |DnsSessionId |String |The DNS session identifier as reported by the reporting device. |
+
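To check the normalized data from code, a sketch like the following queries the `ASimDnsActivityLogs` table with the `azure-monitor-query` Python package; the workspace ID is a placeholder and the aggregation is only an example of using the normalized fields described above.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Placeholder Log Analytics workspace ID (GUID).
workspace_id = "<workspace-guid>"

client = LogsQueryClient(DefaultAzureCredential())

# Count DNS events per result code over the last day, using normalized fields.
query = """
ASimDnsActivityLogs
| summarize Count = count() by EventResultDetails
| order by Count desc
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```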
+## Next steps
+
+In this article, you learned about the fields used to filter DNS log data using the Windows DNS events via AMA connector. To learn more about Microsoft Sentinel, see the following articles:
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Normalization Parsers List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-parsers-list.md
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Azure Firewall logs** | |`_Im_NetworkSession_AzureFirewallVxx`| | **Azure Monitor VMConnection** | Collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md). | `_Im_NetworkSession_VMConnectionVxx` | | **Azure Network Security Groups (NSG) logs** | Collected as part of the Azure Monitor [VM Insights solution](../azure-monitor/vm/vminsights-overview.md). | `_Im_NetworkSession_AzureNSGVxx` |
-| **Checkpoint Firewall-1** | Collected using CEF. | `__Im_NetworkSession_CheckPointFirewallVxx` |
+| **Checkpoint Firewall-1** | Collected using CEF. | `_Im_NetworkSession_CheckPointFirewallVxx`* |
| **Cisco Meraki** | Collected using the Cisco Meraki API connector. | `_Im_NetworkSession_CiscoMerakiVxx` |
-| **Corelight Zeek** | Collected using the Corelight Zeek connector. | `_im_NetworkSession_CorelightZeekVxx` |
+| **Corelight Zeek** | Collected using the Corelight Zeek connector. | `_im_NetworkSession_CorelightZeekVxx`* |
| **Fortigate FortiOS** | IP connection logs collected using Syslog. | `_Im_NetworkSession_FortinetFortiGateVxx` | | **Microsoft 365 Defender for Endpoint** | | `_Im_NetworkSession_Microsoft365DefenderVxx`| | **Microsoft Defender for IoT - Endpoint** | | `_Im_NetworkSession_MD4IoTVxx` |
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Sysmon for Linux** (event 3) | Collected using the Log Analytics Agent<br> or the Azure Monitor Agent. |`_Im_NetworkSession_LinuxSysmonVxx` | | **Vectra AI** | | `_Im_NetworkSession_VectraIAVxx` | | **Windows Firewall logs** | Collected as Windows events using the Log Analytics Agent (Event table) or Azure Monitor Agent (WindowsEvent table). Supports Windows events 5150 to 5159. | `_Im_NetworkSession_MicrosoftWindowsEventFirewallVxx`|
-| **Watchguard FirewareOW** | Collected using Syslog. | `_Im_NetworkSession_WatchGuardFirewareOSVxx` |
+| **Watchguard FirewareOW** | Collected using Syslog. | `_Im_NetworkSession_WatchGuardFirewareOSVxx`* |
| **Zscaler ZIA firewall logs** | Collected using CEF. | `_Im_NetworkSessionZscalerZIAVxx` |
+Note that the parsers marked with (*) are available for deployment from GitHub and are not yet built into workspaces.
+ Deploy the workspace deployed parsers from the [Microsoft Sentinel GitHub repository](https://aka.ms/AsimNetworkSession). ## Process Event parsers
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
Besides being used to import threat indicators, threat intelligence feeds can al
### RiskIQ Passive Total -- Find and enable incident enrichment playbooks for [RiskIQ Passive Total](https://www.riskiq.com/products/passivetotal/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks). Search for subfolders beginning with "Enrich-SentinelIncident-RiskIQ-".
+- Find and enable incident enrichment playbooks for [RiskIQ Passive Total](https://www.riskiq.com/products/passivetotal/) in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/RiskIQ/Playbooks).
- See [more information](https://techcommunity.microsoft.com/t5/azure-sentinel/enrich-azure-sentinel-security-incidents-with-the-riskiq/ba-p/1534412) on working with RiskIQ playbooks. - See the RiskIQ PassiveTotal Logic App [connector documentation](/connectors/riskiqpassivetotal/).
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## September 2022 - [Add entities to threat intelligence (Preview)](#add-entities-to-threat-intelligence-preview)
+- [Windows DNS Events via AMA connector (Preview)](#windows-dns-events-via-ama-connector-preview)
### Add entities to threat intelligence (Preview)
Microsoft Sentinel allows you to flag the entity as malicious, right from within
Learn how to [add an entity to your threat intelligence](add-entity-to-threat-intelligence.md).
+### Windows DNS Events via AMA connector (Preview)
+
+You can now use the new [Windows DNS Events via AMA connector](connect-dns-ama.md) to stream and filter events from your Windows Domain Name System (DNS) server logs to the `ASimDnsActivityLogs` normalized schema table. You can then dive into your data to protect your DNS servers from threats and attacks.
+
+The Azure Monitor Agent (AMA) and its DNS extension are installed on your Windows Server to upload data from your DNS analytical logs to your Microsoft Sentinel workspace.
+
+Here are some benefits of using AMA for DNS log collection:
+
+- AMA is faster compared to the existing Log Analytics Agent (MMA/OMS). AMA handles up to 5000 events per second (EPS) compared to 2000 EPS with the existing agent.
+- AMA provides centralized configuration using Data Collection Rules (DCRs), and also supports multiple DCRs.
+- AMA supports transformation from the incoming stream into other data tables.
+- AMA supports basic and advanced filtering of the data. The data is filtered on the DNS server and before the data is uploaded, which saves time and resources.
++ ## August 2022 - [Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#heads-up-microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)
Azure resources such as Azure Virtual Machines, Azure Storage Accounts, Azure Ke
You can now gain a 360-degree view of your resource security with the new entity page, which provides several layers of security information about your resources.
-First, it provides some basic details about the resource: where it is located, when it was created, to which resource group it belongs, the Azure tags it contains, etc. Further, it surfaces information about access management: how many owners, contributors, and other roles are authorized to access the resource, and what networks are allowed access to it; what is the permission model of the key vault, is public access to blobs allowed in the storage account, and more. Finally, the page also includes some integrations, such as Microsoft Defender for Cloud, Defender for Endpoint, and Purview, that enrich the information about the resource.
+First, it provides some basic details about the resource: where it is located, when it was created, to which resource group it belongs, the Azure tags it contains, etc. Further, it surfaces information about access management: how many owners, contributors, and other roles are authorized to access the resource, and what networks are allowed access to it; what is the permission model of the key vault, is public access to blobs allowed in the storage account, and more. Finally, the page also includes some integrations, such as Microsoft Defender for Cloud, Defender for Endpoint, and Purview that enrich the information about the resource.
### New data sources for User and entity behavior analytics (UEBA) (Preview)
Learn more about [relating alerts to incidents](relate-alerts-to-incidents.md).
### Similar incidents (Preview)
-When triaging or investigating an incident, the context of the entirety of incidents in your SOC can be extremely useful. For example, other incidents involving the same entities can represent useful context that will allow you to reach the right decision faster. Now there's a new tab in the incident page that lists other incidents that are similar to the incident you are investigating. Some common use cases for using similar incidents are:
+When you triage or investigate an incident, the context of the other incidents in your SOC can be extremely useful. For example, other incidents involving the same entities can provide useful context that helps you reach the right decision faster. Now there's a new tab on the incident page that lists other incidents that are similar to the one you're investigating. Some common use cases for similar incidents are:
- Finding other incidents that might be part of a larger attack story. - Using a similar incident as a reference for incident handling. The way the previous incident was handled can act as a guide for handling the current one.
For more information, see:
In addition to supporting MITRE ATT&CK tactics, your entire Microsoft Sentinel user flow now also supports MITRE ATT&CK techniques.
-When creating or editing [analytics rules](detect-threats-custom.md), map the rule to one or more specific tactics *and* techniques. When searching for rules on the **Analytics** page, filter by tactic and technique to narrow your search results.
+When creating or editing [analytics rules](detect-threats-custom.md), map the rule to one or more specific tactics *and* techniques. When you search for rules on the **Analytics** page, filter by tactic and technique to narrow your search results.
:::image type="content" source="media/whats-new/mitre-in-analytics-rules.png" alt-text="Screenshot of MITRE technique and tactic filtering." lightbox="media/whats-new/mitre-in-analytics-rules.png":::
For more information, see:
Kusto Query Language is used in Microsoft Sentinel to search, analyze, and visualize data, as the basis for detection rules, workbooks, hunting, and more.
-The new **Advanced KQL for Microsoft Sentinel** interactive workbook is designed to help you improve your Kusto Query Language proficiency by taking a use case-driven approach based on:
+The new **Advanced KQL for Microsoft Sentinel** interactive workbook is designed to help you improve your Kusto Query Language proficiency by taking a use case-driven approach.
+
+The workbook:
-- Grouping Kusto Query Language operators / commands by category for easy navigation.-- Listing the possible tasks a user would perform with Kusto Query Language in Microsoft Sentinel. Each task includes operators used, sample queries, and use cases.-- Compiling a list of existing content found in Microsoft Sentinel (analytics rules, hunting queries, workbooks and so on) to provide additional references specific to the operators you want to learn.-- Allowing you to execute sample queries on-the-fly, within your own environment or in "LA Demo" - a public [Log Analytics demo environment](https://aka.ms/lademo). Try the sample Kusto Query Language statements in real time without the need to navigate away from the workbook.
+- Groups Kusto Query Language operators / commands by category for easy navigation.
+- Lists the possible tasks a user would perform with Kusto Query Language in Microsoft Sentinel. Each task includes operators used, sample queries, and use cases.
+- Compiles a list of existing content found in Microsoft Sentinel (analytics rules, hunting queries, workbooks and so on) to provide additional references specific to the operators you want to learn.
+- Allows you to execute sample queries on-the-fly, within your own environment or in "LA Demo" - a public [Log Analytics demo environment](https://aka.ms/lademo). Try the sample Kusto Query Language statements in real time without the need to navigate away from the workbook.
Accompanying the new workbook is an explanatory [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766), as well as a new [introduction to Kusto Query Language](kusto-overview.md) and a [collection of learning and skilling resources](kusto-resources.md) in the Microsoft Sentinel documentation.
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Previously updated : 02/17/2021 Last updated : 09/07/2022 ms.devlang: python
Identities are evaluated in the following order:
4. Owning group or named group 5. All other users
-If more than one of these identities applies to a security principal, then the permission level associated with the first identity is granted. For example, if a security principal is both the owning user and a named user, then the permission level associated with the owning user applies.
+If more than one of these identities applies to a security principal, then the permission level associated with the first identity is granted. For example, if a security principal is both the owning user and a named user, then the permission level associated with the owning user applies.
+
+Named groups are all considered together. If a security principal is a member of more than one named group, then the system evaluates each group until the desired permission is granted. If none of the named groups provide the desired permission, then the system moves on to evaluate a request against the permission associated with all other users.
The following pseudocode represents the access check algorithm for storage accounts. This algorithm shows the order in which identities are evaluated.
storage Data Lake Storage Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-cli.md
az storage fs access set --acl "user::rw-,group::rw-,other::-wx" -p my-directory
``` > [!NOTE]
-> To a set the ACL of a specific group or user, use their respective object IDs. For example, `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` or `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+> To set the ACL of a specific group or user, use their respective object IDs. For example, to set the ACL of a **group**, use `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. To set the ACL of a **user**, use `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
The following image shows the output after setting the ACL of a file.
storage Data Lake Storage Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md
$file.ACL
``` > [!NOTE]
-> To a set the ACL of a specific group or user, use their respective object IDs. For example, `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` or `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+> To set the ACL of a specific group or user, use their respective object IDs. For example, to set the ACL of a **group**, use `group:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`. To set the ACL of a **user**, use `user:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
The following image shows the output after setting the ACL of a file.
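As a fuller sketch of how the object ID plugs into an ACL update, the following PowerShell builds an ACL that includes a named group and applies it to a file. The storage account, filesystem, path, and GUID are placeholders:

```powershell
# Build an ACL with owner, owning group, other, and a named-group entry identified by its object ID (placeholder values)
$ctx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -UseConnectedAccount

$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType user -Permission "rw-"
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType group -Permission "r--" -InputObject $acl
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType other -Permission "---" -InputObject $acl
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType group -EntityId "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" -Permission "r-x" -InputObject $acl

# Apply the ACL to a file in the container (filesystem)
Update-AzDataLakeGen2Item -Context $ctx -FileSystem "my-file-system" -Path "my-directory/upload.txt" -Acl $acl
```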
storage Storage Dotnet How To Use Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-dotnet-how-to-use-files.md
if (share.Exists())
-For more information about creating and using shared access signatures, see [How a shared access signature works](../common/storage-sas-overview.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json#how-a-shared-access-signature-works).
+For more information about creating and using shared access signatures, see [How a shared access signature works](../common/storage-sas-overview.md?toc=/azure/storage/files/toc.json#how-a-shared-access-signature-works).
## Copy files Beginning with version 5.x of the Azure Files client library, you can copy a file to another file, a file to a blob, or a blob to a file.
-You can also use AzCopy to copy one file to another or to copy a blob to a file or the other way around. See [Get started with AzCopy](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
+You can also use AzCopy to copy one file to another or to copy a blob to a file or the other way around. See [Get started with AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json).
> [!NOTE] > If you are copying a blob to a file, or a file to a blob, you must use a shared access signature (SAS) to authorize access to the source object, even if you are copying within the same storage account.
For more information about Azure Files, see the following resources:
### Tooling support for File storage -- [Get started with AzCopy](../common/storage-use-azcopy-v10.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json)
+- [Get started with AzCopy](../common/storage-use-azcopy-v10.md?toc=/azure/storage/files/toc.json)
- [Troubleshoot Azure Files problems in Windows](./storage-troubleshoot-windows-file-connection-problems.md) ### Reference - [Azure Storage APIs for .NET](/dotnet/api/overview/azure/storage)-- [File Service REST API](/rest/api/storageservices/File-Service-REST-API)
+- [File Service REST API](/rest/api/storageservices/File-Service-REST-API)
storage Storage Files Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-faq.md
Title: Frequently asked questions (FAQ) for Azure Files | Microsoft Docs
description: Get answers to Azure Files frequently asked questions. You can mount Azure file shares concurrently on cloud or on-premises Windows, Linux, or macOS deployments. Previously updated : 07/21/2022 Last updated : 09/08/2022
* <a id="ad-file-mount-cname"></a> **Can I use the canonical name (CNAME) to mount an Azure file share while using identity-based authentication (AD DS or Azure AD DS)?**
- No, this scenario isn't supported.
+ No, this scenario isn't supported. As an alternative to CNAME, you can use DFS Namespaces with SMB Azure file shares. To learn more, see [How to use DFS Namespaces with Azure Files](files-manage-namespaces.md).
* <a id="ad-vm-subscription"></a> **Can I access Azure file shares with Azure AD credentials from a VM under a different subscription?**
storage Monitor Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md
Get started with any of these guides.
| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them | | [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. | | [Azure Queue Storage monitoring data reference](monitor-queue-storage-reference.md) | A reference of the logs and metrics created by Azure Queue Storage |
-| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| Common performance issues and guidance about how to troubleshoot them. |
-| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)| Common issues with connecting clients and how to troubleshoot them.|
+| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/queues/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
+| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/queues/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/queues/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
storage Storage C Plus Plus How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-c-plus-plus-how-to-use-queues.md
Add the following include statements to the top of the C++ file where you want t
## Set up an Azure Storage connection string
-An Azure Storage client uses a storage connection string to store endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the storage access key for the storage account listed in the [Azure portal](https://portal.azure.com) for the `AccountName` and `AccountKey` values. For information on storage accounts and access keys, see [About Azure Storage accounts](../common/storage-account-create.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json). This example shows how you can declare a static field to hold the connection string:
+An Azure Storage client uses a storage connection string to store endpoints and credentials for accessing data management services. When running in a client application, you must provide the storage connection string in the following format, using the name of your storage account and the storage access key for the storage account listed in the [Azure portal](https://portal.azure.com) for the `AccountName` and `AccountKey` values. For information on storage accounts and access keys, see [About Azure Storage accounts](../common/storage-account-create.md?toc=/azure/storage/queues/toc.json). This example shows how you can declare a static field to hold the connection string:
```cpp // Define the connection-string with your values. const utility::string_t storage_connection_string(U("DefaultEndpointsProtocol=https;AccountName=your_storage_account;AccountKey=your_storage_account_key")); ```
-To test your application in your local Windows computer, you can use the [Azurite storage emulator](../common/storage-use-azurite.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json). Azurite is a utility that simulates Azure Blob Storage and Queue Storage on your local development machine. The following example shows how you can declare a static field to hold the connection string to your local storage emulator:
+To test your application in your local Windows computer, you can use the [Azurite storage emulator](../common/storage-use-azurite.md?toc=/azure/storage/queues/toc.json). Azurite is a utility that simulates Azure Blob Storage and Queue Storage on your local development machine. The following example shows how you can declare a static field to hold the connection string to your local storage emulator:
```cpp // Define the connection-string with Azurite.
Now that you've learned the basics of Queue Storage, follow these links to learn
- [How to use Blob Storage from C++](../blobs/quickstart-blobs-c-plus-plus.md) - [How to use Table Storage from C++](../../cosmos-db/table-storage-how-to-use-c-plus.md)-- [List Azure Storage resources in C++](../common/storage-c-plus-plus-enumeration.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json)
+- [List Azure Storage resources in C++](../common/storage-c-plus-plus-enumeration.md?toc=/azure/storage/queues/toc.json)
- [Azure Storage client library for C++ reference](https://azure.github.io/azure-storage-cpp)-- [Azure Storage documentation](https://azure.microsoft.com/documentation/services/storage/)
+- [Azure Storage documentation](/azure/storage/)
storage Storage Quickstart Queues Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-dotnet.md
Additional resources:
- [API reference documentation](/dotnet/api/azure.storage.queues) - [Library source code](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Queues) - [Package (NuGet)](https://www.nuget.org/packages/Azure.Storage.Queues/12.0.0)-- [Samples](../common/storage-samples-dotnet.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json#queue-samples)
+- [Samples](../common/storage-samples-dotnet.md?toc=/azure/storage/queues/toc.json#queue-samples)
## Prerequisites
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
No. Azure Compute supports the metrics on disks. For more information, see [Per
| [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md)| The basics of logs and how to collect and analyze them | | [Transition to metrics in Azure Monitor](../common/storage-metrics-migration.md) | Move from Storage Analytics metrics to metrics in Azure Monitor. | | [Azure Table storage monitoring data reference](monitor-table-storage-reference.md)| A reference of the logs and metrics created by Azure Table Storage |
-| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)| Common performance issues and guidance about how to troubleshoot them. |
-| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)| Common availability issues and guidance about how to troubleshoot them.|
-| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=%2fazure%2fstorage%2ftables%2ftoc.json)| Common issues with connecting clients and how to troubleshoot them.|
+| [Troubleshoot performance issues](../common/troubleshoot-storage-performance.md?toc=/azure/storage/tables/toc.json)| Common performance issues and guidance about how to troubleshoot them. |
+| [Troubleshoot availability issues](../common/troubleshoot-storage-availability.md?toc=/azure/storage/tables/toc.json)| Common availability issues and guidance about how to troubleshoot them.|
+| [Troubleshoot client application errors](../common/troubleshoot-storage-client-application-errors.md?toc=/azure/storage/tables/toc.json)| Common issues with connecting clients and how to troubleshoot them.|
stream-analytics Automation Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/automation-powershell.md
Once it's provisioned, let's start with its overall configuration.
The Function needs permissions to start and stop the ASA job. We'll assign these permissions via a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](../app-service/overview-managed-identity.md?tabs=ps%2cportal&toc=%2fazure%2fazure-functions%2ftoc.json).
+The first step is to enable a **system-assigned managed identity** for the Function, following that [procedure](../app-service/overview-managed-identity.md?tabs=ps%2cportal&toc=/azure/azure-functions/toc.json).
Now we can grant the right permissions to that identity on the ASA job we want to auto-pause. For that, in the Portal for the **ASA job** (not the Function one), in **Access control (IAM)**, add a **role assignment** to the role *Contributor* for a member of type *Managed Identity*, selecting the name of the Function above.
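If you'd rather script both steps, here's a minimal PowerShell sketch. The function app, resource group, and job names are placeholders, and it assumes the Az.Functions, Az.StreamAnalytics, and Az.Resources modules are installed:

```powershell
# Enable a system-assigned managed identity on the function app (placeholder names)
Update-AzFunctionApp -ResourceGroupName "my-rg" -Name "my-autopause-func" -IdentityType SystemAssigned -Force

# Look up the identity's principal ID and the Stream Analytics job's resource ID
$principalId = (Get-AzFunctionApp -ResourceGroupName "my-rg" -Name "my-autopause-func").IdentityPrincipalId
$asaJob = Get-AzStreamAnalyticsJob -ResourceGroupName "my-rg" -Name "my-asa-job"

# Grant the function's identity Contributor on the ASA job so it can start and stop it
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Contributor" -Scope $asaJob.Id
```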
stream-analytics Sql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-output.md
Last updated 07/21/2022
# Azure SQL Database output from Azure Stream Analytics
-You can use [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) as an output for data that's relational in nature or for applications that depend on content being hosted in a relational database. Azure Stream Analytics jobs write to an existing table in SQL Database. The table schema must exactly match the fields and their types in your job's output. The Azure portal experience for Stream Analytics allows you to [test your streaming query and also detect if there are any mismatches between the schema](sql-db-table.md) of the results produced by your job and the schema of the target table in your SQL database. To learn about ways to improve write throughput, see the [Stream Analytics with Azure SQL Database as output](stream-analytics-sql-output-perf.md) article. While you can also specify [Azure Synapse Analytics SQL pool](https://azure.microsoft.com/documentation/services/sql-data-warehouse/) as an output via the SQL Database output option, it is recommended to use the dedicated [Azure Synapse Analytics output connector](azure-synapse-analytics-output.md) for best performance.
+You can use [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) as an output for data that's relational in nature or for applications that depend on content being hosted in a relational database. Azure Stream Analytics jobs write to an existing table in SQL Database. The table schema must exactly match the fields and their types in your job's output. The Azure portal experience for Stream Analytics allows you to [test your streaming query and also detect if there are any mismatches between the schema](sql-db-table.md) of the results produced by your job and the schema of the target table in your SQL database. To learn about ways to improve write throughput, see the [Stream Analytics with Azure SQL Database as output](stream-analytics-sql-output-perf.md) article. While you can also specify [Azure Synapse Analytics SQL pool](/azure/sql-data-warehouse/) as an output via the SQL Database output option, it is recommended to use the dedicated [Azure Synapse Analytics output connector](azure-synapse-analytics-output.md) for best performance.
You can also use [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) as an output. You have to [configure public endpoint in SQL Managed Instance](/azure/azure-sql/managed-instance/public-endpoint-configure) and then manually configure the following settings in Azure Stream Analytics. Azure virtual machine running SQL Server with a database attached is also supported by manually configuring the settings below.
stream-analytics Stream Analytics Dotnet Management Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-dotnet-management-sdk.md
You've learned the basics of using a .NET SDK to create and run analytics jobs.
<!--Link references-->
-[azure.blob.storage]: https://azure.microsoft.com/documentation/services/storage/
+[azure.blob.storage]: /azure/storage/
[azure.blob.storage.use]: ../storage/blobs/storage-quickstart-blobs-dotnet.md [azure.event.hubs]: https://azure.microsoft.com/services/event-hubs/
stream-analytics Stream Analytics Use Reference Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-use-reference-data.md
At start time, the job looks for the most recent blob produced before the job st
When a reference dataset is refreshed, a diagnostic log is generated: `Loaded new reference data from <blob path>`. For many reasons, a job might need to reload a previous reference dataset. Most often, the reason is to reprocess past data. The same diagnostic log is generated at that time. This action doesn't imply that current stream data will use past reference data.
-[Azure Data Factory](https://azure.microsoft.com/documentation/services/data-factory/) can be used to orchestrate the task of creating the updated blobs required by Stream Analytics to update reference data definitions.
+[Azure Data Factory](/azure/data-factory/) can be used to orchestrate the task of creating the updated blobs required by Stream Analytics to update reference data definitions.
Data Factory is a cloud-based data integration service that orchestrates and automates the movement and transformation of data. Data Factory supports [connecting to a large number of cloud-based and on-premises data stores](../data-factory/copy-activity-overview.md). It can move data easily on a regular schedule that you specify.
synapse-analytics Apache Spark External Metastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-external-metastore.md
Here are the configurations and descriptions:
|Spark config|Description| |--|--| |`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`0.13`</li><li>`1.2`</li><li>`2.1`</li><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first 2 parts without the 3rd part|
-|`spark.sql.hive.metastore.jars`|<ul><li>Version 0.13: `/opt/hive-metastore/lib-0.13/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 1.2: `/opt/hive-metastore/lib-1.2/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.1: `/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*`</li></ul>|
+|`spark.sql.hive.metastore.jars`|<ul><li>Version 0.13: `/opt/hive-metastore/lib-0.13/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 1.2: `/opt/hive-metastore/lib-1.2/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 2.1: `/opt/hive-metastore/lib-2.1/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 2.3: `/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*` </li><li>Version 3.1: `/opt/hive-metastore/lib-3.1/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*`</li></ul>|
|`spark.hadoop.hive.synapse.externalmetastore.linkedservice.name`|Name of your linked service| ### Configure at Spark pool level
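At the pool level, these settings are typically supplied as a plain-text Apache Spark configuration file applied to the pool. A minimal sketch for Hive Metastore 2.3, where the linked service name is a placeholder:

```text
spark.sql.hive.metastore.version 2.3
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*
spark.hadoop.hive.synapse.externalmetastore.linkedservice.name HiveCatalogLinkedService
```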
If you want to share the Hive catalog with a spark cluster in HDInsight 4.0, ple
### When sharing the Hive Metastore with HDInsight 4.0 Hive cluster, I can list the tables successfully, but only get empty result when I query the table As mentioned in the limitations, Synapse Spark pool only supports external hive tables and non-transactional/ACID managed tables, it doesn't support Hive ACID/transactional tables currently. In HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, that's why you get empty results when querying those tables.
+### The following error appears when an external metastore is used while the Intelligent Cache is enabled
+
+```text
+java.lang.ClassNotFoundException: Class com.microsoft.vegas.vfs.SecureVegasFileSystem not found
+```
+
+You can easily fix this issue by appending `/usr/hdp/current/hadoop-client/*` to your `spark.sql.hive.metastore.jars`.
+
+```text
+For example:
+"spark.sql.hive.metastore.jars": "/opt/hive-metastore/lib-2.3/*:/usr/hdp/current/hadoop-client/lib/*:/usr/hdp/current/hadoop-client/*"
+```
synapse-analytics Apache Spark Pool Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-pool-configurations.md
Title: Apache Spark pool concepts
description: Introduction to Apache Spark pool sizes and configurations in Azure Synapse Analytics. -+ + Previously updated : 07/26/2022 Last updated : 09/07/2022 # Apache Spark pool configurations in Azure Synapse Analytics
-A Spark pool is a set of metadata that defines the compute resource requirements and associated behavior characteristics when a Spark instance is instantiated. These characteristics include but aren't limited to name, number of nodes, node size, scaling behavior, and time to live. A Spark pool in itself does not consume any resources. There are no costs incurred with creating Spark pools. Charges are only incurred once a Spark job is executed on the target Spark pool and the Spark instance is instantiated on demand.
+A Spark pool is a set of metadata that defines the compute resource requirements and associated behavior characteristics when a Spark instance is instantiated. These characteristics include but aren't limited to name, number of nodes, node size, scaling behavior, and time to live. A Spark pool in itself doesn't consume any resources. There are no costs incurred with creating Spark pools. Charges are only incurred once a Spark job is executed on the target Spark pool and the Spark instance is instantiated on demand.
You can read how to create a Spark pool and see all their properties here [Get started with Spark pools in Synapse Analytics](../quickstart-create-apache-spark-pool-portal.md) ## Isolated Compute
-The Isolated Compute option provides additional security to Spark compute resources from untrusted services by dedicating the physical compute resource to a single customer.
-Isolated compute option is best suited for workloads that require a high degree of isolation from other customer's workloads for reasons that include meeting compliance and regulatory requirements.
-The Isolate Compute option is only available with the XXXLarge (80 vCPU / 504 GB) node size and only available in the following regions. The isolated compute option can be enabled or disabled after pool creation although the instance may need to be restarted. If you expect to enable this feature in the future, ensure that your Synapse workspace is created in an isolated compute supported region.
+The Isolated Compute option provides more security for Spark compute resources from untrusted services by dedicating the physical compute resource to a single customer. The isolated compute option is best suited for workloads that require a high degree of isolation from other customers' workloads for reasons that include meeting compliance and regulatory requirements. The Isolated Compute option is only available with the XXXLarge (80 vCPU / 504 GB) node size and only in the following regions. The isolated compute option can be enabled or disabled after pool creation although the instance may need to be restarted. If you expect to enable this feature in the future, ensure that your Synapse workspace is created in an isolated compute supported region.
* East US * West US 2
The Isolate Compute option is only available with the XXXLarge (80 vCPU / 504 GB
## Nodes
-Apache Spark pool instance consists of one head node and two or more worker nodes with a minimum of three nodes in a Spark instance. The head node runs additional management services such as Livy, Yarn Resource Manager, Zookeeper, and the Spark driver. All nodes run services such as Node Agent and Yarn Node Manager. All worker nodes run the Spark Executor service.
+An Apache Spark pool instance consists of one head node and two or more worker nodes with a minimum of three nodes in a Spark instance. The head node runs extra management services such as Livy, Yarn Resource Manager, Zookeeper, and the Spark driver. All nodes run services such as Node Agent and Yarn Node Manager. All worker nodes run the Spark Executor service.
## Node Sizes
-A Spark pool can be defined with node sizes that range from a Small compute node with 4 vCore and 32 GB of memory up to a XXLarge compute node with 64 vCore and 512 GB of memory per node. Node sizes can be altered after pool creation although the instance may need to be restarted.
+A Spark pool can be defined with node sizes that range from a Small compute node with 4 vCore and 32 GB of memory up to an XXLarge compute node with 64 vCore and 512 GB of memory per node. Node sizes can be altered after pool creation although the instance may need to be restarted.
|Size | vCore | Memory| |--||-|
A Spark pool can be defined with node sizes that range from a Small compute node
## Autoscale
-Apache Spark pools provide the ability to automatically scale up and down compute resources based on the amount of activity. When the autoscale feature is enabled, you can set the minimum and maximum number of nodes to scale. When the autoscale feature is disabled, the number of nodes set will remain fixed. This setting can be altered after pool creation although the instance may need to be restarted.
+Autoscale for Apache Spark pools allows automatic scale up and down of compute resources based on the amount of activity. When the autoscale feature is enabled, you set the minimum and maximum number of nodes to scale. When the autoscale feature is disabled, the number of nodes set will remain fixed. This setting can be altered after pool creation although the instance may need to be restarted.
## Elastic pool storage
-Apache Spark pools utilize temporary disk storage while the pool is instantiated. For many Spark jobs, it is difficult to estimate cluster storage requirements, which may cause your Spark jobs to fail if the worker nodes exhaust storage. Elastic pool storage allows the Spark engine to monitor worker node temporary cluster storage, and attach additional disks if needed. No action is required by customers. Customers should see fewer job failures as a result of elastic pool storage.
+Apache Spark pools now support elastic pool storage. Elastic pool storage allows the Spark engine to monitor worker node temporary storage and attach extra disks if needed. Apache Spark pools utilize temporary disk storage while the pool is instantiated. Spark jobs write shuffle map outputs, shuffle data, and spilled data to local VM disks. Examples of operations that may utilize local disk are sort, cache, and persist. When temporary VM disk space runs out, Spark jobs may fail due to an "Out of Disk Space" error (java.io.IOException: No space left on device). With "Out of Disk Space" errors, much of the burden to prevent jobs from failing shifts to the customer to reconfigure the Spark jobs (for example, tweak the number of partitions) or clusters (for example, add more nodes to the cluster). These errors might not be consistent, and the user may end up experimenting heavily by running production jobs. This process can be expensive for the user in multiple dimensions:
+
+* Wasted time. Customers are required to experiment heavily with job configurations via trial and error and are expected to understand Spark's internal metrics to make the correct decision.
+* Wasted resources. Since production jobs can process varying amounts of data, Spark jobs can fail non-deterministically if resources aren't over-provisioned. For instance, consider the problem of data skew, which may result in a few nodes requiring more disk space than others. Currently in Synapse, each node in a cluster gets the same size of disk space, and increasing disk space across all nodes isn't an ideal solution and leads to tremendous waste.
+* Slowdown in job execution. In the hypothetical scenario where we solve the problem by autoscaling nodes (assuming costs aren't an issue to the end customer), adding a compute node is still expensive (takes a few minutes) as opposed to adding storage (takes a few seconds).
+
+No action is required on your part, and you should see fewer job failures as a result.
> [!NOTE]
-> Azure Synapse Elastic pool storage is currently in Public Preview. During Public Preview there is no charge for use of Elastic Pool Storage.
+> Azure Synapse Elastic pool storage is currently in Public Preview. During Public Preview there is no charge for use of Elastic pool storage.
## Automatic pause
-The automatic pause feature releases resources after a set idle period, reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether the autoscale is enabled or disabled. This setting can be altered after pool creation although active sessions will need to be restarted.
+The automatic pause feature releases resources after a set idle period, reducing the overall cost of an Apache Spark pool. The number of minutes of idle time can be set once this feature is enabled. The automatic pause feature is independent of the autoscale feature. Resources can be paused whether the autoscale is enabled or disabled. This setting can be altered after pool creation although active sessions will need to be restarted.
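As a rough sketch of how these settings come together, the following PowerShell creates a pool with autoscale and automatic pause enabled. The workspace and pool names are placeholders, and it assumes the Az.Synapse module is installed:

```powershell
# Create a Medium-node Spark pool that autoscales between 3 and 10 nodes and pauses after 15 idle minutes (placeholder names)
New-AzSynapseSparkPool `
    -WorkspaceName "my-synapse-workspace" `
    -Name "mysparkpool" `
    -NodeSize Medium `
    -SparkVersion 3.2 `
    -EnableAutoScale `
    -AutoScaleMinNodeCount 3 `
    -AutoScaleMaxNodeCount 10 `
    -EnableAutoPause `
    -AutoPauseDelayInMinute 15
```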
## Next steps
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
This article provides a step-by-step guide for getting started with Azure Synaps
1. Select a target Synapse SQL database and pool.
-1. Provide a name for your Azure Synapse Link connection, and select the number of cores. These cores will be used for the movement of data from the source to the target.
+1. Provide a name for your Azure Synapse Link connection, and select the number of cores for the [link connection compute](sql-database-synapse-link.md#link-connection). These cores will be used for the movement of data from the source to the target.
> [!NOTE] > We recommend starting low and increasing as needed.
If you are using a different type of database, see how to:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
-* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
+* [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
This article provides a step-by-step guide for getting started with Azure Synaps
* Input your **link connection name**.
- * Select your **Core count**. We recommend starting from small number and increasing as needed.
+ * Select your **Core count** for the [link connection compute](sql-server-2022-synapse-link.md#link-connection). These cores will be used for the movement of data from the source to the target. We recommend starting with a small number and increasing as needed.
* Configure your landing zone. Select your **linked service** connecting to your landing zone.
If you are using a different type of database, see how to:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
-* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
+* [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md)
traffic-manager Traffic Manager Troubleshooting Degraded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-troubleshooting-degraded.md
public class TrustAllCertsPolicy : ICertificatePolicy {
[Cloud Services](/previous-versions/azure/jj155995(v=azure.100))
-[Azure App Service](https://azure.microsoft.com/documentation/services/app-service/web/)
+[Azure App Service](/azure/app-service/web/)
[Operations on Traffic Manager (REST API Reference)](/previous-versions/azure/reference/hh758255(v=azure.100)) [Azure Traffic Manager Cmdlets][1]
-[1]: /powershell/module/az.trafficmanager
+[1]: /powershell/module/az.trafficmanager
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Azure Virtual Desktop currently doesn't have a list of IP address ranges that yo
## Remote Desktop clients
-Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&bc=%2Fazure%2Fvirtual-desktop%2Fbreadcrumb%2Ftoc.json) you use to connect to Azure Virtual Desktop must have access to the following URLs. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality.
+Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) you use to connect to Azure Virtual Desktop must have access to the following URLs. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality.
# [Azure cloud](#tab/azure)
virtual-desktop Troubleshoot Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-azure-monitor.md
If your data isn't displaying properly, check the following common solutions:
- First, make sure you've set up correctly with the configuration workbook as described in [Use Azure Monitor for Azure Virtual Desktop to monitor your deployment](azure-monitor.md). If you're missing any counters or events, the data associated with them won't appear in the Azure portal. - Check your access permissions & contact the resource owners to request missing permissions; anyone monitoring Azure Virtual Desktop requires the following permissions:
- - Read-access to the Azure subscriptions that hold your Azure Virtual Desktop resources
+ - Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources
- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts - Read-access to whichever Log Analytics workspaces you're using - You may need to open outgoing ports in your server's firewall to allow Azure Monitor and Log Analytics to send data to the portal. To learn how to do this, see the following articles:
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 08/05/2022 Last updated : 09/06/2022
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## August 2022
+
+Here's what changed in August 2022:
+
+### Updates to the preview version of FSLogix profiles for Azure AD-joined VMs
+
+We've updated the public preview version of the Azure Files integration with Azure Active Directory (Azure AD) Kerberos for hybrid identities so that it's now simpler to deploy and manage. The update should give users of FSLogix profiles on Azure AD-joined session hosts an overall better experience. For more information, see [the Azure Files blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-leverage-azure-active-directory-kerberos-with/ba-p/3612111).
+
+### Single sign-on and passwordless authentication now in Windows Insider preview
+
+In the Windows Insider build of Windows 11 22H2, you can now enable a preview version of the Azure Active Directory (AD)-based single sign-on experience. This Windows Insider build also supports passwordless authentication with Windows Hello and security devices like FIDO2 keys. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/insider-preview-single-sign-on-and-passwordless-authentication/m-p/3608842).
+
+### Universal Print for Azure Virtual Desktop now in Windows Insider preview
+
+The Windows Insider build of Windows 11 22H2 also includes a preview version of the Universal Print for Azure Virtual Desktop feature. We hope this feature will provide an improved printing experience that combines the benefits of Azure Virtual Desktop and Universal Print for Windows 11 multi-session users. Learn more at [Printing on Azure Virtual Desktop using Universal Print](/universal-print/fundamentals/universal-print-avd) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/a-better-printing-experience-for-azure-virtual-desktop-with/m-p/3598592).
+
+### Autoscale for pooled host pools now generally available
+
+Autoscale on Azure Virtual Desktop for pooled host pools is now generally available. This feature is a native automated scaling solution that automatically turns session host virtual machines on and off according to the schedule and capacity thresholds that you define to fit your workload. Learn more at [How autoscale works](autoscale-scenarios.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-autoscale-for-pooled-host/ba-p/3591462).
+
+### Azure Virtual Desktop with Trusted Launch update
+
+Azure Virtual Desktop now supports provisioning Trusted Launch virtual machines with custom images stored in an Azure Compute Gallery. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/avd-now-supports-azure-compute-gallery-custom-images-with/m-p/3593955).
+ ## July 2022 Here's what changed in July 2022:
-## Scheduled agent updates now generally available
+### Scheduled agent updates now generally available
Scheduled agent updates on Azure Virtual Desktop are now generally available. This feature gives IT admins control over when the Azure Virtual Desktop agent, side-by-side stack, and Geneva Monitoring agent get updated. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-scheduled-agent-updates-on/ba-p/3579236).
-## FSLogix 2201 hotfix 2
+### FSLogix 2201 hotfix 2
The FSLogix 2201 hotfix 2 update includes fixes to multi-session VHD mounting, Cloud Cache meta tracking files, and registry cleanup operations. This update doesn't include new features. Learn more at [What's new in FSLogix](/fslogix/whats-new?context=%2Fazure%2Fvirtual-desktop%2Fcontext%2Fcontext#fslogix-2201-hotfix-2-29822850276) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/announcing-fslogix-2201-hotfix-2-2-9-8228-50276-has-been/m-p/3579409).
-## Japan and Australia metadata service now generally available
+### Japan and Australia metadata service now generally available
The Azure Virtual Desktop metadata database located in Japan and Australia is now generally available. This update allows customers to store their Azure Virtual Desktop objects and metadata within a database located within that geography. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-the-azure-virtual-desktop/ba-p/3570756).
-## Azure Virtual Desktop moving away from Storage Blob image type
+### Azure Virtual Desktop moving away from Storage Blob image type
Storage Blob images are created from unmanaged disks, which means they lack the availability, scalability, and frictionless user experience that managed images and Shared Image Gallery images offer. As a result, Azure Virtual Desktop will be deprecating support for Storage Blobs image types by August 22, 2022. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/azure-virtual-desktop-is-moving-away-from-storage-blob-image/ba-p/3568364).
-## Azure Virtual Desktop Custom Configuration changing to PowerShell
+### Azure Virtual Desktop Custom Configuration changing to PowerShell
Starting July 21, 2022, Azure Virtual Desktop will replace the Custom Configuration Azure Resource Manager template parameters for creating host pools, adding session hosts to host pools, and the Getting Started feature with a PowerShell script URL parameter stored in a publicly accessible location. This replacement includes the parameters' respective Azure Resource Manager templates. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/azure-virtual-desktop-custom-configuration-breaking-change/m-p/3568069).
virtual-machines Disks Enable Host Based Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-host-based-encryption-portal.md
You must enable the feature for your subscription before you use the EncryptionA
![Icon to launch the Cloud Shell from the Azure portal](../Cloud-Shell/media/overview/portal-launch-icon.png) 1. Execute the following command to register the feature for your subscription
+
+ ### [Azure PowerShell](#tab/azure-powershell)
```powershell Register-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute" ```
+
+ ### [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az feature register --name EncryptionAtHost --namespace Microsoft.Compute
+ ```
+
+
1. Confirm that the registration state is **Registered** (takes a few minutes) using the command below before trying out the feature.
+
+ ### [Azure PowerShell](#tab/azure-powershell)
```powershell Get-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute" ```-
+
+ ### [Azure CLI](#tab/azure-cli)
+
+ ```azurecli
+ az feature show --name EncryptionAtHost --namespace Microsoft.Compute
+ ```
+
+
Sign in to the Azure portal using the [provided link](https://aka.ms/diskencryptionupdates).
virtual-machines Infrastructure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/infrastructure-automation.md
Scripts can be downloaded from Azure storage or any public location such as a Gi
Learn how to: -- [Create a Linux VM with the Azure CLI and use the Custom Script Extension](/previous-versions/azure/virtual-machines/scripts/virtual-machines-linux-cli-sample-create-vm-nginx?toc=%2fcli%2fazure%2ftoc.json).
+- [Create a Linux VM with the Azure CLI and use the Custom Script Extension](/previous-versions/azure/virtual-machines/scripts/virtual-machines-linux-cli-sample-create-vm-nginx?toc=/cli/azure/toc.json).
- [Create a Windows VM with Azure PowerShell and use the Custom Script Extension](/previous-versions/azure/virtual-machines/scripts/virtual-machines-windows-powershell-sample-create-vm-iis).
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md
See the main [Creating and configuring a key vault for Azure Disk Encryption](di
## Create a key vault
-Azure Disk Encryption is integrated with [Azure Key Vault](https://azure.microsoft.com/documentation/services/key-vault/) to help you control and manage the disk-encryption keys and secrets in your key vault subscription. You can create a key vault or use an existing one for Azure Disk Encryption. For more information about key vaults, see [Get started with Azure Key Vault](../../key-vault/general/overview.md) and [Secure your key vault](../../key-vault/general/security-features.md). You can use a Resource Manager template, Azure PowerShell, or the Azure CLI to create a key vault.
+Azure Disk Encryption is integrated with [Azure Key Vault](/azure/key-vault/) to help you control and manage the disk-encryption keys and secrets in your key vault subscription. You can create a key vault or use an existing one for Azure Disk Encryption. For more information about key vaults, see [Get started with Azure Key Vault](../../key-vault/general/overview.md) and [Secure your key vault](../../key-vault/general/security-features.md). You can use a Resource Manager template, Azure PowerShell, or the Azure CLI to create a key vault.
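For example, a minimal PowerShell sketch that creates a key vault enabled for Azure Disk Encryption; the resource group, vault name, and region are placeholders, and the vault must be in the same region and subscription as the VMs you plan to encrypt:

```powershell
# Create a resource group and a key vault enabled for disk encryption (placeholder values)
New-AzResourceGroup -Name "myAdeResourceGroup" -Location "eastus"
New-AzKeyVault -VaultName "myAdeKeyVault" -ResourceGroupName "myAdeResourceGroup" -Location "eastus" -EnabledForDiskEncryption
```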
>[!WARNING]
virtual-machines Disk Encryption Sample Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-sample-scripts.md
This article provides sample scripts for preparing pre-encrypted VHDs and other
``` ### Using the Azure Disk Encryption prerequisites PowerShell script
-If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1 ). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group.
+If you're already familiar with the prerequisites for Azure Disk Encryption, you can use the [Azure Disk Encryption prerequisites PowerShell script](https://raw.githubusercontent.com/Azure/azure-powershell/master/src/Compute/Compute/Extension/AzureDiskEncryption/Scripts/AzureDiskEncryptionPreRequisiteSetup.ps1). For an example of using this PowerShell script, see the [Encrypt a VM Quickstart](disk-encryption-powershell-quickstart.md). You can remove the comments from a section of the script, starting at line 211, to encrypt all disks for existing VMs in an existing resource group.
The following table shows which parameters can be used in the PowerShell script:
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Previously updated : 08/23/2022 Last updated : 09/08/2022
This feature has the following limitations:
[!INCLUDE [virtual-machines-disks-expand-without-downtime-restrictions](../../../includes/virtual-machines-disks-expand-without-downtime-restrictions.md)]
-To register for the feature, use the following command:
-
-```azurecli
-az feature register --namespace Microsoft.Compute --name LiveResize
-```
-
-It may take a few minutes for registration to take complete. To confirm that you've registered, use the following command:
-
-```azurecli
-az feature show --namespace Microsoft.Compute --name LiveResize
-```
- ### Get started Make sure that you have the latest [Azure CLI](/cli/azure/install-az-cli2) installed and are signed in to an Azure account by using [az login](/cli/azure/reference-index#az-login).
This article requires an existing VM in Azure with at least one data disk attach
In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values. > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
+> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Customize properties:
- **inline** ΓÇô Inline commands to be run, separated by commas. - **validExitCodes** ΓÇô Optional, valid codes that can be returned from the script/inline command, this will avoid reported failure of the script/inline command. - **runElevated** ΓÇô Optional, boolean, support for running commands and scripts with elevated permissions.-- **sha256Checksum** - Value of sha256 checksum of the file, you generate this locally, and then Image Builder will checksum and validate.
+- **sha256Checksum** - generate the SHA256 checksum of the file locally, update the checksum value to lowercase, and Image Builder will validate the checksum during the deployment of the image template.
- To generate the sha256Checksum, using a PowerShell on Windows [Get-Hash](/powershell/module/microsoft.powershell.utility/get-filehash)
+ To generate the sha256Checksum, use the [Get-FileHash](/powershell/module/microsoft.powershell.utility/get-filehash) cmdlet in PowerShell.
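For example, a quick way to produce the lowercase checksum (the script name is a placeholder):

```powershell
# Compute the SHA256 hash of the script and lowercase it for the sha256Checksum property (placeholder file name)
(Get-FileHash -Path ".\customize-vm.sh" -Algorithm SHA256).Hash.ToLower()
```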
### File customizer
If there is an error trying to download the file, or put it in a specified direc
> [!NOTE] > The file customizer is only suitable for small file downloads, < 20MB. For larger file downloads, use a script or inline command, then use code to download files, such as, Linux `wget` or `curl`, Windows, `Invoke-WebRequest`.
+- **sha256Checksum** - generate the SHA256 checksum of the file locally, update the checksum value to lowercase, and Image Builder will validate the checksum during the deployment of the image template.
+
+ To generate the sha256Checksum, use the [Get-FileHash](/powershell/module/microsoft.powershell.utility/get-filehash) cmdlet in PowerShell.
### Windows Update Customizer This customizer is built on the [community Windows Update Provisioner](https://packer.io/docs/provisioners/community-supported.html) for Packer, which is an open source project maintained by the Packer community. Microsoft tests and validates the provisioner with the Image Builder service, will support investigating issues with it, and will work to resolve issues; however, the open source project is not officially supported by Microsoft. For detailed documentation on and help with the Windows Update Provisioner, see the project repository.
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/migrate-to-premium-storage-using-azure-site-recovery.md
For specific scenarios for migrating virtual machines, see the following resourc
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
-* [Azure Storage](https://azure.microsoft.com/documentation/services/storage/)
-* [Azure Virtual Machines](https://azure.microsoft.com/documentation/services/virtual-machines/)
+* [Azure Storage](/azure/storage/)
+* [Azure Virtual Machines](/azure/virtual-machines/)
* [Select a disk type for IaaS VMs](../disks-types.md) [1]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-1.png
Also, see the following resources to learn more about Azure Storage and Azure Vi
[12]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-12.PNG [13]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-13.png [14]:../site-recovery/media/site-recovery-vmware-to-azure/v2a-architecture-henry.png
-[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
+[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The NP-series virtual machines are powered by [Xilinx U250 ](https://www.xilinx.com/products/boards-and-kits/alveo/u250.html) FPGAs for accelerating workloads including machine learning inference, video transcoding, and database search & analytics. NP-series VMs are also powered by Intel Xeon 8171M (Skylake) CPUs with all core turbo clock speed of 3.2 GHz.
+The NP-series virtual machines are powered by [Xilinx U250](https://www.xilinx.com/products/boards-and-kits/alveo/u250.html) FPGAs for accelerating workloads including machine learning inference, video transcoding, and database search & analytics. NP-series VMs are also powered by Intel Xeon 8171M (Skylake) CPUs with all core turbo clock speed of 3.2 GHz.
[Premium Storage](premium-storage-performance.md): Supported<br> [Premium Storage caching](premium-storage-performance.md): Supported<br>
virtual-machines Sizes Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-hpc.md
Azure provides several options to create clusters of HPC VMs that can communicat
> [!NOTE] > Contact Azure Support if you have large-scale capacity needs. Azure quotas are credit limits, not capacity guarantees. Regardless of your quota, you are only charged for cores that you use. -- **Virtual network** – An Azure [virtual network](https://azure.microsoft.com/documentation/services/virtual-network/) is not required to use the compute-intensive instances. However, for many deployments you need at least a cloud-based Azure virtual network, or a site-to-site connection if you need to access on-premises resources. When needed, create a new virtual network to deploy the instances. Adding compute-intensive VMs to a virtual network in an affinity group is not supported.
+- **Virtual network** – An Azure [virtual network](/azure/virtual-network/) is not required to use the compute-intensive instances. However, for many deployments you need at least a cloud-based Azure virtual network, or a site-to-site connection if you need to access on-premises resources. When needed, create a new virtual network to deploy the instances. Adding compute-intensive VMs to a virtual network in an affinity group is not supported.
- **Resizing** ΓÇô Because of their specialized hardware, you can only resize compute-intensive instances within the same size family (H-series or N-series). For example, you can only resize an H-series VM from one H-series size to another. Additional considerations around InfiniBand driver support and NVMe disks may need to be considered for certain VMs.
virtual-machines Disk Encryption Key Vault Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md
See the main [Creating and configuring a key vault for Azure Disk Encryption](di
## Create a key vault
-Azure Disk Encryption is integrated with [Azure Key Vault](https://azure.microsoft.com/documentation/services/key-vault/) to help you control and manage the disk-encryption keys and secrets in your key vault subscription. You can create a key vault or use an existing one for Azure Disk Encryption. For more information about key vaults, see [Get started with Azure Key Vault](../../key-vault/general/overview.md) and [Secure your key vault](../../key-vault/general/security-features.md). You can use a Resource Manager template, Azure PowerShell, or the Azure CLI to create a key vault.
+Azure Disk Encryption is integrated with [Azure Key Vault](/azure/key-vault/) to help you control and manage the disk-encryption keys and secrets in your key vault subscription. You can create a key vault or use an existing one for Azure Disk Encryption. For more information about key vaults, see [Get started with Azure Key Vault](../../key-vault/general/overview.md) and [Secure your key vault](../../key-vault/general/security-features.md). You can use a Resource Manager template, Azure PowerShell, or the Azure CLI to create a key vault.
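As a hedged illustration of the Azure CLI option (the vault name, resource group, and region below are placeholders, not values from this article), a key vault enabled for disk encryption can be created like this:

```azurecli
az keyvault create \
  --name "myKeyVault" \
  --resource-group "myResourceGroup" \
  --location "eastus" \
  --enabled-for-disk-encryption
```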
>[!WARNING]
virtual-machines Expand Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/expand-os-disk.md
Previously updated : 08/23/2022 Last updated : 09/08/2022
This feature has the following limitations:
[!INCLUDE [virtual-machines-disks-expand-without-downtime-restrictions](../../../includes/virtual-machines-disks-expand-without-downtime-restrictions.md)]
-To register for the feature, use the following command:
-
-```azurepowershell
-Register-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
-```
-
-It may take a few minutes for registration to complete. To confirm that you've registered, use the following command:
-
-```azurepowershell
-Get-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
-```
- ## Resize a managed disk in the Azure portal > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1.
+> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1.
1. In the [Azure portal](https://portal.azure.com/), go to the virtual machine in which you want to expand the disk. Select **Stop** to deallocate the VM. 1. In the left menu under **Settings**, select **Disks**.
$vm = Get-AzVM -ResourceGroupName $rgName -Name $vmName
``` > [!IMPORTANT]
-> If you've enabled **LiveResize** and your disk meets the requirements in [expand without downtime](#expand-without-downtime), you can skip step 4 and 6.
+> If your disk meets the requirements in [expand without downtime](#expand-without-downtime), you can skip steps 4 and 6.
Stop the VM before resizing the disk:
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-premium-storage-using-azure-site-recovery.md
For specific scenarios for migrating virtual machines, see the following resourc
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
-* [Azure Storage](https://azure.microsoft.com/documentation/services/storage/)
-* [Azure Virtual Machines](https://azure.microsoft.com/documentation/services/virtual-machines/)
+* [Azure Storage](/azure/storage/)
+* [Azure Virtual Machines](/azure/virtual-machines/)
[1]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-1.png [2]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-2.png
Also, see the following resources to learn more about Azure Storage and Azure Vi
[12]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-12.PNG [13]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-13.png [14]:../site-recovery/media/site-recovery-vmware-to-azure/v2a-architecture-henry.png
-[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
+[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
When migrating Oracle software and workloads from on-premises to Microsoft Azure
When using Oracle databases in Azure, you are responsible for implementing a high availability and disaster recovery solution to avoid any downtime.
-High availability and disaster recovery for Oracle Database Enterprise Edition (without relying on Oracle RAC) can be achieved on Azure using [Data Guard, Active Data Guard](https://www.oracle.com/database/technologies/high-availability/dataguard.html), or [Oracle GoldenGate](https://www.oracle.com/technetwork/middleware/goldengate), with two databases on two separate virtual machines. Both virtual machines should be in the same [virtual network](https://azure.microsoft.com/documentation/services/virtual-network/) to ensure they can access each other over the private persistent IP address. Additionally, we recommend placing the virtual machines in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Should you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway.
+High availability and disaster recovery for Oracle Database Enterprise Edition (without relying on Oracle RAC) can be achieved on Azure using [Data Guard, Active Data Guard](https://www.oracle.com/database/technologies/high-availability/dataguard.html), or [Oracle GoldenGate](https://www.oracle.com/technetwork/middleware/goldengate), with two databases on two separate virtual machines. Both virtual machines should be in the same [virtual network](/azure/virtual-network/) to ensure they can access each other over the private persistent IP address. Additionally, we recommend placing the virtual machines in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. Should you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway.
The tutorial [Implement Oracle Data Guard on Azure](configure-oracle-dataguard.md) walks you through the basic setup procedure on Azure.
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
|This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) | | **SAP BW/4HANA 2021 including BW/4HANA Content 2.0 SP08 - Dev Edition** May 11 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=06725b24-b024-4757-860d-ac2db7b49577&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |This solution offers you an insight of SAP BW/4HANA. SAP BW/4HANA is the next generation Data Warehouse optimized for HANA. Beside the basic BW/4HANA options the solution offers a bunch of HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. As the system is pre-configured you can start directly implementing your scenarios. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/06725b24-b024-4757-860d-ac2db7b49577) |
-| **SAP S/4HANA 2021, Fully-Activated Appliance** December 20 2021 | [Create Appliance](https://cal.sap.com/registration?sguid=b8a9077c-f0f7-47bd-977c-70aa6a6a2aa7&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This appliance contains SAP S/4HANA 2021 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Transportation Mgmt. (TM), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/b8a9077c-f0f7-47bd-977c-70aa6a6a2aa7) |
+| **SAP Business One 10.0 PL02, version for SAP HANA** August 04 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=371edc8c-56c6-4d21-acb4-2d734722c712&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|Trusted by over 70,000 small and midsize businesses in 170+ countries, SAP Business One is a flexible, affordable, and scalable ERP solution with the power of SAP HANA. The solution is pre-configured using a 31-day trial license and has a demo database of your choice pre-installed. See the getting started guide to learn about the scope of the solution and how to easily add new demo databases. To secure your system against the CVE-2021-44228 vulnerability, apply SAP Support Note 3131789. For more information, see the Getting Started Guide of this solution (check the "Security Aspects" chapter). | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/371edc8c-56c6-4d21-acb4-2d734722c712) |
| **SAP Product Lifecycle Costing 4.0 SP4 Hotfix 3** August 10 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=61af97ea-be7e-4531-ae07-f1db561d0847&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |SAP Product Lifecycle Costing is a solution to calculate costs and other dimensions for new products or product related quotations in an early stage of the product lifecycle, to quickly identify cost drivers and to easily simulate and compare alternatives. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/61af97ea-be7e-4531-ae07-f1db561d0847) | | **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 08/22/2022 Last updated : 09/08/2022
When you use Microsoft Azure, you can reliably run your mission-critical SAP wor
Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure.
-We just announced our new services of Azure Center for SAP solutions and Azure Monitor for SAP 2.0 entering the public previev stage. These services will give you the possibility to deploy SAP workload on Azure in a highly automated manner in an optimal architecture and configuration. And monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments on one single pane of glass.
+We just announced our new services of Azure Center for SAP solutions and Azure Monitor for SAP 2.0 entering the public preview stage. These services give you the possibility to deploy SAP workload on Azure in a highly automated manner in an optimal architecture and configuration, and to monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments on one single pane of glass.
For customers and partners who are focused on deploying and operating their assets in public cloud through Terraform and Ansible, leverage our SAP Deployment Automation Framework (SDAF) to jump-start your SAP deployments into Azure using our public Terraform and Ansible modules on [GitHub](https://github.com/Azure/sap-automation).
In the SAP workload documentation space, you can find the following areas:
- **Plan and Deploy (Azure VMs)**: Deploying SAP workload into Azure Infrastructure as a Service, you should go through the documents in this section first to learn more about the principle Azure components used and guidelines - **Storage (Azure VMs)**: This section includes documents that give recommendations how to use the different Azure storage types when deploying SAP workload on Azure - **DBMS Guides (Azure VMs)**: The section DBMS Guides covers specifics around deploying different DBMS that are supported for SAP workload in Azure IaaS-- **High Availability (Azure VMs)**: In this section, many of the high availability configurations around SAP workload on Azure is covered. This section includes detailed documentation around deploying Windows clustering and Pacemaker cluster configuration for the different SAP comonentns and different database systems
+- **High Availability (Azure VMs)**: In this section, many of the high availability configurations around SAP workload on Azure are covered. This section includes detailed documentation around deploying Windows clustering and Pacemaker cluster configuration for the different SAP components and different database systems
- **Automation Framework (Azure VMs)**: Automation Framework documentation covers a [Terraform and Ansible based automation framework](https://github.com/Azure/sap-automation) that allows automation of Azure infrastructure and SAP software-- **Azure Monitor for SAP solutions**: Microsoft developed a monitoring solutions specifically for SAP supported OS and DBMS, as well as S/4HANA and NetWeaver. This section documents the deployment and usage of the service
+- **Azure Monitor for SAP solutions**: Microsoft developed monitoring solutions specifically for SAP supported OS and DBMS, as well as S/4HANA and NetWeaver. This section documents the deployment and usage of the service
- **Integration with Microsoft Services** and **References** contain different links to integration between SAP and other Microsoft services. The list may not be complete. ## Change Log -- September 6, 2022: Add managed identity for pacemaker fence agent [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](high-availability-guide-suse-pacemaker.md) on SLES and [Setting up Pacemaker on RHEL in Azure](high-availability-guide-rhel-pacemaker.md) RHEL.-- August 22, 2022: Release of cost optimization scenario [Deploy PAS and AAS with SAP NetWeaver HA cluster](high-availability-guide-rhel-with-dialog-instance.md) on RHEL.-- August 09, 2022: Release of scenario [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications.
+- September 8, 2022: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) to add instructions for deploying /hana/shared (only) on NFS on Azure Files
+- September 6, 2022: Add managed identity for pacemaker fence agent [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](high-availability-guide-suse-pacemaker.md) on SLES and [Setting up Pacemaker on RHEL in Azure](high-availability-guide-rhel-pacemaker.md) RHEL
+- August 22, 2022: Release of cost optimization scenario [Deploy PAS and AAS with SAP NetWeaver HA cluster](high-availability-guide-rhel-with-dialog-instance.md) on RHEL
+- August 09, 2022: Release of scenario [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications
- July 18, 2022: Clarify statement around Pacemaker support on Oracle Linux in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms_guide_oracle.md) - June 29, 2022: Add recommendation and links to Pacemaker usage for Db2 versions 11.5.6 and higher in the documents [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md), [High availability of IBM Db2 LUW on Azure VMs on SUSE Linux Enterprise Server with Pacemaker](./dbms-guide-ha-ibm.md), and [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) - June 08, 2022: Change in [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to adjust timeouts when using NFSv4.1 (related to NFSv4.1 lease renewal) for more resilient Pacemaker configuration - June 02, 2022: Change in the [SAP Deployment Guide](deployment-guide.md) to add a link to RHEL in-place upgrade documentation - June 02, 2022: Change in [HA for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md), [HA for SAP NW on Azure VMs on SLES with ANF](./high-availability-guide-suse-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) to add sizing considerations - May 11, 2022: Change in [Cluster an SAP ASCS/SCS instance on a Windows failover cluster by using a cluster shared disk in Azure](./sap-high-availability-guide-wsfc-shared-disk.md), [Prepare the Azure infrastructure for SAP HA by using a Windows failover cluster and shared disk for SAP ASCS/SCS](./sap-high-availability-infrastructure-wsfc-shared-disk.md) and [SAP ASCS/SCS instance multi-SID high availability with Windows server failover clustering and Azure shared disk](./sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md) to update instruction about the usage of Azure shared disk for SAP deployment with PPG.-- May 10, 2022: Changes in Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to adjust parameters per SAP note 3024346
+- May 10, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [HA for SAP HANA Scale-up with Azure NetApp Files on SLES](./sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to adjust parameters per SAP note 3024346
- April 26, 2022: Changes in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](high-availability-guide-suse-pacemaker.md) to add Azure Identity Python module to installation instructions for Azure Fence Agent - March 30, 2022: Adding information that Red Hat Gluster Storage is being phased out [GlusterFS on Azure VMs on RHEL](./high-availability-guide-rhel-glusterfs.md) - March 30, 2022: Correcting DNN support for older releases of SQL Server in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms_guide_sqlserver.md)
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
By building up an Azure Virtual Network, you can define the address range of the
Every Virtual Machine in Azure needs to be connected to a Virtual Network.
-More details can be found in [this article][resource-groups-networking] and on [this page](https://azure.microsoft.com/documentation/services/virtual-network/).
+More details can be found in [this article][resource-groups-networking] and on [this page](/azure/virtual-network/).
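As a minimal sketch of defining such an address range with the Azure CLI (all names and prefixes below are placeholders, not values from this guide):

```azurecli
az network vnet create \
  --resource-group myResourceGroup \
  --name myVnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name mySubnet \
  --subnet-prefix 10.0.0.0/24
```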
> [!NOTE]
Microsoft Azure ExpressRoute allows the creation of private connections between
Find more details on Azure ExpressRoute and offerings here:
-* [ExpressRoute documentation](https://azure.microsoft.com/documentation/services/expressroute/)
+* [ExpressRoute documentation](/azure/expressroute/)
* [Azure ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/) * [ExpressRoute FAQ](../../../expressroute/expressroute-faqs.md)
virtual-machines Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md
vm-windows Previously updated : 05/10/2022 Last updated : 09/07/2022
[nfs-ha]:high-availability-guide-suse-nfs.md
-This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with HANA system replication (HSR) and Pacemaker on Azure SUSE Linux Enterprise Server virtual machines (VMs). The shared file systems in the presented architecture are provided by [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) and are mounted over NFS.
+This article describes how to deploy a highly available SAP HANA system in a scale-out configuration with HANA system replication (HSR) and Pacemaker on Azure SUSE Linux Enterprise Server virtual machines (VMs). The shared file systems in the presented architecture are NFS mounted and are provided by [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) or [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md).
-In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and SUSE Linux Enterprise Server 12 SP5.
+In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP5 and SUSE Linux Enterprise Server 12 SP5.
Before you begin, refer to the following SAP notes and papers:
-* [Azure NetApp Files documentation][anf-azure-doc]
+* [Azure NetApp Files documentation][anf-azure-doc]
+* [Azure Files documentation](../../../storage/files/storage-files-introduction.md)
* SAP Note [1928533] includes: * A list of Azure VM sizes that are supported for the deployment of SAP software * Important capacity information for Azure VM sizes
Before you begin, refer to the following SAP notes and papers:
One method to achieve HANA high availability for HANA scale-out installations is to configure HANA system replication and protect the solution with a Pacemaker cluster to allow automatic failover. When an active node fails, the cluster fails over the HANA resources to the other site. The presented configuration shows three HANA nodes on each site, plus a majority maker node to prevent a split-brain scenario. The instructions can be adapted to include more VMs as HANA DB nodes.
-The HANA shared file system `/hana/shared` is deployed on [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md). It is mounted via NFSv4.1 on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and are not shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode.
+The HANA shared file system `/hana/shared` is deployed on [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md) or on an NFS volume on [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md). The HANA shared file system is NFS mounted on each HANA node in the same HANA system replication site. File systems `/hana/data` and `/hana/log` are local file systems and are not shared between the HANA DB nodes. SAP HANA will be installed in non-shared mode.
-> [!TIP]
+> [!WARNING]
+> Deploying `/hana/data` and `/hana/log` on NFS on Azure Files is not supported.
> For recommended SAP HANA storage configurations, see [SAP HANA Azure VMs storage configurations](./hana-vm-operations-storage.md). [![SAP HANA scale-out with HSR and Pacemaker cluster on SLES](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-suse.png)](./media/sap-hana-high-availability/sap-hana-high-availability-scale-out-hsr-suse-detail.png#lightbox)
In the preceding diagram, three subnets are represented within one Azure virtual
As `/hana/data` and `/hana/log` are deployed on local disks, it is not necessary to deploy a separate subnet and separate virtual network cards for communication to the storage.
-The Azure NetApp volumes are deployed in a separate subnet, [delegated to Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-delegate-subnet.md): `anf` 10.23.1.0/26.
+If you are using Azure NetApp Files, the NFS volumes for `/hana/shared` are deployed in a separate subnet, [delegated to Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-delegate-subnet.md): `anf` 10.23.1.0/26.
> [!IMPORTANT] > System replication to a 3rd site is not supported. For details see section "Important prerequisites" in [SLES-SAP HANA System Replication Scale-out Performance Optimized scenario](https://documentation.suse.com/sbp/all/html/SLES4SAP-hana-scaleOut-PerfOpt-12/https://docsupdatetracker.net/index.html#_important_prerequisites).
For the configuration presented in this document, deploy seven virtual machines:
> [!IMPORTANT]
- > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
+ > Make sure that the OS you select is SAP-certified for SAP HANA on the specific VM types you're using. For a list of SAP HANA certified VM types and OS releases for those types, go to the [SAP HANA certified IaaS platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120) site. Click into the details of the listed VM type to get the complete list of SAP HANA-supported OS releases for that type.
+ > If you choose to deploy `/hana/shared` on NFS on Azure Files, we recommend deploying on SLES 15 SP2 and above.
2. Create six network interfaces, one for each HANA DB virtual machine, in the `inter` virtual network subnet (in this example, **hana-s1-db1-inter**, **hana-s1-db2-inter**, **hana-s1-db3-inter**, **hana-s2-db1-inter**, **hana-s2-db2-inter**, and **hana-s2-db3-inter**).
For the configuration presented in this document, deploy seven virtual machines:
1. Open the load balancer, select **health probes**, and select **Add**. 1. Enter the name of the new health probe (for example, **hana-hp**).
- 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5, and the **Unhealthy threshold** value set to 2.
+ 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5.
1. Select **OK**. 1. Next, create the load-balancing rules:
For the configuration presented in this document, deploy seven virtual machines:
> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md). > See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
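A minimal sketch of applying that kernel parameter on a Linux VM (the drop-in file name is an assumption, not prescribed by the article):

```bash
# Disable TCP timestamps immediately
sudo sysctl -w net.ipv4.tcp_timestamps=0
# Persist the setting across reboots (file name is an arbitrary example)
echo "net.ipv4.tcp_timestamps = 0" | sudo tee /etc/sysctl.d/98-tcp-timestamps.conf
```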
-### Deploy the Azure NetApp Files infrastructure
+### Deploy NFS
+
+There are two options for deploying Azure native NFS for `/hana/shared`: an NFS volume on [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) or an [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md). Azure Files supports the NFSv4.1 protocol; NFS on Azure NetApp Files supports both NFSv4.1 and NFSv3.
+
+The next sections describe the steps to deploy NFS - you'll need to select only *one* of the options.
+
+> [!TIP]
+> You chose to deploy `/hana/shared` on [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md) or on [NFS volume on Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
++
+#### Deploy the Azure NetApp Files infrastructure
-Deploy the ANF volumes for the `/hana/shared` file system, as described in the section "Set up the Azure NetApp Files infrastructure".
+Deploy ANF volumes for the `/hana/shared` file system, as described in the section "Set up the Azure NetApp Files infrastructure".
In this example, the following Azure NetApp Files volumes were used:
In this example, the following Azure NetApp Files volumes were used:
* volume **HN1**-shared-s2 (nfs://10.23.1.7/**HN1**-shared-s2)
+#### Deploy the NFS on Azure Files infrastructure
+
+Deploy Azure Files NFS shares for the `/hana/shared` file system, following the Azure Files documentation on how to create an NFS share.
+
+In this example, the following Azure Files NFS shares were used:
+
+* share **hn1**-shared-s1 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1)
+* share **hn1**-shared-s2 (sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2)
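Such NFS shares can be created with the Azure CLI; a minimal sketch, assuming the premium FileStorage account `sapnfsafs` already exists and that the resource group name and quota below are placeholders:

```azurecli
az storage share-rm create \
  --resource-group myResourceGroup \
  --storage-account sapnfsafs \
  --name hn1-shared-s1 \
  --enabled-protocols NFS \
  --quota 1024
```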
+ ## Operating system configuration and preparation The instructions in the next sections are prefixed with one of the following abbreviations:
Configure and prepare your OS by doing the following steps:
10.23.1.201 hana-s2-db3-hsr ```
-3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
-
- <pre><code>
- vi /etc/sysctl.d/91-NetApp-HANA.conf
- # Add the following entries in the configuration file
- net.core.rmem_max = 16777216
- net.core.wmem_max = 16777216
- net.ipv4.tcp_rmem = 4096 131072 16777216
- net.ipv4.tcp_wmem = 4096 16384 16777216
- net.core.netdev_max_backlog = 300000
- net.ipv4.tcp_slow_start_after_idle=0
- net.ipv4.tcp_no_metrics_save = 1
- net.ipv4.tcp_moderate_rcvbuf = 1
- net.ipv4.tcp_window_scaling = 1
- net.ipv4.tcp_sack = 1
- </code></pre>
-
-4. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings.
+2. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with Microsoft for Azure configuration settings.
<pre><code> vi /etc/sysctl.d/ms-az.conf
Configure and prepare your OS by doing the following steps:
> [!TIP] > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
-
- <pre><code>
- vi /etc/modprobe.d/sunrpc.conf
- # Insert the following line
- options sunrpc tcp_max_slot_table_entries=128
- </code></pre>
-
-2. **[A]** SUSE delivers special resource agents for SAP HANA and by default agents for SAP HANA ScaleUp are installed. Uninstall the packages for ScaleUp, if installed and install the packages for scenario SAP HANAScaleOut. The step needs to be performed on all cluster VMs, including the majority maker.
+3. **[A]** SUSE delivers special resource agents for SAP HANA, and by default the agents for SAP HANA ScaleUp are installed. Uninstall the ScaleUp packages, if installed, and install the packages for the SAP HANA ScaleOut scenario. The step needs to be performed on all cluster VMs, including the majority maker.
```bash # Uninstall ScaleUp packages and patterns
Configure and prepare your OS by doing the following steps:
zypper in -t pattern ha_sles ```
-3. **[AH]** Prepare the VMs - apply the recommended settings per SAP note [2205917] for SUSE Linux Enterprise Server for SAP Applications.
+4. **[AH]** Prepare the VMs - apply the recommended settings per SAP note [2205917] for SUSE Linux Enterprise Server for SAP Applications.
## Prepare the file systems
-### Mount the shared file systems
-In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.
+You chose to deploy the SAP shared directories on [NFS share on Azure Files](../../../storage/files/files-nfs-protocol.md) or [NFS volume on Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md).
-1. **[AH]** Create mount points for the HANA database volumes.
+### Mount the shared file systems (Azure NetApp Files NFS)
+
+In this example, the shared HANA file systems are deployed on Azure NetApp Files and mounted over NFSv4.1. Follow the steps in this section only if you are using NFS on Azure NetApp Files.
+
+1. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.
+
+ <pre><code>
+ vi /etc/sysctl.d/91-NetApp-HANA.conf
+ # Add the following entries in the configuration file
+ net.core.rmem_max = 16777216
+ net.core.wmem_max = 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
+ net.core.netdev_max_backlog = 300000
+ net.ipv4.tcp_slow_start_after_idle=0
+ net.ipv4.tcp_no_metrics_save = 1
+ net.ipv4.tcp_moderate_rcvbuf = 1
+ net.ipv4.tcp_window_scaling = 1
+ net.ipv4.tcp_sack = 1
+ </code></pre>
+
+2. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
+
+ <pre><code>
+ vi /etc/modprobe.d/sunrpc.conf
+ # Insert the following line
+ options sunrpc tcp_max_slot_table_entries=128
+ </code></pre>
++
+3. **[AH]** Create mount points for the HANA database volumes.
```bash mkdir -p /hana/shared ```
-2. **[AH]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **`defaultv4iddomain.com`** and the mapping is set to **nobody**.
+4. **[AH]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **`defaultv4iddomain.com`** and the mapping is set to **nobody**.
This step is only needed, if using Azure NetAppFiles NFSv4.1. > [!IMPORTANT]
In this example, the shared HANA file systems are deployed on Azure NetApp Files
Nobody-Group = nobody ```
-3. **[AH]** Verify `nfs4_disable_idmapping`. It should be set to **Y**. To create the directory structure where `nfs4_disable_idmapping` is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers.
+5. **[AH]** Verify `nfs4_disable_idmapping`. It should be set to **Y**. To create the directory structure where `nfs4_disable_idmapping` is located, execute the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel / drivers.
This step is only needed, if using Azure NetAppFiles NFSv4.1. ```bash
In this example, the shared HANA file systems are deployed on Azure NetApp Files
echo "options nfs nfs4_disable_idmapping=Y" >> /etc/modprobe.d/nfs.conf ```
-4. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
+6. **[AH1]** Mount the shared Azure NetApp Files volumes on the SITE1 HANA DB VMs.
```bash sudo vi /etc/fstab
- # Add the following entries
+ # Add the following entry
10.23.1.7:/HN1-shared-s1 /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0 # Mount all volumes sudo mount -a ```
-5. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
+7. **[AH2]** Mount the shared Azure NetApp Files volumes on the SITE2 HANA DB VMs.
```bash sudo vi /etc/fstab
- # Add the following entries
+ # Add the following entry
10.23.1.7:/HN1-shared-s2 /hana/shared nfs rw,vers=4,minorversion=1,hard,timeo=600,rsize=262144,wsize=262144,intr,noatime,lock,_netdev,sec=sys 0 0 # Mount the volume sudo mount -a ``` -
-10. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4**.
+8. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4.1**.
```bash sudo nfsstat -m # Verify that flag vers is set to 4.1 # Example from SITE 1, hana-s1-db1 /hana/shared from 10.23.1.7:/HN1-shared-s1
- Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.11,local_lock=none,addr=10.23.1.7
+ Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr=10.23.1.7
# Example from SITE 2, hana-s2-db1 /hana/shared from 10.23.1.7:/HN1-shared-s2
- Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.14,local_lock=none,addr=10.23.1.7
+ Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.1.7
+ ```
+
+### Mount the shared file systems (Azure Files NFS)
+
+In this example, the shared HANA file systems are deployed on NFS on Azure Files. Follow the steps in this section only if you are using NFS on Azure Files.
+
+1. **[AH]** Create mount points for the HANA database volumes.
+
+ ```bash
+ mkdir -p /hana/shared
+ ```
+
+2. **[AH1]** Mount the Azure Files NFS shares on the SITE1 HANA DB VMs.
+
+ ```bash
+ sudo vi /etc/fstab
+ # Add the following entry
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1 /hana/shared nfs vers=4,minorversion=1,sec=sys 0 0
+ # Mount all volumes
+ sudo mount -a
+ ```
+
+3. **[AH2]** Mount the Azure Files NFS shares on the SITE2 HANA DB VMs.
+
+ ```bash
+ sudo vi /etc/fstab
+ # Add the following entry
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2 /hana/shared nfs vers=4,minorversion=1,sec=sys 0 0
+ # Mount the volume
+ sudo mount -a
+ ```
+
+4. **[AH]** Verify that the corresponding `/hana/shared/` file systems are mounted on all HANA DB VMs with NFS protocol version **NFSv4.1**.
+
+ ```bash
+ sudo nfsstat -m
+ # Example from SITE 1, hana-s1-db1
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s1
+ Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.19,local_lock=none,addr=10.23.0.35
+ # Example from SITE 2, hana-s2-db1
+ sapnfsafs.file.core.windows.net:/sapnfsafs/hn1-shared-s2
+ Flags: rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.23.0.22,local_lock=none,addr=10.23.0.35
``` ### Prepare the data and log local file systems
Include all virtual machines, including the majority maker in the cluster.
## Installation
-In this example for deploying SAP HANA in scale-out configuration with HSR on Azure VMs, we've used HANA 2.0 SP4.
+In this example for deploying SAP HANA in scale-out configuration with HSR on Azure VMs, we've used HANA 2.0 SP5.
### Prepare for HANA installation
In this example for deploying SAP HANA in scale-out configuration with HSR on Az
ssh root@hana-s2-db3 ```
-5. **[AH]** Install additional packages, which are required for HANA 2.0 SP4. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for your SLES version.
+5. **[AH]** Install additional packages, which are required for HANA 2.0 SP4 and above. For more information, see SAP Note [2593824](https://launchpad.support.sap.com/#/notes/2593824) for your SLES version.
```bash # In this example, using SLES12 SP5
In this example for deploying SAP HANA in scale-out configuration with HSR on Az
``` ### HANA installation on the first node on each site
-1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 Installation and Update guide](https://help.sap.com/viewer/2c1988d620e04368aa4103bf26f17727/2.0.04/en-US/7eb0167eb35e4e2885415205b8383584.html). In the instructions that follow, we show the SAP HANA installation on the first node on SITE 1.
+1. **[1]** Install SAP HANA by following the instructions in the [SAP HANA 2.0 Installation and Update guide](https://help.sap.com/docs/SAP_HANA_PLATFORM/2c1988d620e04368aa4103bf26f17727/7eb0167eb35e4e2885415205b8383584.html?version=2.0.05). In the instructions that follow, we show the SAP HANA installation on the first node on SITE 1.
a. Start the **hdblcm** program as `root` from the HANA installation software directory. Use the `internal_network` parameter and pass the address space for subnet, which is used for the internal HANA inter-node communication.
In this example for deploying SAP HANA in scale-out configuration with HSR on Az
# site name: HANA_S1 ```
-4. **[1,2]** Change the HANA configuration so that communication for HANA system replication if directed though the HANA system replication virtual network interfaces.
+4. **[1,2]** Change the HANA configuration so that communication for HANA system replication is directed through the HANA system replication virtual network interfaces.
- Stop HANA on both sites ```bash sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StopSystem HDB
In this example for deploying SAP HANA in scale-out configuration with HSR on Az
sudo -u hn1adm /usr/sap/hostctrl/exe/sapcontrol -nr 03 -function StartSystem HDB ```
- For more information, see [Host Name resolution for System Replication](https://help.sap.com/viewer/eb3777d5495d46c5b2fa773206bbfb46/1.0.12/en-US/c0cba1cb2ba34ec89f45b48b2157ec7b.html).
+ For more information, see [Host Name resolution for System Replication](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c0cba1cb2ba34ec89f45b48b2157ec7b.html?version=2.0.05).
## Create file system resources
Create a dummy file system cluster resource, which will monitor and report failu
crm configure property maintenance-mode=true ```
-2. **[1,2]** Create the directory on the ANF /hana/sahred volume, which will be used in the special file system monitoring resource. The directories need to be created on both sites.
+2. **[1,2]** Create the directory on the NFS mounted file system /hana/shared, which will be used in the special file system monitoring resource. The directories need to be created on both sites.
```bash mkdir -p /hana/shared/HN1/check ```
Create a dummy file system cluster resource, which will monitor and report failu
3. Verify the cluster configuration for a failure scenario, when a node loses access to the NFS share (`/hana/shared`).
- The SAP HANA resource agents depend on binaries, stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. A test that can be performed, is to create a temporary firewall rule to block access to the `/hana/shared` ANF volume on one of the primary site VMs. This approach validates that the cluster will fail over, if access to `/hana/shared` is lost on the active system replication site.
+ The SAP HANA resource agents depend on binaries, stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented configuration. A test that can be performed, is to create a temporary firewall rule to block access to the `/hana/shared` NFS mounted file system on one of the primary site VMs. This approach validates that the cluster will fail over, if access to `/hana/shared` is lost on the active system replication site.
- **Expected result**: When you block the access to the `/hana/shared` ANF volume on one of the primary site VMs, the monitoring operation that performs read/write operation on file system, will fail, as it is not able to access the file system and will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
+ **Expected result**: When you block the access to the `/hana/shared` NFS mounted file system on one of the primary site VMs, the monitoring operation that performs read/write operation on file system, will fail, as it is not able to access the file system and will trigger HANA resource failover. The same result is expected when your HANA node loses access to the NFS share.
You can check the state of the cluster resources by executing `crm_mon` or `crm status`. Resource state before starting the test: ```bash
Create a dummy file system cluster resource, which will monitor and report failu
# rsc_nc_HN1_HDB03 (ocf::heartbeat:azure-lb): Started hana-s2-db1 ```
- To simulate failure for `/hana/shared`, first confirm the IP address for the `/hana/shared` ANF volume on the primary site. You can do that by running `df -kh|grep /hana/shared`.
- Then, set up a temporary firewall rule to block access to the IP address of the `/hana/shared` ANF volume by executing the following command on one of the primary HANA system replication site VMs.
- In this example the command was executed on hana-s1-db1.
+ To simulate failure for `/hana/shared`:
+
+- If using NFS on ANF, first confirm the IP address for the `/hana/shared` ANF volume on the primary site. You can do that by running `df -kh|grep /hana/shared`.
+- If using NFS on Azure Files, first determine the IP address of the private endpoint for your storage account.
+
+ Then, set up a temporary firewall rule to block access to the IP address of the `/hana/shared` NFS file system by executing the following command on one of the primary HANA system replication site VMs.
+
+ In this example the command was executed on hana-s1-db1 for ANF volume `/hana/shared`.
```bash iptables -A INPUT -s 10.23.1.7 -j DROP; iptables -A OUTPUT -d 10.23.1.7 -j DROP
virtual-network-manager Concept Azure Policy Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-azure-policy-integration.md
To use Azure Policy with network groups, users need the following permissions:
- `Microsoft.Network/networkManagers/networkGroups/join/action` action is needed on the target network group referenced in the **Add to network group** section. This permission allows for the adding and removing of objects from the target network group. - When using set definitions to assign multiple policies at the same time, concurrent `networkGroup/join/action` permissions are needed on all definitions being assigned at the time of assignment.
-To set the needed permissions, uses can be assigned built-in roles with [role-based access control](../role-based-access-control/quickstart-assign-role-user-portal.md):
+To set the needed permissions, users can be assigned built-in roles with [role-based access control](../role-based-access-control/quickstart-assign-role-user-portal.md), as illustrated in the sketch after this list:
- **Network Contributor** role to the target network group. -- **API Management Service Contributor** role at the target scope level.
+- **Resource Policy Contributor** role at the target scope level.
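For example, the built-in role assignments above could be granted with the Azure CLI; a minimal sketch, where the assignee object ID and resource IDs are placeholders:

```azurecli
# Network Contributor on the target network group
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "Network Contributor" \
  --scope "<network-group-resource-id>"

# Resource Policy Contributor at the target scope (for example, a subscription)
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "Resource Policy Contributor" \
  --scope "/subscriptions/<subscription-id>"
```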
For more granular role assignment, you can create [custom roles](../role-based-access-control/custom-roles-portal.md) using the `networkGroups/join/action` permission and `policy/write` permission. ## Helpful tips
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
Title: VM with multiple IP addresses via the Azure CLI-
-description: Learn how to assign multiple IP addresses to a virtual machine via the Azure CLI.
+ Title: Multiple IP addresses for Azure virtual machines - Azure CLI
+
+description: Learn how to create a virtual machine with multiple IP addresses with the Azure CLI.
- Previously updated : 11/17/2016 Last updated : 09/07/2022
-# Assign multiple IP addresses to virtual machines via the Azure CLI
--
-This article explains how to create a virtual machine (VM) through the Azure Resource Manager deployment model using the Azure CLI. Multiple IP addresses cannot be assigned to resources created through the classic deployment model. To learn more about Azure deployment models, read the [Understand deployment models](../../azure-resource-manager/management/deployment-models.md) article.
--
-## <a name = "create"></a>Create a VM with multiple IP addresses
-
-The steps that follow explain how to create an example virtual machine with multiple IP addresses, as described in the scenario. Change variable values in "" and IP address types, as required, for your implementation.
-
-1. Install the [Azure CLI](/cli/azure/install-azure-cli) if you don't already have it installed.
-2. Create an SSH public and private key pair for Linux VMs by completing the steps in the [Create an SSH public and private key pair for Linux VMs](../../virtual-machines/linux/mac-create-ssh-keys.md?toc=/azure/virtual-network/toc.json).
-3. From a command shell, login with the command `az login` and select the subscription you're using.
-4. Create the VM by executing the script that follows on a Linux or Mac computer. The script creates a resource group, one virtual network (VNet), one NIC with three IP configurations, and a VM with the two NICs attached to it. The NIC, public IP address, virtual network, and VM resources must all exist in the same location and subscription. Though the resources don't all have to exist in the same resource group, in the following script they do.
-
-```azurecli
-
-#!/bin/sh
-
-RgName="myResourceGroup"
-Location="westcentralus"
-az group create --name $RgName --location $Location
-
-# Create a public IP address resource with a static IP address using the `--allocation-method Static` option. If you
-# do not specify this option, the address is allocated dynamically. The address is assigned to the resource from a pool
-# of IP addresses unique to each Azure region. Download and view the file from
-# https://www.microsoft.com/en-us/download/details.aspx?id=41653 that lists the ranges for each region.
-
-PipName="myPublicIP"
-
-# This name must be unique within an Azure location.
-DnsName="myDNSName"
-
-az network public-ip create \
name $PipName \resource-group $RgName \location $Location \dns-name $DnsName\allocation-method Static-
-# Create a virtual network with one subnet
-
-VnetName="myVnet"
-VnetPrefix="10.0.0.0/16"
-VnetSubnetName="mySubnet"
-VnetSubnetPrefix="10.0.0.0/24"
-
-az network vnet create \
name $VnetName \resource-group $RgName \location $Location \address-prefix $VnetPrefix \subnet-name $VnetSubnetName \subnet-prefix $VnetSubnetPrefix-
-# Create a network interface connected to the subnet and associate the public IP address to it. Azure will create the
-# first IP configuration with a static private IP address and will associate the public IP address resource to it.
-
-NicName="MyNic1"
-az network nic create \
name $NicName \resource-group $RgName \location $Location \subnet $VnetSubnet1Name \private-ip-address 10.0.0.4vnet-name $VnetName \public-ip-address $PipName
-
-# Create a second public IP address, a second IP configuration, and associate it to the NIC. This configuration has a
-# static public IP address and a static private IP address.
-
-az network public-ip create \
resource-group $RgName \location $Location \name myPublicIP2 \dns-name mypublicdns2 \allocation-method Static-
-az network nic ip-config create \
resource-group $RgName \nic-name $NicName \name IPConfig-2 \private-ip-address 10.0.0.5 \public-ip-name myPublicIP2-
-# Create a third IP configuration, and associate it to the NIC. This configuration has static private IP address and # no public IP address.
-
-az network nic ip-config create \
-  --resource-group $RgName \
-  --nic-name $NicName \
-  --private-ip-address 10.0.0.6 \
-  --name IPConfig-3
-
-# Note: Though this article assigns all IP configurations to a single NIC, you can also assign multiple IP configurations
-# to any NIC in a VM. To learn how to create a VM with multiple NICs, read the Create a VM with multiple NICs
-# article: https://docs.microsoft.com/azure/virtual-network/virtual-network-deploy-multinic-arm-cli.
-
-# Create a VM and attach the NIC.
-
-VmName="myVm"
-
-# Replace the value for the following **VmSize** variable with a value from the
-# https://docs.microsoft.com/azure/virtual-machines/sizes article. The script fails if the VM size
-# is not supported in the location you select. Run the `az vm list-sizes --location westcentralus` command to get a full
-# list of VM sizes available in West Central US, for example.
-
-VmSize="Standard_DS1"
-
-# Replace the value for the OsImage variable value with a value for *urn* from the output returned by entering the
-# `az vm image list` command.
-
-OsImage="credativ:Debian:8:latest"
-
-Username="adminuser"
-
-# Replace the following value with the path to your public key file. If you're creating a Windows VM, remove the following
-# line and you'll be prompted for the password you want to configure for the VM.
-
-SshKeyValue="~/.ssh/id_rsa.pub"
-
-az vm create \
-  --name $VmName \
-  --resource-group $RgName \
-  --image $OsImage \
-  --location $Location \
-  --size $VmSize \
-  --nics $NicName \
-  --admin-username $Username \
-  --ssh-key-value $SshKeyValue
+# Assign multiple IP addresses to virtual machines using the Azure CLI
+
+An Azure Virtual Machine (VM) has one or more network interfaces (NIC) attached to it. Any NIC can have one or more static or dynamic public and private IP addresses assigned to it.
+
+Assigning multiple IP addresses to a VM enables the following capabilities:
+
+* Host multiple websites or services with different IP addresses and TLS/SSL certificates on a single server.
+
+* Serve as a network virtual appliance, such as a firewall or load balancer.
+
+* Add any of the private IP addresses for any of the NICs to an Azure Load Balancer back-end pool. In the past, only the primary IP address for the primary NIC could be added to a back-end pool. For more information about load balancing multiple IP configurations, see [Load balancing multiple IP configurations](../../load-balancer/load-balancer-multiple-ip.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+
+Every NIC attached to a VM has one or more IP configurations associated to it. Each configuration is assigned one static or dynamic private IP address. Each configuration may also have one public IP address resource associated to it. To learn more about IP addresses in Azure, read the [IP addresses in Azure](../../virtual-network/ip-services/public-ip-addresses.md) article.
+
+> [!NOTE]
+> All IP configurations on a single NIC must be associated to the same subnet. If multiple IPs on different subnets are desired, multiple NICs on a VM can be used. To learn more about multiple NICs on a VM in Azure, read the [Create VM with Multiple NICs](../../virtual-machines/windows/multiple-nics.md) article.
+
+There's a limit to how many private IP addresses can be assigned to a NIC. There's also a limit to how many public IP addresses can be used in an Azure subscription. See the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) article for details.
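
If you want to see how close a subscription is to these limits in a given region, you can list current network resource usage with the Azure CLI. A minimal sketch (the region is an example):

```azurecli
# Show network resource usage, including public IP addresses, against the regional limits.
az network list-usages \
  --location eastus2 \
  --output table
```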
+
+This article explains how to add multiple IP addresses to a virtual machine using the Azure CLI.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
++
+- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
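
If you're running the Azure CLI locally rather than in Cloud Shell, you can check which version is installed and upgrade if it's older than required. A quick sketch (the `az upgrade` command is available in Azure CLI 2.11.0 and later):

```azurecli
# Show the installed Azure CLI version and component versions.
az version

# Upgrade the Azure CLI and installed extensions to the latest release.
az upgrade
```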
+
+> [!NOTE]
+> Though the steps in this article assign all IP configurations to a single NIC, you can also assign multiple IP configurations to any NIC in a multi-NIC VM. To learn how to create a VM with multiple NICs, see [Create a VM with multiple NICs](../../virtual-machines/windows/multiple-nics.md).
+
+ :::image type="content" source="./media/virtual-network-multiple-ip-addresses-portal/multiple-ipconfigs.png" alt-text="Diagram of network configuration resources created in How-to article.":::
+
+ *Figure: Diagram of network configuration resources created in How-to article.*
+
+## Create a resource group
+
+An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+Create a resource group with [az group create](/cli/azure/group#az-group-create) named **myResourceGroup** in the **eastus2** location.
+
+```azurecli-interactive
+ az group create \
+ --name myResourceGroup \
+ --location eastus2
+```
+
+## Create a virtual network
+
+In this section, you'll create a virtual network for the virtual machine.
+
+Use [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create) to create a virtual network.
+
+```azurecli-interactive
+ az network vnet create \
+ --resource-group myResourceGroup \
+ --location eastus2 \
+ --name myVNet \
+ --address-prefixes 10.1.0.0/16 \
+ --subnet-name myBackendSubnet \
+ --subnet-prefixes 10.1.0.0/24
```
-In addition to creating a VM with a NIC with 3 IP configurations, the script creates:
-
-- A single premium managed disk by default, but you have other options for the disk type you can create. Read the [Create a Linux VM using the Azure CLI](../../virtual-machines/linux/quick-create-cli.md?toc=/azure/virtual-network/toc.json) article for details.
-- A virtual network with one subnet and two public IP addresses. Alternatively, you can use *existing* virtual network, subnet, NIC, or public IP address resources. To learn how to use existing network resources rather than creating additional resources, enter `az vm create -h`.
-
-Public IP addresses have a nominal fee. To learn more about IP address pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) article.
-
-After the VM is created, enter the `az network nic show --name MyNic1 --resource-group myResourceGroup` command to view the NIC configuration. Enter the `az network nic ip-config list --nic-name MyNic1 --resource-group myResourceGroup --output table` command to view a list of the IP configurations associated to the NIC.
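
For convenience, here are the same two verification commands, taken directly from the preceding paragraph, as a single block:

```azurecli
# View the NIC configuration.
az network nic show --name MyNic1 --resource-group myResourceGroup

# List the IP configurations associated to the NIC.
az network nic ip-config list --nic-name MyNic1 --resource-group myResourceGroup --output table
```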
-
-Add the private IP addresses to the VM operating system by completing the steps for your operating system in the [Add IP addresses to a VM operating system](#os-config) section of this article.
-
-## <a name="add"></a>Add IP addresses to a VM
-
-You can add additional private and public IP addresses to an existing Azure network interface by completing the steps that follow. The examples build upon the scenario described in this article.
-
-1. Open a command shell and complete the remaining steps in this section within a single session. If you don't already have Azure CLI installed and configured, complete the steps in the [Azure CLI installation](/cli/azure/install-az-cli2?toc=/azure/virtual-network/toc.json) article and log in to your Azure account with the `az login` command.
-
-2. Complete the steps in one of the following sections, based on your requirements:
-
- **Add a private IP address**
-
- To add a private IP address to a NIC, you must create an IP configuration using the command that follows. The static IP address must be an unused address for the subnet.
-
- ```azurecli
- az network nic ip-config create \
- --resource-group myResourceGroup \
- --nic-name myNic1 \
- --private-ip-address 10.0.0.7 \
- --name IPConfig-4
- ```
-
- Create as many configurations as you require, using unique configuration names and private IP addresses (for configurations with static IP addresses).
-
- **Add a public IP address**
-
- A public IP address is added by associating it to either a new IP configuration or an existing IP configuration. Complete the steps in one of the sections that follow, as you require.
-
- Public IP addresses have a nominal fee. To learn more about IP address pricing, read the [IP address pricing](https://azure.microsoft.com/pricing/details/ip-addresses) page. There is a limit to the number of public IP addresses that can be used in a subscription. To learn more about the limits, read the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#networking-limits) article.
-
- - **Associate the resource to a new IP configuration**
-
- Whenever you add a public IP address in a new IP configuration, you must also add a private IP address, because all IP configurations must have a private IP address. You can either add an existing public IP address resource, or create a new one. To create a new one, enter the following command:
-
- ```azurecli
- az network public-ip create \
- --resource-group myResourceGroup \
- --location westcentralus \
- --name myPublicIP3 \
- --dns-name mypublicdns3
- ```
-
- To create a new IP configuration with a static private IP address and the associated *myPublicIP3* public IP address resource, enter the following command:
-
- ```azurecli
- az network nic ip-config create \
- --resource-group myResourceGroup \
- --nic-name myNic1 \
- --name IPConfig-5 \
 - --private-ip-address 10.0.0.8 \
- --public-ip-address myPublicIP3
- ```
-
- - **Associate the resource to an existing IP configuration**
- A public IP address resource can only be associated to an IP configuration that doesn't already have one associated. You can determine whether an IP configuration has an associated public IP address by entering the following command:
-
- ```azurecli
- az network nic ip-config list \
- --resource-group myResourceGroup \
- --nic-name myNic1 \
- --query "[?provisioningState=='Succeeded'].{ Name: name, PublicIpAddressId: publicIpAddress.id }" --output table
- ```
-
- Returned output:
-
- ```output
- Name PublicIpAddressId
-
- ipconfig1 /subscriptions/[Id]/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP1
- IPConfig-2 /subscriptions/[Id]/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP2
- IPConfig-3
- ```
-
 - Since the **PublicIpAddressId** column for *IPConfig-3* is blank in the output, no public IP address resource is currently associated to it. You can add an existing public IP address resource to *IPConfig-3*, or enter the following command to create one:
-
- ```azurecli
- az network public-ip create \
 - --resource-group myResourceGroup \
- --location westcentralus \
- --name myPublicIP3 \
- --dns-name mypublicdns3 \
- --allocation-method Static
- ```
-
- Enter the following command to associate the public IP address resource to the existing IP configuration named *IPConfig-3*:
-
- ```azurecli
- az network nic ip-config update \
- --resource-group myResourceGroup \
- --nic-name myNic1 \
- --name IPConfig-3 \
- --public-ip myPublicIP3
- ```
-
-3. View the private IP addresses and the public IP address resource Ids assigned to the NIC by entering the following command:
-
- ```azurecli
- az network nic ip-config list \
- --resource-group myResourceGroup \
- --nic-name myNic1 \
- --query "[?provisioningState=='Succeeded'].{ Name: name, PrivateIpAddress: privateIpAddress, PrivateIpAllocationMethod: privateIpAllocationMethod, PublicIpAddressId: publicIpAddress.id }" --output table
- ```
-
- Returned output: <br>
-
- ```output
- Name PrivateIpAddress PrivateIpAllocationMethod PublicIpAddressId
-
- ipconfig1 10.0.0.4 Static /subscriptions/[Id]/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP1
- IPConfig-2 10.0.0.5 Static /subscriptions/[Id]/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP2
- IPConfig-3 10.0.0.6 Static /subscriptions/[Id]/resourceGroups/myResourceGroup/providers/Microsoft.Network/publicIPAddresses/myPublicIP3
- ```
-
-4. Add the private IP addresses you added to the NIC to the VM operating system by following the instructions in the [Add IP addresses to a VM operating system](#os-config) section of this article. Do not add the public IP addresses to the operating system.
+## Create public IP addresses
+
+Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create two public IP addresses.
+
+```azurecli-interactive
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP-1 \
+ --sku Standard \
+ --version IPv4 \
+ --zone 1 2 3
+
+ az network public-ip create \
+ --resource-group myResourceGroup \
+ --name myPublicIP-2 \
+ --sku Standard \
+ --version IPv4 \
+ --zone 1 2 3
+
+```
+
+## Create a network security group
+
+In this section, you'll create a network security group for the virtual machine and virtual network.
+
+Use [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create) to create the network security group.
+
+```azurecli-interactive
+ az network nsg create \
+ --resource-group myResourceGroup \
+ --name myNSG
+```
+
+### Create network security group rules
+
+You'll create a rule to allow connections to the virtual machine on port 22 for SSH.
+
+Use [az network nsg rule create](/cli/azure/network/nsg/rule#az-network-nsg-rule-create) to create the network security group rules.
+
+```azurecli-interactive
+ az network nsg rule create \
+ --resource-group myResourceGroup \
+ --nsg-name myNSG \
+ --name myNSGRuleSSH \
+ --protocol '*' \
+ --direction inbound \
+ --source-address-prefix '*' \
+ --source-port-range '*' \
+ --destination-address-prefix '*' \
+ --destination-port-range 22 \
+ --access allow \
+ --priority 200
+
+```
+### Create network interface
+
+You'll use [az network nic create](/cli/azure/network/nic#az-network-nic-create) to create the network interface for the virtual machine. The public IP addresses and the NSG created previously are associated with the NIC. The network interface is attached to the virtual network you created previously.
+
+```azurecli-interactive
+ az network nic create \
+ --resource-group myResourceGroup \
+ --name myNIC1 \
+ --private-ip-address-version IPv4 \
+ --vnet-name myVNet \
+    --subnet myBackendSubnet \
+ --network-security-group myNSG \
+ --public-ip-address myPublicIP-1
+```
+
+### Create secondary private and public IP configuration
+
+Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the secondary private and public IP configuration for the NIC. Replace **10.1.0.5** with your secondary private IP address.
+
+```azurecli-interactive
+ az network nic ip-config create \
+ --resource-group myResourceGroup \
+ --name ipconfig2 \
+ --nic-name myNIC1 \
+ --private-ip-address 10.1.0.5 \
+ --private-ip-address-version IPv4 \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet \
+ --public-ip-address myPublicIP-2
+```
+
+### Create tertiary private IP configuration
+
+Use [az network nic ip-config create](/cli/azure/network/nic/ip-config#az-network-nic-ip-config-create) to create the tertiary private IP configuration for the NIC. Replace **10.1.0.6** with your tertiary private IP address.
+
+```azurecli-interactive
+ az network nic ip-config create \
+ --resource-group myResourceGroup \
+ --name ipconfig3 \
+ --nic-name myNIC1 \
+ --private-ip-address 10.1.0.6 \
+ --private-ip-address-version IPv4 \
+ --vnet-name myVNet \
+ --subnet myBackendSubnet
+```
+
+> [!NOTE]
+> When adding a static IP address, you must specify an unused, valid address on the subnet the NIC is connected to.
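
If you're not sure whether an address is unused, you can check its availability before creating the configuration. A minimal sketch that reuses the virtual network created earlier (the address shown is only an example):

```azurecli
# Check whether 10.1.0.7 is a valid, available address in the virtual network.
az network vnet check-ip-address \
  --resource-group myResourceGroup \
  --name myVNet \
  --ip-address 10.1.0.7
```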
+
+### Create virtual machine
+
+Use [az vm create](/cli/azure/vm#az-vm-create) to create the virtual machine.
+
+```azurecli-interactive
+ az vm create \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --nics myNIC1 \
+ --image UbuntuLTS \
+ --admin-username azureuser \
+ --authentication-type ssh \
+ --generate-ssh-keys
+```
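
After the deployment completes, you can confirm that the primary, secondary, and tertiary IP configurations are all present on the NIC. A quick check that reuses the resources created in this article:

```azurecli
# List the IP configurations on the NIC in table format.
az network nic ip-config list \
  --resource-group myResourceGroup \
  --nic-name myNIC1 \
  --output table
```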
[!INCLUDE [virtual-network-multiple-ip-addresses-os-config.md](../../../includes/virtual-network-multiple-ip-addresses-os-config.md)]
virtual-network Virtual Network Multiple Ip Addresses Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal.md
This article explains how to add multiple IP addresses to a virtual machine using the Azure portal.
> [!NOTE]
> Though the steps in this article assign all IP configurations to a single NIC, you can also assign multiple IP configurations to any NIC in a multi-NIC VM. To learn how to create a VM with multiple NICs, see [Create a VM with multiple NICs](../../virtual-machines/windows/multiple-nics.md).
+
+ *Figure: Diagram of network configuration resources created in How-to article.*
## Add public and private IP address to a VM
virtual-wan Create Bgp Peering Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-portal.md
description: Learn how to create a BGP peering with Virtual WAN hub router.
Previously updated : 08/24/2022 Last updated : 09/06/2022 # Configure BGP peering to an NVA - Azure portal
-This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network using BGP Peering via the Azure portal. The virtual hub router learns routes from the NVA in a spoke VNet that is connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md).
+This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network using BGP Peering using the Azure portal. The virtual hub router learns routes from the NVA in a spoke VNet that is connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md). You can also create this configuration using [Azure PowerShell](create-bgp-peering-hub-powershell.md).
:::image type="content" source="./media/create-bgp-peering-hub-portal/diagram.png" alt-text="Diagram of configuration.":::
Verify that you've met the following criteria before beginning your configuration:
[!INCLUDE [Before you begin](../../includes/virtual-wan-before-include.md)]
-## <a name="openvwan"></a>Create a virtual WAN
+## Create a virtual WAN
[!INCLUDE [Create a virtual WAN](../../includes/virtual-wan-create-vwan-include.md)]
-## <a name="hub"></a>Create a hub
+## Create a hub
A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Once the hub is created, you'll be charged for the hub, even if you don't attach any sites.
A hub is a virtual network that can contain gateways for site-to-site, ExpressRo
Once you have the settings configured, click **Review + Create** to validate, then click **Create**. The hub will begin provisioning. After the hub is created, go to the hub's **Overview** page. When provisioning is completed, the **Routing status** is **Provisioned**.
-## <a name="vnet"></a>Connect the VNet to the hub
+## Connect the VNet to the hub
After your hub router status is provisioned, create a connection between your hub and VNet.
virtual-wan Create Bgp Peering Hub Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-powershell.md
+
+ Title: 'Configure BGP peering to an NVA: PowerShell'
+
+description: Learn how to create a BGP peering with Virtual WAN hub router using Azure PowerShell.
+++ Last updated : 09/08/2022+++
+# Configure BGP peering to an NVA - PowerShell
+
+This article helps you configure an Azure Virtual WAN hub router to peer with a Network Virtual Appliance (NVA) in your virtual network using BGP Peering using Azure PowerShell. The virtual hub router learns routes from the NVA in a spoke VNet that is connected to a virtual WAN hub. The virtual hub router also advertises the virtual network routes to the NVA. For more information, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md). You can also create this configuration using the [Azure portal](create-bgp-peering-hub-portal.md).
++
+## Prerequisites
+
+Verify that you've met the following criteria before beginning your configuration:
++
+### Azure PowerShell
++
+#### <a name="signin"></a>Sign in
++
+## Create a virtual WAN
+
+```azurepowershell-interactive
+$virtualWan = New-AzVirtualWan -ResourceGroupName "testRG" -Name "myVirtualWAN" -Location "West US"
+```
+
+## Create a virtual hub
+
+A hub is a virtual network that can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Once the hub is created, you'll be charged for the hub, even if you don't attach any sites.
+
+```azurepowershell-interactive
+$virtualHub = New-AzVirtualHub -VirtualWan $virtualWan -ResourceGroupName "testRG" -Name "westushub" -AddressPrefix "10.0.0.1/24"
+```
+
+## Connect the VNet to the hub
+
+Create a connection between your hub and VNet.
+
+```azurepowershell-interactive
+$remote = Get-AzVirtualNetwork -Name "[vnet name]" -ResourceGroupName "[resource group name]"
+$hubVnetConnection = New-AzVirtualHubVnetConnection -ResourceGroupName "[parent resource group name]" -VirtualHubName "[virtual hub name]" -Name "[name of connection]" -RemoteVirtualNetwork $remote
+```
+
+## Configure a BGP peer
+
+Configure BGP peer for the $hubVnetConnection you created.
+
+```azurepowershell-interactive
+New-AzVirtualHubBgpConnection -ResourceGroupName "testRG" -VirtualHubName "westushub" -PeerIp 192.168.1.5 -PeerAsn 20000 -Name "testBgpConnection" -VirtualHubVnetConnection $hubVnetConnection
+```
+
+Or, you can configure BGP for an existing virtual hub VNet connection.
+
+```azurepowershell-interactive
+$hubVnetConnection = Get-AzVirtualHubVnetConnection -ResourceGroupName "[resource group name]" -VirtualHubName "[virtual hub name]" -Name "[name of connection]"
+
+New-AzVirtualHubBgpConnection -ResourceGroupName "[resource group name]" -VirtualHubName "westushub" -PeerIp 192.168.1.5 -PeerAsn 20000 -Name "testBgpConnection" -VirtualHubVnetConnection $hubVnetConnection
+```
+
+## Modify a BGP peer
+
+Update an existing hub BGP peer connection.
+
+```azurepowershell-interactive
+Update-AzVirtualHubBgpConnection -ResourceGroupName "[resource group name]" -VirtualHubName "westushub" -PeerIp 192.168.1.6 -PeerAsn 20000 -Name "testBgpConnection" -VirtualHubVnetConnection $hubVnetConnection
+```
+
+## Delete a BGP peer
+
+Remove an existing hub BGP connection.
+
+```azurepowershell-interactive
+Remove-AzVirtualHubBgpConnection -ResourceGroupName "[resource group name]" -VirtualHubName "westushub" -Name "testBgpConnection"
+```
+
+## Next steps
+
+For more information about BGP scenarios, see [Scenario: BGP peering with a virtual hub](scenario-bgp-peering-hub.md).
web-application-firewall Waf Front Door Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-best-practices.md
For more information, see [Web Application Firewall DRS rule groups and rules](w
Front Door's WAF enables you to control the number of requests allowed from each client's IP address over a period of time. It's a good practice to add rate limiting to reduce the impact of clients accidentally or intentionally sending large amounts of traffic to your service, such as during a [*retry storm*](/azure/architecture/antipatterns/retry-storm/). For more information, see the following resources:-- [Configure a Web Application Firewall rate limit rule using Azure PowerShell](waf-front-door-rate-limit-powershell.md).
+- [What is rate limiting for Azure Front Door Service?](waf-front-door-rate-limit.md).
+- [Configure a Web Application Firewall rate limit rule using Azure PowerShell](waf-front-door-rate-limit-configure.md).
- [Why do additional requests above the threshold configured for my rate limit rule get passed to my backend server?](waf-faq.yml#why-do-additional-requests-above-the-threshold-configured-for-my-rate-limit-rule-get-passed-to-my-backend-server-)
+### Use a high threshold for rate limits
+
+It's usually a good practice to set your rate limit threshold to be quite high. For example, if you know that a single client IP address might send around 10 requests to your server each minute, consider specifying a threshold of 20 requests per minute.
+
+High rate limit thresholds avoid blocking legitimate traffic, while still providing protection against extremely high numbers of requests that might overwhelm your infrastructure.
+
## Geo-filtering best practices

### Geo-filter traffic
web-application-firewall Waf Front Door Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-custom-rules.md
Last updated 03/22/2022
-# Custom rules for Web Application Firewall with Azure Front Door
+# Custom rules for Web Application Firewall with Azure Front Door
-Azure Web Application Firewall (WAF) with Front Door allows you to control access to your web applications based on the conditions you define. A custom WAF rule consists of a priority number, rule type, match conditions, and an action. There are two types of custom rules: match rules and rate limit rules. A match rule controls access based on a set of matching conditions while a rate limit rule controls access based on matching conditions and the rates of incoming requests. You may disable a custom rule to prevent it from being evaluated, but still keep the configuration.
+Azure Web Application Firewall (WAF) with Front Door allows you to control access to your web applications based on the conditions you define. A custom WAF rule consists of a priority number, rule type, match conditions, and an action. There are two types of custom rules: match rules and rate limit rules. A match rule controls access based on a set of matching conditions while a rate limit rule controls access based on matching conditions and the rates of incoming requests. You may disable a custom rule to prevent it from being evaluated, but still keep the configuration.
+
+For more information on rate limiting, see [What is rate limiting for Azure Front Door Service?](waf-front-door-rate-limit.md).
## Priority, match conditions, and action types
You can control access with a custom WAF rule that defines a priority number, a
## Examples
-### WAF custom rules example based on http parameters
+### Match based on HTTP request parameters
-Here is an example that shows the configuration of a custom rule with two match conditions. Requests are from a specified site as defined by referrer, and query string doesn't contain "password".
+Suppose you need to configure a custom rule to allow requests that match the following two conditions:
+- The `Referer` header's value is equal to a known value.
+- The query string doesn't contain the word "password".
-```
-# http rules example
+Here's an example JSON description of the custom rule:
+
+```json
{ "name": "AllowFromTrustedSites", "priority": 1,
Here is an example that shows the configuration of a custom rule with two match
"negateCondition": true } ],
- "action": "Allow",
- "transforms": []
+ "action": "Allow"
}- ```
-An example configuration for blocking "PUT" method is shown as below:
-```
-# http Request Method custom rules
+### Block HTTP PUT requests
+
+Suppose you need to block any request that uses the HTTP PUT method.
+
+Here's an example JSON description of the custom rule:
+
+``` json
{ "name": "BlockPUT", "priority": 2,
An example configuration for blocking "PUT" method is shown as below:
] } ],
- "action": "Block",
- "transforms": []
+ "action": "Block"
} ``` ### Size constraint
-You may build a custom rule that specifies size constraint on part of an incoming request. For example, below rule blocks a Url that is longer than 100 characters.
+Front Door's WAF enables you to build custom rules that apply a length or size constraint on a part of an incoming request.
-```
-# http parameters size constraint
+Suppose you need to block requests where the URL is longer than 100 characters.
+
+Here's an example JSON description of the custom rule:
+
+```json
{ "name": "URLOver100", "priority": 5,
You may build a custom rule that specifies size constraint on part of an incomin
] } ],
- "action": "Block",
- "transforms": []
+ "action": "Block"
} ```
You may build a custom rule that specifies size constraint on part of an incomin
- [Configure a Web Application Firewall policy using Azure PowerShell](waf-front-door-custom-rules-powershell.md) - Learn about [web Application Firewall with Front Door](afds-overview.md) - Learn how to [create a Front Door](../../frontdoor/quickstart-create-front-door.md).-
web-application-firewall Waf Front Door Rate Limit Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit-configure.md
+
+ Title: Configure WAF rate limit rule for Front Door
+description: Learn how to configure a rate limit rule for an existing Front Door endpoint.
++++ Last updated : 09/07/2022++
+zone_pivot_groups: web-application-firewall-configuration
++
+# Configure a Web Application Firewall rate limit rule
+
+The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from a particular client IP address to the application during a rate limit duration. For more information about rate limiting, see [What is rate limiting for Azure Front Door Service?](waf-front-door-rate-limit.md).
+
+This article shows how to configure a WAF rate limit rule on Azure Front Door Standard and Premium tiers.
++
+## Scenario
+
+Suppose you're responsible for a public website. You've just added a page with information about a promotion your organization is running. You're concerned that, if clients visit that page too often, some of your backend services might not scale quickly and the application might have performance issues.
+
+You decide to create a rate limiting rule that restricts each client IP address to a maximum of 1000 requests per minute. You'll only apply this rule to requests that contain `*/promo*` in the request URL.
+
+> [!TIP]
+> If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+++
+## Create a Front Door profile and WAF policy
+
+1. On the Azure portal home page, select **Create a resource**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/create-resource.png" alt-text="Screenshot of the Azure portal showing the 'Create a resource' button on the home page." :::
+
+1. Search for **Front Door**, and select **Front Door and CDN profiles**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/create-front-door.png" alt-text="Screenshot of the Azure portal showing the marketplace, with Front Door highlighted." :::
+
+1. Select **Create**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/create-front-door-2.png" alt-text="Screenshot of the Azure portal showing the marketplace and Front Door, with the create button highlighted." :::
+
+1. Select **Continue to create a Front Door** to use the *quick create* portal creation process.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/quick-create.png" alt-text="Screenshot of the Azure portal showing the Front Door offerings, with the 'Quick create' option selected and the 'Continue to create a Front Door' button highlighted." :::
+
+1. Enter the information required on the *Basics* page:
+
+ - **Resource group:** Select an existing resource group, or create a new resource group for the Front Door and WAF resources.
+ - **Name:** Enter the name of your Front Door profile.
+ - **Tier:** Select *Standard* or *Premium*. For this scenario, both tiers support rate limiting.
+ - **Endpoint name:** Front Door endpoints must have globally unique names, so provide a unique name for your endpoint.
+ - **Origin type** and **Origin host name:** Select the origin application that you want to protect with your rate limiting rule.
+
+1. Next to **WAF policy**, select **Create new**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/front-door-waf-policy-create.png" alt-text="Screenshot of the Azure portal showing the Front Door creation workflow, with the WAF policy 'Create new' button highlighted." :::
+
+1. Enter the name of a WAF policy and select **Create**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/waf-policy-create.png" alt-text="Screenshot of the Azure portal showing the WAF policy creation prompt, with the 'Create' button highlighted." :::
+
+1. Select **Review + create**, then select **Create**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/front-door-create.png" alt-text="Screenshot of the Azure portal showing the completed Front Door profile configuration." :::
+
+1. After the deployment is finished, select **Go to resource**.
+
+## Create a rate limit rule
+
+1. Select **Custom rules** > **Add custom rule**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/custom-rule-add.png" alt-text="Screenshot of the Azure portal showing the WAF policy's custom rules page." :::
+
+1. Enter the information required to create a rate limiting rule:
+
+ - **Custom rule name:** Enter the name of the custom rule, such as *rateLimitRule*.
+ - **Rule type:** Rate limit
+ - **Priority:** Enter the priority of the rule, such as *1*.
+ - **Rate limit duration:** 1 minute
+ - **Rate limit threshold (requests):** 1000
+
+1. In **Conditions**, enter the information required to specify a match condition to identify requests where the URL contains the string */promo*:
+
+ - **Match type:** String
+ - **Match variable:** RequestUri
+ - **Operation:** Is
+ - **Operator:** Contains
+ - **Match values:** */promo*
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/custom-rule.png" alt-text="Screenshot of the Azure portal showing the custom rule configuration." :::
+
+1. Select **Add**.
+
+1. Select **Save**.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/custom-rule-save.png" alt-text="Screenshot of the Azure portal showing the custom rule list, including the new rate limiting rule." :::
+
+## Use prevention mode on the WAF
+
+By default, the Azure portal creates WAF policies in detection mode. This setting means that the WAF won't block requests. For more information, see [WAF modes](afds-overview.md#waf-modes).
+
+It's a good practice to [tune your WAF](waf-front-door-tuning.md) before using prevention mode, to avoid false positive detections that could cause your WAF to block legitimate requests.
+
+Here, you reconfigure the WAF to use prevention mode.
+
+1. Open the WAF policy.
+
+ Notice that the *Policy mode* is *Detection*.
+
+ :::image type="content" source="../media/waf-front-door-rate-limit-configure/waf-policy-mode.png" alt-text="Screenshot of the Azure portal showing the WAF policy, with the policy mode and 'Switch to prevention mode' button highlighted." :::
+
+1. Select **Switch to prevention mode**.
+++
+## Prerequisites
+
+Before you begin to set up a rate limit policy, set up your PowerShell environment and create a Front Door profile.
+
+### Set up your PowerShell environment
+
+Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
+
+You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Here you sign in with your Azure credentials and install the Azure PowerShell module for Front Door Standard/Premium.
+
+#### Connect to Azure with an interactive dialog for sign-in
+
+Sign in to Azure by running the following command:
+
+```azurepowershell
+Connect-AzAccount
+```
+
+#### Install PowerShellGet
+
+Ensure that the current version of PowerShellGet is installed. Run the following command:
+
+```azurepowershell
+Install-Module PowerShellGet -Force -AllowClobber
+```
+
+Then, restart PowerShell to ensure you use the latest version.
+
+#### Install the Front Door PowerShell modules
+
+Install the *Az.FrontDoor* and *Az.Cdn* PowerShell modules to work with Front Door Standard/Premium from PowerShell:
+
+```azurepowershell
+Install-Module -Name Az.FrontDoor
+Install-Module -Name Az.Cdn
+```
+
+You use the *Az.Cdn* module to work with Front Door Standard/Premium resources, and you use the *Az.FrontDoor* module to work with WAF resources.
+
+### Create a resource group
+
+Use the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group for your Front Door profile and WAF policy. Update the resource group name and location for your own requirements:
+
+```azurepowershell
+$resourceGroupName = 'FrontDoorRateLimit'
+
+New-AzResourceGroup -Name $resourceGroupName -Location 'westus'
+```
+
+### Create a Front Door profile
+
+Use the [New-AzFrontDoorCdnProfile](/powershell/module/az.cdn/new-azfrontdoorcdnprofile) cmdlet to create a new Front Door profile.
+
+In this example, you create a Front Door standard profile named *MyFrontDoorProfile*:
+
+```azurepowershell
+$frontDoorProfile = New-AzFrontDoorCdnProfile `
+ -Name 'MyFrontDoorProfile' `
+ -ResourceGroupName $resourceGroupName `
+ -Location global `
+ -SkuName Standard_AzureFrontDoor
+```
+
+### Create a Front Door endpoint
+
+Use the [New-AzFrontDoorCdnEndpoint](/powershell/module/az.cdn/new-azfrontdoorcdnendpoint) cmdlet to add an endpoint to your Front Door profile.
+
+Front Door endpoints must have globally unique names, so update the value of the `$frontDoorEndpointName` variable to something unique.
+
+```azurepowershell
+$frontDoorEndpointName = '<unique-front-door-endpoint-name>'
+
+$frontDoorEndpoint = New-AzFrontDoorCdnEndpoint `
+ -EndpointName $frontDoorEndpointName `
+ -ProfileName $frontDoorProfile.Name `
+ -ResourceGroupName $frontDoorProfile.ResourceGroupName `
+ -Location $frontDoorProfile.Location
+```
+
+## Define a URL match condition
+
+Use the [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject) cmdlet to create a match condition, to identify requests that should have the rate limit applied.
+
+The following example matches requests where the *RequestUri* variable contains the string */promo*:
+
+```azurepowershell
+$promoMatchCondition = New-AzFrontDoorWafMatchConditionObject `
+ -MatchVariable RequestUri `
+ -OperatorProperty Contains `
+ -MatchValue '/promo'
+```
+
+## Create a custom rate limit rule
+
+Use the [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject) cmdlet to create the rate limit rule, which includes the match condition you defined in the previous step as well as the rate limit request threshold.
+
+The following example sets the limit to 1000:
+
+```azurepowershell
+$promoRateLimitRule = New-AzFrontDoorWafCustomRuleObject `
+ -Name 'rateLimitRule' `
+ -RuleType RateLimitRule `
+ -MatchCondition $promoMatchCondition `
+ -RateLimitThreshold 1000 `
+ -Action Block `
+ -Priority 1
+```
+
+When any client IP address sends more than 1000 requests within one minute, the WAF blocks subsequent requests until the next minute starts.
+
+## Create a WAF policy
+
+Use the [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) cmdlet to create a WAF policy, which includes the custom rule you just created:
+
+```azurepowershell
+$wafPolicy = New-AzFrontDoorWafPolicy `
+ -Name 'MyWafPolicy' `
+ -ResourceGroupName $frontDoorProfile.ResourceGroupName `
+ -Sku Standard_AzureFrontDoor `
+ -CustomRule $promoRateLimitRule
+```
+
+## Configure a security policy to associate your Front Door profile with your WAF policy
+
+Use the [New-AzFrontDoorCdnSecurityPolicy](/powershell/module/az.cdn/new-azfrontdoorcdnsecuritypolicy) cmdlet to create a security policy for your Front Door profile. A security policy associates your WAF policy with domains that you want to be protected by the WAF rule.
+
+In this example, you associate the endpoint's default hostname with your WAF policy:
+
+```azurepowershell
+$securityPolicyAssociation = New-AzFrontDoorCdnSecurityPolicyWebApplicationFirewallAssociationObject `
+ -PatternsToMatch @("/*") `
+ -Domain @(@{"Id"=$($frontDoorEndpoint.Id)})
+
+$securityPolicyParameters = New-AzFrontDoorCdnSecurityPolicyWebApplicationFirewallParametersObject `
+ -Association $securityPolicyAssociation `
+ -WafPolicyId $wafPolicy.Id
+
+$frontDoorSecurityPolicy = New-AzFrontDoorCdnSecurityPolicy `
+ -Name 'MySecurityPolicy' `
+ -ProfileName $frontDoorProfile.Name `
+ -ResourceGroupName $frontDoorProfile.ResourceGroupName `
+ -Parameter $securityPolicyParameters
+```
+
+> [!NOTE]
+> Whenever you make changes to your WAF policy, you don't need to recreate the Front Door security policy. WAF policy updates are automatically applied to the Front Door domains.
+++
+## Quickstart
+
+To create a Front Door profile with a rate limit rule by using Bicep, see the [Front Door Standard/Premium with rate limit](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.cdn/front-door-standard-premium-rate-limit/) Bicep quickstart.
++
+## Next steps
+
+- Learn more about [Front Door](../../frontdoor/front-door-overview.md).
web-application-firewall Waf Front Door Rate Limit Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit-powershell.md
- Title: Configure WAF rate limit rule for Front Door - Azure PowerShell
-description: Learn how to configure a rate limit rule for an existing Front Door endpoint.
---- Previously updated : 03/21/2022----
-# Configure a Web Application Firewall rate limit rule using Azure PowerShell
-
-The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door controls the number of requests allowed from a particular client IP address to the application during a rate limit duration. This article shows how to configure a WAF rate limit rule that controls the number of requests allowed from a particular client to a web application that contains */promo* in the URL using Azure PowerShell.
--
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-> [!NOTE]
-> Rate limits are applied for each client IP address. If you have multiple clients accessing your Front Door from different IP addresses, they will have their own rate limits applied.
-
-## Prerequisites
-
-Before you begin to set up a rate limit policy, set up your PowerShell environment and create a Front Door profile.
-
-### Set up your PowerShell environment
-
-Azure PowerShell provides a set of cmdlets that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
-
-You can install [Azure PowerShell](/powershell/azure/) on your local machine and use it in any PowerShell session. Follow the instructions on the page to sign in with your Azure credentials and install the Az PowerShell module.
-
-#### Connect to Azure with an interactive dialog for sign in
-
-```
-Connect-AzAccount
-
-```
-Before installing the Front Door module, make sure the current version of PowerShellGet is installed. Run the following command, and then reopen PowerShell.
-
-```
-Install-Module PowerShellGet -Force -AllowClobber
-```
-
-#### Install Az.FrontDoor module
-
-```
-Install-Module -Name Az.FrontDoor
-```
-### Create a Front Door profile
-
-Create a Front Door profile by following the instructions described in [Quickstart: Create a Front Door profile](../../frontdoor/quickstart-create-front-door.md)
-
-## Define URL match conditions
-
-Define a URL match condition (URL contains /promo) using [New-AzFrontDoorWafMatchConditionObject](/powershell/module/az.frontdoor/new-azfrontdoorwafmatchconditionobject).
-The following example matches */promo* as the value of the *RequestUri* variable:
-
-```powershell-interactive
- $promoMatchCondition = New-AzFrontDoorWafMatchConditionObject `
- -MatchVariable RequestUri `
- -OperatorProperty Contains `
- -MatchValue "/promo"
-```
-## Create a custom rate limit rule
-
-Set a rate limit using [New-AzFrontDoorWafCustomRuleObject](/powershell/module/az.frontdoor/new-azfrontdoorwafcustomruleobject).
-In the following example, the limit is set to 1000. Requests from a particular client IP address to the promo page that exceed 1000 during one minute are blocked until the next minute starts.
-
-```powershell-interactive
- $promoRateLimitRule = New-AzFrontDoorWafCustomRuleObject `
- -Name "rateLimitRule" `
- -RuleType RateLimitRule `
- -MatchCondition $promoMatchCondition `
- -RateLimitThreshold 1000 `
- -Action Block -Priority 1
-```
-
-## Configure a security policy
-
-Find the name of the resource group that contains the Front Door profile using `Get-AzureRmResourceGroup`. Next, configure a security policy with a custom rate limit rule using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy) in the specified resource group that contains the Front Door profile.
-
-The below example uses the Resource Group name *myResourceGroupFD1* with the assumption that you've created the Front Door profile using instructions provided in the [Quickstart: Create a Front Door](../../frontdoor/quickstart-create-front-door.md) article, using [New-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/new-azfrontdoorwafpolicy).
-
-```powershell-interactive
- $ratePolicy = New-AzFrontDoorWafPolicy `
- -Name "RateLimitPolicyExamplePS" `
- -resourceGroupName myResourceGroupFD1 `
- -Customrule $promoRateLimitRule `
- -Mode Prevention `
- -EnabledState Enabled
-```
-## Link policy to a Front Door front-end host
-
-Link the security policy object to an existing Front Door front-end host and update Front Door properties. First retrieve the Front Door object using [Get-AzFrontDoor](/powershell/module/Az.FrontDoor/Get-AzFrontDoor) command.
-Next, set the front-end *WebApplicationFirewallPolicyLink* property to the *resourceId* of the "$ratePolicy" created in the previous step using [Set-AzFrontDoor](/powershell/module/Az.FrontDoor/Set-AzFrontDoor) command.
-
-The below example uses the Resource Group name *myResourceGroupFD1* with the assumption that you've created the Front Door profile using instructions provided in the [Quickstart: Create a Front Door](../../frontdoor/quickstart-create-front-door.md) article. Also, in the below example, replace $frontDoorName with the name of your Front Door profile.
-
-```powershell-interactive
- $FrontDoorObjectExample = Get-AzFrontDoor `
- -ResourceGroupName myResourceGroupFD1 `
- -Name $frontDoorName
- $FrontDoorObjectExample[0].FrontendEndpoints[0].WebApplicationFirewallPolicyLink = $ratePolicy.Id
- Set-AzFrontDoor -InputObject $FrontDoorObjectExample[0]
- ```
-
-> [!NOTE]
-> You only need to set *WebApplicationFirewallPolicyLink* property once to link a security policy to a Front Door front-end. Subsequent policy updates are automatically applied to the front-end.
-
-## Next steps
--- Learn more about [Front Door](../../frontdoor/front-door-overview.md).
web-application-firewall Waf Front Door Rate Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit.md
+
+ Title: Web application firewall rate limiting for Azure Front Door
+description: Learn how to use Web Application Firewall (WAF) rate limiting to protect your web applications from malicious attacks.
++++ Last updated : 09/07/2022+++
+# What is rate limiting for Azure Front Door Service?
+
+Rate limiting enables you to detect and block abnormally high levels of traffic from any client IP address. By using the web application firewall (WAF) with Azure Front Door, you can mitigate some types of denial of service attacks. Rate limiting also protects you against clients that have accidentally been misconfigured to send large volumes of requests in a short time period.
+
+Rate limits are applied for each client IP address. If you have multiple clients accessing your Front Door from different IP addresses, they'll each have their own rate limits applied.
+
+## Configure a rate limit policy
+
+Rate limiting is configured by using [custom WAF rules](./waf-front-door-custom-rules.md).
+
+When you configure a rate limit rule, you specify the *threshold*: the number of web requests allowed from each client IP address within a time period of either one minute or five minutes.
+
+You also must specify at least one *match condition*, which tells Front Door when to activate the rate limit. You can configure multiple rate limits that apply to different paths within your application.
+
+If you need to apply a rate limit rule to all of your requests, consider using a match condition like the following example:
++
+The match condition above identifies all requests with a `Host` header of length greater than 0. Because all valid HTTP requests for Front Door contain a `Host` header, this match condition has the effect of matching all HTTP requests.
+
+## Rate limits and Front Door servers
+
+Requests from the same client often arrive at the same Front Door server. In that case, you'll see requests are blocked as soon as the rate limit is reached for each client IP address.
+
+However, it's possible that requests from the same client might arrive at a different Front Door server that hasn't refreshed the rate limit counter yet. For example, the client might open a new TCP connection for each request. If the threshold is low enough, the first request to the new Front Door server could pass the rate limit check. So, for a very low threshold (for example, less than about 50 requests per minute), you might see some requests above the threshold get through.
+
+## Next steps
+
+- [Configure rate limiting on your Front Door WAF](waf-front-door-rate-limit-configure.md)
+- Review [Rate limiting best practices](waf-front-door-best-practices.md#rate-limiting-best-practices)