Updates from: 10/26/2022 01:12:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Concept Conditional Access Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md
-# Conditional Access: Users and groups
+# Conditional Access: Users, groups, and workload identities
-A Conditional Access policy must include a user assignment as one of the signals in the decision process. Users can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access to the user.
+A Conditional Access policy must include a user, group, or workload identity assignment as one of the signals in the decision process. These can be included or excluded from Conditional Access policies. Azure Active Directory evaluates all policies and ensures that all requirements are met before granting access.
> [!VIDEO https://www.youtube.com/embed/5DsW1hB3Jqs]
If you do find yourself locked out, see [What to do if you're locked out of the
Conditional Access policies that target external users may interfere with service provider access, for example, granular delegated admin privileges. For more information, see [Introduction to granular delegated admin privileges (GDAP)](/partner-center/gdap-introduction). For policies that are intended to target service provider tenants, use the **Service provider user** external user type available in the **Guest or external users** selection options.
+## Workload identities (Preview)
+
+A workload identity is an identity that allows an application or service principal access to resources, sometimes in the context of a user. Conditional Access policies can be applied to single-tenant service principals that have been registered in your tenant. Third-party SaaS and multi-tenanted apps are out of scope. Managed identities aren't covered by policy.
+
+Organizations can target specific workload identities to be included or excluded from policy.
+
+For more information, see the article [Conditional Access for workload identities preview](workload-identity.md).
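+
+Workload identities can also be targeted programmatically through Microsoft Graph. The following is a minimal sketch, assuming the Microsoft Graph beta endpoint for Conditional Access policies and a placeholder service principal object ID; it creates a report-only policy that blocks a single workload identity:
+
+```azurecli-interactive
+# Sketch only: create a report-only Conditional Access policy that targets
+# one service principal (workload identity). The object ID is a placeholder.
+az rest --method POST \
+  --url "https://graph.microsoft.com/beta/identity/conditionalAccess/policies" \
+  --body '{
+    "displayName": "Report-only: block example workload identity",
+    "state": "enabledForReportingButNotEnforced",
+    "conditions": {
+      "applications": { "includeApplications": ["All"] },
+      "clientApplications": { "includeServicePrincipals": ["<service-principal-object-id>"] }
+    },
+    "grantControls": { "operator": "OR", "builtInControls": ["block"] }
+  }'
+```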
+ ## Next steps - [Conditional Access: Cloud apps or actions](concept-conditional-access-cloud-apps.md)
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
+
+ Title: Filter for applications in Conditional Access policy (Preview) - Azure Active Directory
+description: Use filter for applications in Conditional Access to manage conditions.
+++ Last updated : 09/30/2022++++++++++
+# Conditional Access: Filter for applications (Preview)
+
+Currently Conditional Access policies can be applied to all apps or to individual apps. Organizations with a large number of apps may find this process difficult to manage across multiple Conditional Access policies.
+
+Application filters are a new feature for Conditional Access that allows organizations to tag service principals with custom attributes. These custom attributes are then added to their Conditional Access policies. A common question is whether apps are assigned at runtime or configuration time; filters for applications are evaluated at token issuance runtime.
+
+In this document, you create a custom attribute set, assign a custom security attribute to your application, and create a Conditional Access policy to secure the application.
+
+> [!NOTE]
+> Filter for applications is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Assign roles
+
+Custom security attributes are security-sensitive and can only be managed by delegated users. Even global administrators don't have default permissions for custom security attributes. Assign one or more of the following roles to the users who manage or report on these attributes.
+
+| Role name | Description |
+| | |
+| Attribute assignment administrator | Assign custom security attribute keys and values to supported Azure AD objects. |
+| Attribute assignment reader | Read custom security attribute keys and values for supported Azure AD objects. |
+| Attribute definition administrator | Define and manage the definition of custom security attributes. |
+| Attribute definition reader | Read the definition of custom security attributes. |
+
+1. Assign the appropriate role to the users who will manage or report on these attributes at the directory scope.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+## Create custom security attributes
+
+Follow the instructions in the article, [Add or deactivate custom security attributes in Azure AD (Preview)](../fundamentals/custom-security-attributes-add.md) to add the following **Attribute set** and **New attributes**.
+
+- Create an **Attribute set** named *ConditionalAccessTest*.
+- Create a **New attribute** named *policyRequirement* that has **Allow multiple values to be assigned** and **Only allow predefined values to be assigned** selected. Add the following predefined values:
+ - legacyAuthAllowed
+   - blockGuestUsers
+ - requireMFA
+ - requireCompliantDevice
+ - requireHybridJoinedDevice
+ - requireCompliantApp
++
+> [!NOTE]
+> Conditional Access filters for applications only work with custom security attributes of type "string".
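+
+If you prefer to script this setup, the attribute set and attribute can also be created through Microsoft Graph. The following is a minimal sketch, assuming the Microsoft Graph beta endpoints for custom security attributes; the values mirror the list above:
+
+```azurecli-interactive
+# Sketch only: create the attribute set, then the multi-valued, predefined
+# string attribute used by the filter.
+az rest --method POST \
+  --url "https://graph.microsoft.com/beta/directory/attributeSets" \
+  --body '{"id": "ConditionalAccessTest", "maxAttributesPerSet": 25}'
+
+az rest --method POST \
+  --url "https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions" \
+  --body '{
+    "attributeSet": "ConditionalAccessTest",
+    "name": "policyRequirement",
+    "type": "String",
+    "status": "Available",
+    "isCollection": true,
+    "isSearchable": true,
+    "usePreDefinedValuesOnly": true,
+    "allowedValues": [
+      {"id": "legacyAuthAllowed", "isActive": true},
+      {"id": "blockGuestUsers", "isActive": true},
+      {"id": "requireMFA", "isActive": true},
+      {"id": "requireCompliantDevice", "isActive": true},
+      {"id": "requireHybridJoinedDevice", "isActive": true},
+      {"id": "requireCompliantApp", "isActive": true}
+    ]
+  }'
+```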
+
+## Create a Conditional Access policy
++
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+1. Select **New policy**.
+1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
+1. Under **Assignments**, select **Users or workload identities**.
+ 1. Under **Include**, select **All users**.
+ 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts.
+ 1. Select **Done**.
+1. Under **Cloud apps or actions**, select the following options:
+    1. Select what this policy applies to: **Cloud apps**.
+ 1. Include **Select apps**.
+ 1. Select **Edit filter**.
+ 1. Set **Configure** to **Yes**.
+ 1. Select the **Attribute** we created earlier called *policyRequirement*.
+ 1. Set **Operator** to **Contains**.
+ 1. Set **Value** to **requireMFA**.
+ 1. Select **Done**.
+1. Under **Access controls** > **Grant**, select **Grant access**, **Require multi-factor authentication**, and select **Select**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
+1. Select **Create** to create and enable your policy.
+
+After confirming your settings using [report-only mode](howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
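+
+The state change itself can also be made through Microsoft Graph. A sketch, assuming a placeholder policy ID:
+
+```azurecli-interactive
+# Sketch only: switch a Conditional Access policy from report-only to on.
+az rest --method PATCH \
+  --url "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies/<policy-id>" \
+  --body '{"state": "enabled"}'
+```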
+
+## Configure custom attributes
+
+### Step 1: Set up a sample application
+
+If you already have a test application that makes use of a service principal, you can skip this step.
+
+Set up a sample application that demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. Follow the instructions in the article [Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity](../develop/quickstart-v2-netcore-daemon.md) to create this application.
+
+### Step 2: Assign a custom security attribute to an application
+
+A service principal that isn't listed in your tenant can't be targeted. The Office 365 suite is an example of one such service principal.
+
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
+1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Select the service principal you want to apply a custom security attribute to.
+1. Under **Manage** > **Custom security attributes (preview)**, select **Add assignment**.
+1. Under **Attribute set**, select **ConditionalAccessTest**.
+1. Under **Attribute name**, select **policyRequirement**.
+1. Under **Assigned values**, select **Add values**, select **requireMFA** from the list, then select **Done**.
+1. Select **Save**.
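+
+The same assignment can be made programmatically. A sketch, assuming the Microsoft Graph beta endpoint for service principal custom security attributes and a placeholder object ID:
+
+```azurecli-interactive
+# Sketch only: tag the service principal with policyRequirement = requireMFA.
+az rest --method PATCH \
+  --url "https://graph.microsoft.com/beta/servicePrincipals/<service-principal-object-id>" \
+  --body '{
+    "customSecurityAttributes": {
+      "ConditionalAccessTest": {
+        "@odata.type": "#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
+        "policyRequirement@odata.type": "#Collection(String)",
+        "policyRequirement": ["requireMFA"]
+      }
+    }
+  }'
+```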
+
+### Step 3: Test the policy
+
+Sign in as a user the policy applies to, and verify that MFA is required when you access the application.
+
+## Other scenarios
+
+- Blocking legacy authentication
+- Blocking external access to applications
+- Requiring a compliant device or Intune app protection policies
+- Enforcing sign-in frequency controls for specific applications
+- Requiring a privileged access workstation for specific applications
+- Requiring session controls for high-risk users and specific applications
+
+## Next steps
+
+[Conditional Access common policies](concept-conditional-access-policy-common.md)
+
+[Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md)
+
+[Simulate sign in behavior using the Conditional Access What If tool](troubleshoot-conditional-access-what-if.md)
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Previously updated : 09/26/2022 Last updated : 10/24/2022
az identity federated-credential delete --name $ficId --identity-name $uaId --re
::: zone-end
+## Prerequisites
+
+- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](/azure/active-directory/managed-identities-azure-resources/overview). Be sure to review the [difference between a system-assigned and user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
+- Get the information for your external IdP and software workload, which you need in the following steps.
+- To create a user-assigned managed identity and configure a federated identity credential, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.
+- To run the example scripts, you have two options:
+ - Use [Azure Cloud Shell](../../cloud-shell/overview.md), which you can open by using the **Try It** button in the upper-right corner of code blocks.
+ - Run scripts locally with Azure PowerShell, as described in the next section.
+- [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-powershell#list-user-assigned-managed-identities-2)
+- Find the object ID of the user-assigned managed identity, which you need in the following steps. One way to look it up is sketched after this list.
+
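+To look up that object ID with the Azure CLI (a sketch; the resource group and identity name are placeholders):
+
+```azurecli-interactive
+# The principalId property is the object ID of the managed identity.
+az identity show --resource-group <resourceGroupName> --name <identityName> \
+  --query principalId --output tsv
+```
+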
+### Configure Azure PowerShell locally
+
+To use Azure PowerShell locally for this article instead of using Cloud Shell:
+
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-az-ps) if you haven't already.
+
+1. Sign in to Azure.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+1. Install the [latest version of PowerShellGet](/powershell/scripting/gallery/installing-psget#for-systems-with-powershell-50-or-newer-you-can-install-the-latest-powershellget).
+
+ ```azurepowershell
+ Install-Module -Name PowerShellGet -AllowPrerelease
+ ```
+
+ You might need to `Exit` out of the current PowerShell session after you run this command for the next step.
+
+1. Install the `Az.ManagedServiceIdentity` module to perform the user-assigned managed identity operations in this article.
+
+ ```azurepowershell
+ Install-Module -Name Az.ManagedServiceIdentity
+ ```
+
+## Configure a federated identity credential on a user-assigned managed identity
+
+Run the `New-AzFederatedIdentityCredentials` command to create a new federated identity credential on your user-assigned managed identity (specified by name). Specify the *name*, *issuer*, *subject*, and other parameters.
+
+```azurepowershell
+New-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01 `
+ -Name fic-pwsh01 -Issuer "https://kubernetes-oauth.azure.com" -Subject "system:serviceaccount:ns:svcaccount"
+```
+
+## List federated identity credentials on a user-assigned managed identity
+
+Run the `Get-AzFederatedIdentityCredentials` command to read all the federated identity credentials configured on a user-assigned managed identity:
+
+```azurepowershell
+Get-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01
+```
+
+## Get a federated identity credential on a user-assigned managed identity
+
+Run the `Get-AzFederatedIdentityCredentials` command with the `-Name` parameter to show a specific federated identity credential:
+
+```azurepowershell
+Get-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01 -Name fic-pwsh01
+```
+
+## Delete a federated identity credential from a user-assigned managed identity
+
+Run the `Remove-AzFederatedIdentityCredentials` command to delete a federated identity credential under an existing user-assigned identity.
+
+```azurepowershell
+Remove-AzFederatedIdentityCredentials -ResourceGroupName azure-rg-test -IdentityName uai-pwsh01 -Name fic-pwsh01
+```
+ ::: zone pivot="identity-wif-mi-methods-arm" ## Prerequisites
All of the template parameters are mandatory.
There is a limit of 3-120 characters for a federated identity credential name. The name must contain only alphanumeric characters, dashes, or underscores, and the first character must be alphanumeric.
-You must add exactly 1 audience to a federated identity credential, this gets verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
+You must add exactly one audience to a federated identity credential. The audience is verified during token exchange. Use "api://AzureADTokenExchange" as the default value.
List, Get, and Delete operations are not available with template. Refer to Azure CLI for these operations. By default, all child federated identity credentials are created in parallel, which triggers concurrency detection logic and causes the deployment to fail with a 409-conflict HTTP status code. To create them sequentially, specify a chain of dependencies using the *dependsOn* property.
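
Alternatively, federated identity credentials can be created one at a time with the Azure CLI, avoiding the concurrency conflict entirely. A sketch, with placeholder names and the default audience:

```azurecli-interactive
# Sketch only: create credentials sequentially instead of in parallel.
az identity federated-credential create --name fic-01 \
  --identity-name <identityName> --resource-group <resourceGroupName> \
  --issuer "https://kubernetes-oauth.azure.com" \
  --subject "system:serviceaccount:ns:svcaccount" \
  --audiences "api://AzureADTokenExchange"
```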
https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RES
## Delete a federated identity credential from a user-assigned managed identity
-Delete a federated identity credentials on the specified user-assigned managed identity.
+Delete a federated identity credential on the specified user-assigned managed identity.
```bash curl 'https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<USER ASSIGNED IDENTITY NAME>/<RESOURCE NAME>/federatedIdentityCredentials/<FEDERATED IDENTITY CREDENTIAL RESOURCENAME>?api-version=2022-01-31-preview' -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer <ACCESS TOKEN>"
active-directory Azure Ad Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/azure-ad-account.md
+ # Add Azure Active Directory (Azure AD) as an identity provider for External Identities
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Last updated 05/10/2022
-+ #Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a guest user in the portal, and understand the end user experience.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Last updated 08/05/2022
-+
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
Last updated 06/30/2022
-+
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
adobe-target: true+ # Leave an organization as an external user
Permanent deletion can be initiated by the admin, or it happens at the end of th
## Next steps - Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)-- [Close your Microsoft account](/microsoft-365/commerce/close-your-account)
+- [Close your Microsoft account](/microsoft-365/commerce/close-your-account)
active-directory User Flow Add Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-flow-add-custom-attributes.md
Last updated 03/02/2021 -+
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for member users in the following ways:
| Permission | Setting explanation | | - | | | **Register applications** | Setting this option to **No** prevents users from creating application registrations. You can then grant the ability back to specific individuals by adding them to the application developer role. |
-| **Create tenants** | Setting this option to **No** prevents users from creating new Azure AD or Azure AD B2C tenants. You can grant the ability back to specific individuals by adding them to tenant creator role. |
| **Allow users to connect work or school account with LinkedIn** | Setting this option to **No** prevents users from connecting their work or school account with their LinkedIn account. For more information, see [LinkedIn account connections data sharing and consent](../enterprise-users/linkedin-user-consent.md). | | **Create security groups** | Setting this option to **No** prevents users from creating security groups. Global administrators and user administrators can still create security groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). | | **Create Microsoft 365 groups** | Setting this option to **No** prevents users from creating Microsoft 365 groups. Setting this option to **Some** allows a set of users to create Microsoft 365 groups. Global administrators and user administrators can still create Microsoft 365 groups. To learn how, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-cmdlets.md). |
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 09/09/2022 Last updated : 10/24/2022
If you are reviewing access to an application, then before creating the review,
1. In the **Enable review decision helpers** section choose whether you want your reviewer to receive recommendations during the review process: 1. If you select **No sign-in within 30 days**, users who have signed in during the previous 30-day period are recommended for approval. Users who haven't signed in during the past 30 days are recommended for denial. This 30-day interval is irrespective of whether the sign-ins were interactive or not. The last sign-in date for the specified user will also display along with the recommendation.
+    1. If you select **User-to-Group Affiliation**, reviewers get a recommendation to approve or deny access for each user based on the user's average distance in the organization's reporting structure. Users who are very distant from all the other users within the group are considered to have "low affiliation" and get a deny recommendation in the group access reviews.
> [!NOTE] > If you create an access review based on applications, your recommendations are based on the 30-day interval period depending on when the user last signed in to the application rather than the tenant.
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure DevTest Labs | [Enable user-assigned managed identities on lab virtual machines in Azure DevTest Labs](../../devtest-labs/enable-managed-identities-lab-vms.md) | | Azure Digital Twins | [Enable a managed identity for routing Azure Digital Twins events](../../digital-twins/how-to-enable-managed-identities-portal.md) | | Azure Event Grid | [Event delivery with a managed identity](../../event-grid/managed-service-identity.md)
+| Azure Event Hubs | [Authenticate a managed identity with Azure Active Directory to access Event Hubs Resources](../../event-hubs/authenticate-managed-identity.md)
| Azure Image Builder | [Azure Image Builder overview](../../virtual-machines/image-builder-overview.md#permissions) | | Azure Import/Export | [Use customer-managed keys in Azure Key Vault for Import/Export service](../../import-export/storage-import-export-encryption-key-portal.md) | Azure IoT Hub | [IoT Hub support for virtual networks with Private Link and Managed Identity](../../iot-hub/virtual-network-support.md) |
active-directory Atea Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atea-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
11. Review the user attributes that are synchronized from Azure AD to Atea in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Atea for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Atea API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|
- ||||
- |userName|String|&check;|
- |active|Boolean|
- |emails[type eq "work"].value|String|
- |name.givenName|String|
- |name.familyName|String|
- |name.formatted|String|
- |phoneNumbers[type eq "mobile"].value|String|
- |locale|String|
- |nickName|String|
+ |Attribute|Type|Supported for filtering|Required by Atea|
+ |||||
+ |userName|String|&check;|&check;|
+ |active|Boolean||&check;|
+ |emails[type eq "work"].value|String||&check;|
+ |name.givenName|String|||
+ |name.familyName|String|||
+ |name.formatted|String||&check;|
+ |phoneNumbers[type eq "mobile"].value|String|||
+ |locale|String|||
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it's to completion. * If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change Log
+* 10/25/2022 - Drop core user attribute **nickName**.
+* 10/25/2022 - Changed the mapping of core user attribute **name.formatted** to **Join(" ", [givenName], [surname]) -> name.formatted**.
+* 10/25/2022 - The domain name of all OAuth config URLs of the Atea app changed to an Atea-owned domain.
+ ## Additional resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Atlassian Cloud Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Atlassian Cloud**. 9. Review the user attributes that are synchronized from Azure AD to Atlassian Cloud in the **Attribute Mapping** section.
- The email attribute will be used to match Atlassian Cloud accounts with your Azure AD accounts.
+ **The email attribute will be used to match Atlassian Cloud accounts with your Azure AD accounts.**
Select the **Save** button to commit any changes. |Attribute|Type|
Once you've configured provisioning, use the following resources to monitor your
* Atlassian Cloud only supports provisioning updates for users with verified domains. Changes made to users from a non-verified domain will not be pushed to Atlassian Cloud. Learn more about Atlassian verified domains [here](https://support.atlassian.com/provisioning-users/docs/understand-user-provisioning/). * Atlassian Cloud does not support group renames today. This means that any changes to the displayName of a group in Azure AD will not be updated and reflected in Atlassian Cloud. * The value of the **mail** user attribute in Azure AD is only populated if the user has a Microsoft Exchange Mailbox. If the user does not have one, it is recommended to map a different desired attribute to the **emails** attribute in Atlassian Cloud.
-* When a group is synced and removed from the sync scope, the corresponding group gets deleted from the Atlassian provisioning directory. It will cause the groups on the Cloud sites, where the group had previously synced to be deleted as well. Deleting groups at the site level is destructive and can't be reversed/recovered. If the synced groups are being used to provide permissions like Jira project permissions, deleting the groups will remove those permissions settings from the Cloud site. Simply re-adding/re-syncing the group won't restore permissions. The Cloud site admins will need to manually rebuild the permissions. Do not remove a group from the sync scope in Azure AD unless you are sure that you want the group to get deleted on the Atlassian Cloud sites.
+ ## Change log
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, click on Authorize, make sure that you enter your Taskize Connect account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Taskize Connect. If the connection fails, ensure your Taskize Connect account has Admin permissions and try again.
+1. In the **Admin Credentials** section, click on Authorize, make sure that you enter your Gong account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Gong. If the connection fails, ensure your Gong account has Admin permissions and try again.
![Token](media/gong-provisioning-tutorial/gong-authorize.png)
+
1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ![Notification Email](common/provisioning-notification-email.png)
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
The traditional [Azure Container Networking Interface (CNI)](./configure-azure-c
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS. > [!NOTE]
-> - Azure CNI Overlay is currently only available in US West Central region.
-
+> Azure CNI Overlay is currently available in the following regions:
+> - North Central US
+> - West Central US
## Overview of overlay networking In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
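
A minimal sketch of creating an overlay cluster, with placeholder names (the overlay preview feature must be registered on the subscription):

```azurecli-interactive
# Sketch only: nodes get IPs from the VNet subnet; pods get IPs from --pod-cidr.
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```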
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
+
+ Title: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium.
+++ Last updated : 10/24/2022++
+# Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview)
+
+Azure CNI Powered by Cilium combines the robust control plane of Azure CNI with the dataplane of [Cilium](https://cilium.io/) to provide high-performance networking and security.
+
+By making use of eBPF programs loaded into the Linux kernel and a more efficient API object structure, Azure CNI Powered by Cilium provides the following benefits:
+
+- Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
+- Faster service routing
+- More efficient network policy enforcement
+- Better observability of cluster traffic
+- Support for larger clusters (more nodes, pods, and services)
++
+## IP Address Management (IPAM) with Azure CNI Powered by Cilium
+
+Azure CNI Powered by Cilium can be deployed using two different methods for assigning pod IPs:
+
+- assign IP addresses from a VNet (similar to existing Azure CNI with Dynamic Pod IP Assignment)
+- assign IP addresses from an overlay network (similar to Azure CNI Overlay mode)
+
+> [!NOTE]
+> Azure CNI Overlay networking currently requires the `Microsoft.ContainerService/AzureOverlayPreview` feature and may be available only in certain regions. For more information, see [Azure CNI Overlay networking](./azure-cni-overlay.md).
+
+If you aren't sure which option to select, read ["Choosing a network model to use"](./azure-cni-overlay.md#choosing-a-network-model-to-use).
+
+## Network Policy Enforcement
+
+Cilium enforces [network policies to allow or deny traffic between pods](./operator-best-practices-network.md#control-traffic-flow-with-network-policies). With Cilium, you don't need to install a separate network policy engine such as Azure Network Policy Manager or Calico.
+
+## Limitations
+
+Azure CNI powered by Cilium currently has the following limitations:
+
+* Available only for new clusters.
+* Available only for Linux and not for Windows.
+* Cilium L7 policy enforcement is disabled.
+* Hubble is disabled.
+* Kubernetes services with `internalTrafficPolicy=Local` aren't supported ([Cilium issue #17796](https://github.com/cilium/cilium/issues/17796)).
+* Multiple Kubernetes services can't use the same host port with different protocols (for example, TCP or UDP) ([Cilium issue #14287](https://github.com/cilium/cilium/issues/14287)).
+* Network policies may be enforced on reply packets when a pod connects to itself via service cluster IP ([Cilium issue #19406](https://github.com/cilium/cilium/issues/19406)).
+
+## Prerequisites
+
+* Azure CLI version 2.41.0 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* Azure CLI with aks-preview extension 0.5.109 or later.
+* If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `CiliumDataplanePreview` preview feature
+
+To create an AKS cluster with Azure CNI powered by Cilium, you must enable the `CiliumDataplanePreview` feature flag on your subscription.
+
+Register the `CiliumDataplanePreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CiliumDataplanePreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Create a new AKS Cluster with Azure CNI Powered by Cilium
+
+### Option 1: Assign IP addresses from a VNet
+
+Run the following commands to create a resource group and VNet with a subnet for nodes and a subnet for pods.
+
+```azurecli-interactive
+# Create the resource group
+az group create --name <resourceGroupName> --location <location>
+```
+
+```azurecli-interactive
+# Create a VNet with a subnet for nodes and a subnet for pods
+az network vnet create -g <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
+az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
+az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name podsubnet --address-prefixes <address prefix, example: 10.241.0.0/16> -o none
+```
+
+Create the cluster using `--enable-cilium-dataplane`:
+
+```azurecli-interactive
+az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
+ --pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/podsubnet \
+ --enable-cilium-dataplane
+```
+
+### Option 2: Assign IP addresses from an overlay network
+
+Run these commands to create a resource group and VNet with a single subnet:
+
+```azurecli-interactive
+# Create the resource group
+az group create --name <resourceGroupName> --location <location>
+```
+
+```azurecli-interactive
+# Create a VNet with a subnet for nodes
+az network vnet create -g <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
+az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
+```
+
+Then create the cluster using `--enable-cilium-dataplane`:
+
+```azurecli-interactive
+az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --network-plugin-mode overlay \
+ --pod-cidr 192.168.0.0/16 \
+ --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
+ --enable-cilium-dataplane
+```
+
+## Frequently asked questions
+
+- *Can I customize Cilium configuration?*
+
+  No, the Cilium configuration is managed by AKS and can't be modified. We recommend that customers who require more control use [AKS BYO CNI](./use-byo-cni.md) and install Cilium manually.
+
+- *Can I use `CiliumNetworkPolicy` custom resources instead of Kubernetes `NetworkPolicy` resources?*
+
+ `CiliumNetworkPolicy` custom resources aren't officially supported. We recommend that customers use Kubernetes `NetworkPolicy` resources to configure network policies.
+
+## Next steps
+
+Learn more about networking in AKS in the following articles:
+
+* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
+* [Use an internal load balancer with Azure Kubernetes Service (AKS)](internal-lb.md)
+
+* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
+* [Enable the HTTP application routing add-on][aks-http-app-routing]
+* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
+* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls]
+* [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+
+<!-- LINKS - Internal -->
+[aks-ingress-basic]: ingress-basic.md
+[aks-ingress-tls]: ingress-tls.md
+[aks-ingress-static-tls]: ingress-static-ip.md
+[aks-http-app-routing]: http-application-routing.md
+[aks-ingress-internal]: ingress-internal-ip.md
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services
description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS). Previously updated : 10/21/2022 Last updated : 10/25/2022 # Sustainable software engineering practices in Azure Kubernetes Service (AKS)
We recommend careful consideration of these design patterns for building a susta
| [Enable cluster and node auto-updates](#enable-cluster-and-node-auto-updates) | | ✔️ | | [Install supported add-ons and extensions](#install-supported-add-ons-and-extensions) | ✔️ | ✔️ | | [Containerize your workload where applicable](#containerize-your-workload-where-applicable) | ✔️ | |
-| [Use spot node pools when possible](#use-spot-node-pools-when-possible) | | ✔️ |
+| [Use energy efficient hardware](#use-energy-efficient-hardware) | | ✔️ |
| [Match the scalability needs and utilize auto-scaling and bursting capabilities](#match-the-scalability-needs-and-utilize-auto-scaling-and-bursting-capabilities) | | ✔️ | | [Turn off workloads and node pools outside of business hours](#turn-off-workloads-and-node-pools-outside-of-business-hours) | ✔️ | ✔️ | | [Delete unused resources](#delete-unused-resources) | ✔️ | ✔️ |
Containers allow for reducing unnecessary resource allocation and making better
* Use [Draft](/azure/aks/draft) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
-### Use spot node pools when possible
+### Use energy efficient hardware
-Spot nodes use Spot VMs and are great for workloads that can handle interruptions, early terminations, or evictions such as batch processing jobs and development and testing environments.
+Ampere's Cloud Native Processors are uniquely designed to meet both the high performance and power efficiency needs of the cloud.
-* Use [spot node pools](/azure/aks/spot-node-pool) to take advantage of unused capacity in Azure at a significant cost saving for a more sustainable platform design for your [interruptible workloads](/azure/architecture/guide/spot/spot-eviction).
+* Evaluate if nodes with [Ampere Altra ArmΓÇôbased processors](https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/) are a good option for your workloads.
### Match the scalability needs and utilize auto-scaling and bursting capabilities
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
+
+ Title: Configure kube-proxy (iptables/IPVS) (preview)
+
+description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS).
++ Last updated : 10/25/2022+++
+#Customer intent: As a cluster operator, I want to utilize a different kube-proxy configuration.
++
+# Configure `kube-proxy` in Azure Kubernetes Service (AKS) (preview)
+
+`kube-proxy` is a component of Kubernetes that handles routing traffic for services within the cluster. There are two backends available for Layer 3/4 load balancing in upstream `kube-proxy` - iptables and IPVS.
+
+- iptables is the default backend utilized in the majority of Kubernetes clusters. It is simple and well supported, but is not as efficient or intelligent as IPVS.
+- IPVS utilizes the Linux Virtual Server, a layer 3/4 load balancer built into the Linux kernel. IPVS provides a number of advantages over the default iptables configuration, including state awareness, connection tracking, and more intelligent load balancing.
+
+The AKS-managed `kube-proxy` DaemonSet can also be disabled entirely, if desired, to support [bring-your-own CNI][aks-byo-cni].
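+
+To see the managed `kube-proxy` DaemonSet on an existing cluster (a sketch; this assumes the DaemonSet keeps its upstream name in the `kube-system` namespace):
+
+```bash
+kubectl get daemonset kube-proxy -n kube-system
+```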
++
+## Prerequisites
+
+* Azure CLI with aks-preview extension 0.5.105 or later.
+* If using ARM or the REST API, the AKS API version must be 2022-08-02-preview or later.
+
+### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `KubeProxyConfigurationPreview` preview feature
+
+To create an AKS cluster with custom `kube-proxy` configuration, you must enable the `KubeProxyConfigurationPreview` feature flag on your subscription.
+
+Register the `KubeProxyConfigurationPreview` feature flag by using the `az feature register` command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "KubeProxyConfigurationPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the `az feature list` command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/KubeProxyConfigurationPreview')].{Name:name,State:properties.state}"
+```
+
+When the feature has been registered, refresh the registration of the *Microsoft.ContainerService* resource provider by using the `az provider register` command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Configurable options
+
+The full `kube-proxy` configuration structure can be found in the [AKS Cluster Schema][aks-schema-kubeproxyconfig].
+
+- `enabled` - whether or not to deploy the `kube-proxy` DaemonSet. Defaults to true.
+- `mode` - can be set to `IPTABLES` or `IPVS`. Defaults to `IPTABLES`.
+- `ipvsConfig` - if `mode` is `IPVS`, this object contains IPVS-specific configuration properties.
+ - `scheduler` - which connection scheduler to utilize. Supported values:
+    - `LeastConnection` - sends connections to the backend pod with the fewest connections
+ - `RoundRobin` - distributes connections evenly between backend pods
+ - `tcpFinTimeoutSeconds` - the value used for timeout after a FIN has been received in a TCP session
+ - `tcpTimeoutSeconds` - the value used for timeout length for idle TCP sessions
+ - `udpTimeoutSeconds` - the value used for timeout length for idle UDP sessions
+
+> [!NOTE]
+> IPVS load balancing operates independently in each node and is only aware of connections flowing through the local node. This means that while `LeastConnection` results in more even load under a higher number of connections, traffic may still be relatively unbalanced when the number of connections is low (# connections < 2 * node count).
+
+## Utilize `kube-proxy` configuration in a new or existing AKS cluster using Azure CLI
+
+`kube-proxy` configuration is a cluster-wide setting. No action is needed to update your services.
+
+>[!WARNING]
+> Changing the kube-proxy configuration may cause a slight interruption in cluster service traffic flow.
+
+To begin, create a JSON configuration file with the desired settings:
+
+### Create a configuration file
+
+```json
+{
+ "enabled": true,
+ "mode": "IPVS",
+ "ipvsConfig": {
+ "scheduler": "LeastConnection",
+ "TCPTimeoutSeconds": 900,
+ "TCPFINTimeoutSeconds": 120,
+ "UDPTimeoutSeconds": 300
+ }
+}
+```
+
+### Deploy a new cluster
+
+Deploy your cluster using `az aks create` and pass in the configuration file:
+
+```bash
+az aks create -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json
+```
+
+### Update an existing cluster
+
+Configure your cluster using `az aks update` and pass in the configuration file:
+
+```bash
+az aks update -g <resourceGroup> -n <clusterName> --kube-proxy-config kube-proxy.json
+```
+
+## Next steps
+
+Learn more about utilizing the Standard Load Balancer for inbound traffic at the [AKS Standard Load Balancer documentation](load-balancer-standard.md).
+
+Learn more about using Internal Load Balancer for Inbound traffic at the [AKS Internal Load Balancer documentation](internal-lb.md).
+
+Learn more about Kubernetes services at the [Kubernetes services documentation][kubernetes-services].
+
+<!-- LINKS - External -->
+[kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
+[aks-schema-kubeproxyconfig]: /azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-bicep#containerservicenetworkprofilekubeproxyconfig
+
+<!-- LINKS - Internal -->
+[aks-byo-cni]: use-byo-cni.md
api-management Api Management Howto App Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-app-insights.md
To improve performance issues, skip:
+ Learn more about [Azure Application Insights](/azure/application-insights/). + Consider [logging with Azure Event Hubs](api-management-howto-log-event-hubs.md).++ - Learn about visualizing data from Application Insights using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
api-management Mitigate Owasp Api Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mitigate-owasp-api-threats.md
More information about this threat: [API4:2019 Lack of resources and rate limiti
* Limit the number of parallel backend connections with the [limit concurrency](api-management-advanced-policies.md#LimitConcurrency) policy.
-* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](../frontdoor/front-door-overview.md), or [Azure DDoS Protection Service](../ddos-protection/ddos-protection-overview.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
+* While API Management can protect backend services from DDoS attacks, it may be vulnerable to those attacks itself. Deploy a bot protection service in front of API Management (for example, [Azure Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md), [Azure Front Door](front-door-api-management.md), or [Azure DDoS Protection](protect-with-ddos-protection.md)) to better protect against DDoS attacks. When using a WAF with Azure Application Gateway or Azure Front Door, consider using [Microsoft_BotManagerRuleSet_1.0](../web-application-firewall/afds/afds-overview.md#bot-protection-rule-set).
## Broken function level authorization
More information about this threat: [API8:2019 Injection](https://github.com/OWA
### Recommendations
-* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesnΓÇÖt have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](../frontdoor/front-door-overview.md).
+* [Modern Web Application Firewall (WAF) policies](https://github.com/SpiderLabs/ModSecurity) cover many common injection vulnerabilities. While API Management doesnΓÇÖt have a built-in WAF component, deploying a WAF upstream (in front) of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](front-door-api-management.md).
> [!IMPORTANT] > Ensure that a bad actor can't bypass the gateway hosting the WAF and connect directly to the API Management gateway or backend API itself. Possible mitigations include: [network ACLs](../virtual-network/network-security-groups-overview.md), using API Management policy to [restrict inbound traffic by client IP](api-management-access-restriction-policies.md#RestrictCallerIPs), removing public access where not required, and [client certificate authentication](api-management-howto-mutual-certificates-for-clients.md) (also known as mutual TLS or mTLS).
api-management Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/observability.md
The table below summarizes all the observability capabilities supported by API M
- Get started with [Azure Monitor metrics and logs](api-management-howto-use-azure-monitor.md) - Learn how to log requests with [Application Insights](api-management-howto-app-insights.md) - Learn how to log events through [Event Hubs](api-management-howto-log-event-hubs.md)
+- Learn about visualizing Azure Monitor data using [Azure Managed Grafana](visualize-using-managed-grafana-dashboard.md)
api-management Protect With Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-ddos-protection.md
+
+ Title: Defend API Management against DDoS attacks
+description: Learn how to protect your API Management instance in an external virtual network against volumetric and protocol DDoS attacks by using Azure DDoS Protection Standard.
+++++ Last updated : 10/24/2022++
+# Defend your Azure API Management instance against DDoS attacks
+
+This article shows how to defend your Azure API Management instance against distributed denial of service (DDoS) attacks by enabling [Azure DDoS Protection](../ddos-protection/ddos-protection-overview.md). Azure DDoS Protection provides enhanced DDoS mitigation features to defend against volumetric and protocol DDoS attacks.
++
+## Supported configurations
+
+Enabling Azure DDoS Protection for API Management is currently available only for instances deployed (injected) in a VNet in [external mode](api-management-using-with-vnet.md).
+
+Currently, Azure DDoS Protection can't be enabled for the following API Management configurations:
+
+* Instances that aren't VNet-injected
+* Instances deployed in a VNet in [internal mode](api-management-using-with-internal-vnet.md)
+* Instances configured with a [private endpoint](private-endpoint.md)
+
+## Prerequisites
+
+* An API Management instance
+ * The instance must be deployed in an Azure VNet in [external mode](api-management-using-with-vnet.md)
+    * The instance must be configured with an Azure public IP address resource, which is supported only on the API Management `stv2` [compute platform](compute-infrastructure.md).
+ * If the instance is hosted on the `stv1` platform, you must [migrate](compute-infrastructure.md#how-do-i-migrate-to-the-stv2-platform) to the `stv2` platform.
+* An Azure DDoS Protection [plan](../ddos-protection/manage-ddos-protection.md)
+    * The plan you select can be in the same subscription as the virtual network and the API Management instance, or in a different one. If the subscriptions differ, they must be associated with the same Azure Active Directory tenant.
+ * You may use a plan created using either the Network DDoS protection SKU or IP DDoS Protection SKU (preview). See [Azure DDoS Protection SKU Comparison](../ddos-protection/ddos-protection-sku-comparison.md).
+
+ > [!NOTE]
+ > Azure DDoS Protection plans incur additional charges. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
+
+## Enable DDoS Protection
+
+Depending on the DDoS Protection plan you use, enable DDoS protection on the virtual network used for your API Management instance, or the IP address resource configured for your virtual network.
+
+### Enable DDoS Protection on the virtual network used for your API Management instance
+
+1. In the [Azure portal](https://portal.azure.com), navigate to the VNet where your API Management is injected.
+1. In the left menu, under **Settings**, select **DDoS protection**.
+1. Select **Enable**, and then select your **DDoS protection plan**.
+1. Select **Save**.
+
+ :::image type="content" source="media/protect-with-ddos-protection/enable-ddos-protection.png" alt-text="Screenshot of enabling a DDoS Protection plan on a VNet in the Azure portal.":::
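+
+The same association can be made with the Azure CLI. A sketch, with placeholder names:
+
+```azurecli-interactive
+# Sketch only: associate an existing DDoS protection plan with the VNet.
+az network vnet update \
+  --resource-group <resourceGroupName> \
+  --name <vnetName> \
+  --ddos-protection true \
+  --ddos-protection-plan <ddosProtectionPlanResourceId>
+```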
+
+### Enable DDoS protection on the API Management public IP address
+
+If your plan uses the IP DDoS Protection SKU, see [Enable DDoS IP Protection for a public IP address](../ddos-protection/manage-ddos-protection-powershell-ip.md#enable-ddos-ip-protection-for-an-existing-public-ip-address).
+
+## Next steps
+
+* Learn how to verify DDoS protection of your API Management instance by [testing with simulation partners](../ddos-protection/test-through-simulations.md)
+* Learn how to [view and configure Azure DDoS Protection telemetry](../ddos-protection/telemetry.md)
api-management Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-concepts.md
For more information, see [Integrate API Management in an internal virtual netwo
Learn more about:
-* [Connecting a virtual network to backend using VPN Gateway](../vpn-gateway/design.md#s2smulti)
-* [Connecting a virtual network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
-* [Virtual network frequently asked questions](../virtual-network/virtual-networks-faq.md)
- Virtual network configuration with API Management: * [Connect to an external virtual network using Azure API Management](./api-management-using-with-vnet.md). * [Connect to an internal virtual network using Azure API Management](./api-management-using-with-internal-vnet.md). * [Connect privately to API Management using a private endpoint](private-endpoint.md)-
+* [Defend your Azure API Management instance against DDoS attacks](protect-with-ddos-protection.md)
Related articles: * [Connecting a Virtual Network to backend using Vpn Gateway](../vpn-gateway/design.md#s2smulti) * [Connecting a Virtual Network from different deployment models](../vpn-gateway/vpn-gateway-connect-different-deployment-models-powershell.md)
-* [How to use the API Inspector to trace calls in Azure API Management](api-management-howto-api-inspector.md)
* [Virtual Network Frequently asked Questions](../virtual-network/virtual-networks-faq.md)
-* [Service tags](../virtual-network/network-security-groups-overview.md#service-tags)
api-management Visualize Using Managed Grafana Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/visualize-using-managed-grafana-dashboard.md
+
+ Title: Visualize Azure API Management monitoring data with Azure Managed Grafana
+description: Learn how to use an Azure Managed Grafana dashboard to visualize monitoring data from Azure API Management.
+++ Last updated : 10/17/2022++++
+# Visualize API Management monitoring data using a Managed Grafana dashboard
+
+You can use [Azure Managed Grafana](../managed-grafana/index.yml) to visualize API Management monitoring data that is collected into a Log Analytics workspace. Use a prebuilt [API Management dashboard](https://grafana.com/grafana/dashboards/16604-azure-api-management) for real-time visualization of logs and metrics collected from your API Management instance.
+
+* [Learn more about Azure Managed Grafana](../managed-grafan)
+* [Learn more about observability in Azure API Management](observability.md)
+
+## Prerequisites
+
+* API Management instance
+
+ * To visualize resource logs and metrics for API Management, configure [diagnostic settings](api-management-howto-use-azure-monitor.md#resource-logs) to collect resource logs and send them to a Log Analytics workspace
+
+ * To visualize detailed data about requests to the API Management gateway, [integrate](api-management-howto-app-insights.md) your API Management instance with Application Insights.
+
+ > [!NOTE]
+ > To visualize data in a single dashboard, configure the Log Analytics workspace for the diagnostic settings and the Application Insights instance in the same resource group as your API Management instance.
+
+* Managed Grafana workspace
+
+ * To create a Managed Grafana instance and workspace, see the quickstart for the [portal](../managed-grafan).
+
+ * The Managed Grafana instance must be in the same subscription as the API Management instance.
+
+ * When created, the Grafana workspace is automatically assigned an Azure Active Directory managed identity, which is assigned the Monitor Reader role on the subscription. This gives you immediate access to Azure Monitor from the new Grafana workspace without needing to set permissions manually. Learn more about [configuring data sources](../managed-grafan) for Managed Grafana.
+
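+As a minimal sketch of the diagnostic-settings prerequisite above, the following Azure CLI command sends API Management gateway logs to a Log Analytics workspace; the setting name and resource IDs are placeholders:
+
+```console
+# Route API Management GatewayLogs to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+    --name apim-to-law \
+    --resource <api-management-resource-id> \
+    --workspace <log-analytics-workspace-resource-id> \
+    --logs '[{"category":"GatewayLogs","enabled":true}]'
+```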
+
+## Import API Management dashboard
+
+First, import the [API Management dashboard](https://grafana.com/grafana/dashboards/16604-azure-api-management) to your Managed Grafana workspace.
+
+To import the dashboard:
+
+1. Go to your Azure Managed Grafana workspace. In the portal, on the **Overview** page of your Managed Grafana instance, select the **Endpoint** link.
+1. In the Managed Grafana workspace, go to **Dashboards** > **Browse** > **Import**.
+1. On the **Import** page, under **Import via grafana.com**, enter *16604* and select **Load**.
+1. Select an **Azure Monitor data source**, review or update the other options, and select **Import**.
+
+## Use API Management dashboard
+
+1. In the Managed Grafana workspace, go to **Dashboards** > **Browse** and select your API Management dashboard.
+1. In the dropdowns at the top, make selections for your API Management instance. If configured, select an Application Insights instance and a Log Analytics workspace.
+
+Review the default visualizations on the dashboard, which will appear similar to the following screenshot:
++
+## Next steps
+
+* For more information about managing your Grafana dashboard, see the [Grafana docs](https://grafana.com/docs/grafana/v9.0/dashboards/).
+* Easily pin log queries and charts from the Azure portal to your Managed Grafana dashboard. For more information, see [Monitor your Azure services in Grafana](../azure-monitor/visualize/grafana-plugin.md#pin-charts-from-the-azure-portal-to-azure-managed-grafana).
++++
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
The following table lists the supported languages for print text by the most rec
|Kazakh (Latin) | `kk-latn`|Zhuang | `za` |
|Khaling | `klr`|Zulu | `zu` |
-### Preview (2022-06-30-preview)
+### Print text in preview (API version 2022-06-30-preview)
Use the parameter `api-version=2022-06-30-preview` when using the REST API or the corresponding SDK to support these languages in your applications.
Receipt supports all English receipts with the following locales:
|English (United Kingdom)|`en-gb`|
|English (India)|`en-in`|
|English (United States)| `en-us`|
+|French | `fr` |
+|Spanish | `es` |
## Business card model
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
## October 2022
+### Language expansion
+ With the latest preview release, Form Recognizer's Read (OCR), Layout, and Custom template models support 134 new languages. These language additions include Greek, Latvian, Serbian, Thai, Ukrainian, and Vietnamese, along with several Latin and Cyrillic languages. Form Recognizer now has a total of 299 supported languages across the most recent GA and new preview versions. Refer to the [supported languages](language-support.md) page to see all supported languages. Use the REST API parameter `api-version=2022-06-30-preview` when using the API or the corresponding SDK to support the new languages in your applications.
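+
+As a hedged REST sketch of using the preview version, the following request analyzes a document with the prebuilt read model; the endpoint, key, and document URL are placeholders:
+
+```console
+curl -X POST "https://<your-endpoint>.cognitiveservices.azure.com/formrecognizer/documentModels/prebuilt-read:analyze?api-version=2022-06-30-preview" \
+    -H "Ocp-Apim-Subscription-Key: <your-key>" \
+    -H "Content-Type: application/json" \
+    -d '{"urlSource": "<document-url>"}'
+```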
+### New Prebuilt Contract model
+
+A new prebuilt model that extracts information from contracts, such as parties, title, contract ID, execution date, and more. The contract model is currently in preview. To request access, use the [request form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQTRDQUdHMTBWUDRBQ01QUVNWNlNYMVFDViQlQCN0PWcu_).
 ### Region expansion for training custom neural models

Training custom neural models is now supported in additional regions.
automation Automation Config Aws Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-config-aws-account.md
description: This article tells how to authenticate runbooks with Amazon Web Ser
keywords: aws authentication, configure aws Previously updated : 04/23/2020 Last updated : 10/28/2022+ # Authenticate runbooks with Amazon Web Services
-Automating common tasks with resources in Amazon Web Services (AWS) can be accomplished with Automation runbooks in Azure. You can automate many tasks in AWS using Automation runbooks just like you can with resources in Azure. For authentication, you must have an Azure subscription.
+You can automate common tasks with resources in Amazon Web Services (AWS) by using Automation runbooks in Azure, just as you can with Azure resources. To authenticate, you must have an Azure subscription.
## Obtain AWS subscription and credentials
-To authenticate with AWS, you must obtain an AWS subscription and specify a set of AWS credentials to authenticate your runbooks running from Azure Automation. Specific credentials required are the AWS Access Key and Secret Key. See [Using AWS Credentials](https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html).
+To authenticate with AWS, obtain an AWS subscription and specify a set of AWS credentials to authenticate your runbooks running from Azure Automation. The specific credentials required are the AWS Access Key and Secret Key. See [Using AWS Credentials](https://docs.aws.amazon.com/powershell/latest/userguide/specifying-your-aws-credentials.html).
## Configure Automation account
You can use an existing Automation account to authenticate with AWS. Alternative
## Store AWS credentials
-You must store the AWS credentials as assets in Azure Automation. See [Managing Access Keys for your AWS Account](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html) for instructions on creating the Access Key and the Secret Key. When the keys are available, copy the Access Key ID and the Secret Key ID in a safe place. You can download your key file to store it somewhere safe.
+You must store the AWS credentials as assets in Azure Automation. See [Managing Access Keys for your AWS Account](https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html) for instructions on how to create the Access Key and the Secret Key. When the keys are available, copy the Access Key ID and the Secret Key ID in a safe place. You can download your key file to store it safely.
-## Create credential asset
+### Create credential asset
-After you have created and copied your AWS security keys, you must create a Credential asset with the Automation account. The asset allows you to securely store the AWS keys and reference them in your runbooks. See [Create a new credential asset with the Azure portal](shared-resources/credentials.md#create-a-new-credential-asset-with-the-azure-portal). Enter the following AWS information in the fields provided:
+After you have created and copied your AWS security keys, you must create a Credential asset with the Automation account. The asset allows you to securely store the AWS keys and reference them in your runbooks. See [Create a new credential asset with the Azure portal](shared-resources/credentials.md#create-a-new-credential-asset-with-the-azure-portal).
+
+Enter the following AWS information in the fields provided:
* **Name** - **AWScred**, or an appropriate value following your naming standards
* **User name** - Your access ID
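
As an alternative to the portal, a minimal Az PowerShell sketch of creating the same credential asset follows; the resource group and Automation account names are assumptions:

```powershell
# Store the AWS Access Key ID and Secret Key as an Automation credential asset.
$accessKeyId = "<your-AWS-access-key-ID>"
$secretKey   = ConvertTo-SecureString "<your-AWS-secret-key>" -AsPlainText -Force
$credential  = New-Object System.Management.Automation.PSCredential ($accessKeyId, $secretKey)

New-AzAutomationCredential -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -Name "AWScred" -Value $credential
```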
automation Automation Dsc Cd Chocolatey https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-cd-chocolatey.md
# Set up continuous deployment with Chocolatey
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ In a DevOps world, there are many tools to assist with various points in the continuous integration pipeline. Azure Automation [State Configuration](automation-dsc-overview.md) is a welcome new addition to the options that DevOps teams can employ.
automation Automation Dsc Compile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-compile.md
# Compile DSC configurations in Azure Automation State Configuration
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
You can compile Desired State Configuration (DSC) configurations in Azure Automation State Configuration in the following ways:

- Azure State Configuration compilation service
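
For the compilation service, a minimal Az PowerShell sketch of queuing a compilation job follows; the resource group, Automation account, and configuration names are assumptions:

```powershell
# Queue a compilation job for a DSC configuration already imported into Azure Automation.
Start-AzAutomationDscCompilationJob `
    -ResourceGroupName "MyResourceGroup" `
    -AutomationAccountName "MyAutomationAccount" `
    -ConfigurationName "SampleConfig"
```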
automation Automation Dsc Config Data At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-data-at-scale.md
**Applies to:** :heavy_check_mark: Windows PowerShell 5.1
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, and not from Microsoft.
automation Automation Dsc Config From Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-config-from-server.md
description: This article tells how to create configurations from existing serve
keywords: dsc,powershell,configuration,setup Previously updated : 08/08/2019 Last updated : 10/25/2022+ # Create configurations from existing servers
-> Applies To: Windows PowerShell 5.1
-
-Creating configurations from existing servers can be a challenging task.
-You might not want *all* settings,
-just those that you care about.
-Even then you need to know in what order the settings
-must be applied in order for the configuration to apply successfully.
+> **Applies to:** :heavy_check_mark: Windows PowerShell 5.1
> [!NOTE]
-> This article refers to a solution that is maintained by the Open Source community.
-> Support is only available in the form of GitHub collaboration, not from Microsoft.
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+> [!IMPORTANT]
+> The article refers to a solution that is maintained by the Open Source community. Support is only available in the form of GitHub collaboration, not from Microsoft.
+
+This article explains how to create configurations from existing servers for Azure Automation State Configuration. Creating configurations from existing servers is a challenging task: you need to know the right settings and the order in which they must be applied to ensure that the configuration succeeds.
+
+## Community project: ReverseDSC
+
+ [ReverseDSC](https://github.com/microsoft/reversedsc) is a community-maintained solution created to work in this area, beginning with SharePoint. The solution builds on the [SharePointDSC resource](https://github.com/powershell/sharepointdsc) and extends it to orchestrate [gathering information](https://github.com/Microsoft/sharepointDSC.reverse#how-to-use) from existing servers running SharePoint.
-## Community project: ReverseDSC
+The latest version has multiple [extraction modes](https://github.com/Microsoft/SharePointDSC.Reverse/wiki/Extraction-Modes) to determine the level of information to include. Using the solution generates [Configuration Data](https://github.com/Microsoft/sharepointDSC.reverse#configuration-data) to use with SharePointDSC configuration scripts.
-A community maintained solution named
-[ReverseDSC](https://github.com/microsoft/reversedsc)
-has been created to work in this area starting SharePoint.
-The solution builds on the
-[SharePointDSC resource](https://github.com/powershell/sharepointdsc)
-and extends it to orchestrate
-[gathering information](https://github.com/Microsoft/sharepointDSC.reverse#how-to-use)
-from existing servers running SharePoint.
-The latest version has multiple
-[extraction modes](https://github.com/Microsoft/SharePointDSC.Reverse/wiki/Extraction-Modes)
-to determine what level of information to include.
+## Create configurations from existing servers for Azure Automation State Configuration
-The result of using the solution is generating
-[Configuration Data](https://github.com/Microsoft/sharepointDSC.reverse#configuration-data)
-to be used with SharePointDSC configuration scripts.
+Follow these steps to create configurations from existing servers for Azure Automation State Configuration:
-Once the data files have been generated,
-you can use them with
-[DSC Configuration scripts](/powershell/dsc/overview)
-to generate MOF files
-and
-[upload the MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
-Then register your servers from either
-[on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines)
-or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms)
-to pull configurations.
+1. After you generate the data files, you can use them with [DSC Configuration scripts](/powershell/dsc/overview) to generate *MOF* files.
+1. Upload the [MOF files to Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
+1. Register your servers from either [on-premises](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines) or [in Azure](./automation-dsc-onboarding.md#enable-azure-vms) to pull configurations.
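+
+As an illustrative sketch of the first step above, the following Windows PowerShell 5.1 configuration compiles to MOF files using a generated data file; the configuration body and the `ConfigData.psd1` file name are assumptions, not actual ReverseDSC output:
+
+```powershell
+Configuration SampleConfig {
+    Import-DscResource -ModuleName PSDesiredStateConfiguration
+
+    Node $AllNodes.NodeName {
+        # Example setting only; a real configuration uses the gathered data.
+        WindowsFeature IIS {
+            Name   = 'Web-Server'
+            Ensure = 'Present'
+        }
+    }
+}
+
+# Compiling writes one .mof file per node into the output folder.
+SampleConfig -ConfigurationData .\ConfigData.psd1 -OutputPath .\SampleConfig
+```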
-To try out ReverseDSC, visit the
-[PowerShell Gallery](https://www.powershellgallery.com/packages/ReverseDSC/)
-and download the solution or click "Project Site"
-to view the
-[documentation](https://github.com/Microsoft/sharepointDSC.reverse).
+For more information on ReverseDSC, visit the [PowerShell Gallery](https://www.powershellgallery.com/packages/ReverseDSC/) and download the solution or select **Project Site** to view the [documentation](https://github.com/Microsoft/sharepointDSC.reverse).
## Next steps
automation Automation Dsc Configuration Based On Stig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-configuration-based-on-stig.md
> Applies To: Windows PowerShell 5.1
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ Creating configuration content for the first time can be challenging. In many cases, the goal is to automate configuration of servers following a "baseline" that hopefully aligns to an industry recommendation.
automation Automation Dsc Create Composite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-create-composite.md
> **Applies to:** :heavy_check_mark: Windows PowerShell 5.1
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ > [!IMPORTANT] > This article refers to a solution that is maintained by the Open Source community and support is only available in the form of GitHub collaboration, not from Microsoft.
automation Automation Dsc Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-diagnostics.md
# Integrate Azure Automation State Configuration with Azure Monitor Logs
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ Azure Automation State Configuration retains node status data for 30 days. You can send node status data to [Azure Monitor Logs](../azure-monitor/logs/data-platform-logs.md) if you prefer to retain this data for a longer period. Compliance status is visible in the Azure portal or with PowerShell, for nodes and for individual DSC resources in node configurations. Azure Monitor Logs provides greater operational visibility to your Automation State Configuration data and can help address incidents more quickly. With Azure Monitor Logs you can:
automation Automation Dsc Extension History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-extension-history.md
# Work with Azure Desired State Configuration extension version history
-The Azure Desired State Configuration (DSC) VM [extension](../virtual-machines/extensions/dsc-overview.md) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell.
- > [!NOTE]
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+The Azure Desired State Configuration (DSC) VM [extension](../virtual-machines/extensions/dsc-overview.md) is updated as-needed to support enhancements and new capabilities delivered by Azure, Windows Server, and the Windows Management Framework (WMF) that includes Windows PowerShell.
This article provides information about each version of the Azure DSC VM extension, what environments it supports, and comments and remarks on new features or changes.
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-getting-started.md
# Get started with Azure Automation State Configuration
-This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines to manage, and viewing reports. For an overview State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview).
- > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+This article provides a step-by-step guide for doing the most common tasks with Azure Automation State Configuration, such as creating, importing, and compiling configurations, enabling machines for management, and viewing reports. For an overview of State Configuration, see [State Configuration overview](automation-dsc-overview.md). For Desired State Configuration (DSC) documentation, see [Windows PowerShell Desired State Configuration Overview](/powershell/dsc/overview).
If you want a sample environment that is already set up without following the steps described in this article, you can use the [Azure Automation Managed Node template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.automation/automation-configuration). This template sets up a complete State Configuration (DSC) environment, including an Azure VM that is managed by State Configuration (DSC).
automation Automation Dsc Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-onboarding.md
# Enable Azure Automation State Configuration
-This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md).
- > [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+This topic describes how you can set up your machines for management with Azure Automation State Configuration. For details of this service, see [Azure Automation State Configuration overview](automation-dsc-overview.md).
## Enable Azure VMs
automation Automation Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-overview.md
# Azure Automation State Configuration overview
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ Azure Automation State Configuration is an Azure configuration management service that allows you to write, manage, and compile PowerShell Desired State Configuration (DSC) [configurations](/powershell/dsc/configurations/configurations) for nodes in any cloud or on-premises datacenter. The service also imports [DSC Resources](/powershell/dsc/resources/resources), and assigns configurations to target nodes, all in the cloud. You can access Azure Automation State Configuration in the Azure portal by selecting **State configuration (DSC)** under **Configuration Management**.
-> [!NOTE]
-> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
-
You can use Azure Automation State Configuration to manage a variety of machines:

- Azure virtual machines
automation Automation Dsc Remediate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-dsc-remediate.md
Last updated 07/17/2019
# Remediate noncompliant Azure Automation State Configuration servers
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+ When servers are registered with Azure Automation State Configuration, the configuration mode is set to `ApplyOnly`, `ApplyAndMonitor`, or `ApplyAndAutoCorrect`. If the mode isn't set to `ApplyAndAutoCorrect`, servers that drift from a compliant state for any reason
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/dsc-linux-powershell.md
Last updated 08/31/2021
# Configure Linux desired state with Azure Automation State Configuration using PowerShell
+> [!NOTE]
+> Before you enable Automation State Configuration, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [guest configuration](../governance/machine-configuration/overview.md). The guest configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../azure-arc/servers/overview.md).
+
+> [!IMPORTANT]
+> The desired state configuration VM extension for Linux will be [retired on **September 30, 2023**](https://aka.ms/dscext4linuxretirement). If you're currently using the desired state configuration VM extension for Linux, you should start planning your migration to the machine configuration feature of Azure Automanage by using the information in this article.
+ In this tutorial, you'll apply an Azure Automation State Configuration with PowerShell to an Azure Linux virtual machine to check whether it complies with a desired state. The desired state is to identify if the apache2 service is present on the node. Azure Automation State Configuration allows you to specify configurations for your machines and ensure those machines are in a specified state over time. For more information about State Configuration, see [Azure Automation State Configuration overview](./automation-dsc-overview.md).
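
A hedged sketch of such a configuration, using the `nxPackage` resource from the `nx` module (the configuration name and package manager are assumptions):

```powershell
Configuration LinuxApache {
    Import-DscResource -ModuleName 'nx'

    Node 'localhost' {
        # Ensure the apache2 package is present on the Linux node.
        nxPackage apache2 {
            Name           = 'apache2'
            Ensure         = 'Present'
            PackageManager = 'apt'
        }
    }
}
```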
automation Collect Data Microsoft Azure Automation Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/collect-data-microsoft-azure-automation-case.md
description: This article describes the information to gather before opening a c
Previously updated : 09/23/2019+ Last updated : 10/21/2022 # Data to collect when opening a case for Microsoft Azure Automation
-This article describes some of the information that you should gather before you open a case for Azure Automation with Microsoft Azure Support. This information is not required to open the case. However, it can help Microsoft resolve your problem more quickly. Also, you may be asked for this data by the support engineer after you open the case.
-
-## Basic data
+This article describes the information that you should gather before you open a case for Azure Automation with Microsoft Azure Support. This information isn't required to open a case, but it helps the support team resolve your problem more quickly.
+
+> [!NOTE]
+> For more information, refer to the Knowledge Base article [4034605 - How to capture Azure Automation-scripted diagnostics](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
-Collect the basic data described in the Knowledge Base article [4034605 - How to capture Azure Automation-scripted diagnostics](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
## Data for Update Management issues on Linux
-1. In addition to the items that are listed in KB [4034605](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics), run the following log collection tool:
+1. Run the following log collection tool, in addition to the details in KB [4034605](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
- [OMS Linux Agent Log Collector](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
+ - [OMS Linux Agent Log Collector](https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/tools/LogCollector/OMS_Linux_Agent_Log_Collector.md)
-2. Compress the contents of the **/var/opt/microsoft/omsagent/run/automationworker/** folder, then send the compressed file to Azure Support.
+1. Compress the contents of the **/var/opt/microsoft/omsagent/run/automationworker/** folder, and send the compressed file to Azure Support.
-3. Verify that the ID for the workspace that the Log Analytics agent for Linux reports to is the same as the ID for the workspace being monitored for updates.
+1. Verify that the ID for the workspace that the Log Analytics agent for Linux reports to is the same as the ID for the workspace being monitored for updates.
## Data for Update Management issues on Windows 1. Collect data for the items listed in [4034605](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics).
-2. Export the following event logs into the EVTX format:
+1. Export the following event logs into the EVTX format:
   * System
   * Application
Collect the basic data described in the Knowledge Base article [4034605 - How to
* Operations Manager * Microsoft-SMA/Operational
-3. Verify that the ID of the workspace that the agent reports to is the same as the ID for the workspace being monitored by Windows Updates.
+1. Verify that the ID of the workspace that the agent reports to is the same as the ID for the workspace being monitored by Windows Updates.
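+
+To export the event logs listed above into the EVTX format, a sketch using the built-in `wevtutil` tool follows; the output paths are assumptions:
+
+```console
+wevtutil epl System C:\temp\System.evtx
+wevtutil epl Application C:\temp\Application.evtx
+wevtutil epl "Operations Manager" C:\temp\OperationsManager.evtx
+wevtutil epl Microsoft-SMA/Operational C:\temp\SMA-Operational.evtx
+```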
## Data for job issues
Collect the basic data described in the Knowledge Base article [4034605 - How to
 1. In the Azure portal, go to **Automation Accounts**.
 2. Select the Automation account that you are troubleshooting, and note the name.
 3. Select **Jobs**.
+
+ :::image type="content" source="./media/collect-data-microsoft-azure-automation-case/select-jobs.png" alt-text="Screenshot showing to select jobs menu from automation account.":::
+ 4. Choose the job that you are troubleshooting.
- 5. In the Job Summary pane, look for the GUID value in **Job ID**.
+ 5. In the Job Summary pane, check for the GUID value in **Job ID**.
- ![Job ID within Job Summary Pane](media/collect-data-microsoft-azure-automation-case/job-summary-job-id.png)
+ :::image type="content" source="./media/collect-data-microsoft-azure-automation-case/job-summary-job-id.png" alt-text="Screenshot Job ID within Job Summary Pane.":::
3. Collect a sample of the script that you are running.
Collect the basic data described in the Knowledge Base article [4034605 - How to
 3. Select **Jobs**.
 4. Choose the job that you are troubleshooting.
 5. Select **All Logs**.
- 6. In the resulting pane, collect the data.
-
- ![Data listed under All Logs](media/collect-data-microsoft-azure-automation-case/all-logs-data.png)
+ In the pane below, you can collect the data.
+ :::image type="content" source="./media/collect-data-microsoft-azure-automation-case/all-logs-data.png" alt-text="Screenshot of Data listed under All Logs.":::
+
## Data for module issues
-In addition to the [basic data items](#basic-data), gather the following information:
+In addition to the Knowledge Base article [4034605 - How to capture Azure Automation-scripted diagnostics](https://support.microsoft.com/help/4034605/how-to-capture-azure-automation-scripted-diagnostics), obtain the following information:
-* The steps you have followed, so that the problem can be reproduced.
+* The steps you followed, so that the problem can be reproduced.
* Screenshots of any error messages.
* Screenshots of the current modules and their version numbers.

## Next steps
-If you need more help:
* Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).
* Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
* File an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**.
automation Desired State Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/desired-state-configuration.md
Title: Troubleshoot Azure Automation State Configuration issues
description: This article tells how to troubleshoot and resolve Azure Automation State Configuration issues. Previously updated : 04/16/2019 Last updated : 10/17/2022
VM has reported a failure when processing extension 'Microsoft.Powershell.DSC /
### Cause
-This issue is caused by a bad or expired certificate. See [Re-register a node](../automation-dsc-onboarding.md#re-register-a-node).
+The following are the possible causes:
-This issue might also be caused by a proxy configuration not allowing access to ***.azure-automation.net**. For more information, see [Configuration of private networks](../automation-dsc-overview.md#network-planning).
+- A bad or expired certificate. See [Re-register a node](../automation-dsc-onboarding.md#re-register-a-node).
+
+- A proxy configuration that isn't allowing access to ***.azure-automation.net**. For more information, see [Configuration of private networks](../automation-dsc-overview.md#network-planning).
+
+- Local authentication is disabled in Azure Automation. See [Disable local authentication](../disable-local-authentication.md). To fix the issue, see [Re-enable local authentication](../disable-local-authentication.md#re-enable-local-authentication).
### Resolution
azure-arc Plan Azure Arc Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/plan-azure-arc-data-services.md
You can deploy Azure Arc-enabled data services on various types of Kubernetes cl
- Google Kubernetes Engine (GKE) - Open source, upstream Kubernetes (typically deployed by using kubeadm) - OpenShift Container Platform (OCP)
+- Additional [partner-validated Kubernetes distributions](./validation-program.md)
> [!IMPORTANT] > * The minimum supported version of Kubernetes is v1.21. For more information, see the "Known issues" section of [Release notes&nbsp;- Azure Arc-enabled data services](./release-notes.md#known-issues).
azure-arc Conceptual Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md
description: "This article provides a conceptual overview of GitOps in Azure for
keywords: "GitOps, Flux, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 10/12/2022 Last updated : 10/24/2022
With GitOps, you declare the desired state of your Kubernetes clusters in files
Because these files are stored in a Git repository, they're versioned, and changes between versions are easily tracked. Kubernetes controllers run in the clusters and continually reconcile the cluster state with the desired state declared in the Git repository. These operators pull the files from the Git repositories and apply the desired state to the clusters. The operators also continuously assure that the cluster remains in the desired state.
-GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets) and template types (YAML, Helm, and Kustomize). Flux also supports multi-tenancy and deployment dependency management, among [other features](https://fluxcd.io/docs/).
+GitOps on Azure Arc-enabled Kubernetes or Azure Kubernetes Service uses [Flux](https://fluxcd.io/docs/), a popular open-source tool set. Flux provides support for common file sources (Git and Helm repositories, Buckets, Azure Blob Storage) and template types (YAML, Helm, and Kustomize). Flux also supports multi-tenancy and deployment dependency management, among [other features](https://fluxcd.io/docs/).
## Flux cluster extension
The most recent version of the Flux v2 extension and the two previous versions (
The `microsoft.flux` extension installs by default the [Flux controllers](https://fluxcd.io/docs/components/) (Source, Kustomize, Helm, Notification) and the FluxConfig CRD, fluxconfig-agent, and fluxconfig-controller. You can control which of these controllers is installed and can optionally install the Flux image-automation and image-reflector controllers, which provide functionality around updating and retrieving Docker images.
-* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the source.toolkit.fluxcd.io custom resources. Handles the synchronization between the Git repositories, Helm repositories, and Buckets. Handles authorization with the source for private Git and Helm repos. Surfaces the latest changes to the source through a tar archive file.
+* [Flux Source controller](https://toolkit.fluxcd.io/components/source/controller/): Watches the source.toolkit.fluxcd.io custom resources. Handles the synchronization between the Git repositories, Helm repositories, Buckets, and Azure Blob Storage. Handles authorization with the source for private Git repos, Helm repos, and Azure Blob Storage accounts. Surfaces the latest changes to the source through a tar archive file.
* [Flux Kustomize controller](https://toolkit.fluxcd.io/components/kustomize/controller/): Watches the `kustomization.toolkit.fluxcd.io` custom resources. Applies Kustomize or raw YAML files from the source onto the cluster. * [Flux Helm controller](https://toolkit.fluxcd.io/components/helm/controller/): Watches the `helm.toolkit.fluxcd.io` custom resources. Retrieves the associated chart from the Helm Repository source surfaced by the Source controller. Creates the `HelmChart` custom resource and applies the `HelmRelease` with given version, name, and customer-defined values to the cluster. * [Flux Notification controller](https://toolkit.fluxcd.io/components/notification/controller/): Watches the `notification.toolkit.fluxcd.io` custom resources. Receives notifications from all Flux controllers. Pushes notifications to user-defined webhook endpoints.
The `microsoft.flux` extension installs by default the [Flux controllers](https:
:::image type="content" source="media/gitops/flux2-config-install.png" alt-text="Diagram showing the installation of a Flux configuration in an Azure Arc-enabled Kubernetes or Azure Kubernetes Service cluster." lightbox="media/gitops/flux2-config-install.png":::
-You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos or Bucket sources. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
+You create Flux configuration resources (`Microsoft.KubernetesConfiguration/fluxConfigurations`) to enable GitOps management of the cluster from your Git repos, Bucket sources or Azure Blob Storage. When you create a `fluxConfigurations` resource, the values you supply for the parameters, such as the target Git repo, are used to create and configure the Kubernetes objects that enable the GitOps process in that cluster. To ensure data security, the `fluxConfigurations` resource data is stored encrypted at rest in an Azure Cosmos DB database by the Cluster Configuration service.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `microsoft.flux` extension, manage the GitOps configuration process.
The `fluxconfig-agent` and `fluxconfig-controller` agents, installed with the `m
* Sets up RBAC (service account provisioned, role binding created/assigned, role created/assigned). * Creates `GitRepository` or `Bucket` custom resource and `Kustomization` custom resources from the information in the `FluxConfig` custom resource.
-Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository or Bucket) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
+Each `fluxConfigurations` resource in Azure will be associated in a Kubernetes cluster with one Flux `GitRepository` or `Bucket` custom resource and one or more `Kustomization` custom resources. When you create a `fluxConfigurations` resource, you'll specify, among other information, the URL to the source (Git repository, Bucket or Azure Blob storage) and the sync target in the source for each `Kustomization`. You can configure dependencies between `Kustomization` custom resources to control deployment sequencing. Also, you can create multiple namespace-scoped `fluxConfigurations` resources on the same cluster for different applications and app teams.
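+
+As a hedged sketch of dependency sequencing, the following Azure CLI command creates a configuration with two kustomizations where `apps` waits for `infra`; the repository URL and names are assumptions, and the exact `depends_on` value syntax may vary by CLI version:
+
+```console
+az k8s-configuration flux create --resource-group my-resource-group \
+  --cluster-name my-cluster --cluster-type connectedClusters \
+  --name my-config --namespace my-namespace --scope cluster \
+  --url https://github.com/example/gitops-repo --branch main \
+  --kustomization name=infra path=./infrastructure prune=true \
+  --kustomization name=apps path=./apps prune=true depends_on=["infra"]
+```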
> [!NOTE] > The `fluxconfig-agent` monitors for new or updated `fluxConfiguration` resources in Azure. The agent requires connectivity to Azure for the desired state of the `fluxConfiguration` to be applied to the cluster. If the agent is unable to connect to Azure, there will be a delay in making the changes in the cluster until the agent can connect. If the cluster is disconnected from Azure for more than 48 hours, then the request to the cluster will time-out, and the changes will need to be re-applied in Azure.
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
Title: "Troubleshoot common Azure Arc-enabled Kubernetes issues"
# Previously updated : 09/15/2022 Last updated : 10/24/2022 description: "Learn how to resolve common issues with Azure Arc-enabled Kubernetes clusters and GitOps." keywords: "Kubernetes, Arc, Azure, containers, GitOps, Flux"
spec:
  app.kubernetes.io/name: flux-extension
```
+### Flux v2 - Installing the `microsoft.flux` extension in a cluster with Kubelet Identity enabled
+
+When working with Azure Kubernetes Service clusters, one of the available authentication options is kubelet identity. To let Flux use it, add the parameter `--config useKubeletIdentity=true` when you install the Flux extension.
+
+```console
+az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true
+```
+ ### Flux v2 - `microsoft.flux` extension installation CPU and memory limits The controllers installed in your Kubernetes cluster with the Microsoft.Flux extension require the following CPU and memory resource limits to properly schedule on Kubernetes cluster nodes.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
description: "This tutorial shows how to use GitOps with Flux v2 to manage confi
keywords: "GitOps, Flux, Flux v2, Kubernetes, K8s, Azure, Arc, AKS, Azure Kubernetes Service, containers, devops" Previously updated : 10/12/2022 Last updated : 10/24/2022
Here's an example for including the [Flux image-reflector and image-automation c
az k8s-extension create -g <cluster_resource_group> -c <cluster_name> -t <connectedClusters or managedClusters> --name flux --extension-type microsoft.flux --config image-automation-controller.enabled=true image-reflector-controller.enabled=true ```
+### Using Kubelet identity as authentication method for Azure Kubernetes Clusters
+
+When working with Azure Kubernetes Service clusters, one of the available authentication options is kubelet identity. To let Flux use it, add the parameter `--config useKubeletIdentity=true` when you install the Flux extension.
+
+```console
+az k8s-extension create --resource-group <resource-group> --cluster-name <cluster-name> --cluster-type managedClusters --name flux --extension-type microsoft.flux --config useKubeletIdentity=true
+```
+ ### Red Hat OpenShift onboarding guidance Flux controllers require a **nonroot** [Security Context Constraint](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.2/html/authentication/managing-pod-security-policies) to properly provision pods on the cluster. These constraints must be added to the cluster prior to onboarding of the `microsoft.flux` extension.
Arguments
--bucket-insecure : Communicate with a bucket without TLS. Allowed values: false, true. --bucket-name : Name of the S3 bucket to sync.
+ --container-name : Name of the Azure Blob Storage container to sync
--interval --sync-interval : Time between reconciliations of the source on the cluster.
- --kind : Source kind to reconcile. Allowed values: bucket, git.
+ --kind : Source kind to reconcile. Allowed values: bucket, git, azblob.
Default: git. --kustomization -k : Define kustomizations to sync sources with parameters ['name', 'path', 'depends_on', 'timeout', 'sync_interval',
Global Arguments
--subscription : Name or ID of subscription. You can configure the default subscription using `az account set -s NAME_OR_ID`. --verbose : Increase logging verbosity. Use --debug for full debug logs.
+
+Azure Blob Storage Account Auth Arguments
+ --sp_client_id : The client ID for authenticating a service principal with Azure Blob, required for this authentication method
+ --sp_tenant_id : The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method
+ --sp_client_secret : The client secret for authenticating a service principal with Azure Blob
+ --sp_client_cert : The Base64 encoded client certificate for authenticating a service principal with Azure Blob
+ --sp_client_cert_password : The password for the client certificate used to authenticate a service principal with Azure Blob
+ --sp_client_cert_send_chain : Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate
+ --account_key : The Azure Blob Shared Key for authentication
+ --sas_token : The Azure Blob SAS Token for authentication
+ --mi_client_id : The client ID of the managed identity for authentication with Azure Blob
Examples Create a Flux v2 Kubernetes configuration
Examples
--kind bucket --url https://bucket-provider.minio.io \ --bucket-name my-bucket --kustomization name=my-kustomization \ --bucket-access-key my-access-key --bucket-secret-key my-secret-key
+
+ Create a Kubernetes v2 Flux Configuration with Azure Blob Storage Source Kind
+ az k8s-configuration flux create --resource-group my-resource-group \
+ --cluster-name mycluster --cluster-type connectedClusters \
+ --name myconfig --scope cluster --namespace my-namespace \
+ --kind azblob --url https://mystorageaccount.blob.core.windows.net \
+ --container-name my-container --kustomization name=my-kustomization \
+ --account-key my-account-key
``` ### Configuration general arguments
Examples
| Parameter | Format | Notes | | - | - | - |
-| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`. Default: `git`. |
+| `--kind` | String | Source kind to reconcile. Allowed values: `bucket`, `git`, `azblob`. Default: `git`. |
| `--timeout` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Maximum time to attempt to reconcile the source before timing out. Default: `10m`. | | `--sync-interval` `--interval` | [golang duration format](https://pkg.go.dev/time#Duration.String) | Time between reconciliations of the source on the cluster. Default: `10m`. |
If you use a `bucket` source instead of a `git` source, here are the bucket-spec
| `--bucket-secret-key` | String | Secret Key used to authenticate with the `bucket`. | | `--bucket-insecure` | Boolean | Communicate with a `bucket` without TLS. If not provided, assumed false; if provided, assumed true. |
+### Azure Blob Storage Account source arguments
+
+If you use an `azblob` source, here are the blob-specific command arguments.
+
+| Parameter | Format | Notes |
+| - | - | - |
+| `--url` `-u` | URL String | The URL for the `azblob`. |
+| `--container-name` | String | Name of the Azure Blob Storage container to sync |
+| `--sp_client_id` | String | The client ID for authenticating a service principal with Azure Blob, required for this authentication method |
+| `--sp_tenant_id` | String | The tenant ID for authenticating a service principal with Azure Blob, required for this authentication method |
+| `--sp_client_secret` | String | The client secret for authenticating a service principal with Azure Blob |
+| `--sp_client_cert` | String | The Base64 encoded client certificate for authenticating a service principal with Azure Blob |
+| `--sp_client_cert_password` | String | The password for the client certificate used to authenticate a service principal with Azure Blob |
+| `--sp_client_cert_send_chain` | String | Specifies whether to include x5c header in client claims when acquiring a token to enable subject name / issuer based authentication for the client certificate |
+| `--account_key` | String | The Azure Blob Shared Key for authentication |
+| `--sas_token` | String | The Azure Blob SAS Token for authentication |
+| `--mi_client_id` | String | The client ID of the managed identity for authentication with Azure Blob |
+ ### Local secret for authentication with source
-You can use a local Kubernetes secret for authentication with a `git` or `bucket` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
+You can use a local Kubernetes secret for authentication with a `git`, `bucket`, or `azblob` source. The local secret must contain all of the authentication parameters needed for the source and must be created in the same namespace as the Flux configuration.
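+
+A minimal sketch of creating such a secret with kubectl follows; the secret name, namespace, and data keys are assumptions that must match what your source kind expects:
+
+```console
+kubectl create secret generic my-local-auth \
+  --namespace my-namespace \
+  --from-literal=username=<username> \
+  --from-literal=password=<password-or-token>
+```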
| Parameter | Format | Notes |
| - | - | - |
azure-arc Migrate Azure Monitor Agent Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/migrate-azure-monitor-agent-ansible.md
Follow the steps below to create the template:
1. Select **Add**.
1. Select **Add job template**, then complete the fields of the form as follows:
- **Name:** Content Lab - Install Arc Agent
+ **Name:** Content Lab - Install Arc Connected Machine Agent
**Job Type:** Run
Follow the steps below to create the template:
1. Select **Add**.
1. Select **Add job template**, then complete the fields of the form as follows:
- **Name:** Content Lab - Replace Log Analytics agent with Arc agent
+ **Name:** Content Lab - Replace Log Analytics agent with Arc Connected Machine agent
**Job Type:** Run
An automation controller workflow allows you to construct complex automation by
1. Select **Save**.
1. Select **Start** to begin the workflow designer.
-1. Set **Node Type** to "Job Template" and select **Content Lab - Replace Log Analytics with Arc Agent**.
+1. Set **Node Type** to "Job Template" and select **Content Lab - Replace Log Analytics with Arc Connected Machine Agent**.
1. Select **Next**.
1. Select **Save**.
-1. Hover over the **Content Lab - Replace Log Analytics with Arc Agent** node and select the **+** button.
+1. Hover over the **Content Lab - Replace Log Analytics with Arc Connected Machine Agent** node and select the **+** button.
1. Select **On Success**.
1. Select **Next**.
1. Set **Node Type** to "Job Template" and select **Content Lab - Uninstall Log Analytics Agent**.
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
Title: Connect machines at scale using group policy description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using group policy. Previously updated : 10/18/2022 Last updated : 10/20/2022
You can onboard Active Directory–joined Windows machines to Azure Arc-enabled servers at scale using Group Policy.
-You'll first need to set up a local remote share with the Connected Machine Agent and define a configuration file specifying the Arc-enabled server's landing zone within Azure. You will then define a Group Policy Object to run an onboarding script using a scheduled task. This Group Policy can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers.
+You'll first need to set up a local remote share and define a configuration file specifying the Arc-enabled server's landing zone within Azure. You will then define a Group Policy Object to run an onboarding script using a scheduled task. This Group Policy can be applied at the site, domain, or organizational unit level. Assignment can also use Access Control List (ACL) and other security filtering native to Group Policy. Machines in the scope of the Group Policy will be onboarded to Azure Arc-enabled servers.
Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prepare a remote share
-The Group Policy to onboard Azure Arc-enabled servers requires a remote share with the Connected Machine Agent. You will need to:
-
-1. Prepare a remote share to host the Azure Connected Machine agent package for Windows and the configuration file. You need to be able to add files to the distributed location.
-
-1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
+Prepare a remote share to host the configuration file and onboarding script. You need to be able to add files to the distributed location.
## Generate an onboarding script and configuration file from Azure portal
Before you can run the script to connect your machines, you'll need to do the fo
The group policy will project machines as Arc-enabled servers in the Azure subscription, resource group, and region specified in this configuration file.
-## Save the onboarding script to a remote share
+## Save the onboarding script to the remote share
Before you can run the script to connect your machines, you'll need to save the onboarding script to the remote share. This will be referenced when creating the Group Policy Object.
In the **General** tab, set the following parameters under **Security Options**:
1. In the field **Configure for**, select **Windows Vista or Windows Server 2008**.

### Assign trigger parameters for the task
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
When configuring the Azure Connected Machine agent with a reduced set of capabil
### Example configuration for monitoring and security scenarios
-It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. The following configuration samples can help you configure the Azure Arc agent to only allow these scenarios.
+It's common to use Azure Arc to monitor your servers with Azure Monitor and Microsoft Sentinel and secure them with Microsoft Defender for Cloud. The following configuration samples can help you configure the Azure Arc Connected Machine agent to only allow these scenarios.
#### Azure Monitor Agent only
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-timers.md
When you "await" the timer task, the orchestrator function will sleep until the
When you create a timer that expires at 4:30 pm UTC, the underlying Durable Task Framework enqueues a message that becomes visible only at 4:30 pm UTC. If the function app is scaled down to zero instances in the meantime, the newly visible timer message will ensure that the function app gets activated again on an appropriate VM.

> [!NOTE]
-> * Starting with [version 2.3.0](https://github.com/Azure/azure-functions-durable-extension/releases/tag/v2.3.0) of the Durable Extension, Durable timers are unlimited for .NET apps. For JavaScript, Python, and PowerShell apps, as well as .NET apps using earlier versions of the extension, Durable timers are limited to six days. When you are using an older extension version or a non-.NET language runtime and need a delay longer than six days, use the timer APIs in a `while` loop to simulate a longer delay.
+> * For JavaScript, Python, and PowerShell apps, Durable timers are limited to six days. To work around this limitation, you can use the timer APIs in a `while` loop to simulate a longer delay. Up-to-date .NET and Java apps support arbitrarily long timers.
+> * Depending on the version of the SDK and [storage provider](durable-functions-storage-providers.md) being used, long timers of 6 days or more may be internally implemented using a series of shorter timers (for example, of three-day durations) until the desired expiration time is reached. This can be observed in the underlying data store but won't impact the orchestration behavior.
> * Don't use built-in date/time APIs for getting the current time. When calculating a future date for a timer to expire, always use the orchestrator function's current time API. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md#dates-and-times) article.

## Usage for delay
azure-monitor Action Groups Create Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups-create-resource-manager-template.md
Title: Create action groups with Resource Manager templates description: Learn how to create an action group by using an Azure Resource Manager template.-+
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Title: Manage action groups in the Azure portal description: Find out how to create and manage action groups. Learn about notifications and actions that action groups enable, such as email, webhooks, and Azure Functions.-+ Last updated 09/07/2022
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
Title: Alert schema definitions in Azure Monitor for Test Action Group description: Understanding the common alert schema definitions for Azure Monitor for Test Action group-+ Last updated 01/14/2022
-ms.revewer: issahn
+ms.reviewer: jagummersall
# Common alert schema definitions for Test Action Group (Preview)
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
You can also define filters to narrow down which specific subset of alerts are a
| Filter | Description |
|:---|:---|
-Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. |
+Alert context (payload) | The rule applies only to alerts that contain any of the filter's strings within the [alert context](./alerts-common-schema-definitions.md#alert-context) section of the alert. This section includes fields specific to each alert type. This filter does not apply to log alert search results. |
Alert rule ID | The rule applies only to alerts from a specific alert rule. The value should be the full resource ID, for example, `/subscriptions/SUB1/resourceGroups/RG1/providers/microsoft.insights/metricalerts/MY-API-LATENCY`. To locate the alert rule ID, open a specific alert rule in the portal, select **Properties**, and copy the **Resource ID** value. You can also locate it by listing your alert rules from PowerShell or the Azure CLI. |
Alert rule name | The rule applies only to alerts with this alert rule name. It can also be useful with a **Contains** operator. |
Description | The rule applies only to alerts that contain the specified string within the alert rule description field. |
azure-monitor Alerts Rate Limiting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-rate-limiting.md
Title: Rate limiting for SMS, emails, push notifications description: Understand how Azure limits the number of possible SMS, email, Azure App push or webhook notifications from an action group.--++ Last updated 2/23/2022-+ # Rate limiting for Voice, SMS, emails, Azure App push notifications and webhook posts
azure-monitor Alerts Sms Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-sms-behavior.md
Title: SMS Alert behavior in Action Groups description: SMS message format and responding to SMS messages to unsubscribe, resubscribe or request help.--++ Last updated 2/23/2022-+ # SMS Alert Behavior in Action Groups
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This table can help you decide when to use what type of alert. For more detailed
|Alert Type |When to Use |Pricing Information|
|---|---|---|
-|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metrics alert rule is charged based on the number of time-series that are monitored. |
-|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each Log Alert rule is billed based the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+|Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time-series that are monitored. |
+|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for log alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).|
|Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. |

## Metric alerts
azure-monitor Proactive Performance Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-performance-diagnostics.md
Emails about smart detection performance anomalies are limited to one email per
* Not yet, but you can:
  * [Set up alerts](./alerts-log.md) that tell you when a metric crosses a threshold.
- * [Export telemetry](../app/export-telemetry.md) to a [database](../../stream-analytics/app-insights-export-sql-stream-analytics.md) or [to Power BI](../app/export-power-bi.md), where you can analyze it yourself.
+ * [Export telemetry](../app/export-telemetry.md) to a [database](../../stream-analytics/app-insights-export-sql-stream-analytics.md) or [to Power BI](../logs/log-powerbi.md), where you can analyze it yourself.
* *How often is the analysis done?*
  * We run the analysis daily on the telemetry from the previous day (full day in UTC timezone).
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
Title: Application Insights API for custom events and metrics | Microsoft Docs description: Insert a few lines of code in your device or desktop app, webpage, or service to track usage and diagnose issues. Previously updated : 09/07/2022 Last updated : 10/24/2022 ms.devlang: csharp, java, javascript, vb
None. You don't need to wrap them in try-catch clauses. If the SDK encounters pr
### Is there a REST API to get data from the portal?
-Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [export from Log Analytics to Power BI](./export-power-bi.md) and [continuous export](./export-telemetry.md).
+Yes, the [data access API](https://dev.applicationinsights.io/). Other ways to extract data include [Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md) or [continuous export](./export-telemetry.md) if you're still on a classic resource.
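
For example, a hedged sketch of pulling data from the command line, assuming the `application-insights` Azure CLI extension is installed and `<app-id>` is your Application Insights application ID:

```azurecli
# Query the last day of requests through the data access API.
# Requires: az extension add --name application-insights
az monitor app-insights query \
    --app <app-id> \
    --analytics-query "requests | summarize count() by bin(timestamp, 1h)" \
    --offset 1d
```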
### Why are my calls to custom events and metrics APIs ignored?
azure-monitor Export Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-power-bi.md
- Title: Export to Power BI from Azure Application Insights | Microsoft Docs
-description: Analytics queries can be displayed in Power BI.
- Previously updated : 08/10/2018---
-# Feed Power BI from Application Insights
-
-[Power BI](https://www.powerbi.com/) is a suite of business tools that helps you analyze data and share insights. Rich dashboards are available on every device. You can combine data from many sources, including Analytics queries from [Azure Application Insights](./app-insights-overview.md).
-
-There are three methods of exporting Application Insights data to Power BI:
-
-* [**Export Analytics queries**](#export-analytics-queries). This is the preferred method. Write any query you want and export it to Power BI. You can place this query on a dashboard, along with any other data.
-* [**Continuous export and Azure Stream Analytics**](../../stream-analytics/app-insights-export-stream-analytics.md). This method is useful if you want to store your data for long periods of time. If you don't have an extended data retention requirement, use the export analytics query method. Continuous export and Stream Analytics involves more work to set up and additional storage overhead.
-* **Power BI adapter**. The set of charts is predefined, but you can add your own queries from any other sources.
-
-> [!NOTE]
-> The Power BI adapter is now **deprecated**. The predefined charts for this solution are populated by static uneditable queries. You do not have the ability to edit these queries and depending on certain properties of your data it is possible for the connection to Power BI to be successful, but no data is populated. This is due to exclusion criteria that are set within the hardcoded query. While this solution may still work for some customers, due to the lack of flexibility of the adapter the recommended solution is to use the [**export Analytics query**](#export-analytics-queries) functionality.
-
-## Export Analytics queries
-
-This route allows you to write any Analytics query you like, or export from Usage Funnels, and then export that to a Power BI dashboard. (You can add to the dashboard created by the adapter.)
-
-### One time: install Power BI Desktop
-
-To import your Application Insights query, you use the desktop version of Power BI. Then you can publish it to the web or to your Power BI cloud workspace.
-
-Install [Power BI Desktop](https://powerbi.microsoft.com/en-us/desktop/).
-
-### Export an Analytics query
-
-1. [Open Analytics and write your query](../logs/log-analytics-tutorial.md).
-2. Test and refine the query until you're happy with the results. Make sure that the query runs correctly in Analytics before you export it.
-3. On the **Export** menu, choose **Power BI (M)**. Save the text file.
-
- ![Screenshot of Analytics, with Export menu highlighted](./media/export-power-bi/analytics-export-power-bi.png)
-4. In Power BI Desktop, select **Get Data** > **Blank Query**. Then, in the query editor, under **View**, select **Advanced Editor**.
-
- Paste the exported M Language script into the Advanced Editor.
-
- ![Screenshot of Power BI Desktop, with Advanced Editor highlighted](./media/export-power-bi/power-bi-import-analytics-query.png)
-
-5. To allow Power BI to access Azure, you might have to provide credentials. Use **Organizational account** to sign in with your Microsoft account.
-
- ![Screenshot of Power BI Query Settings dialog box](./media/export-power-bi/power-bi-import-sign-in.png)
-
- If you need to verify the credentials, use the **Data Source Settings** menu command in the query editor. Be sure to specify the credentials you use for Azure, which might be different from your credentials for Power BI.
-6. Choose a visualization for your query, and select the fields for x-axis, y-axis, and segmenting dimension.
-
- ![Screenshot of Power BI Desktop visualization options](./media/export-power-bi/power-bi-analytics-visualize.png)
-7. Publish your report to your Power BI cloud workspace. From there, you can embed a synchronized version into other web pages.
-
- ![Screenshot of Power BI Desktop, with Publish button highlighted](./media/export-power-bi/publish-power-bi.png)
-8. Refresh the report manually at intervals, or set up a scheduled refresh on the options page.
-
-### Export a Funnel
-
-1. [Make your Funnel](./usage-funnels.md).
-2. Select **Power BI**.
-
- ![Screenshot of Power BI button](./media/export-power-bi/button.png)
-
-3. In Power BI Desktop, select **Get Data** > **Blank Query**. Then, in the query editor, under **View**, select **Advanced Editor**.
-
- ![Screenshot of Power BI Desktop, with Blank Query button highlighted](./media/export-power-bi/blankquery.png)
-
- Paste the exported M Language script into the Advanced Editor.
-
- ![Screenshot shows the Power BI Desktop, with Advanced Editor highlighted](./media/export-power-bi/advancedquery.png)
-
-4. Select items from the query, and choose a Funnel visualization.
-
- ![Screenshot shows the Power BI Desktop Funnel visualization options](./media/export-power-bi/selectsequence.png)
-
-5. Change the title to make it meaningful, and publish your report to your Power BI cloud workspace.
-
- ![Screenshot of Power BI Desktop, with title change highlighted](./media/export-power-bi/changetitle.png)
-
-## Troubleshooting
-
-You might encounter errors pertaining to credentials or the size of the dataset. Here is some information about what to do about these errors.
-
-### Unauthorized (401 or 403)
-
-This can happen if your refresh token has not been updated. Try these steps to ensure you still have access:
-
-1. Sign in to the Azure portal, and make sure you can access the resource.
-2. Try to refresh the credentials for the dashboard.
-3. Try to clear the cache from your Power BI Desktop.
-
- If you do have access and refreshing the credentials does not work, please open a support ticket.
-
-### Bad Gateway (502)
-
-This is usually caused by an Analytics query that returns too much data. Try using a smaller time range for the query.
-
-If reducing the dataset coming from the Analytics query doesn't meet your requirements, consider using the [API](https://dev.applicationinsights.io/documentation/overview) to pull a larger dataset. Here's how to convert the M-Query export to use the API.
-
-1. Create an [API key](https://dev.applicationinsights.io/documentation/Authorization/API-key-and-App-ID).
-2. Update the Power BI M script that you exported from Analytics by replacing the Azure Resource Manager URL with the Application Insights API.
- * Replace **https:\//management.azure.com/subscriptions/...**
- * with, **https:\//api.applicationinsights.io/beta/apps/...**
-3. Finally, update the credentials to basic, and use your API key.
-
-**Existing script**
-
-```
- Source = Json.Document(Web.Contents("https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups//providers/microsoft.insights/components//api/query?api-version=2014-12-01-preview",[Query=[#"csl"="requests",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)]))
-```
-
-**Updated script**
-
-```
-Source = Json.Document(Web.Contents("https://api.applicationinsights.io/beta/apps/<APPLICATION_ID>/query?api-version=2014-12-01-preview",[Query=[#"csl"="requests",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)]))
-```
-
-## About sampling
-
-Depending on the amount of data sent by your application, you might want to use the adaptive sampling feature, which sends only a percentage of your telemetry. The same is true if you have manually set sampling either in the SDK or on ingestion. [Learn more about sampling](./sampling.md).
-
-## Power BI adapter (deprecated)
-
-This method creates a complete dashboard of telemetry for you. The initial dataset is predefined, but you can add more data to it.
-
-### Get the adapter
-
-1. Sign in to [Power BI](https://app.powerbi.com/).
-2. Open **Get Data** ![Screenshot of GetData Icon in lower left corner](./media/export-power-bi/001.png), **Services**.
-
- ![Screenshots shows Get button in the Services window.](./media/export-power-bi/002.png)
-
-3. Select **Get it now** under Application Insights.
-
- ![Screenshots of Get from Application Insights data source](./media/export-power-bi/003.png)
-4. Provide the details of your Application Insights resource, and then **Sign-in**.
-
- ![Screenshot shows Connect to Application Insights window.](./media/export-power-bi/005.png)
-
- This information can be found in the Application Insights Overview pane:
-
- ![Screenshot of Get from Application Insights data source](./media/export-power-bi/004.png)
-
-5. Open the newly created Application Insights Power BI App.
-
-6. Wait a minute or two for the data to be imported.
-
- ![Screenshot of Power BI adapter](./media/export-power-bi/010.png)
-
-You can edit the dashboard, combining the Application Insights charts with those of other sources, and with Analytics queries. You can get more charts in the visualization gallery, and each chart has parameters you can set.
-
-After the initial import, the dashboard and the reports continue to update daily. You can control the refresh schedule on the dataset.
-
-## Next steps
-
-* [Power BI - Learn](https://www.powerbi.com/learning/)
-* [Analytics tutorial](../logs/log-analytics-tutorial.md)
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Title: Continuous export of telemetry from Application Insights | Microsoft Docs description: Export diagnostic and usage data to storage in Microsoft Azure, and download it from there. Previously updated : 02/19/2021 Last updated : 10/24/2022
Before you set up continuous export, there are some alternatives you might want
* The Export button at the top of a metrics or search tab lets you transfer tables and charts to an Excel spreadsheet.
* [Analytics](../logs/log-query-overview.md) provides a powerful query language for telemetry. It can also export results.
-* If you're looking to [explore your data in Power BI](./export-power-bi.md), you can do that without using Continuous Export.
+* If you're looking to [explore your data in Power BI](../logs/log-powerbi.md), you can do that without using Continuous Export if you've [migrated to a workspace-based resource](convert-classic-resource.md).
* The [Data access REST API](https://dev.applicationinsights.io/) lets you access your telemetry programmatically.
* You can also set up [continuous export via PowerShell](/powershell/module/az.applicationinsights/new-azapplicationinsightscontinuousexport).
azure-monitor Ip Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-collection.md
This article explains how geolocation lookup and IP address handling work in App
By default, IP addresses are temporarily collected but not stored in Application Insights. The basic process is as follows:
-When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup by using [GeoLite2 from MaxMind](https://dev.maxmind.com/geoip/geoip2/geolite2/). Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
+When telemetry is sent to Azure, Application Insights uses the IP address to do a geolocation lookup. Application Insights uses the results of this lookup to populate the fields `client_City`, `client_StateOrProvince`, and `client_CountryOrRegion`. The address is then discarded, and `0.0.0.0` is written to the `client_IP` field.
Geolocation data can be removed in the following ways.

* [Remove the client IP initializer](../app/configuration-with-applicationinsights-config.md)
* [Use a custom initializer](../app/api-filtering-sampling.md)
-> [!NOTE]
-> Application Insights uses an older version of the GeoLite2 database. If you experience accuracy issues with IP to geolocation mappings, then as a workaround you can disable IP masking and utilize another geomapping service to convert the client_IP field of the underlying telemetry to a more accurate geolocation. We are currently working on an update to improve the geolocation accuracy.
- The telemetry types are:
* Browser telemetry: Application Insights collects the sender's IP address. The ingestion endpoint calculates the IP address.
azure-monitor Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/platforms.md
Title: 'Application Insights: languages, platforms, and integrations | Microsoft Docs' description: Languages, platforms, and integrations available for Application Insights Previously updated : 10/29/2021 Last updated : 10/24/2022
## Export and data analysis

* [Power BI](https://powerbi.microsoft.com/blog/explore-your-application-insights-data-with-power-bi/)
-* [Stream Analytics](./export-power-bi.md)
+* [Power BI for workspace-based resources](../logs/log-powerbi.md)
## Unsupported SDKs

Several other community-supported Application Insights SDKs exist. However, Azure Monitor only provides support when using the supported instrumentation options listed on this page. We're constantly assessing opportunities to expand our support for other languages. Follow [Azure Updates for Application Insights](https://azure.microsoft.com/updates/?query=application%20insights) for the latest SDK news.
azure-monitor Usage Funnels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-funnels.md
Title: Application Insights Funnels description: Learn how you can use Funnels to discover how customers are interacting with your application. Previously updated : 07/30/2021 Last updated : 10/24/2022
To create a funnel:
* [Retention](usage-retention.md)
* [Workbooks](../visualize/workbooks-overview.md)
* [Add user context](./usage-overview.md)
- * [Export to Power BI](./export-power-bi.md)
+ * [Export to Power BI](../logs/log-powerbi.md) if you've [migrated to a workspace-based resource](convert-classic-resource.md)
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
To use [managed identity authentication (preview)](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following:

```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.useAADAuth=true
```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
If you want to tweak the default resource requests and limits, you can use the advanced configuration settings:

```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.resources.daemonset.limits.cpu=150m amalogsagent.resources.daemonset.limits.memory=600Mi amalogsagent.resources.deployment.limits.cpu=1 amalogsagent.resources.deployment.limits.memory=750Mi
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi
```

Check out the [resource requests and limits section of the Helm chart](https://github.com/helm/charts/blob/master/incubator/azuremonitor-containers/values.yaml) for the available configuration settings.
Checkout the [resource requests and limits section of Helm chart](https://github
If the Azure Arc-enabled Kubernetes cluster is on Azure Stack Edge, then a custom mount path `/home/data/docker` needs to be used.

```azurecli
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.logsettings.custommountpath=/home/data/docker
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.logsettings.custommountpath=/home/data/docker
```
az k8s-extension show --name azuremonitor-containers --cluster-name \<cluster-na
Enable the Container insights extension with the managed identity authentication option, using the workspace returned in the first step.

```cli
-az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
+az k8s-extension create --name azuremonitor-containers --cluster-name \<cluster-name\> --resource-group \<resource-group\> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings omsagent.useAADAuth=true logAnalyticsWorkspaceResourceID=\<workspace-resource-id\>
```

## [Resource Manager](#tab/migrate-arm)
azure-monitor Container Insights Prometheus Metrics Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-metrics-addon.md
Assign the `Monitoring Data Reader` role to the Grafana System Assigned Identity
| `clusterResourceId` | Resource ID for the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
| `clusterLocation` | Location of the AKS cluster. Retrieve from the **JSON view** on the **Overview** page for the cluster. |
| `metricLabelsAllowlist` | Comma-separated list of Kubernetes label keys that will be used in the resource's labels metric. |
- | `metrican'tationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. |
+ | `metricAnnotationsAllowList` | Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. |
| `grafanaResourceId` | Resource ID for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
| `grafanaLocation` | Location for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. |
| `grafanaSku` | SKU for the managed Grafana instance. Retrieve from the **JSON view** on the **Overview** page for the Grafana instance. Use the **sku.name**. |
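
As a sketch of the role assignment mentioned above, with all IDs as placeholders and the Azure Monitor workspace as the assumed scope:

```azurecli
# Grant Grafana's system-assigned identity read access to the
# Azure Monitor workspace that stores the Prometheus metrics.
az role assignment create \
    --assignee <grafana-system-identity-principal-id> \
    --role "Monitoring Data Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.monitor/accounts/<azure-monitor-workspace>"
```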
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
|---|---|---|---|---|---|---|
-|Availability|No|Overall Vault Availability|Percent|Average|Vault requests availability|ActivityType, ActivityName, StatusCode, StatusCodeClass|
|ServiceApiHit|Yes|Total Service Api Hits|Count|Count|Number of total service api hits|ActivityType, ActivityName|
-|ServiceApiLatency|No|Overall Service Api Latency|Milliseconds|Average|Overall latency of service api requests|ActivityType, ActivityName, StatusCode, StatusCodeClass|
-|ServiceApiResult|Yes|Total Service Api Results|Count|Count|Gets the available metrics for a Managed HSM pool|ActivityType, ActivityName, StatusCode, StatusCodeClass|
- ## Microsoft.KeyVault/vaults
This latest update adds a new column and reorders the metrics to be alphabetical
- [Read about metrics in Azure Monitor](../data-platform.md)
- [Create alerts on metrics](../alerts/alerts-overview.md)
-- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
+- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
The following sections describe how to configure Azure Monitor managed service f
> [!IMPORTANT]
> This section describes the manual process for adding an Azure Monitor managed service for Prometheus data source to Azure Managed Grafana. You can achieve the same functionality by linking the Azure Monitor workspace and Grafana workspace as described in [Link a Grafana workspace](azure-monitor-workspace-overview.md#link-a-grafana-workspace).
-### Configure system identify
+### Configure system identity
Your Grafana workspace requires the following:

- System managed identity enabled
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # Create an SMB volume for Azure NetApp Files
You can set permissions for a file or folder by using the **Security** tab of th
* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
* [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # Create an NFS volume for Azure NetApp Files
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* [Configure Unix permissions and change ownership mode](configure-unix-permissions-change-ownership-mode.md)
* [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)
* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
na Previously updated : 01/07/2022 Last updated : 10/25/2022 # Delegate a subnet to Azure NetApp Files
You can also create and delegate a subnet when you [create a volume for Azure Ne
* [Create a volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
Previously updated : 10/04/2021 Last updated : 10/25/2022 #Customer intent: As an IT admin new to Azure NetApp Files, I want to quickly set up Azure NetApp Files and create a volume.
Use the Azure portal, PowerShell, or the Azure CLI to delete the resource group.
> [!div class="nextstepaction"]
> [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
+
+> [!div class="nextstepaction"]
+> [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
You can create an Azure support request to increase the adjustable limits from t
- [Cost model for Azure NetApp Files](azure-netapp-files-cost-model.md)
- [Regional capacity quota for Azure NetApp Files](regional-capacity-quota.md)
- [Request region access for Azure NetApp Files](request-region-access.md)
+- [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # Create a dual-protocol volume for Azure NetApp Files
Follow instructions in [Configure an NFS client for Azure NetApp Files](configur
* [Configure AD DS LDAP over TLS for Azure NetApp Files](configure-ldap-over-tls.md)
* [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
* [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md)
+* [Application resilience FAQs for Azure NetApp Files](faq-application-resilience.md)
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
na Previously updated : 08/15/2022 Last updated : 10/25/2022 # Understand guidelines for Active Directory Domain Services site design and planning for Azure NetApp Files
Ensure that stale DNS records associated with the retired AD DS domain controlle
A separate discovery process for AD DS LDAP servers occurs when LDAP is enabled for an Azure NetApp Files NFS volume. When the LDAP client is created on Azure NetApp Files, Azure NetApp Files queries the AD DS domain service (SRV) resource record for a list of all AD DS LDAP servers in the domain and not the AD DS LDAP servers assigned to the AD DS site specified in the AD connection. > [!IMPORTANT]
-> If Azure NetApp Files cannot reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP enabled volume will fail. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/dns-top) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Contact your Microsoft CSA for guidance on how to best configure your DNS to support LDAP-enabled NFS volumes.
+> If Azure NetApp Files cannot reach a discovered AD DS LDAP server during the creation of the Azure NetApp Files LDAP client, the creation of the LDAP enabled volume will fail. In large or complex AD DS topologies, you might need to implement [DNS Policies](/windows-server/networking/dns/deploy/dns-policies-overview) or [DNS subnet prioritization](/previous-versions/windows/it-pro/windows-2000-server/cc961422(v=technet.10)?redirectedfrom=MSDN) to ensure that the AD DS LDAP servers assigned to the AD DS site specified in the AD connection are returned. Contact your Microsoft CSA for guidance on how to best configure your DNS to support LDAP-enabled NFS volumes.
### Consequences of incorrect or incomplete AD Site Name configuration
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
na Previously updated : 10/21/2022 Last updated : 10/25/2022 # Use availability zones for high availability in Azure NetApp Files
Azure [availability zones](../availability-zones/az-overview.md#availability-z
Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Azure availability zones let you design and operate applications and databases that automatically transition between zones without interruption. You can design resilient solutions by using Azure services that use availability zones.
-The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/architecture/framework/resiliency/app-design#use-availability-zones-within-a-region). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
+The use of high availability (HA) architectures with availability zones is now a default and best practice recommendation in [Azure's Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services). Enterprise applications and resources are increasingly deployed into multiple availability zones to achieve this level of high availability (HA) or failure domain (zone) isolation.
:::image type="content" alt-text="Diagram of three availability zones in one Azure region." source="../media/azure-netapp-files/availability-zone-diagram.png":::
All Virtual Machines within the region in (peered) VNets can access all Azure Ne
Azure NetApp Files deployments will occur in the availability zone of choice if Azure NetApp Files is present in that availability zone and has sufficient capacity.

>[!IMPORTANT]
->Azure NetApp Files availability zone volume placement provides zonal placement. It doesn't provide proximity placement towards compute. As such, it doesn't provide lowest latency guarantee. VM-to-storage latencies are within the availability zone latency envelopes.
+>Azure NetApp Files availability zone volume placement provides zonal placement. It ***does not*** provide proximity placement towards compute. As such, it ***does not*** provide lowest latency guarantee. VM-to-storage latencies are within the availability zone latency envelopes.
You can co-locate your compute, storage, networking, and data resources across an availability zone, and replicate this arrangement in other availability zones. Many applications are built for HA across multiple availability zones using application-based replication and failover technologies, like [SQL Server Always-On Availability Groups (AOAG)](/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server), [SAP HANA with HANA System Replication (HSR)](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md), and [Oracle with Data Guard](../virtual-machines/workloads/oracle/oracle-reference-architecture.md#high-availability-for-oracle-databases).
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/20/2022 Last updated : 10/25/2022 # What's new in Azure NetApp Files
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
- Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well Architected Framework](/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
+ Azure availability zones are highly available, fault tolerant, and more scalable than traditional single or multiple data center infrastructures. Using Azure availability zones lets you design and operate applications and databases that automatically transition between zones without interruption. Azure NetApp Files lets you deploy new volumes in the logical availability zone of your choice to support enterprise, mission-critical HA deployments across multiple AZs. Azure's push towards the use of [availability zones (AZs)](../availability-zones/az-overview.md#availability-zones) has increased, and the use of high availability (HA) deployments with availability zones is now a default and best practice recommendation in Azure's [Well-Architected Framework](/azure/architecture/framework/resiliency/design-best-practices#use-zone-aware-services).
* [Application volume group for SAP HANA](application-volume-group-introduction.md) now generally available (GA)
azure-resource-manager Bicep Functions Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-deployment.md
The preceding example returns the following object when deployed to global Azure
"vmImageAliasDoc": "https://raw.githubusercontent.com/Azure/azure-rest-api-specs/master/arm-compute/quickstart-templates/aliases.json", "resourceManager": "https://management.azure.com/", "authentication": {
- "loginEndpoint": "https://login.windows.net/",
+ "loginEndpoint": "https://login.microsoftonline.com/",
"audiences": [ "https://management.core.windows.net/", "https://management.azure.com/"
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Previously updated : 12/28/2021 Last updated : 10/26/2022
Learn how to use deployment scripts in Bicep. With [Microsoft.Resources/deployme
- perform data plane operations, for example, copy blobs or seed database
- look up and validate a license key
- create a self-signed certificate
-- create an object in Azure AD
+- create an object in Azure Active Directory (Azure AD)
- look up IP Address blocks from a custom system

The benefits of deployment scripts:
The deployment script resource is only available in the regions where Azure Cont
### Training resources
-If you would rather learn about the ARM template test toolkit through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
+If you would rather learn about deployment scripts through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
Property value details:
- [Sample 1](https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/master/samples/deployment-script/deploymentscript-keyvault.bicep): create a key vault and use deployment script to assign a certificate to the key vault.
- [Sample 2](https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/master/samples/deployment-script/deploymentscript-keyvault-subscription.bicep): create a resource group at the subscription level, create a key vault in the resource group, and then use deployment script to assign a certificate to the key vault.
- [Sample 3](https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/master/samples/deployment-script/deploymentscript-keyvault-mi.bicep): create a user-assigned managed identity, assign the contributor role to the identity at the resource group level, create a key vault, and then use deployment script to assign a certificate to the key vault.
+- [Sample 4](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad): manually create a user-assigned managed identity and assign it permission to use the Microsoft Graph API to create Azure AD applications; in the Bicep file, use a deployment script to create an Azure AD application and service principal, and output the object IDs and client ID.
## Use inline scripts
After the script is tested successfully, you can use it as a deployment script i
| DeploymentScriptContainerGroupInNonterminalState | When creating the Azure container instance (ACI), another deployment script is using the same ACI name in the same scope (same subscription, resource group name, and resource name). |
| DeploymentScriptContainerGroupNameInvalid | The Azure container instance name (ACI) specified doesn't meet the ACI requirements. See [Troubleshoot common issues in Azure Container Instances](../../container-instances/container-instances-troubleshooting.md#issues-during-container-group-deployment).|
+## Use Microsoft Graph within a deployment script
+
+A deployment script can use [Microsoft Graph](/graph/overview) to create and work with objects in Azure AD.
+
+### Commands
+
+When you use Azure CLI deployment scripts, you can use commands within the `az ad` command group to work with applications, service principals, groups, and users. You can also directly invoke Microsoft Graph APIs by using the `az rest` command.
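
For illustration, a minimal sketch of both approaches inside an Azure CLI deployment script; the display name is a placeholder, not part of the article's samples.

```azurecli
# Create an Azure AD application with the az ad command group.
az ad app create --display-name "my-deployment-script-app"

# The same operation as a direct Microsoft Graph call with az rest.
az rest --method POST --url https://graph.microsoft.com/v1.0/applications \
    --headers "Content-Type=application/json" \
    --body '{"displayName": "my-deployment-script-app"}'
```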
+
+When you use Azure PowerShell deployment scripts, you can use the `Invoke-RestMethod` cmdlet to directly invoke the Microsoft Graph APIs.
+
+### Permissions
+
+The identity that your deployment script uses needs to be authorized to work with the Microsoft Graph API, with the appropriate permissions for the operations it performs. You must authorize the identity outside of your Bicep file, such as by pre-creating a user-assigned managed identity and assigning it an app role for Microsoft Graph. For more information, [see this quickstart example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad).
+
## Next steps

In this article, you learned how to use deployment scripts. To walk through a Learn module:
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 09/06/2022 Last updated : 10/26/2022
Learn how to use deployment scripts in Azure Resource templates (ARM templates).
- Perform data plane operations, for example, copy blobs or seed database.
- Look up and validate a license key.
- Create a self-signed certificate.
-- Create an object in Azure AD.
+- Create an object in Azure Active Directory (Azure AD).
- Look up IP Address blocks from a custom system.

The benefits of deployment scripts:
The deployment script resource is only available in the regions where Azure Cont
### Training resources
-To learn more about the ARM template test toolkit, and for hands-on guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
+If you would rather learn about deployment scripts through step-by-step guidance, see [Extend ARM templates by using deployment scripts](/training/modules/extend-resource-manager-template-deployment-scripts).
## Configure the minimum permissions
Property value details:
- [Sample 3](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-mi.json): create a user-assigned managed identity, assign the contributor role to the identity at the resource group level, create a key vault, and then use deployment script to assign a certificate to the key vault.
- [Sample 4](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-lock-sub.json): it is the same scenario as Sample 1 in this list. A new resource group is created to run the deployment script. This template is a subscription level template.
- [Sample 5](https://raw.githubusercontent.com/Azure/azure-docs-json-samples/master/deployment-script/deploymentscript-keyvault-lock-group.json): it is the same scenario as Sample 4. This template is a resource group level template.
+- [Sample 6](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad): manually create a user-assigned managed identity and assign it permission to use the Microsoft Graph API to create Azure AD applications; in the Bicep file, use a deployment script to create an Azure AD application and service principal, and output the object IDs and client ID.
## Use inline scripts
After the script is tested successfully, you can use it as a deployment script i
| DeploymentScriptContainerGroupInNonterminalState | When creating the Azure container instance (ACI), another deployment script is using the same ACI name in the same scope (same subscription, resource group name, and resource name). | | DeploymentScriptContainerGroupNameInvalid | The Azure container instance name (ACI) specified doesn't meet the ACI requirements. See [Troubleshoot common issues in Azure Container Instances](../../container-instances/container-instances-troubleshooting.md#issues-during-container-group-deployment).|
+## Use Microsoft Graph within a deployment script
+
+A deployment script can use [Microsoft Graph](/graph/overview) to create and work with objects in Azure AD.
+
+### Commands
+
+When you use Azure CLI deployment scripts, you can use commands within the `az ad` command group to work with applications, service principals, groups, and users. You can also directly invoke Microsoft Graph APIs by using the `az rest` command.
+
+When you use Azure PowerShell deployment scripts, you can use the `Invoke-RestMethod` cmdlet to directly invoke the Microsoft Graph APIs.
+
+### Permissions
+
+The identity that your deployment script uses needs to be authorized to work with the Microsoft Graph API, with the appropriate permissions for the operations it performs. You must authorize the identity outside of your template deployment, such as by pre-creating a user-assigned managed identity and assigning it an app role for Microsoft Graph. For more information, [see this quickstart example](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.resources/deployment-script-azcli-graph-azure-ad).
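Because the assignment itself is a Graph operation, a hedged sketch of that pre-deployment step might look like the following Azure PowerShell; the identity name is a placeholder, and `Application.ReadWrite.OwnedBy` is one example of a Graph app role a script that creates applications would need. The caller performing this assignment needs its own permission to manage app role assignments.

```powershell
# Hedged sketch: grant a user-assigned managed identity a Microsoft Graph
# app role before deploying the template that runs the deployment script.
$graphSp = Get-AzADServicePrincipal -ApplicationId '00000003-0000-0000-c000-000000000000'  # Microsoft Graph
$mi      = Get-AzADServicePrincipal -DisplayName 'myDeploymentScriptIdentity'              # placeholder name
$appRole = $graphSp.AppRole | Where-Object Value -eq 'Application.ReadWrite.OwnedBy'

$body = @{
    principalId = $mi.Id
    resourceId  = $graphSp.Id
    appRoleId   = $appRole.Id
} | ConvertTo-Json

$token = (Get-AzAccessToken -ResourceUrl 'https://graph.microsoft.com').Token
Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/servicePrincipals/$($mi.Id)/appRoleAssignments" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType 'application/json' -Body $body
```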
+ ## Next steps In this article, you learned how to use deployment scripts. To walk through a deployment script tutorial:
azure-resource-manager Deployment Tutorial Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md
In the [previous tutorial](./deployment-tutorial-linked-template.md), you deploy a linked template. In this tutorial, you learn how to use Azure Pipelines to continuously build and deploy Azure Resource Manager template (ARM template) projects.
-Azure DevOps provides developer services to support teams to plan work, collaborate on code development, and build and deploy applications. Developers can work in the cloud using Azure DevOps Services. Azure DevOps provides an integrated set of features that you can access through your web browser or IDE client. Azure Pipeline is one of these features. Azure Pipelines is a fully featured continuous integration (CI) and continuous delivery (CD) service. It works with your preferred Git provider and can deploy to most major cloud services. Then you can automate the build, testing, and deployment of your code to Microsoft Azure, Google Cloud Platform, or Amazon Web Services.
+Azure DevOps provides developer services that help teams plan work, collaborate on code development, and build and deploy applications. Developers can work in the cloud using Azure DevOps Services. Azure DevOps provides an integrated set of features that you can access through your web browser or IDE client. Azure Pipelines is one of these features. Azure Pipelines is a fully featured continuous integration (CI) and continuous delivery (CD) service. It works with your preferred Git provider and can deploy to most major cloud services. You can then automate the build, testing, and deployment of your code to Microsoft Azure, Google Cloud Platform, or Amazon Web Services.
> [!NOTE] > Pick a project name. When you go through the tutorial, replace any of the **AzureRmPipeline** with your project name.
azure-signalr Signalr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-overview.md
There are many different ways to program with Azure SignalR Service, as some of
- **[Scale an ASP.NET Core SignalR App](signalr-concept-scale-aspnet-core.md)** - Integrate Azure SignalR Service with an ASP.NET Core SignalR application to scale out to hundreds of thousands of connections. - **[Build serverless real-time apps](signalr-concept-azure-functions.md)** - Use Azure Functions' integration with Azure SignalR Service to build serverless real-time applications in languages such as JavaScript, C#, and Java.-- **[Send messages from server to clients via REST API](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md)** - Azure SignalR Service provides REST API to enable applications to post messages to clients connected with SignalR Service, in any REST capable programming languages.
+- **[Send messages from server to clients via REST API](signalr-reference-data-plane-rest-api.md)** - Azure SignalR Service provides a REST API that enables applications to post messages to clients connected to SignalR Service from any REST-capable programming language.
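As a hedged sketch of that REST surface (the service name, access key, and hub are placeholders; see the linked reference for the authoritative API shape), the following PowerShell signs a short-lived JWT with the service's access key and broadcasts a message to every client in a hub:

```powershell
# Hedged sketch: broadcast a message to all clients in a hub through the
# SignalR Service data-plane REST API. Endpoint, key, and hub name are
# placeholders.
$endpoint  = 'https://<your-signalr>.service.signalr.net'
$accessKey = '<your-access-key>'
$audience  = "$endpoint/api/v1/hubs/chat"   # also the request URL

function ConvertTo-Base64Url([byte[]] $Bytes) {
    [Convert]::ToBase64String($Bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_')
}

# The REST API authenticates with a JWT signed by the access key, whose
# audience claim must equal the request URL.
$exp     = [DateTimeOffset]::UtcNow.AddMinutes(5).ToUnixTimeSeconds()
$header  = ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes('{"alg":"HS256","typ":"JWT"}'))
$payload = ConvertTo-Base64Url ([Text.Encoding]::UTF8.GetBytes("{""aud"":""$audience"",""exp"":$exp}"))
$hmac    = [System.Security.Cryptography.HMACSHA256]::new([Text.Encoding]::UTF8.GetBytes($accessKey))
$sig     = ConvertTo-Base64Url ($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes("$header.$payload")))

Invoke-RestMethod -Method Post -Uri $audience `
    -Headers @{ Authorization = "Bearer $header.$payload.$sig" } `
    -ContentType 'application/json' `
    -Body '{"target":"newMessage","arguments":["Hello from REST"]}'
```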
azure-vmware Azure Security Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-security-integration.md
Title: Integrate Microsoft Defender for Cloud with Azure VMware Solution
description: Learn how to protect your Azure VMware Solution VMs with Azure's native security tools from the workload protection dashboard. Previously updated : 10/18/2022 Last updated : 10/24/2022+ # Integrate Microsoft Defender for Cloud with Azure VMware Solution
azure-vmware Azure Vmware Solution Citrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-citrix.md
Title: Deploy Citrix on Azure VMware Solution
description: Learn how to deploy VMware Citrix on Azure VMware Solution. Previously updated : 11/02/2021- Last updated : 10/24/2022+
Citrix Virtual Apps and Desktops service supports Azure VMware Solution. Azure VM
[Solution brief](https://www.citrix.com/content/dam/citrix/en_us/documents/solution-brief/citrix-virtual-apps-and-desktop-service-on-azure-vmware-solution.pdf) **FAQ (review Q&As)**
-
-- Q. Can I migrate my existing Citrix desktops and apps to Azure VMware Solution, or operate a hybrid environment that consists of on-premises and Azure VMware Solution-based Citrix workloads? +
+- Q. Can I migrate my existing Citrix desktops and apps to Azure VMware Solution, or operate a hybrid environment that consists of on-premises and Azure VMware Solution-based Citrix workloads?
A. Yes. You can use the same machine images, application packages, and processes you currently use. You're able to seamlessly link on-premises and Azure VMware Solution-based environments together for a migration. -- Q. Can Citrix be deployed as a standalone environment within Azure VMware Solution?
+- Q. Can Citrix be deployed as a standalone environment within Azure VMware Solution?
- A. Yes. You're free to migrate, operate a hybrid environment, or deploy a standalone directly into Azure VMware Solution
+ A. Yes. You're free to migrate, operate a hybrid environment, or deploy a standalone environment directly into Azure VMware Solution.
-- Q. Does Azure VMware Solution support both PVS and MCS?
+- Q. Does Azure VMware Solution support both PVS and MCS?
- A. Yes
+ A. Yes.
-- Q. Are GPU-based workloads supported in Citrix on Azure VMware Solution?
+- Q. Are GPU-based workloads supported in Citrix on Azure VMware Solution?
- A. Not at this time. However, Citrix workloads on Microsoft Azure support GPU if that use case is important to you.
+ A. Not at this time. However, Citrix workloads on Microsoft Azure support GPU if that use case is important to you.
-- Q. Is Azure VMware Solution supported with on-prem Citrix deployments or LTSR?
+- Q. Is Azure VMware Solution supported with on-premises Citrix deployments or LTSR?
A. No. Azure VMware Solution is only supported with the Citrix Virtual Apps and Desktops service offerings. -- Q. Who do I call for support?
+- Q. Who do I call for support?
- A. Customers should contact Citrix support www.citrix.com/support for assistance.
+ A. Customers should contact Citrix support (www.citrix.com/support) for assistance.
- Q. Can I use my Azure Virtual Desktop benefit from Microsoft with Citrix on Azure VMware Solution?
- A. No. Azure Virtual Desktop benefits are applicable to native Microsoft Azure workloads only. Citrix Virtual Apps and Desktops service, as a native Azure offering, can apply your Azure Virtual Desktop benefit alongside your Azure VMware Solution deployment.
+ A. No. Azure Virtual Desktop benefits are applicable to native Microsoft Azure workloads only. Citrix Virtual Apps and Desktops service, as a native Azure offering, can apply your Azure Virtual Desktop benefit alongside your Azure VMware Solution deployment.
-- Q. How do I purchase Citrix Virtual Apps and Desktops service to use Azure VMware Solution?
+- Q. How do I purchase Citrix Virtual Apps and Desktops service to use Azure VMware Solution?
- A. You can purchase Citrix offerings via your Citrix partner or directly from the Azure Marketplace.
+ A. You can purchase Citrix offerings via your Citrix partner or directly from the Azure Marketplace.
azure-vmware Concepts Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-api-management.md
Title: Concepts - API Management
description: Learn how API Management protects APIs running on Azure VMware Solution virtual machines (VMs) Previously updated : 04/28/2021 Last updated : 10/25/2022+ # Publish and protect APIs running on Azure VMware Solution VMs Microsoft Azure [API Management](https://azure.microsoft.com/services/api-management/) lets you securely publish APIs to external or internal consumers. Only the Developer (development) and Premium (production) SKUs allow Azure Virtual Network integration to publish APIs that run on Azure VMware Solution workloads. In addition, both SKUs enable connectivity between the API Management service and the backend.
-The API Management configuration is the same for backend services that run on Azure VMware Solution virtual machines (VMs) and on-premises. In addition, API Management configures the virtual IP on the load balancer as the backend endpoint for both deployments when the backend server is placed behind an NSX Load Balancer on the Azure VMware Solution.
+The API Management configuration is the same for backend services that run on Azure VMware Solution virtual machines (VMs) and on-premises. API Management also configures the virtual IP on the load balancer as the backend endpoint for both deployments when the backend server is placed behind an NSX Load Balancer on Azure VMware Solution.
## External deployment
The external deployment diagram shows the entire process and the actors involved
The traffic flow goes through the API Management instance, which abstracts the backend services and is plugged into the Hub virtual network. The ExpressRoute Gateway routes the traffic to the ExpressRoute Global Reach channel, where it reaches an NSX Load Balancer that distributes the incoming traffic to the different backend service instances.
-API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.
+API Management has an Azure Public API, and activating Azure DDoS Protection Service is recommended.
:::image type="content" source="media/api-management/api-management-external-deployment.png" alt-text="Diagram showing an external API Management deployment for Azure VMware Solution" border="false"::: - ## Internal deployment An internal deployment publishes APIs consumed by internal users or systems. DevOps teams and API developers use the same management tools and developer portal as in the external deployment.
The deployment diagram below shows consumers that can be internal or external, w
In an internal deployment, APIs get exposed to the same API Management instance. In front of API Management, Application Gateway gets deployed with Azure Web Application Firewall (WAF) capability activated. A set of HTTP listeners and rules is also deployed to filter the traffic, exposing only a subset of the backend services running on Azure VMware Solution. -
-* Internal traffic routes through ExpressRoute Gateway to Azure Firewall and then to API Management, directly or through traffic rules.
+* Internal traffic routes through ExpressRoute Gateway to Azure Firewall and then to API Management, directly or through traffic rules.
* External traffic enters Azure through Application Gateway, which uses the external protection layer for API Management. - :::image type="content" source="media/api-management/api-management-internal-deployment.png" alt-text="Diagram showing an internal API Management deployment for Azure VMware Solution" lightbox="media/api-management/api-management-internal-deployment.png" border="false":::
azure-vmware Concepts Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-hub-and-spoke.md
Title: Concept - Integrate an Azure VMware Solution deployment in a hub and spok
description: Learn about integrating an Azure VMware Solution deployment in a hub and spoke architecture on Azure. Previously updated : 10/20/2022 Last updated : 10/24/2022+ # Integrate Azure VMware Solution in a hub and spoke architecture This article provides recommendations for integrating an Azure VMware Solution deployment in an existing or a new [Hub and Spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/#hub-spoke-network-topology) on Azure. - The Hub and Spoke scenario assumes a hybrid cloud environment with workloads on:
The architecture has the following main components:
- **ExpressRoute Global Reach:** Enables the connectivity between on-premises and Azure VMware Solution private cloud. The connectivity between Azure VMware Solution and the Azure fabric is through ExpressRoute Global Reach only. - - **S2S VPN considerations:** Connectivity to Azure VMware Solution private cloud using Azure S2S VPN is supported as long as it meets the [minimum network requirements](https://docs.vmware.com/en/VMware-HCX/4.4/hcx-user-guide/GUID-8128EB85-4E3F-4E0C-A32C-4F9B15DACC6D.html) for VMware HCX. - - **Hub virtual network:** Acts as the central point of connectivity to your on-premises network and Azure VMware Solution private cloud. - **Spoke virtual network**
Because an ExpressRoute gateway doesn't provide transitive routing between its c
:::image type="content" source="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png" alt-text="Diagram showing the on-premises to Azure VMware Solution traffic flow." border="false" lightbox="./media/hub-spoke/on-premises-azure-vmware-solution-traffic-flow.png"::: - * **Azure VMware Solution to Hub VNET traffic flow** :::image type="content" source="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png" alt-text="Diagram showing the Azure VMware Solution to Hub virtual network traffic flow." border="false" lightbox="./media/hub-spoke/azure-vmware-solution-hub-vnet-traffic-flow.png"::: - For more information on Azure VMware Solution networking and connectivity concepts, see the [Azure VMware Solution product documentation](./concepts-networking.md). ### Traffic segmentation
Create route tables to direct the traffic to Azure Firewall. For the Spoke virt
:::image type="content" source="media/hub-spoke/create-route-table-to-direct-traffic.png" alt-text="Screenshot showing the route tables to direct traffic to Azure Firewall." lightbox="media/hub-spoke/create-route-table-to-direct-traffic.png"::: - > [!IMPORTANT] > A route with address prefix 0.0.0.0/0 on the **GatewaySubnet** setting is not supported.
For more information, see the Azure VMware Solution-specific article on [Applica
:::image type="content" source="media/hub-spoke/azure-vmware-solution-second-level-traffic-segmentation.png" alt-text="Diagram showing the second level of traffic segmentation using the Network Security Groups." border="false"::: - ### Jump box and Azure Bastion Access Azure VMware Solution environment with a jump box, which is a Windows 10 or Windows Server VM deployed in the shared service subnet within the Hub virtual network.
As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.y
> [!IMPORTANT] > Do not give a public IP address to the jump box VM or expose 3389/TCP port to the public internet. - :::image type="content" source="media/hub-spoke/azure-bastion-hub-vnet.png" alt-text="Diagram showing the Azure Bastion Hub virtual network." border="false":::
As a security best practice, deploy [Microsoft Azure Bastion](../bastion/index.y
For Azure DNS resolution, there are two options available: -- Use the domain controllers deployed on the Hub (described in [Identity considerations](#identity-considerations)) as name servers.
+- Use the domain controllers deployed on the Hub (described in [Identity considerations](#identity-considerations)) as name servers.
-- Deploy and configure an Azure DNS private zone.
+- Deploy and configure an Azure DNS private zone.
The best approach is to combine both to provide reliable name resolution for Azure VMware Solution, on-premises, and Azure. As a general design recommendation, use the existing Active Directory-integrated DNS deployed onto at least two Azure VMs in the Hub virtual network and configured in the Spoke virtual networks to use those Azure DNS servers in the DNS settings.
-You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS using customer Azure Private DNS infrastructure.
+You can use Azure Private DNS, where the Azure Private DNS zone links to the virtual network. The DNS servers are used as hybrid resolvers with conditional forwarding to on-premises or Azure VMware Solution running DNS using customer Azure Private DNS infrastructure.
To automatically manage the DNS records' lifecycle for the VMs deployed within the Spoke virtual networks, enable autoregistration. When enabled, the maximum number of private DNS zones is only one. If disabled, then the maximum number is 1000.
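A hedged sketch of the autoregistration setup in Azure PowerShell (zone and network names are placeholders):

```powershell
# Hedged sketch: link a private DNS zone to a Spoke virtual network with
# autoregistration enabled, so VM DNS records are created and removed
# automatically as VMs come and go.
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'rg-hub-spoke' -Name 'vnet-spoke1'

New-AzPrivateDnsZone -ResourceGroupName 'rg-hub-spoke' -Name 'corp.contoso.internal'

New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName 'rg-hub-spoke' `
    -ZoneName 'corp.contoso.internal' -Name 'link-spoke1' `
    -VirtualNetworkId $vnet.Id -EnableRegistration
```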
azure-vmware Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-networking.md
Title: Concepts - Network interconnectivity
description: Learn about key aspects and use cases of networking and interconnectivity in Azure VMware Solution. Previously updated : 06/28/2021 Last updated : 10/25/2022+ # Azure VMware Solution networking and interconnectivity concepts
In the fully interconnected scenario, you can access the Azure VMware Solution f
The diagram below shows the on-premises to private cloud interconnectivity, which enables the following use cases: - Hot/Cold vSphere vMotion between on-premises and Azure VMware Solution.-- On-Premises to Azure VMware Solution private cloud management access.
+- On-premises to Azure VMware Solution private cloud management access.
:::image type="content" source="media/concepts/adjacency-overview-drawing-double.png" alt-text="Diagram showing the virtual network and on-premises to private cloud interconnectivity." border="false":::
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 08/25/2021 Last updated : 10/25/2022+
-# Azure VMware Solution private cloud and cluster concepts
+# Azure VMware Solution private cloud and cluster concepts
Azure VMware Solution delivers VMware-based private clouds in Azure. The private cloud hardware and software deployments are fully integrated and automated in Azure. You deploy and manage the private cloud through the Azure portal, CLI, or PowerShell. A private cloud includes clusters with: -- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor -- VMware vCenter Server for managing ESXi and vSAN
+- Dedicated bare-metal server hosts provisioned with VMware ESXi hypervisor
+- VMware vCenter Server for managing ESXi and vSAN
- VMware NSX-T Data Center software-defined networking for vSphere workload VMs - VMware vSAN datastore for vSphere workload VMs - VMware HCX for workload mobility - Resources in the Azure underlay (required for connectivity and to operate the private cloud)
-As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts.
+As with other resources, private clouds are installed and managed from within an Azure subscription. The number of private clouds within a subscription is scalable. Initially, there's a limit of one private cloud per subscription. There's a logical relationship between Azure subscriptions, Azure VMware Solution private clouds, vSAN clusters, and hosts.
-The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters.
+The diagram shows a single Azure subscription with two private clouds that represent a development and production environment. In each of those private clouds are two clusters.
## Hosts
The diagram shows a single Azure subscription with two private clouds that repre
## Host maintenance and lifecycle management -- [!INCLUDE [vmware-software-update-frequency](includes/vmware-software-update-frequency.md)] ## Host monitoring and remediation
-Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
+Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
Host remediation involves replacing the faulty node with a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster. Azure VMware Solution monitors the following conditions on the host: -- Processor status -- Memory status -- Connection and power state -- Hardware fan status -- Network connectivity loss -- Hardware system board status -- Errors occurred on the disk(s) of a vSAN host -- Hardware voltage -- Hardware temperature status -- Hardware power status -- Storage status -- Connection failure
+- Processor status
+- Memory status
+- Connection and power state
+- Hardware fan status
+- Network connectivity loss
+- Hardware system board status
+- Errors that occurred on the disk(s) of a vSAN host
+- Hardware voltage
+- Hardware temperature status
+- Hardware power status
+- Storage status
+- Connection failure
> [!NOTE] > Azure VMware Solution tenant admins must not edit or delete the above defined VMware vCenter Server alarms, as these are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
Now that you've covered Azure VMware Solution private cloud concepts, you may wa
[vCSA versions]: https://kb.vmware.com/s/article/2143838 [ESXi versions]: https://kb.vmware.com/s/article/2143832 [vSAN versions]: https://kb.vmware.com/s/article/2150753-
azure-vmware Concepts Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-run-command.md
Title: Concepts - Run command in Azure VMware Solution (Preview)
description: Learn about using run commands in Azure VMware Solution. Previously updated : 09/17/2021 Last updated : 10/25/2022+
-# Run command in Azure VMware Solution
+# Run command in Azure VMware Solution
-In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
+In Azure VMware Solution, vCenter Server has a built-in local user called *cloudadmin* assigned to the CloudAdmin role. The CloudAdmin role has vCenter Server [privileges](concepts-identity.md#vcenter-server-access-and-identity) that differ from other VMware cloud solutions and on-premises deployments. The Run command feature lets you perform operations that would normally require elevated privileges through a collection of PowerShell cmdlets.
Azure VMware Solution supports the following operations:
Azure VMware Solution supports the following operations:
- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - >[!NOTE] >Run commands are executed one at a time in the order submitted.
You can view the status of any executed run command, including the output, error
:::image type="content" source="media/run-command/run-execution-status-example-output.png" alt-text="Screenshot showing the output of a run execution.":::
- - **Error** - Error messages generated in the execution of the cmdlet. This is in addition to the terminating error message on the details pane.
+ - **Error** - Error messages generated in the execution of the cmdlet. This is in addition to the terminating error message on the details pane.
:::image type="content" source="media/run-command/run-execution-status-example-error.png" alt-text="Screenshot showing the errors detected during the execution of an execution.":::
- - **Warning** - Warning messages generated during the execution.
+ - **Warning** - Warning messages generated during the execution.
:::image type="content" source="media/run-command/run-execution-status-example-warning.png" alt-text="Screenshot showing the warnings detected during the execution of an execution.":::
- - **Information** - Progress and diagnostic generated messages during the execution of a cmdlet.
+ - **Information** - Progress and diagnostic generated messages during the execution of a cmdlet.
:::image type="content" source="medilet as it runs."::: -- ## Cancel or delete a job -- ### Method 1 This method attempts to cancel the execution, and then deletes it upon completion.
This method attempts to cancel the execution, and then deletes it upon completio
2. Select **Yes** to cancel and remove the job for all users. -- ### Method 2 1. Select **Run command** > **Packages** > **Run execution status**.
This method attempts to cancel the execution, and then deletes it upon completio
3. Select **Yes** to cancel and remove the job for all users. -- ## Next steps Now that you've learned about the Run command concepts, you can use the Run command feature to:
Now that you've learned about the Run command concepts, you can use the Run comm
- [Configure external identity source for vCenter (Run command)](configure-identity-source-vcenter.md) - Configure Active Directory over LDAP or LDAPS for vCenter Server, which enables the use of an external identity source as an Active Directory. Then, you can add groups from the external identity source to the CloudAdmin role. -- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, VSAN, vVol, or any HCI platform.
+- [Deploy disaster recovery using JetStream](deploy-disaster-recovery-using-jetstream.md) - Store data directly to a recovery cluster in vSAN. The data gets captured through I/O filters that run within vSphere. The underlying data store can be VMFS, vSAN, vVol, or any HCI platform.
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
Title: Enable Public IP to the NSX-T Data Center Edge for Azure VMware Solution
description: This article shows how to enable internet access for your Azure VMware Solution. Previously updated : 10/17/2022 Last updated : 10/24/2022+ # Enable Public IP to the NSX-T Data Center Edge for Azure VMware Solution
-In this article, you'll learn how to enable Public IP to the NSX-T Data Center Edge for your Azure VMware Solution.
+In this article, you'll learn how to enable Public IP to the NSX-T Data Center Edge for your Azure VMware Solution.
>[!TIP] >Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md).
-Public IP to the NSX-T Data Center Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment.
+Public IP to the NSX-T Data Center Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment.
>[!IMPORTANT] >Public IPv4 addresses can be consumed directly in Azure VMware Solution and are charged based on the Public IPv4 prefix shown on [Pricing - Virtual Machine IP Address Options](https://azure.microsoft.com/pricing/details/ip-addresses/).
Public IP to the NSX-T Data Center Edge is a feature in Azure VMware Solution th
The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data Center interface within your Azure VMware Solution private cloud. With this capability, you have the following features:+ - A cohesive and simplified experience for reserving and using a Public IP down to the NSX Edge. - The ability to receive up to 1000 or more Public IPs, enabling Internet access at scale. - Inbound and outbound internet access for your workload VMs.-- DDoS Security protection against network traffic in and out of the Internet.
+- DDoS Security protection against network traffic in and out of the Internet.
- HCX Migration support over the Public Internet. >[!IMPORTANT] >You can configure up to 64 total Public IP addresses across these network blocks. If you want to configure more than 64 Public IP addresses, please submit a support ticket stating how many you need. ## Prerequisites+ - Azure VMware Solution private cloud - DNS Server configured on the NSX-T Data Center
-## Reference architecture
+## Reference architecture
+ The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX-T Data Center Edge. :::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip-expanded.png"::: >[!IMPORTANT]
->The use of Public IP down to the NSX-T Data Center Edge is not compatible with reverse DNS Lookup.
+>The use of Public IP down to the NSX-T Data Center Edge is not compatible with reverse DNS Lookup.
## Configure a Public IP in the Azure portal
-1. Log on to the Azure portal.
+
+1. Sign in to the Azure portal.
1. Search for and select Azure VMware Solution.
-2. Select the Azure VMware Solution private cloud.
-1. In the left navigation, under **Workload Networking**, select **Internet connectivity**.
-4. Select the **Connect using Public IP down to the NSX-T Edge** button.
+1. Select the Azure VMware Solution private cloud.
+1. In the left navigation, under **Workload Networking**, select **Internet connectivity**.
+1. Select the **Connect using Public IP down to the NSX-T Edge** button.
>[!IMPORTANT] >Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md). This should include a risk mitigation review with your relevant networking and security governance and compliance teams.
-
-5. Select **Public IP**.
+
+1. Select **Public IP**.
:::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select public IP to the NSX Edge":::
-6. Enter the **Public IP name** and select a subnet size from the **Address space** dropdown and select **Configure**.
-7. This Public IP should be configured within 20 minutes and will show the subnet.
+1. Enter the **Public IP name**, select a subnet size from the **Address space** dropdown, and then select **Configure**.
+1. The Public IP should be configured within 20 minutes and will show the subnet.
:::image type="content" source="media/public-ip-nsx-edge/public-ip-subnet-internet-connectivity.png" alt-text="Diagram that shows Internet connectivity in Azure VMware Solution."::: 1. If you don't see the subnet, refresh the list. If the refresh fails, try the configuration again.
-9. After configuring the Public IP, select the **Connect using the Public IP down to the NSX-T Edge** checkbox to disable all other Internet options.
-10. Select **Save**.
+1. After configuring the Public IP, select the **Connect using the Public IP down to the NSX-T Edge** checkbox to disable all other Internet options.
+1. Select **Save**.
-You have successfully enabled Internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft allocated Public IP. You can now configure this Public IP down to the NSX-T Data Center Edge for your workloads. The NSX-T Data Center is used for all VM communication. There are several options for configuring your reserved Public IP down to the NSX-T Data Center Edge.
+You have successfully enabled Internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft allocated Public IP. You can now configure this Public IP down to the NSX-T Data Center Edge for your workloads. The NSX-T Data Center is used for all VM communication.
There are three options for configuring your reserved Public IP down to the NSX-T Data Center Edge: Outbound Internet Access for VMs, Inbound Internet Access for VMs, and Gateway Firewall used to Filter Traffic to VMs at T1 Gateways. ### Outbound Internet access for VMs
-
+ A Source Network Address Translation (SNAT) service with Port Address Translation (PAT) is used to allow many VMs to share one SNAT service. This means you can provide Internet connectivity for many VMs. >[!IMPORTANT] > To enable SNAT for your specified address ranges, you must [configure a gateway firewall rule](#gateway-firewall-used-to-filter-traffic-to-vms-at-t1-gateways) and SNAT for the specific address ranges you desire. If you don't want SNAT enabled for specific address ranges, you must create a [No-NAT rule](#no-network-address-translation-rule-for-specific-address-ranges) for the address ranges to exclude. For your SNAT service to work as expected, the No-NAT rule should be a lower priority than the SNAT rule. **Add rule**
-1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**
-2. Locate your NSX-T Manager URL and credentials.
-3. Log in to **VMware NSX-T Manager**.
-4. Navigate to **NAT Rules**.
-5. Select the T1 Router.
-1. Select **ADD NAT RULE**.
+
+1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**.
+1. Locate your NSX-T Manager URL and credentials.
+1. Log in to **VMware NSX-T Manager**.
+1. Navigate to **NAT Rules**.
+1. Select the T1 Router.
+1. Select **ADD NAT RULE**.
**Configure rule** 1. Enter a name.
-1. Select **SNAT**.
+1. Select **SNAT**.
1. Optionally, enter a source such as a subnet to SNAT or destination. 1. Enter the translated IP. This IP is from the range of Public IPs you reserved from the Azure VMware Solution Portal. 1. Optionally, give the rule a higher priority number. This prioritization will move the rule further down the rule list to ensure more specific rules are matched first.
Logging can be enabled by way of the logging slider. For more information on NSX
### No Network Address Translation rule for specific address ranges A No SNAT rule in NSX-T Manager can be used to exclude certain matches from performing Network Address Translation. This policy can be used to allow private IP traffic to bypass existing network translation rules.+ 1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**. 1. Locate your NSX-T Manager URL and credentials.
-1. Log in to **VMware NSX-T Manager** and then select **NAT Rules**.
+1. Log in to **VMware NSX-T Manager** and then select **NAT Rules**.
1. Select the T1 Router and then select **ADD NAT RULE**. 1. Select **NO SNAT** rule as the type of NAT rule. 1. Select the **Source IP** as the range of addresses you do not want to be translated. The **Destination IP** should be any internal addresses you are reaching from the range of Source IP ranges. 1. Select **SAVE**. ### Inbound Internet Access for VMs+ A Destination Network Address Translation (DNAT) service is used to expose a VM on a specific Public IP address and/or a specific port. This service provides inbound internet access to your workload VMs. **Log in to VMware NSX-T Manager**
-1. From your Azure VMware Solution private cloud, select **VMware credentials**.
-2. Locate your NSX-T Manager URL and credentials.
-3. Log in to **VMware NSX-T Manager**.
+
+1. From your Azure VMware Solution private cloud, select **VMware credentials**.
+2. Locate your NSX-T Manager URL and credentials.
+3. Log in to **VMware NSX-T Manager**.
**Configure the DNAT rule**+ 1. Name the rule. 1. Select **DNAT** as the action. 1. Enter the reserved Public IP in the destination match. This IP is from the range of Public IPs reserved from the Azure VMware Solution Portal. 1. Enter the VM Private IP in the translated IP.
-1. Select **SAVE**.
+1. Select **SAVE**.
1. Optionally, configure the Translated Port or source IP for more specific matches.
-
+ The VM is now exposed to the internet on the specific Public IP and/or specific ports. ### Gateway Firewall used to filter traffic to VMs at T1 Gateways
-
-You can provide security protection for your network traffic in and out of the public internet through your Gateway Firewall.
+
+You can provide security protection for your network traffic in and out of the public internet through your Gateway Firewall.
+ 1. From your Azure VMware Solution Private Cloud, select **VMware credentials**. 2. Locate your NSX-T Manager URL and credentials.
-3. Log in to **VMware NSX-T Manager**.
-4. From the NSX-T home screen, select **Gateway Policies**.
-5. Select **Gateway Specific Rules**, choose the T1 Gateway and select **ADD POLICY**.
-6. Select **New Policy** and enter a policy name.
-7. Select the Policy and select **ADD RULE**.
+3. Log in to **VMware NSX-T Manager**.
+4. From the NSX-T home screen, select **Gateway Policies**.
+5. Select **Gateway Specific Rules**, choose the T1 Gateway and select **ADD POLICY**.
+6. Select **New Policy** and enter a policy name.
+7. Select the Policy and select **ADD RULE**.
8. Configure the rule.
-
+ 1. Select **New Rule**. 1. Enter a descriptive name. 1. Configure the source, destination, services, and action.
-
+ 1. Select **Match External Address** to apply firewall rules to the external address of a NAT rule. For example, the following rule is set to Match External Address, and this setting will allow SSH traffic inbound to the Public IP. :::image type="content" source="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity.png" alt-text="Screenshot Internet connectivity inbound Public IP." lightbox="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity-expanded.png":::
-
-If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM.
-For more information on the NSX-T Data Center Gateway Firewall see the [NSX-T Data Center Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html)
+
+If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM.
+
+For more information on the NSX-T Data Center Gateway Firewall, see the [NSX-T Data Center Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html).
The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see [NSX-T Data Center Distributed Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
-## Next steps
+## Next steps
+ [Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md) [Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md)
azure-vmware Fix Deployment Failures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/fix-deployment-failures.md
Title: Support for Azure VMware Solution deployment or provisioning failure
description: Get information from your Azure VMware Solution private cloud to file a service request for an Azure VMware Solution deployment or provisioning failure. Previously updated : 10/20/2022 Last updated : 10/24/2022+ # Open a support request for an Azure VMware Solution deployment or provisioning failure
bastion Bastion Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md
description: Learn about frequently asked questions for Azure Bastion.
Previously updated : 10/21/2022 Last updated : 10/25/2022 # Azure Bastion FAQ
Azure Bastion is deployed within VNets or peered VNets, and is associated to an
Currently, by default, new Bastion deployments don't support zone redundancy. Previously deployed bastions may or may not be zone redundant. The exceptions are Bastion deployments in Korea Central and Southeast Asia, which do support zone redundancy.
+### <a name="azure-ad-guests"></a>Does Bastion support Azure AD guest accounts?
+
+Yes, [Azure AD guest accounts](../active-directory/external-identities/what-is-b2b.md) can be granted access to Bastion and can connect to virtual machines.
+ ## <a name="vm"></a>VM features and connection FAQs ### <a name="roles"></a>Are any roles required to access a virtual machine?
cdn Create Profile Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/create-profile-endpoint-bicep.md
Title: 'Quickstart: Create a profile and endpoint - Bicep'
description: In this quickstart, learn how to create an Azure Content Delivery Network profile and endpoint by using a Bicep file -+ na Last updated 03/14/2022-+ # Quickstart: Create an Azure CDN profile and endpoint - Bicep
cognitive-services Concept Detecting Faces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/concept-detecting-faces.md
Image Analysis can detect human faces within an image and generate rectangle coo
> [!NOTE] > This feature is also offered by the dedicated [Face](./overview-identity.md) service. Use this alternative for more detailed face analysis, including face identification and head pose detection. + Try out the face detection features quickly and easily in your browser using Vision Studio. > [!div class="nextstepaction"]
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
You can get pronunciation assessment scores for:
- Full text - Words - Syllable groups-- Phonemes in SAPI or IPA format
+- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format
> [!NOTE]
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=stt-tts) and [available regions](regions.md#speech-service).
+> Syllable groups, phoneme names, and spoken phonemes from pronunciation assessment are currently only available for the en-US locale.
+>
+> Usage of pronunciation assessment is charged the same as standard Speech to Text; see [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
>
-> The syllable groups, IPA phonemes, and spoken phoneme features of pronunciation assessment are currently only available for the en-US locale.
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+ ## Configuration parameters
To request syllable-level results along with phonemes, set the granularity [conf
## Phoneme alphabet format
-For some locales, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. The phoneme name in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) format is available for the `en-GB` and `en-US` locales. The phoneme name in [IPA](https://en.wikipedia.org/wiki/IPA) format is only available for the `en-US` locale. For other locales, you can only get the phoneme score.
+For `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.
cognitive-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md
Pronunciation assessment provides various assessment results in different granul
This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
+> [!NOTE]
+> Usage of pronunciation assessment is charged the same as standard Speech to Text; see [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
+>
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+ ## Try out pronunciation assessment You can explore and try out pronunciation assessment even without signing in.
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status | |--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.6.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.6.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.7.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.7.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.5.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.6.0 | Generally available |
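A hedged example of starting the generally available speech-to-text container (the billing endpoint and key are placeholders, and the resource limits are illustrative), pinning the version tag from the table above rather than relying on `latest`:

```powershell
# Hedged sketch: run the GA speech-to-text container with a pinned tag.
# Eula, Billing, and ApiKey are the container's standard startup arguments.
docker run --rm -it -p 5000:5000 --memory 8g --cpus 4 `
    mcr.microsoft.com/azurecognitiveservices/speechservices/speech-to-text:3.7.0-amd64-en-us `
    Eula=accept `
    Billing='https://<your-resource>.cognitiveservices.azure.com/' `
    ApiKey='<your-api-key>'
```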
## Prerequisites
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following content types are supported for the `interpret-as` and `format` at
| `characters`, `spell-out` | | The text is spoken as individual letters (spelled out). The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="characters">test</say-as>`<br /><br />As "T E S T." | | `cardinal`, `number` | None| The text is spoken as a cardinal number. The speech synthesis engine pronounces:<br /><br />`There are <say-as interpret-as="cardinal">10</say-as> options`<br /><br />As "There are ten options."| | `ordinal` | None | The text is spoken as an ordinal number. The speech synthesis engine pronounces:<br /><br />`Select the <say-as interpret-as="ordinal">3rd</say-as> option`<br /><br />As "Select the third option."|
-| `digits`, `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
+| `number_digit` | None | The text is spoken as a sequence of individual digits. The speech synthesis engine pronounces:<br /><br />`<say-as interpret-as="number_digit">123456789</say-as>`<br /><br />As "1 2 3 4 5 6 7 8 9." |
| `fraction` | None | The text is spoken as a fractional number. The speech synthesis engine pronounces:<br /><br /> `<say-as interpret-as="fraction">3/8</say-as> of an inch`<br /><br />As "three eighths of an inch." | | `date` | dmy, mdy, ymd, ydm, ym, my, md, dm, d, m, y | The text is spoken as a date. The `format` attribute specifies the date's format (*d=day, m=month, and y=year*). The speech synthesis engine pronounces:<br /><br />`Today is <say-as interpret-as="date" format="mdy">10-19-2016</say-as>`<br /><br />As "Today is October nineteenth two thousand sixteen." | | `time` | hms12, hms24 | The text is spoken as a time. The `format` attribute specifies whether the time is specified by using a 12-hour clock (hms12) or a 24-hour clock (hms24). Use a colon to separate numbers representing hours, minutes, and seconds. Here are some valid time examples: 12:35, 1:14:32, 08:15, and 02:50:45. The speech synthesis engine pronounces:<br /><br />`The train departs at <say-as interpret-as="time" format="hms12">4:00am</say-as>`<br /><br />As "The train departs at four A M." |
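A short, hedged illustration combining two of the `interpret-as` values above into one SSML document (the voice name is an assumption), built here as a PowerShell here-string:

```powershell
# Hedged sketch: SSML that spells out a digit sequence and speaks a date,
# matching the say-as behavior described in the table above.
$ssml = @"
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyNeural'>
    Your order number is <say-as interpret-as='number_digit'>123456789</say-as>,
    placed on <say-as interpret-as='date' format='mdy'>10-19-2016</say-as>.
  </voice>
</speak>
"@
```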
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
The [Custom Speech-to-text][sp-cstt] container image can be found on the `mcr.mi
# [Latest version](#tab/current)
+Release note for `3.7.0-amd64`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes | Digest |
+|--|:--|:--|
+| `latest` | | `sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e`|
+| `3.7.0-amd64` | | `sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e`|
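A hedged usage note: because the digest uniquely identifies the image, you can pull by digest instead of by tag to keep deployments pinned even if a tag is re-pointed later:

```powershell
# Hedged sketch: pull the 3.7.0 custom speech-to-text image by the digest
# listed in the table above.
docker pull mcr.microsoft.com/azurecognitiveservices/speechservices/custom-speech-to-text@sha256:551113f7df4840bde91bbe3d9902af5a09153462ca450490347547d95ab1c08e
```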
+
+# [Previous version](#tab/previous)
Release note for `3.6.0-amd64`: **Features**
Release note for `3.6.0-amd64`:
| `latest` | | `sha256:9a1ef0bcb5616ff9d1c70551d4634acae50ff4f7ed04b0ad514a75f2e6fa1241`| | `3.6.0-amd64` | | `sha256:9a1ef0bcb5616ff9d1c70551d4634acae50ff4f7ed04b0ad514a75f2e6fa1241`|
-# [Previous version](#tab/previous)
Release note for `3.5.0-amd64`: **Features**
Since Speech-to-text v2.5.0, images are supported in the *US Government Virginia
# [Latest version](#tab/current) +
+Release note for `3.7.0-amd64-<locale>`:
+
+**Features**
+* Security upgrade.
+* Support for latest model versions.
++
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale. |
+| `3.7.0-amd64-<locale>` | Replace `<locale>` with one of the available locales, listed below. For example `3.7.0-amd64-en-us`. |
+
+This container has the following locales available.
+
+| Locale for v3.7.0 | Notes | Digest |
+|--|:--|:--|
+| `ar-ae`| Container image with the `ar-AE` locale. | `sha256:06b50c123adb079c470ad37912bf6f0e37578e39f0432bf79d5f1c334f4013b2` |
+| `ar-bh`| Container image with the `ar-BH` locale. | `sha256:37a9ba7b309c5d43fc23d47dd7aaaf9f0775851295d674c0ca546aa9484f3d38` |
+| `ar-eg`| Container image with the `ar-EG` locale. | `sha256:80b7ad3d3d37d99782c8473cb5a36724bec381e321ae13fb2069a88472cea1af` |
+| `ar-iq`| Container image with the `ar-IQ` locale. | `sha256:dce00ea6b3c2ba9f12d8f8b7cee7762d8762c396f41667ed438a4f4420e8109b` |
+| `ar-jo`| Container image with the `ar-JO` locale. | `sha256:cf84e78c25edbd01e42645db3f7d08fcb0b702cbf9648cd0f504f6d5873c916f` |
+| `ar-kw`| Container image with the `ar-KW` locale. | `sha256:7c2548a1073e6bbee58193ba20353d9fb62cff17f57c1618b99d1dd9ca3a457b` |
+| `ar-lb`| Container image with the `ar-LB` locale. | `sha256:dae94f065cf026098181068ee5a21cb0e7d070f62fa36abdaa31c53de6ee5561` |
+| `ar-om`| Container image with the `ar-OM` locale. | `sha256:ae47c7c004161cd424cb8387eb82c7b75a76e5c97516e936a143b800427272e7` |
+| `ar-qa`| Container image with the `ar-QA` locale. | `sha256:07da7898b38f98a33d4dbf61bfab145de3549023988b1f0501227dee6b446799` |
+| `ar-sa`| Container image with the `ar-SA` locale. | `sha256:e58608eae7548c617677973031774bd61a12385162cc281d4d9ec14b10f50123` |
+| `ar-sy`| Container image with the `ar-SY` locale. | `sha256:a8fa1046c2ac8d87a58b6ea80811d8bec70bcde0ccf57f04fb73e18e485dfdad` |
+| `az-az`| Container image with the `az-AZ` locale. | `sha256:0a15dfda36aac86dfe12f5682d952ad38a4e02f5f322314787585cd16f985ba3` |
+| `bg-bg`| Container image with the `bg-BG` locale. | `sha256:638ca1a3c8e0a7e7e56750c013a1006a5c55d246eb6475cc977b017162cddad8` |
+| `bn-in`| Container image with the `bn-IN` locale. | `sha256:b35b74967c70d9480ec0aae95567db5f4eb25a9e78189690cc6fcb5580f3dae6` |
+| `bs-ba`| Container image with the `bs-BA` locale. | `sha256:6c9a35c675274dc358660a70bf01a71ff3d0c28eff7b4b40045acb606c52c311` |
+| `ca-es`| Container image with the `ca-ES` locale. | `sha256:8163db1b99645795a7e427eb31fead772e40423cddb4e99f3dc573c4b6c4e2ec` |
+| `cs-cz`| Container image with the `cs-CZ` locale. | `sha256:b3eeec75abe1d50f4e325dcb3d8ff0c94516eaeacc24493685ac21c5bb7b723f` |
+| `cy-gb`| Container image with the `cy-GB` locale. | `sha256:686c50efe91f4addab5fdb9b25d5b1eac45303d9ac4517150ae7d3b14ba76680` |
+| `da-dk`| Container image with the `da-DK` locale. | `sha256:0375770ab3eae63184fba644ffc820f42bebc28fdabff91e8529a2aeca0a5ab3` |
+| `de-at`| Container image with the `de-AT` locale. | `sha256:00020ceb473244a65aa3c74ac6afe5798c9488e1e6de75991a1ff45b64f639b2` |
+| `de-ch`| Container image with the `de-CH` locale. | `sha256:48e11b783237f1ac3b83f5d9da400c9b785404a0bbd880512b5ca1044c9e5da0` |
+| `de-de`| Container image with the `de-DE` locale. | `sha256:12c854473d84fdafac5095ac4eb1e1dce6bbf4557ecfe74c24270ff5afbb61d4` |
+| `el-gr`| Container image with the `el-GR` locale. | `sha256:d988b1046e8da4b41b1762ffc4a5e1e4463b6563d6466bd268ad5bc70d685ff7` |
+| `en-au`| Container image with the `en-AU` locale. | `sha256:510c056ee039636eacf49558357546553f84d1733ee79be1b7cbb8a00a255b4f` |
+| `en-ca`| Container image with the `en-CA` locale. | `sha256:8ee1190b738ed21bb98398bf485bb6150f8d88283458f4b3be07154ea78802fd` |
+| `en-gb`| Container image with the `en-GB` locale. | `sha256:7983af189b737c91a940e8be9f8859a7b9c069240d68348e1a5468ba1afea8bc` |
+| `en-gh`| Container image with the `en-GH` locale. | `sha256:e1a9bd5b21cbd8c017deb2339175d36a88a29ffaf15413bf530080cb44da8653` |
+| `en-hk`| Container image with the `en-HK` locale. | `sha256:3f6227c250f0f925dab6a6f2c92fb0e9b024cb9fc3ab38fd4556d2302a68b72b` |
+| `en-ie`| Container image with the `en-IE` locale. | `sha256:eae5b1864dd845aafddd8632b0bf86f70d322cbd9f91f4aa38681b9cff78f4b9` |
+| `en-in`| Container image with the `en-IN` locale. | `sha256:1e3c288591fc0df20ae381f520f78ac33aee1f7251037044312c947fd0b8589d` |
+| `en-ke`| Container image with the `en-KE` locale. | `sha256:975515ac47703c0b0a454fe23a5c6174bf5d69be723304449b6ff07d49fb9b3f` |
+| `en-nz`| Container image with the `en-NZ` locale. | `sha256:33f7a214071ec7719f4c697e18225262694198387c7a00ae624b3dc8d6236b7c` |
+| `en-ph`| Container image with the `en-PH` locale. | `sha256:94cb8c27aa2ce700914e4766de48f01f8b6420631c78538c5fcab2e325bc2cc9` |
+| `en-sg`| Container image with the `en-SG` locale. | `sha256:60933b58d251356dc35de461887f0bcfe0f5c47c559a570b98c9f69bbe4ef1f6` |
+| `en-tz`| Container image with the `en-TZ` locale. | `sha256:66d4f800c0a02d2f9dbf4ed4328d5032ced8b1f4d830c0f68ec669c333160962` |
+| `en-us`| Container image with the `en-US` locale. | `sha256:7f1a36eb10de11651d3077387a6e2ee89adb80d2efb1aa7648b9572cde505c64` |
+| `en-za`| Container image with the `en-ZA` locale. | `sha256:536d8edc033d00fda69791681ff15e91ff297edbe6fb0a771685139964857a8c` |
+| `es-ar`| Container image with the `es-AR` locale. | `sha256:a9b2765ed5eb3f18b265b0d088f347ddfb54dc1b21cb6a0a94ffbf1e0ec69f51` |
+| `es-bo`| Container image with the `es-BO` locale. | `sha256:2c1b8297e362ab6df9d2bab4268149656c28ef0a0704d51a608a54e4bf61d643` |
+| `es-cl`| Container image with the `es-CL` locale. | `sha256:d981571c6455c587188782488234852b50cae3b7319147f81f3a46c48c0cc628` |
+| `es-co`| Container image with the `es-CO` locale. | `sha256:07e0fab0ab15f411de6b56a6c5a4bb5dbf6882b7abd49c9dcb54de3ce2b0a20b` |
+| `es-cr`| Container image with the `es-CR` locale. | `sha256:db7c55da662e5e8a52b726819ec8bfe4f9b7a21903f9c8d949a6ddd65f7ca56e` |
+| `es-cu`| Container image with the `es-CU` locale. | `sha256:adbb56aeee08651b2dd030d2010c5c0e81c99fd59ae361302b444116c1086cfa` |
+| `es-do`| Container image with the `es-DO` locale. | `sha256:5dedfebb025725ce9391a324659cc8567f90c02c1b6e4554bada022922457463` |
+| `es-ec`| Container image with the `es-EC` locale. | `sha256:72a267f458b66cc3935d96ebffdacb12fbb8f6f91f5347464f2e9af34273260c` |
+| `es-es`| Container image with the `es-ES` locale. | `sha256:42540849cb394203a81a14ce02ac4a890916b025636666a77ba32cf347200915` |
+| `es-gt`| Container image with the `es-GT` locale. | `sha256:095cb31ec78862831db242fa69dc2f3db564ac7d5caa86274357cee137c62d82` |
+| `es-hn`| Container image with the `es-HN` locale. | `sha256:864179bf6de0d66db3bef50e6aa551b6c124ab37ce10a255c5aa33d03f7dae03` |
+| `es-mx`| Container image with the `es-MX` locale. | `sha256:af4b1d78f141a15ff27adc3c1d20a5bf82c00a4fa3f3cfa52ba17ac50260cb76` |
+| `es-ni`| Container image with the `es-NI` locale. | `sha256:5280d1e305ad3a0d403633cda6a65dc1c402e561ef0fca0ccefd73b99331838e` |
+| `es-pa`| Container image with the `es-PA` locale. | `sha256:5e9326793966a0e0d963bece1bd10b0d8bf9917990773eb65e28a26ef782f91f` |
+| `es-pe`| Container image with the `es-PE` locale. | `sha256:3d014bd060833953b5895f3431f00da332bdb6d8d4e224699045a11ec11a8975` |
+| `es-pr`| Container image with the `es-PR` locale. | `sha256:b8377678b543eea2c581f6fd7ef7d6ca59765c5094aa3303935c350cc3b30030` |
+| `es-py`| Container image with the `es-PY` locale. | `sha256:d5e7400cead88820b888e529d2c5c1ce6333147e210e1e8c4dea21cab8866e4e` |
+| `es-sv`| Container image with the `es-SV` locale. | `sha256:641ce9b02848542c786e1ca8c63134bb3fb97d260b4091b31c56831b1f0da684` |
+| `es-us`| Container image with the `es-US` locale. | `sha256:9b22ebc2b757dbf57205cace1a995b0e5f17c4402b089b41367bae459e45bcf0` |
+| `es-uy`| Container image with the `es-UY` locale. | `sha256:95baf4be71d34a9864d21f0499f7fd36b76e83fa8b8ae2486e56d38f7eb270ad` |
+| `es-ve`| Container image with the `es-VE` locale. | `sha256:552ec7b4df6cf143795ee78e6c3cefbc252baa7389486657390164686f369b87` |
+| `et-ee`| Container image with the `et-EE` locale. | `sha256:d96975024d81899a7c93a2634b8399ab142941a63db743f99471f3519c4aa760` |
+| `eu-es`| Container image with the `eu-ES` locale. | `sha256:789aec61fa61500adb7d60e5441755781b9241e2526d1100afcc1db4b9ea28f7` |
+| `fa-ir`| Container image with the `fa-IR` locale. | `sha256:a7d7d368f6493fcc6efdab07dc51a536da3ae3db92eb374725dba758f464ca99` |
+| `fi-fi`| Container image with the `fi-FI` locale. | `sha256:26370a84499831a50615337fb3de77530b1bcc3245313fd59078549f21b12a1c` |
+| `fil-ph`| Container image with the `fil-PH` locale. | `sha256:de11d47b9b1099f1b418b347c198b98e8cefbce74991773800e954e286f7f766` |
+| `fr-ca`| Container image with the `fr-CA` locale. | `sha256:dc747e1dafda312fcc79f9d0ec1b9b137de149fdde55cdbe9e9da04647e2216f` |
+| `fr-ch`| Container image with the `fr-CH` locale. | `sha256:962366fa475d196f09a932ae002d5479482996d827b2822d48e4632c0a118f53` |
+| `fr-fr`| Container image with the `fr-FR` locale. | `sha256:32dcc215732ed60c149f3a7f27e400fe17c2c885e5788eaa00db694e3b61c6fa` |
+| `ga-ie`| Container image with the `ga-IE` locale. | `sha256:5cfd9b63ed99df7eff27c94466c8f795ce56bddbd519bddce4dd960a4d85f1a0` |
+| `gl-es`| Container image with the `gl-ES` locale. | `sha256:13023950f7630296d2699a2211e7ae45a38188d82fb212a7a6780354087f815e` |
+| `gu-in`| Container image with the `gu-IN` locale. | `sha256:5511eb7c2e0a33ed7b16213297b3a530b1fdb858ea526b5bdaaea709966dac0b` |
+| `he-il`| Container image with the `he-IL` locale. | `sha256:99aa9f70c301f61a6f39793f874d70a45791ec6fd705b84639dc920b3c8b10a5` |
+| `hi-in`| Container image with the `hi-IN` locale. | `sha256:86147556e59e221a8c2c5ceb56fb5a40cead3c6e677aab8ddbbaffa39addd28a` |
+| `hr-hr`| Container image with the `hr-HR` locale. | `sha256:3123fa32f7322e3ab3bedf8c006b34a3d254b9d737f3e747312c45ca9d6c6271` |
+| `hu-hu`| Container image with the `hu-HU` locale. | `sha256:cede22619c83c84cb8356807a589c7992fdc5879f8987dc7fc1ff81abd123961` |
+| `hy-am`| Container image with the `hy-AM` locale. | `sha256:63c8b2e155d453666a5e842a270bc988a41fc7af09bb95e6508698713412a272` |
+| `id-id`| Container image with the `id-ID` locale. | `sha256:6e28166255a2ae55eb7d41aac3fb133403f01dd27fef583a12ac30d4a584ce50` |
+| `it-ch`| Container image with the `it-CH` locale. | `sha256:2065ef047c7704abda14134abd8508a7de7c3b2e30fdb051ee5525b8a8daee32` |
+| `it-it`| Container image with the `it-IT` locale. | `sha256:9ef3c51329c2c44585f8cf41847fd83dcaadeb783d51df55e15b57ff7cabfac7` |
+| `ja-jp`| Container image with the `ja-JP` locale. | `sha256:0d75f2e00c7a93375a56c315961c61cb2a93a7eb83deab210dfcd4c56fc4c519` |
+| `ka-ge`| Container image with the `ka-GE` locale. | `sha256:e05f315a34dec1efe527790c84082358cf9155def79be058f772b8cb05111d0a` |
+| `kk-kz`| Container image with the `kk-KZ` locale. | `sha256:e8d700480fe77edf46f2c8a02838b5bee1b6b76ae22cded45c3297febbd97725` |
+| `ko-kr`| Container image with the `ko-KR` locale. | `sha256:7fc44d9110f3e127d49b74146a9c8cde20f922a6aa8dc58643295d6e1c139fb6` |
+| `lt-lt`| Container image with the `lt-LT` locale. | `sha256:58d583963cc54edf4be231a1681774bc9213befa6a72aab20f556d5040e92f64` |
+| `lv-lv`| Container image with the `lv-LV` locale. | `sha256:37d8f5ce4734c8e3a7096a4e1148258c0b987261dc4911df37dae2586409d1f4` |
+| `mk-mk`| Container image with the `mk-MK` locale. | `sha256:388311b1e87277cc7c2d346eed8e2d8f456900aac8bfd735d26fefddcd1c7ce2` |
+| `mn-mn`| Container image with the `mn-MN` locale. | `sha256:41bdb6afea3c27f4ef67ca6eb9302b0207a8f631481bc16815993e131caf130a` |
+| `mr-in`| Container image with the `mr-IN` locale. | `sha256:2c9bb428c66e0238c65e9f6fedf09524398c2e9348971951b16502576005e244` |
+| `ms-my`| Container image with the `ms-MY` locale. | `sha256:8f61a55cdf6340b1327c2533ff0d6371b70d817308efd4546fce9bffef80ef5e` |
+| `mt-mt`| Container image with the `mt-MT` locale. | `sha256:7573d303239a4e99b646c8b2aa95d4903a37e31fcdc597c29952f0a555c1829e` |
+| `nb-no`| Container image with the `nb-NO` locale. | `sha256:408641de59d99085a8755c3e2f42b430ccf6af4f6b4fb12d7b2a4136d60383ff` |
+| `ne-np`| Container image with the `ne-NP` locale. | `sha256:ee5f5fae979352b572f093bf38e59c6edb636cffbbb49494da8c194b324e1956` |
+| `nl-nl`| Container image with the `nl-NL` locale. | `sha256:22e6b1734be2048c7dbe09f75bf49caae2c864b31687bd03025cbcf6b98ec7c6` |
+| `pl-pl`| Container image with the `pl-PL` locale. | `sha256:86a8c47e03eb75ea55b3b3dd43bb408f9e7cd7e58cc8b22004acb8d9e54d8e16` |
+| `ps-af`| Container image with the `ps-AF` locale. | `sha256:87b30023526044fa4917696128c84f08fa5355a0471c4768b66c6a340565ba98` |
+| `pt-br`| Container image with the `pt-BR` locale. | `sha256:79cd07b0be7249935a581758e3e3c0ce6af08d447c8063b8c580d09385bf0067` |
+| `pt-pt`| Container image with the `pt-PT` locale. | `sha256:c5c248c679122726427d6ca69fed8234331bf20be97f482707ba8ae7c6cdb67e` |
+| `ro-ro`| Container image with the `ro-RO` locale. | `sha256:6e5e203cbba8c60319a2f6dec7bd0b49d2915d6dc726882f633165dc4e239e64` |
+| `ru-ru`| Container image with the `ru-RU` locale. | `sha256:fd896f373deb0e70c394af235c00401b98333ffb70d5ca4ace0869192bd091ca` |
+| `sk-sk`| Container image with the `sk-SK` locale. | `sha256:962aca8128e74a30525d9c0a53a2a410e18f2fb679ecdf1a281d23e337e040a1` |
+| `sl-si`| Container image with the `sl-SI` locale. | `sha256:e4a8b4bbe5d70bf378bead62ccd1d54e994a970729061add43f0ba5c5d9d70b5` |
+| `so-so`| Container image with the `so-SO` locale. | `sha256:999b5f6b1708e0f012481db3e62eae14ab3b99fd12bba9c84a03b7bc79534b0c` |
+| `sq-al`| Container image with the `sq-AL` locale. | `sha256:5d230ed821290893fe90f833d8c6a7468bfd456709cb85abb9b953455fedb132` |
+| `sv-se`| Container image with the `sv-SE` locale. | `sha256:9c900b80eca404751c894ab47a8367d67f26c8a2710815d926c9f542a507990b` |
+| `ta-in`| Container image with the `ta-IN` locale. | `sha256:568c89e3d7ededa5c38682d724a884e59b16221e2945640ce0790d2eafdd9b28` |
+| `te-in`| Container image with the `te-IN` locale. | `sha256:cc66d2d64e62c64f0dd582becc8fdfc54b1cd590be68409bde034d32e5c8c165` |
+| `th-th`| Container image with the `th-TH` locale. | `sha256:ad4fe647e5d37860b1351f8f9c1536269dbef6200af78a35a534994697bf9887` |
+| `tr-tr`| Container image with the `tr-TR` locale. | `sha256:5b8c6de12d72c367c74d695fa90b16cbc96b6fcc2fa891e62475c346520fd10a` |
+| `uk-ua`| Container image with the `uk-UA` locale. | `sha256:552ce967cc23a8629acc6e297ae01d765658724fb711b105afee768b92dc4e7e` |
+| `vi-vn`| Container image with the `vi-VN` locale. | `sha256:4668f0b5f895dd85d2b1ffcf0ce9b9ff23339a82d290b973bfd91113bc0eb68a` |
+| `wuu-cn`| Container image with the `wuu-CN` locale. | `sha256:2c2321e7610bfcd812df267c044281801f5f470f023cfe37273ce6c4ee1748a9` |
+| `yue-cn`| Container image with the `yue-CN` locale. | `sha256:f2eeefd4926e4714e5996a6e13f00e58a985835923113679f40c0f8dcd86000b` |
+| `zh-cn`| Container image with the `zh-CN` locale. | `sha256:5691c41fedfb89d7738afabd5624aad43cf8c427c4de1608e7381674fdcb88a2` |
+| `zh-cn-sichuan`| Container image with the `zh-CN-sichuan` locale. | `sha256:192a125987d397018734c57f952e68df04d7fd550cfb6ae9434f200b7bd44d13` |
+| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:f3f8b50f982c19f31eea553ed92ebfb6c7e333a4d2fa55c81a1c8b680afd6101` |
+| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:20245c6b1b4da4a393e6d0aaa3c1a013f03de69eec351d9b7e5fe9d542c1f098` |
+
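For reference, pulling a specific locale tag follows the standard Docker workflow. A minimal sketch, assuming the speech-to-text repository path on the Microsoft Container Registry (verify the repository path for your container type):

```bash
# Pull the 3.7.0 speech-to-text image for the en-US locale.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:3.7.0-amd64-en-us
```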
+# [Previous version](#tab/previous)
+ Release note for `3.6.0-amd64-<locale>`: **Features**
This container has the following locales available.
| `zh-hk`| Container image with the `zh-HK` locale. | `sha256:5d21febbb1e8710b01ad1a5727c33080e6853d3a4bfbf5365b059630b76a9901` |
| `zh-tw`| Container image with the `zh-TW` locale. | `sha256:15dbadcd92e335705e07a8ecefbe621e3c97b723bdf1c5b0c322a5b9965ea47d` |
-# [Previous version](#tab/previous)
- Release note for `3.5.0-amd64-<locale>`: **Features**
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
+Release notes for `v2.6.0`:
+
+**Features**
+* Security upgrade.
+
+| Image Tags | Notes |
+|-|:--|
+| `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
+| `2.6.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locales and voices, listed below. For example `2.6.0-amd64-en-us-arianeural`. |
++
+| v2.6.0 Locales and voices | Notes |
+|-|:--|
+| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
+| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
+| `ar-bh-lailaneural`| Container image with the `ar-BH` locale and `ar-BH-lailaneural` voice.|
+| `ar-eg-salmaneural`| Container image with the `ar-EG` locale and `ar-EG-salmaneural` voice.|
+| `ar-eg-shakirneural`| Container image with the `ar-EG` locale and `ar-EG-shakirneural` voice.|
+| `ar-sa-hamedneural`| Container image with the `ar-SA` locale and `ar-SA-hamedneural` voice.|
+| `ar-sa-zariyahneural`| Container image with the `ar-SA` locale and `ar-SA-zariyahneural` voice.|
+| `az-az-babekneural`| Container image with the `az-AZ` locale and `az-AZ-babekneural` voice.|
+| `az-az-banuneural`| Container image with the `az-AZ` locale and `az-AZ-banuneural` voice.|
+| `cs-cz-antoninneural`| Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice.|
+| `cs-cz-vlastaneural`| Container image with the `cs-CZ` locale and `cs-CZ-vlastaneural` voice.|
+| `de-ch-janneural`| Container image with the `de-CH` locale and `de-CH-janneural` voice.|
+| `de-ch-lenineural`| Container image with the `de-CH` locale and `de-CH-lenineural` voice.|
+| `de-de-conradneural`| Container image with the `de-DE` locale and `de-DE-conradneural` voice.|
+| `de-de-katjaneural`| Container image with the `de-DE` locale and `de-DE-katjaneural` voice.|
+| `en-au-natashaneural`| Container image with the `en-AU` locale and `en-AU-natashaneural` voice.|
+| `en-au-williamneural`| Container image with the `en-AU` locale and `en-AU-williamneural` voice.|
+| `en-ca-claraneural`| Container image with the `en-CA` locale and `en-CA-claraneural` voice.|
+| `en-ca-liamneural`| Container image with the `en-CA` locale and `en-CA-liamneural` voice.|
+| `en-gb-libbyneural`| Container image with the `en-GB` locale and `en-GB-libbyneural` voice.|
+| `en-gb-ryanneural`| Container image with the `en-GB` locale and `en-GB-ryanneural` voice.|
+| `en-gb-sonianeural`| Container image with the `en-GB` locale and `en-GB-sonianeural` voice.|
+| `en-us-arianeural`| Container image with the `en-US` locale and `en-US-arianeural` voice.|
+| `en-us-guyneural`| Container image with the `en-US` locale and `en-US-guyneural` voice.|
+| `en-us-jennyneural`| Container image with the `en-US` locale and `en-US-jennyneural` voice.|
+| `es-es-alvaroneural`| Container image with the `es-ES` locale and `es-ES-alvaroneural` voice.|
+| `es-es-elviraneural`| Container image with the `es-ES` locale and `es-ES-elviraneural` voice.|
+| `es-mx-dalianeural`| Container image with the `es-MX` locale and `es-MX-dalianeural` voice.|
+| `es-mx-jorgeneural`| Container image with the `es-MX` locale and `es-MX-jorgeneural` voice.|
+| `fa-ir-dilaraneural`| Container image with the `fa-IR` locale and `fa-IR-dilaraneural` voice.|
+| `fa-ir-faridneural`| Container image with the `fa-IR` locale and `fa-IR-faridneural` voice.|
+| `fil-ph-angeloneural`| Container image with the `fil-PH` locale and `fil-PH-angeloneural` voice.|
+| `fil-ph-blessicaneural`| Container image with the `fil-PH` locale and `fil-PH-blessicaneural` voice.|
+| `fr-ca-antoineneural`| Container image with the `fr-CA` locale and `fr-CA-antoineneural` voice.|
+| `fr-ca-jeanneural`| Container image with the `fr-CA` locale and `fr-CA-jeanneural` voice.|
+| `fr-ca-sylvieneural`| Container image with the `fr-CA` locale and `fr-CA-sylvieneural` voice.|
+| `fr-fr-deniseneural`| Container image with the `fr-FR` locale and `fr-FR-deniseneural` voice.|
+| `fr-fr-henrineural`| Container image with the `fr-FR` locale and `fr-FR-henrineural` voice.|
+| `he-il-avrineural`| Container image with the `he-IL` locale and `he-IL-avrineural` voice.|
+| `he-il-hilaneural`| Container image with the `he-IL` locale and `he-IL-hilaneural` voice.|
+| `hi-in-madhurneural`| Container image with the `hi-IN` locale and `hi-IN-madhurneural` voice.|
+| `hi-in-swaraneural`| Container image with the `hi-IN` locale and `hi-IN-swaraneural` voice.|
+| `id-id-ardineural`| Container image with the `id-ID` locale and `id-ID-ardineural` voice.|
+| `id-id-gadisneural`| Container image with the `id-ID` locale and `id-ID-gadisneural` voice.|
+| `it-it-diegoneural`| Container image with the `it-IT` locale and `it-IT-diegoneural` voice.|
+| `it-it-elsaneural`| Container image with the `it-IT` locale and `it-IT-elsaneural` voice.|
+| `it-it-isabellaneural`| Container image with the `it-IT` locale and `it-IT-isabellaneural` voice.|
+| `ja-jp-keitaneural`| Container image with the `ja-JP` locale and `ja-JP-keitaneural` voice.|
+| `ja-jp-nanamineural`| Container image with the `ja-JP` locale and `ja-JP-nanamineural` voice.|
+| `ka-ge-ekaneural`| Container image with the `ka-GE` locale and `ka-GE-ekaneural` voice.|
+| `ka-ge-giorgineural`| Container image with the `ka-GE` locale and `ka-GE-giorgineural` voice.|
+| `ko-kr-injoonneural`| Container image with the `ko-KR` locale and `ko-KR-injoonneural` voice.|
+| `ko-kr-sunhineural`| Container image with the `ko-KR` locale and `ko-KR-sunhineural` voice.|
+| `pt-br-antonioneural`| Container image with the `pt-BR` locale and `pt-BR-antonioneural` voice.|
+| `pt-br-franciscaneural`| Container image with the `pt-BR` locale and `pt-BR-franciscaneural` voice.|
+| `so-so-muuseneural`| Container image with the `so-SO` locale and `so-SO-muuseneural` voice.|
+| `so-so-ubaxneural`| Container image with the `so-SO` locale and `so-SO-ubaxneural` voice.|
+| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.|
+| `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.|
+| `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `th-th-acharaneural`| Container image with the `th-TH` locale and `th-TH-acharaneural` voice.|
+| `th-th-niwatneural`| Container image with the `th-TH` locale and `th-TH-niwatneural` voice.|
+| `th-th-premwadeeneural`| Container image with the `th-TH` locale and `th-TH-premwadeeneural` voice.|
+| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.|
+| `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.|
+| `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
+| `zh-cn-xiaohanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaohanneural` voice.|
+| `zh-cn-xiaomoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaomoneural` voice.|
+| `zh-cn-xiaoqiuneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoqiuneural` voice.|
+| `zh-cn-xiaoruineural`| Container image with the `zh-CN` locale and `zh-CN-xiaoruineural` voice.|
+| `zh-cn-xiaoshuangneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoshuangneural` voice.|
+| `zh-cn-xiaoxiaoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxiaoneural` voice.|
+| `zh-cn-xiaoxuanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxuanneural` voice.|
+| `zh-cn-xiaoyanneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoyanneural` voice.|
+| `zh-cn-xiaoyouneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoyouneural` voice.|
+| `zh-cn-yunxineural`| Container image with the `zh-CN` locale and `zh-CN-yunxineural` voice.|
+| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
+| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
++
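Pulling a voice-specific image works the same way. A minimal sketch, assuming the neural-text-to-speech repository path on the Microsoft Container Registry (verify the repository path for your container type):

```bash
# Pull the 2.6.0 neural TTS image for the en-US locale and AriaNeural voice.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech:2.6.0-amd64-en-us-arianeural
```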
+# [Previous version](#tab/previous)
+ Release notes for `v2.5.0`: **Features**
Release notes for `v2.5.0`:
| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
-# [Previous version](#tab/previous)
Release notes for `v2.4.0`:
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Fill out and submit the [request form](https://aka.ms/csdisconnectedcontainers)
Access is limited to customers that meet the following requirements:
-* Your organization must have a Microsoft Enterprise Agreement or an equivalent agreement and should be identified as strategic customer or partner with Microsoft.
+* Your organization should be identified as strategic customer or partner with Microsoft.
* Disconnected containers are expected to run fully offline, so your use cases must meet one of the following or similar requirements:
  * Environment or device(s) with zero connectivity to the internet.
  * Remote location that occasionally has internet access.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/call-api.md
Title: how to call the Key Phrase Extraction API
description: How to extract key phrases by using the Key Phrase Extraction API. -+ Last updated 07/27/2022-+
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/use-containers.md
Title: Use Docker containers for Key Phrase Extraction on-premises
description: Learn how to use Docker containers for Key Phrase Extraction on-premises. -+ Last updated 07/27/2022-+ keywords: on-premises, Docker, container, natural language processing
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/language-support.md
Title: Language support for Key Phrase Extraction
description: Use this article to find the natural languages supported by Key Phrase Extraction. -+ Last updated 07/28/2022-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/overview.md
Title: What is key phrase extraction in Azure Cognitive Service for Language?
description: An overview of key phrase extraction in Azure Cognitive Services, which helps you identify main concepts in unstructured text -+ Last updated 06/15/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
Title: "Quickstart: Use the Key Phrase Extraction client library"
description: Use this quickstart to start using the Key Phrase Extraction API. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python keywords: text mining, key phrase
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
Title: 'Tutorial: Integrate Power BI with key phrase extraction'
description: Learn how to use the key phrase extraction feature to get text stored in Power BI. -+ Last updated 09/28/2022-+
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/call-api.md
Title: How to perform language detection
description: This article will show you how to detect the language of written text using language detection. -+ Last updated 03/01/2022-+
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/use-containers.md
Title: Use language detection Docker containers on-premises
description: Use Docker containers for the Language Detection API to determine the language of written text, on-premises. -+ Last updated 11/02/2021-+ keywords: on-premises, Docker, container
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/language-support.md
Title: Language Detection language support
description: This article explains which natural languages are supported by the Language Detection API. -+ Last updated 11/02/2021-+
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/overview.md
Title: What is language detection in Azure Cognitive Service for Language?
description: An overview of language detection in Azure Cognitive Services, which helps you detect the language that text is written in by returning language codes. -+ Last updated 07/27/2022-+
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Title: "Quickstart: Use the Language Detection client library"
description: Use this quickstart to start using Language Detection. -+ Last updated 08/15/2022-+ ms.devlang: csharp, java, javascript, python keywords: text mining, language detection
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Title: "Quickstart: Use Document Summarization (preview)"
description: Use this quickstart to start using Document Summarization. -+
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
description: Learn about the different models that are available in Azure OpenAI
Last updated 06/24/2022-+
keywords:
# Azure OpenAI models
-The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI.
+The service provides access to many different models, grouped by family and capability. A model family typically associates models by their intended task. The following table describes model families currently available in Azure OpenAI. Not all models are available in all regions currently. Please refer to the capability table at the bottom for a full breakdown.
| Model family | Description |
|--|--|
Similar to text search embedding models, there are two input types supported by
When using our Embeddings models, keep in mind their limitations and risks.
+## Model Summary table and region availability
+
+### GPT-3 Models
+| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+|--|--|--|--|--|
+| Ada | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-Ada-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| Babbage | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-Babbage-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| Curie | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-curie-001 | Yes | No | East US, South Central US, West Europe | N/A |
+| Davinci* | Yes | No | N/A | East US, South Central US, West Europe |
+| Text-davinci-001 | Yes | No | South Central US, West Europe | N/A |
+| Text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
+| Text-davinci-fine-tune-002* | Yes | No | N/A | East US, West Europe |
+
+\*Models available by request only. Please open a support request.
+
+### Codex Models
+| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+|--|--|--|--|--|
+| Code-Cushman-001* | Yes | No | South Central US, West Europe | East US, South Central US, West Europe |
+| Code-Davinci-002 | Yes | No | East US, West Europe | N/A |
+| Code-Davinci-Fine-tune-002* | Yes | No | N/A | East US, West Europe |
+
+\*Models available for Fine-tuning by request only. Please open a support request.
+++
+### Embeddings Models
+| Model | Supports Completions | Supports Embeddings | Base model Regions | Fine-Tuning Regions |
+|--|--|--|--|--|
+| text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A |
+| text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A |
+| text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A |
+| text-similarity-davinci-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-ada-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-ada-query-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-babbage-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-babbage-query-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-curie-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-curie-query-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-davinci-doc-001 | No | Yes | South Central US, West Europe | N/A |
+| text-search-davinci-query-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-ada-code-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-ada-text-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-babbage-code-001 | No | Yes | South Central US, West Europe | N/A |
+| code-search-babbage-text-001 | No | Yes | South Central US, West Europe | N/A |
+
+ ## Next steps [Learn more about Azure OpenAI](../overview.md).
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
The Azure OpenAI service provides REST API access to OpenAI's powerful language
| Feature | Azure OpenAI |
|--|--|
| Models available | GPT-3 base series <br> Codex series <br> Embeddings series <br> Learn more in our [Models](./concepts/models.md) page.|
-| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request |
-| Billing Model| Coming Soon |
+| Fine-tuning | Ada <br> Babbage <br> Curie <br> Cushman* <br> Davinci* <br> \* available by request. Please open a support request|
+| Price | [Available here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) |
| Virtual network support | Yes |
| Managed Identity| Yes, via Azure Active Directory |
| UI experience | **Azure Portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
-| Regional availability | South Central US <br> West Europe |
+| Regional availability | East US <br> South Central US <br> West Europe |
| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |

## Responsible AI
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value |
|--|--|
| OpenAI resources per region | 2 |
-| Requests per second per deployment | 1 |
+| Requests per second per deployment | 5 |
| Max fine-tuned model deployments | 2 |
| Ability to deploy same model to multiple deployments | Not allowed |
| Total number of training jobs per resource | 100 |
| Max simultaneous running training jobs per resource | 1 |
| Max training jobs queued | 20 |
| Max Files per resource | 50 |
-| Total size of all files per resource | 1 GB|
+| Total size of all files per resource | 1 GB |
| Max training job time (job will fail if exceeded) | 120 hours |
-| Max training job size (tokens in training file * # of epochs) | **Ada**: 4-M tokens <br> **Babbage**: 4-M tokens <br> **Curie**: 4-M tokens <br> **Cushman**: 4-M tokens <br> **Davinci**: 500 K |
+| Max training job size (tokens in training file * # of epochs) | **Ada**: 40-M tokens <br> **Babbage**: 40-M tokens <br> **Curie**: 40-M tokens <br> **Cushman**: 40-M tokens <br> **Davinci**: 10-M |
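As a worked example of the size calculation: a 10-M-token training file run for four epochs counts as 10 M × 4 = 40-M tokens, which is exactly at the limit for Ada, Babbage, Curie, and Cushman, but four times the Davinci limit.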
### General best practices to mitigate throttling during autoscaling
cognitive-services How To Inference Explainability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-inference-explainability.md
+
+ Title: "How-to: Use Inference Explainability"
+
+description: Personalizer can return feature scores in each Rank call to provide insight on what features are important to the model's decision.
++
+ms.
+++ Last updated : 09/20/2022++
+# Inference Explainability
+Personalizer can help you to understand which features of a chosen action are the most and least influential to the model during inference. When enabled, inference explainability includes feature scores from the underlying model in the Rank API response, so your application receives this information at the time of inference.
+
+Feature scores empower you to better understand the relationship between features and the decisions made by Personalizer. They can be used to provide insight to your end-users into why a particular recommendation was made, or to analyze whether your model is exhibiting bias toward or against certain contextual settings, users, and actions.
+
+## How do I enable inference explainability?
+
+Setting the `IsInferenceExplainabilityEnabled` flag in your service configuration enables Personalizer to include feature values and weights in the Rank API response. To update your current service configuration, use the [Service Configuration - Update API](/rest/api/personalizer/1.1preview1/service-configuration/update?tabs=HTTP). In the JSON request body, include your current service configuration and add the entry `"IsInferenceExplainabilityEnabled": true`. If you don't know your current service configuration, you can obtain it from the [Service Configuration - Get API](/rest/api/personalizer/1.1preview1/service-configuration/get?tabs=HTTP).
+
+```JSON
+{
+ "rewardWaitTime": "PT10M",
+ "defaultReward": 0,
+ "rewardAggregation": "earliest",
+ "explorationPercentage": 0.2,
+ "modelExportFrequency": "PT5M",
+ "logMirrorEnabled": true,
+ "logMirrorSasUri": "https://testblob.blob.core.windows.net/container?se=2020-08-13T00%3A00Z&sp=rwl&spr=https&sv=2018-11-09&sr=c&sig=signature",
+ "logRetentionDays": 7,
+ "lastConfigurationEditDate": "0001-01-01T00:00:00Z",
+ "learningMode": "Online",
+ "isAutoOptimizationEnabled": true,
+ "autoOptimizationFrequency": "P7D",
+ "autoOptimizationStartDate": "2019-01-19T00:00:00Z",
+"isInferenceExplainabilityEnabled": true
+}
+```
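As a concrete sketch of that update call: the endpoint path below assumes the v1.1-preview.1 REST route, and `service-config.json` is a hypothetical local file holding the JSON body shown above; verify both against the API reference before use:

```bash
# PUT the full service configuration, including the new flag, back to the service.
curl -X PUT "https://<your-resource>.cognitiveservices.azure.com/personalizer/v1.1-preview.1/configurations/service" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d @service-config.json
```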
+
+> [!NOTE]
+> Enabling inference explainability will significantly increase the latency of calls to the Rank API. We recommend experimenting with this capability and measuring the latency in your scenario to see if it satisfies your application's latency requirements.
++
+## How to interpret feature scores?
+Enabling inference explainability will add a collection to the JSON response from the Rank API called *inferenceExplanation*. This contains a list of feature names and values that were submitted in the Rank request, along with feature scores learned by Personalizer's underlying model. The feature scores provide you with insight on how influential each feature was in the model choosing the action.
+
+```JSON
+
+{
+ "ranking": [
+ {
+ "id": "EntertainmentArticle",
+ "probability": 0.8
+ },
+ {
+ "id": "SportsArticle",
+ "probability": 0.15
+ },
+ {
+ "id": "NewsArticle",
+ "probability": 0.05
+ }
+ ],
+ "eventId": "75269AD0-BFEE-4598-8196-C57383D38E10",
+ "rewardActionId": "EntertainmentArticle",
+ "inferenceExplanation": [
+ {
+ "idΓÇ¥: "EntertainmentArticle",
+ "features": [
+ {
+ "name": "user.profileType",
+ "score": 3.0
+ },
+ {
+ "name": "user.latLong",
+ "score": -4.3
+ },
+ {
+ "name": "user.profileType^user.latLong",
+ "score" : 12.1
+ },
+ ]
+ ]
+}
+```
+
+In the example above, three action IDs are returned in the _ranking_ collection along with their respective probability scores. The action with the largest probability is the _best action_ as determined by the model trained on data sent to the Personalizer APIs, which in this case is `"id": "EntertainmentArticle"`. The action ID can be seen again in the _inferenceExplanation_ collection, along with the feature names and scores determined by the model for that action and the features and values sent to the Rank API.
+
+Recall that Personalizer will either return the _best action_ or an _exploratory action_ chosen by the exploration policy. The best action is the one that the model has determined has the highest probability of maximizing the average reward, whereas exploratory actions are chosen among the set of all possible actions provided in the Rank API call. Actions taken during exploration do not leverage the feature scores in determining which action to take; therefore, **feature scores for exploratory actions should not be used to gain an understanding of why the action was taken.** [You can learn more about exploration here](concepts-exploration.md).
+
+For the best actions returned by Personalizer, the feature scores can provide general insight where:
+* Larger positive scores provide more support for the model choosing this action.
+* Larger negative scores provide more support for the model not choosing this action.
+* Scores close to zero have a small effect on the decision to choose this action.
+
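To see this at a glance, you can sort an action's features by their scores. A minimal sketch using `jq`, assuming the Rank response shown above is saved locally as `rank-response.json` (a hypothetical file name):

```bash
# List the first ranked action's features from most to least supportive.
jq '.inferenceExplanation[0].features | sort_by(.score) | reverse' rank-response.json
```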
+## Important considerations for Inference Explainability
+* **Increased latency.** Enabling _Inference Explainability_ will significantly increase the latency of Rank API calls due to processing of the feature information. Run experiments and measure the latency in your scenario to see if it satisfies your application's latency requirements.
+
+* **Correlated Features.** Features that are highly correlated with each other can reduce the utility of feature scores. For example, suppose Feature A is highly correlated with Feature B. It may be that Feature A's score is a large positive value while Feature B's score is a large negative value. In this case, the two features may effectively cancel each other out and have little to no impact on the model. While Personalizer is very robust to highly correlated features, when using _Inference Explainability_, ensure that features sent to Personalizer are not highly correlated.
+* **Default exploration only.** Currently, Inference Explainability supports only the default exploration algorithm.
++
+## Next steps
+
+[Reinforcement learning](concepts-reinforcement-learning.md)
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
The following list of capabilities is supported for scenarios where at least one
| | Manage Teams transcription | ❌ |
| | Receive information of call being transcribed | ✔️ |
| | Support for compliance recording | ✔️ |
-| Media | Support for early media | ❌ |
+| Media | Support for early media | ✔️ |
| | Place a phone call honors location-based routing | ❌ |
| | Support for survivable branch appliance | ❌ |
| Accessibility | Receive closed captions | ❌ |
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
This table shows the maximum number of characters that can be sent per SMS segme
### Can I send/receive long messages (>2048 chars)?
-Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages.
+Azure Communication Services supports sending and receiving of long messages over SMS. However, some wireless carriers or devices may act differently when receiving long messages. We recommend keeping SMS messages to a length of 320 characters and reducing the use of accents to ensure maximum delivery.
+
+*Limitation of US short codes: There is a known limit of ~4 segments when sending or receiving a message with non-ASCII characters. Beyond 4 segments, the message may not be delivered with the right formatting.
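As a worked example under standard SMS encoding rules: a 400-character message in the GSM-7 alphabet concatenates into 3 segments (153 characters per segment), while the same 400 characters containing accents or other non-ASCII characters fall back to UCS-2 and need 6 segments (67 characters per segment).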
### Are there any limits on sending messages?
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
The following list presents the set of features that are currently available in
| | Place a group call with PSTN participants | ✔️ | ✔️ | ✔️ | ✔️ |
| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | ✔️ | ✔️ | ✔️ |
| | Dial-out from a group call as a PSTN participant | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Support for early media | ❌ | ✔️ | ✔️ | ✔️ |
+| | Support for early media | ✔️ | ✔️ | ✔️ | ✔️ |
| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | ✔️ | ✔️ | ✔️ |
| Device Management | Ask for permission to use audio and/or video | ✔️ | ✔️ | ✔️ | ✔️ |
| | Get camera list | ✔️ | ✔️ | ✔️ | ✔️ |
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
az communication delete --name "acsResourceName" --resource-group "resourceGroup
If you have any phone numbers assigned to your resource upon resource deletion, the phone numbers will be released from your resource automatically at the same time.

> [!Note]
-> Resource deletion is **permanent** and no data, including event gird filters, phone numbers, or other data tied to your resource, can be recovered if you delete the resource.
+> Resource deletion is **permanent** and no data, including event grid filters, phone numbers, or other data tied to your resource, can be recovered if you delete the resource.
## Next steps
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
TCP ingress is useful for exposing container apps that use a TCP-based protocol
> [!NOTE]
> TCP ingress is in public preview and is only supported in Container Apps environments that use a [custom VNET](vnet-custom.md).
->
-> To enable TCP ingress, use ARM or Bicep (API version `2022-06-01-preview` or above), or the Azure CLI.
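For orientation, a hedged CLI sketch of enabling TCP ingress; it assumes the `containerapp` CLI extension's `ingress enable` command with TCP transport support, and the app name, resource group, and ports are placeholders:

```bash
# Expose TCP port 6379 externally on an existing container app.
az containerapp ingress enable \
  --name my-tcp-app \
  --resource-group my-resource-group \
  --type external \
  --transport tcp \
  --target-port 6379 \
  --exposed-port 6379
```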
With TCP ingress enabled, your container app features the following characteristics:
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
In the GitHub workflow, you need to supply Azure credentials to authenticate to
First, get the resource ID of your resource group. Substitute the name of your group in the following [az group show][az-group-show] command:

```azurecli
-groupId=$(az group show \
+$groupId=$(az group show \
  --name <resource-group-name> \
  --query id --output tsv)
```
Update the Azure service principal credentials to allow push and pull access to
Get the resource ID of your container registry. Substitute the name of your registry in the following [az acr show][az-acr-show] command:

```azurecli
-registryId=$(az acr show \
+$registryId=$(az acr show \
  --name <registry-name> \
  --resource-group <resource-group-name> \
  --query id --output tsv)
```
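The registry ID is typically fed into a role assignment so the service principal can push and pull images. A sketch of that follow-on step, assuming the built-in `AcrPush` role and the `<ClientId>` from your service principal credentials:

```azurecli
# Grant the service principal push and pull access to the registry.
az role assignment create \
  --assignee <ClientId> \
  --scope $registryId \
  --role AcrPush
```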
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-overview.md
Containers are becoming the preferred way to package, deploy, and manage cloud applications. Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.
-Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend [Azure Kubernetes Service (AKS)](../aks/index.yml).
+Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. For scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades, we recommend [Azure Kubernetes Service (AKS)](../aks/index.yml). We recommend reading through the [considerations and limitations](#considerations) and the [FAQs](./container-instances-faq.yml) to understand the best practices when deploying container instances.
## Fast startup times
Azure Container Instances supports scheduling of [multi-container groups](contai
Azure Container Instances enables [deployment of container instances into an Azure virtual network](container-instances-vnet.md). When deployed into a subnet within your virtual network, container instances can communicate securely with other resources in the virtual network, including those that are on premises (through [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../expressroute/expressroute-introduction.md)).
+## Considerations
+
+There are default limits; exceeding them requires a quota increase, and not all quota increase requests are approved: [Service quotas and region availability - Azure Container Instances | Microsoft Learn](./container-instances-quotas.md)
+
+Different regions have different default limits, so you should consider the limits in your region: [Resource availability by region - Azure Container Instances | Microsoft Learn](./container-instances-region-availability.md)
+
+If your container group stops working, we suggest trying to restart your container, checking your application code, or your local network configuration before opening a [support request][azure-support].
+
+Container images cannot be larger than 15 GB; any image above this size may cause unexpected behavior: [How large can my container image be?](./container-instances-faq.yml)
+
+Some Windows Server base images are no longer compatible with Azure Container Instances:
+[What Windows base OS images are supported?](./container-instances-faq.yml)
+
+If a container group restarts, the container group's IP may change. We advise against using a hard-coded IP address in your scenario. If you need a static public IP address, use Application Gateway: [Static IP address for container group - Azure Container Instances | Microsoft Learn](./container-instances-application-gateway.md)
+
+Some ports are reserved for service functionality. We advise you not to use these ports, since doing so will lead to unexpected behavior: [Does the ACI service reserve ports for service functionality?](./container-instances-faq.yml)
+
+If you're having trouble deploying or running your container, first check the [Troubleshooting Guide](./container-instances-troubleshooting.md) for common mistakes and issues.
+
+Your container groups may restart due to platform maintenance events. These maintenance events are done to ensure the continuous improvement of the underlying infrastructure: [Container had an isolated restart without explicit user input](./container-instances-faq.yml)
+
+ACI does not allow [privileged container operations](./container-instances-faq.yml). We advise you not to depend on using the root directory for your scenario.
+ ## Next steps Try deploying a container to Azure with a single command using our quickstart guide:
Try deploying a container to Azure with a single command using our quickstart gu
<!-- LINKS - External --> [terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[azure-support]: https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest
container-registry Intro Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/intro-connected-registry.md
Title: What is a connected registry description: Overview and scenarios of the connected registry feature of Azure Container Registry-+ Previously updated : 10/11/2022 Last updated : 10/25/2022
In this article, you learn about the *connected registry* feature of [Azure Cont
## Available regions
-* Asia East
-* EU North
-* EU West
-* US East
+* Canada Central
+* East Asia
+* East US
+* North Europe
+* Norway East
+* Southeast Asia
+* West Central US
+* West Europe
## Scenarios
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Previously updated : 10/05/2021 Last updated : 10/24/2022 adobe-target: true
Applications written for Azure Table storage can migrate to the API for Table wi
## API for PostgreSQL
+Azure Cosmos DB for PostgreSQL is a managed service for running PostgreSQL at any scale, with the [Citus open source](https://github.com/citusdata/citus) superpower of distributed tables. It stores data either on a single node, or distributed in a multi-node configuration.
+
+Azure Cosmos DB for PostgreSQL is built on native PostgreSQL--rather than a PostgreSQL fork--and lets you choose any major database versions supported by the PostgreSQL community. It's ideal for starting on a single-node database with rich indexing, geospatial capabilities, and JSONB support. Later, if your performance needs grow, you can add nodes to the cluster with zero downtime.
+
+If you're looking for a managed open source relational database with high performance and geo-replication, Azure Cosmos DB for PostgreSQL is the recommended choice. To learn more, see the [Azure Cosmos DB for PostgreSQL introduction](postgresql/introduction.md).
+ ## Capacity planning when migrating data Trying to do capacity planning for a migration to Azure Cosmos DB for NoSQL or MongoDB from an existing database cluster? You can use information about your existing database cluster for capacity planning.
cosmos-db Index Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-overview.md
The goal of this article is to explain how Azure Cosmos DB indexes data and how
## From items to trees
-Every time an item is stored in a container, its content is projected as a JSON document, then converted into a tree representation. What that means is that every property of that item gets represented as a node in a tree. A pseudo root node is created as a parent to all the first-level properties of the item. The leaf nodes contain the actual scalar values carried by an item.
+Every time an item is stored in a container, its content is projected as a JSON document, then converted into a tree representation. This means that every property of that item gets represented as a node in a tree. A pseudo root node is created as a parent to all the first-level properties of the item. The leaf nodes contain the actual scalar values carried by an item.
As an example, consider this item:
Range indexes can be used on scalar values (string or number). The default index
SELECT * FROM c WHERE ST_INTERSECTS(c.property, { 'type':'Polygon', 'coordinates': [[ [31.8, -5], [32, -5], [31.8, -5] ]] })
```
-Spatial indexes can be used on correctly formatted [GeoJSON](./sql-query-geospatial-intro.md) objects. Points, LineStrings, Polygons, and MultiPolygons are currently supported. To use this index type, set by using the `"kind": "Range"` property when configuring the indexing policy. To learn how to configure spatial indexes, see [Spatial indexing policy examples](how-to-manage-indexing-policy.md#spatial-index)
+Spatial indexes can be used on correctly formatted [GeoJSON](./sql-query-geospatial-intro.md) objects. Points, LineStrings, Polygons, and MultiPolygons are currently supported. To learn how to configure spatial indexes, see [Spatial indexing policy examples](how-to-manage-indexing-policy.md#spatial-index)
### Composite indexes
Here is a table that summarizes the different ways indexes are used in Azure Cos
| Full index scan | Read distinct set of indexed values and load only matching items from the transactional data store | Contains, EndsWith, RegexMatch, LIKE | Increases linearly based on the cardinality of indexed properties | Increases based on number of items in query results |
| Full scan | Load all items from the transactional data store | Upper, Lower | N/A | Increases based on number of items in container |
-When writing queries, you should use filter predicate that use the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it will do a precise index scan instead of a full index scan.
+When writing queries, you should use filter predicates that use the index as efficiently as possible. For example, if either `StartsWith` or `Contains` would work for your use case, you should opt for `StartsWith` since it will do a precise index scan instead of a full index scan.
## Index usage details
The query predicate (filtering on items where any location has "France" as its c
:::image type="content" source="./media/index-overview/matching-path.png" alt-text="Matching a specific path within a tree" border="false":::
-Since this query has an equality filter, after traversing this tree, we can quickly identify the index pages that contain the query results. In this case, the query engine would read index pages that contain Item 1. An index seek is the most efficient way to use the index. With an index seek we only read the necessary index pages and load only the items in the query results. Therefore, the index lookup time and RU charge from index lookup are incredibly low, regardless of the total data volume.
+Since this query has an equality filter, after traversing this tree, we can quickly identify the index pages that contain the query results. In this case, the query engine would read index pages that contain Item 1. An index seek is the most efficient way to use the index. With an index seek, we only read the necessary index pages and load only the items in the query results. Therefore, the index lookup time and RU charge from index lookup are incredibly low, regardless of the total data volume.
### Precise index scan
FROM company
WHERE company.headquarters.employees = 200 AND CONTAINS(company.headquarters.country, "United")
```
-To execute this query, the query engine must do an index seek on `headquarters/employees` and full index scan on `headquarters/country`. The query engine has internal heuristics that it uses to evaluate the query filter expression as efficiently as possible. In this case, the query engine would avoid needing to read unnecessary index pages by doing the index seek first. If, for example, only 50 items matched the equality filter, the query engine would only need to evaluate `Contains` on the index pages that contained those 50 items. A full index scan of the entire container wouldn't be necessary.
+To execute this query, the query engine must do an index seek on `headquarters/employees` and full index scan on `headquarters/country`. The query engine has internal heuristics that it uses to evaluate the query filter expression as efficiently as possible. In this case, the query engine would avoid needing to read unnecessary index pages by doing the index seek first. If for example, only 50 items matched the equality filter, the query engine would only need to evaluate `Contains` on the index pages that contained those 50 items. A full index scan of the entire container wouldn't be necessary.
## Index utilization for scalar aggregate functions
Queries with aggregate functions must rely exclusively on the index in order to
In some cases, the index can return false positives. For example, when evaluating `Contains` on the index, the number of matches in the index may exceed the number of query results. The query engine will load all index matches, evaluate the filter on the loaded items, and return only the correct results.
-For the majority of queries, loading false positive index matches will not have any noticeable impact on index utilization.
+For most queries, loading false positive index matches will not have any noticeable impact on index utilization.
For example, consider the following query:
cosmos-db Index Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md
Any indexing policy has to include the root path `/*` as either an included or a
- If the indexing mode is set to **consistent**, the system properties `id` and `_ts` are automatically indexed.
-When including and excluding paths, you may encounter the following attributes:
-- `kind` can be either `range` or `hash`. Hash index support is limited to equality filters. Range index functionality provides all of the functionality of hash indexes as well as efficient sorting, range filters, system functions. We always recommend using a range index.
-- `precision` is a number defined at the index level for included paths. A value of `-1` indicates maximum precision. We recommend always setting this value to `-1`.
-- `dataType` can be either `String` or `Number`. This indicates the types of JSON properties that will be indexed.
-It's no longer necessary to set these properties. When not specified, these properties will have the following default values:
-
-| **Property Name** | **Default Value** |
-| -- | -- |
-| `kind` | `range` |
-| `precision` | `-1` |
-| `dataType` | `String` and `Number` |
See [this section](how-to-manage-indexing-policy.md#indexing-policy-examples) for indexing policy examples for including and excluding paths.

## Include/exclude precedence
A container's indexing policy can be updated at any time [by using the Azure por
> Index transformation is an operation that consumes [Request Units](request-units.md). Request Units consumed by an index transformation aren't currently billed if you are using [serverless](serverless.md) containers. These Request Units will get billed once serverless becomes generally available.

> [!NOTE]
-> You can track the progress of index transformation in the Azure portal or [by using one of the SDKs](how-to-manage-indexing-policy.md).
+> You can track the progress of index transformation in the [Azure portal](how-to-manage-indexing-policy.md#use-the-azure-portal) or by [using one of the SDKs](how-to-manage-indexing-policy.md#dotnet-sdk).
There's no impact to write availability during any index transformations. The index transformation uses your provisioned RUs but at a lower priority than your CRUD operations or queries.
cosmos-db Find Request Unit Charge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/find-request-unit-charge.md
The cost of all database operations is normalized by Azure Cosmos DB and is expr
This article presents the different ways you can find the [request unit](../request-units.md) (RU) consumption for any operation executed against a container in Azure Cosmos DB for MongoDB. If you're using a different API, see [API for NoSQL](../find-request-unit-charge.md), [API for Cassandra](../cassandr) articles to find the RU/s charge.
-The RU charge is exposed by a custom [database command](https://docs.mongodb.com/manual/reference/command/) named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use the Azure Cosmos DB for MongoDB, you have multiple options for retrieving the RU charge.
+The RU charge is exposed by a custom database command named `getLastRequestStatistics`. The command returns a document that contains the name of the last operation executed, its request charge, and its duration. If you use Azure Cosmos DB for MongoDB, you have multiple options for retrieving the RU charge.
## Use the Azure portal
The RU charge is exposed by a custom [database command](https://docs.mongodb.com
`db.runCommand({getLastRequestStatistics: 1})`
-## Use a MongoDB driver
+## Programmatically
+
+### [Mongo Shell](#tab/mongo-shell)
+
+When you use the Mongo shell, you can execute commands by using `runCommand()`.
+
+```javascript
+db.runCommand('getLastRequestStatistics')
+```
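In the shell, the statistics document can also be captured and used directly. A small sketch; the `CommandName` and `RequestCharge` field names on the returned document are assumptions for illustration:

```javascript
// Run an operation, then read its statistics from the returned document.
// Field names on "stats" are assumed here for illustration.
db.collection.findOne({ _id: "item1" });
var stats = db.runCommand({ getLastRequestStatistics: 1 });
print("Last operation: " + stats.CommandName + ", RU charge: " + stats.RequestCharge);
```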
### [.NET driver](#tab/dotnet-driver)
cosmos-db How To Configure Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-configure-capabilities.md
Previously updated : 10/12/2022 Last updated : 10/24/2022
Capabilities are features that can be added or removed to your API for MongoDB a
| `EnableMongoRoleBasedAccessControl` | Enable support for creating Users/Roles for native MongoDB role-based access control | No |
| `EnableMongoRetryableWrites` | Enables support for retryable writes on the account | Yes |
| `EnableMongo16MBDocumentSupport` | Enables support for inserting documents up to 16 MB in size | No |
+| `EnableUniqueCompoundNestedDocs` | Enables support for compound and unique indexes on nested fields, as long as the nested field is not an array. | No |
+ ## Enable a capability
cosmos-db Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/indexing.md
ms.devlang: javascript Previously updated : 4/5/2022 Last updated : 10/24/2022
In the API for MongoDB, compound indexes are **required** if your query needs th
A compound index or single field indexes for each field in the compound index will result in the same performance for filtering in queries.
+Compound indexes on nested fields are not supported by default due to limitations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the index.
+
+For example, a compound index containing `people.tom.age` will work in this case since there's no array on the path:
+```javascript
+{ "people": { "tom": { "age": "25" }, "mark": { "age": "30" } } }
+```
+but it won't work in this case since there's an array in the path:
+```javascript
+{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } }
+```
+
+This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md).
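Once the capability is enabled, the index is created with the usual `createIndex` syntax. A minimal sketch, assuming a collection named `coll`:

```javascript
// Compound index over nested, non-array fields (requires the
// EnableUniqueCompoundNestedDocs capability on the account).
db.coll.createIndex({ "people.tom.age": 1, "people.mark.age": 1 })
```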
+ > [!NOTE]
-> You can't create compound indexes on nested properties or arrays.
+> You can't create compound indexes on arrays.
The following command creates a compound index on the fields `name` and `age`:
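A minimal sketch of such a command, assuming a collection named `coll`:

```javascript
// Compound index on two top-level fields.
db.coll.createIndex({ "name": 1, "age": 1 })
```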
In the preceding example, omitting the ```"university":1``` clause returns an er
Unique indexes need to be created while the collection is empty.
+Unique indexes on nested fields are not supported by default due to limitations with arrays. If your nested field does not contain an array, the index will work as intended. If your nested field contains an array (anywhere on the path), that value will be ignored in the unique index and uniqueness will not be preserved for that value.
+
+For example, a unique index on `people.tom.age` will work in this case since there's no array on the path:
+```javascript
+{ "people": { "tom": { "age": "25" }, "mark": { "age": "30" } } }
+```
+but it won't work in this case since there's an array in the path:
+```javascript
+{ "people": { "tom": [ { "age": "25" } ], "mark": [ { "age": "30" } ] } }
+```
+
+This feature can be enabled for your database account by [enabling the 'EnableUniqueCompoundNestedDocs' capability](how-to-configure-capabilities.md).
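As with compound indexes, once the capability is enabled the unique index is created with the usual syntax. A minimal sketch, assuming a collection named `coll`:

```javascript
// Unique index on a nested, non-array field (requires the
// EnableUniqueCompoundNestedDocs capability on the account).
db.coll.createIndex({ "people.tom.age": 1 }, { unique: true })
```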
### TTL indexes

To enable document expiration in a particular collection, you need to create a [time-to-live (TTL) index](../time-to-live.md). A TTL index is an index on the `_ts` field with an `expireAfterSeconds` value.
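A minimal sketch of creating one, assuming a collection named `coll` and a one-hour expiration:

```javascript
// TTL index: documents expire 3,600 seconds after their last update (_ts).
db.coll.createIndex({ "_ts": 1 }, { expireAfterSeconds: 3600 })
```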
cosmos-db How To Manage Indexing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md
Here are some examples of indexing policies shown in [their JSON format](../inde
} ```
-This indexing policy is equivalent to the one below which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. These properties are no longer necessary to explicitly set and you should omit them from your indexing policy entirely (as shown in above example). If you try to set these properties, they'll be automatically removed from your indexing policy.
--
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/*",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number",
- "precision": -1
- },
- {
- "kind": "Range",
- "dataType": "String",
- "precision": -1
- }
- ]
- }
- ],
- "excludedPaths": [
- {
- "path": "/path/to/single/excluded/property/?"
- },
- {
- "path": "/path/to/root/of/multiple/excluded/properties/*"
- }
- ]
- }
-```
- ### Opt-in policy to selectively include some property paths ```json
This indexing policy is equivalent to the one below which manually sets ```kind`
} ```
-This indexing policy is equivalent to the one below which manually sets ```kind```, ```dataType```, and ```precision``` to their default values. These properties are no longer necessary to explicitly set and you should omit them from your indexing policy entirely (as shown in above example). If you try to set these properties, they'll be automatically removed from your indexing policy.
--
-```json
- {
- "indexingMode": "consistent",
- "includedPaths": [
- {
- "path": "/path/to/included/property/?",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number"
- },
- {
- "kind": "Range",
- "dataType": "String"
- }
- ]
- },
- {
- "path": "/path/to/root/of/multiple/included/properties/*",
- "indexes": [
- {
- "kind": "Range",
- "dataType": "Number"
- },
- {
- "kind": "Range",
- "dataType": "String"
- }
- ]
- }
- ],
- "excludedPaths": [
- {
- "path": "/*"
- }
- ]
- }
-```
- > [!NOTE] > It is generally recommended to use an **opt-out** indexing policy to let Azure Cosmos DB proactively index any new property that may be added to your data model.
This indexing policy is equivalent to the one below which manually sets ```kind`
## <a id="composite-index"></a>Composite indexing policy examples
-In addition to including or excluding paths for individual properties, you can also specify a composite index. If you would like to perform a query that has an `ORDER BY` clause for multiple properties, a [composite index](../index-policy.md#composite-indexes) on those properties is required. Additionally, composite indexes will have a performance benefit for queries that have a multiple filters or both a filter and an ORDER BY clause.
+In addition to including or excluding paths for individual properties, you can also specify a composite index. If you would like to perform a query that has an `ORDER BY` clause for multiple properties, a [composite index](../index-policy.md#composite-indexes) on those properties is required. Additionally, composite indexes will have a performance benefit for queries that have multiple filters or both a filter and an ORDER BY clause.
> [!NOTE] > Composite paths have an implicit `/?` since only the scalar value at that path is indexed. The `/*` wildcard is not supported in composite paths. You shouldn't specify `/?` or `/*` in a composite path.
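For reference, a hedged sketch of creating a container whose indexing policy carries such a composite index, using the @azure/cosmos JavaScript SDK; the database, container, and path names are assumptions:

```javascript
// Sketch: composite index supporting ORDER BY c.name ASC, c.age DESC.
// "client" is an existing CosmosClient; other names are placeholders.
async function createContainerWithCompositeIndex(client) {
  await client.database("<database>").containers.createIfNotExists({
    id: "<container>",
    partitionKey: { paths: ["/myPartitionKey"] },
    indexingPolicy: {
      compositeIndexes: [
        [
          { path: "/name", order: "ascending" },
          { path: "/age", order: "descending" },
        ],
      ],
    },
  });
}
```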
To create a container with a custom indexing policy see, [Create a container wit
## <a id="dotnet-sdk"></a> Use the .NET SDK
-# [.NET SDK V2](#tab/dotnetv2)
-
-The `DocumentCollection` object from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
-
-```csharp
-// Retrieve the container's details
-ResourceResponse<DocumentCollection> containerResponse = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"));
-// Set the indexing mode to consistent
-containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
-// Add an included path
-containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
-// Add an excluded path
-containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
-// Add a spatial index
-containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(new SpatialSpec() { Path = "/locations/*", SpatialTypes = new Collection<SpatialType>() { SpatialType.Point } } );
-// Add a composite index
-containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> {new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending }});
-// Update container with changes
-await client.ReplaceDocumentCollectionAsync(containerResponse.Resource);
-```
-
-To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`.
-
-```csharp
-// retrieve the container's details
-ResourceResponse<DocumentCollection> container = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"), new RequestOptions { PopulateQuotaInfo = true });
-// retrieve the index transformation progress from the result
-long indexTransformationProgress = container.IndexTransformationProgress;
-```
- # [.NET SDK V3](#tab/dotnetv3) The `ContainerProperties` object from the [.NET SDK v3](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/) (see [this Quickstart](quickstart-dotnet.md) regarding its usage) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
ContainerResponse containerResponse = await client.GetContainer("database", "con
long indexTransformationProgress = long.Parse(containerResponse.Headers["x-ms-documentdb-collection-index-transformation-progress"]); ```
-When defining a custom indexing policy while creating a new container, the SDK V3's fluent API lets you write this definition in a concise and efficient way:
+The SDK V3's fluent API lets you write this definition in a concise and efficient way when defining a custom indexing policy while creating a new container:
```csharp await client.GetDatabase("database").DefineContainer(name: "container", partitionKeyPath: "/myPartitionKey")
await client.GetDatabase("database").DefineContainer(name: "container", partitio
.Attach() .CreateIfNotExistsAsync(); ```+
+# [.NET SDK V2](#tab/dotnetv2)
+
+The `DocumentCollection` object from the [.NET SDK v2](https://www.nuget.org/packages/Microsoft.Azure.DocumentDB/) exposes an `IndexingPolicy` property that lets you change the `IndexingMode` and add or remove `IncludedPaths` and `ExcludedPaths`.
+
+```csharp
+// Retrieve the container's details
+ResourceResponse<DocumentCollection> containerResponse = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"));
+// Set the indexing mode to consistent
+containerResponse.Resource.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
+// Add an included path
+containerResponse.Resource.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });
+// Add an excluded path
+containerResponse.Resource.IndexingPolicy.ExcludedPaths.Add(new ExcludedPath { Path = "/name/*" });
+// Add a spatial index
+containerResponse.Resource.IndexingPolicy.SpatialIndexes.Add(new SpatialSpec() { Path = "/locations/*", SpatialTypes = new Collection<SpatialType>() { SpatialType.Point } } );
+// Add a composite index
+containerResponse.Resource.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath> {new CompositePath() { Path = "/name", Order = CompositePathSortOrder.Ascending }, new CompositePath() { Path = "/age", Order = CompositePathSortOrder.Descending }});
+// Update container with changes
+await client.ReplaceDocumentCollectionAsync(containerResponse.Resource);
+```
+
+To track the index transformation progress, pass a `RequestOptions` object that sets the `PopulateQuotaInfo` property to `true`.
+
+```csharp
+// retrieve the container's details
+ResourceResponse<DocumentCollection> container = await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("database", "container"), new RequestOptions { PopulateQuotaInfo = true });
+// retrieve the index transformation progress from the result
+long indexTransformationProgress = container.IndexTransformationProgress;
+```
## Use the Java SDK
cosmos-db Geospatial Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/geospatial-intro.md
Azure Cosmos DB interprets coordinates as represented per the WGS-84 reference s
**LineStrings in GeoJSON** ```json
+{
"type":"LineString",
- "coordinates":[ [
+ "coordinates":[
[ 31.8, -5 ], [ 31.8, -4.7 ]
- ] ]
+ ]
+}
``` ### Polygons
cosmos-db Spatial Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/spatial-functions.md
Azure Cosmos DB supports the following Open Geospatial Consortium (OGC) built-in
The following scalar functions perform an operation on a spatial object input value and return a numeric or Boolean value.
+* [ST_AREA](st-area.md)
* [ST_DISTANCE](st-distance.md)
* [ST_INTERSECTS](st-intersects.md)
* [ST_ISVALID](st-isvalid.md)
* [ST_ISVALIDDETAILED](st-isvaliddetailed.md)
* [ST_WITHIN](st-within.md)

## Next steps

- [System functions Azure Cosmos DB](system-functions.md)
cosmos-db St Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-area.md
+
+ Title: ST_AREA in Azure Cosmos DB query language
+description: Learn about SQL system function ST_AREA in Azure Cosmos DB.
++++ Last updated : 10/21/2022++++
+# ST_AREA (Azure Cosmos DB)
++
+ Returns the total area of a GeoJSON Polygon or MultiPolygon expression. To learn more, see the [Geospatial and GeoJSON location data](geospatial-intro.md) article.
+
+## Syntax
+
+```sql
+ST_AREA (<spatial_expr>)
+```
+
+## Arguments
+
+*spatial_expr*
+ Is any valid GeoJSON Polygon or MultiPolygon object expression.
+
+## Return types
+
+ Returns the total area of a set of points. This is expressed in square meters for the default reference system.
+
+## Examples
+
+ The following example shows how to return the area of a polygon using the `ST_AREA` built-in function.
+
+```sql
+SELECT ST_AREA({
+ "type":"Polygon",
+ "coordinates":[ [
+ [ 31.8, -5 ],
+ [ 32, -5 ],
+ [ 32, -4.7 ],
+ [ 31.8, -4.7 ],
+ [ 31.8, -5 ]
+ ] ]
+}) as Area
+```
+
+Here is the result set.
+
+```json
+[
+ {
+ "Area": 735970283.0522614
+ }
+]
+```
+
+## Remarks
+
+Using the `ST_AREA` function to calculate the area of zero- or one-dimensional figures like GeoJSON Points and LineStrings will result in an area of 0.
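For instance, a hedged sketch of observing this from the @azure/cosmos JavaScript SDK, where `container` is an existing container reference and the coordinates are illustrative:

```javascript
// Sketch: ST_AREA over a Point evaluates to 0.
async function pointArea(container) {
  const { resources } = await container.items
    .query('SELECT ST_AREA({"type": "Point", "coordinates": [31.9, -4.8]}) AS Area')
    .fetchAll();
  return resources[0].Area; // expected: 0
}
```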
+
+> [!NOTE]
+> The GeoJSON specification requires that points within a Polygon be specified in counter-clockwise order. A Polygon specified in clockwise order represents the inverse of the region within it.
+
+## Next steps
+
+- [Spatial functions Azure Cosmos DB](spatial-functions.md)
+- [System functions Azure Cosmos DB](system-functions.md)
+- [Introduction to Azure Cosmos DB](../../introduction.md)
cosmos-db St Distance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/st-distance.md
ST_DISTANCE (<spatial_expr>, <spatial_expr>)
## Examples
- The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function. .
+ The following example shows how to return all family documents that are within 30 km of the specified location using the `ST_DISTANCE` built-in function.
```sql SELECT f.id
cosmos-db Concepts Performance Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-performance-tuning.md
Previously updated : 08/30/2022 Last updated : 10/25/2022 # Performance tuning
cosmos-db Howto Modify Distributed Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-modify-distributed-tables.md
Previously updated : 08/02/2022 Last updated : 10/24/2022 # Distribute and modify tables
CREATE TABLE positions (object_id text primary key, position coordinates);
-- data loading thus goes over a single connection:
SELECT create_distributed_table('positions', 'object_id');
+SET client_encoding TO 'UTF8';
\COPY positions FROM 'positions.csv'
COMMIT;
BEGIN;
CREATE TABLE items (key text, value text);
-- parallel data loading:
SELECT create_distributed_table('items', 'key');
+SET client_encoding TO 'UTF8';
\COPY items FROM 'items.csv'
CREATE TYPE coordinates AS (x int, y int);
cosmos-db Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-regions.md
Previously updated : 06/21/2022 Last updated : 10/24/2022 # Regional availability for Azure Cosmos DB for PostgreSQL [!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
-clusters are available in the following Azure regions:
+Azure Cosmos DB for PostgreSQL is available in the following Azure regions:
* Americas: * Brazil South
clusters are available in the following Azure regions:
* France Central * Germany West Central * North Europe
+ * Sweden Central
* Switzerland North
+ * Switzerland West†
* UK South * West Europe
-Some of these regions may not be initially activated on all Azure
+† This Azure region is a [restricted one](../../availability-zones/cross-region-replication-azure.md#azure-cross-region-replication-pairings-for-all-geographies). To use it, you need to request access by opening a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+Some of these regions may not be activated on all Azure
subscriptions. If you want to use a region from the list above and don't see it in your subscription, or if you want to use a region not on this list, open a [support
cosmos-db Tutorial Design Database Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/tutorial-design-database-multi-tenant.md
ms.devlang: azurecli Previously updated : 06/29/2022 Last updated : 10/24/2022 #Customer intent: As a developer, I want to design an Azure Cosmos DB for PostgreSQL database so that my multi-tenant application runs efficiently for all tenants.
done
Back inside psql, bulk load the data. Be sure to run psql in the same directory where you downloaded the data files. ```sql
-SET CLIENT_ENCODING TO 'utf8';
+SET client_encoding TO 'UTF8';
\copy companies from 'companies.csv' with csv \copy campaigns from 'campaigns.csv' with csv
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
tags: billing
Previously updated : 07/28/2022 Last updated : 10/25/2022
The Azure portal supports the following types of billing accounts:
- **Enterprise Agreement**: A billing account for an Enterprise Agreement (EA) is created when your organization signs an [Enterprise Agreement](https://azure.microsoft.com/pricing/enterprise-agreement/) to use Azure. An EA enrollment can contain an unlimited number of EA accounts. However, an EA account has a subscription limit of 5000. *Regardless of a subscription's state, it's included in the limit. So, deleted and disabled subscriptions are included in the limit*. If you need more subscriptions than the limit, create more EA accounts. Generally speaking, a subscription is a billing container. We recommend that you avoid creating multiple subscriptions to implement access boundaries. To separate resources with an access boundary, consider using a resource group. For more information about resource groups, see [Manage Azure resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md).

-- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 20 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
+- **Microsoft Customer Agreement**: A billing account for a Microsoft Customer Agreement is created when your organization works with a Microsoft representative to sign a Microsoft Customer Agreement. Some customers in select regions, who sign up through the Azure website for an [account with pay-as-you-go rates](https://azure.microsoft.com/offers/ms-azr-0003p/) or an [Azure Free Account](https://azure.microsoft.com/offers/ms-azr-0044p/) may have a billing account for a Microsoft Customer Agreement as well. You can have a maximum of 5 subscriptions in a Microsoft Customer Agreement for an individual. A Microsoft Customer Agreement for an enterprise can have up to 5000 subscriptions under it.
- **Microsoft Partner Agreement**: A billing account for a Microsoft Partner Agreement is created for Cloud Solution Provider (CSP) partners to manage their customers in the new commerce experience. Partners need to have at least one customer with an [Azure plan](/partner-center/purchase-azure-plan) to manage their billing account in the Azure portal. For more information, see [Get started with your billing account for Microsoft Partner Agreement](../understand/mpa-overview.md).
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
A tenant is a digital representation of your organization and is primarily assoc
Each tenant is distinct and separate from other tenants, yet you can allow guest users from other tenants to access your tenant to track your costs and manage billing.
+## What's an associated tenant?
+An associated tenant is a tenant that is linked to your primary billing tenant's billing account. You can move Microsoft 365 subscriptions to these tenants. You can also assign billing account roles to users in associated billing tenants. For more information, see [Manage billing across multiple tenants using associated billing tenants](../manage/manage-billing-across-tenants.md).
## How tenants and subscriptions relate to billing account

You use your Microsoft Customer Agreement (billing account) to track costs and manage billing. Each billing account has at least one billing profile. The billing profile allows you to manage your invoice and payment method. Each billing profile includes one invoice section, by default. You can create more invoice sections to group, track, and manage costs at a more granular level if needed.

-- Your billing account is associated with a single tenant. It means only users who are part of the tenant can access your billing account.
-- When you create a new Azure subscription for your billing account, it's always created in your billing account tenant. However, you can move subscriptions to other tenants. You can also link existing subscriptions from other tenants to your billing account. It allows you to centrally manage billing through one tenant while keeping resources and subscriptions in other tenants based on your needs.
+- Your billing account is associated with a single, primary tenant. Users who are part of the primary tenant or who are part of associated tenants can access your billing account if they have the appropriate billing role assigned.
+- When you create a new Azure subscription for your billing account, it's created in your tenant or one of the other tenants you have access to. You can choose the tenant while creating the subscription.
+- You can move subscriptions to other tenants. You can also link existing subscriptions from other tenants to your billing account. This flexibility allows you to centrally manage billing through one tenant while keeping resources and subscriptions in other tenants based on your needs.
-The following diagram shows how billing account and subscriptions are linked to tenants. The Contoso MCA billing account is associated with Tenant 1 while Contoso PAYG account is associated with Tenant 2. Let's assume Contoso wants to pay for their PAYG subscription through their MCA billing account, they can use a billing ownership transfer to link the subscription to their MCA billing account. The subscription and its resources will still be associated with Tenant 2, but they're paid for using the MCA billing account.
+The following diagram shows how billing account and subscriptions are linked to tenants. Let's assume Contoso would like to streamline their billing management through an MCA. The Contoso MCA billing account is in Tenant 1 while Contoso PAYG account is in Tenant 2. They can use a billing ownership transfer to link the subscription to their MCA billing account. The subscription and its resources will still be associated with Tenant 2, but they're paid for using the MCA billing account.
:::image type="content" source="./media/manage-tenants/billing-hierarchy-example.png" alt-text="Diagram showing an example billing hierarchy." border="false" lightbox="./media/manage-tenants/billing-hierarchy-example.png"::: ## Manage subscriptions under multiple tenants in a single Microsoft Customer Agreement
-Billing owners can create subscriptions when they have the [appropriate permissions](../manage/understand-mca-roles.md#subscription-billing-roles-and-tasks) to the billing account. By default, any new subscriptions created under the Microsoft Customer Agreement are in the Microsoft Customer Agreement tenant.
+Billing owners can create subscriptions when they have the [appropriate permissions](../manage/understand-mca-roles.md#subscription-billing-roles-and-tasks) to the billing account. By default, any new subscriptions created under the Microsoft Customer Agreement are in the current user's tenant. You can select a different tenant from the list of tenants you have access to when creating a subscription.
- You can link subscriptions from other tenants to your Microsoft Customer Agreement billing account. Taking billing ownership of a subscription only changes the invoicing arrangement. It doesn't affect the service tenant or Azure RBAC roles.
-- To change the subscription owner in the service tenant, you must transfer the [subscription to a different Azure Active Directory directory](../../role-based-access-control/transfer-subscription.md).
-An MCA billing account is managed by a single tenant/directory. The billing account only controls billing for the subscriptions in its tenant. However, you can use a billing ownership transfer to link a subscription to a billing account in a different tenant.
### Billing ownership transfer
Billing ownership transfer doesn't affect:
- Resources - Azure RBAC permissions
+## Assign roles to users of your Microsoft Customer Agreement
+
+There are three ways users with billing owner access can assign roles to users in an MCA:
+
+- Assign billing roles to users in the primary tenant
+- Assign billing roles to external users (outside of your primary tenant) if they are part of an associated tenant
+- If tenants are not associated, [create guest users in the primary tenant and assign roles](#add-guest-users-to-your-microsoft-customer-agreement-tenant).
## Add guest users to your Microsoft Customer Agreement tenant
data-factory Author Visually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-visually.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Visual authoring in Azure Data Factory
data-factory Compute Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md
Previously updated : 09/09/2021 Last updated : 10/25/2022
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-expression-builder.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Build expressions in mapping data flow
data-factory Concepts Data Flow Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-monitoring.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Monitor Data Flows
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Mapping data flows provide an entirely visual experience with no coding required
Data flows are created from the factory resources pane like pipelines and datasets. To create a data flow, select the plus sign next to **Factory Resources**, and then select **Data Flow**. -
+![Screenshot showing a new data flow.](media/concepts-data-flow-overview/new-data-flow.png)
This action takes you to the data flow canvas, where you can create your transformation logic. Select **Add source** to start configuring your source transformation. For more information, see [Source transformation](data-flow-source.md). ## Authoring data flows
Each transformation contains at least four configuration tabs.
The first tab in each transformation's configuration pane contains the settings specific to that transformation. For more information, see that transformation's documentation page. -
+![Screenshot showing the source settings tab.](media/concepts-data-flow-overview/source-1.png)
#### Optimize The **Optimize** tab contains settings to configure partitioning schemes. To learn more about how to optimize your data flows, see the [mapping data flow performance guide](concepts-data-flow-performance.md). -
+![Screenshot shows the Optimize tab, which includes Partition option, Partition type, and Number of partitions.](media/concepts-data-flow-overview/optimize.png)
#### Inspect The **Inspect** tab provides a view into the metadata of the data stream that you're transforming. You can see column counts, the columns changed, the columns added, data types, the column order, and column references. **Inspect** is a read-only view of your metadata. You don't need to have debug mode enabled to see metadata in the **Inspect** pane.
Mapping data flows are available in the following regions in ADF:
* Learn how to create a [source transformation](data-flow-source.md). * Learn how to build your data flows in [debug mode](concepts-data-flow-debug-mode.md).+
data-factory Concepts Data Flow Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-performance.md
Previously updated : 09/29/2021 Last updated : 10/25/2022 # Mapping data flows performance and tuning guide
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-linked-services.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Linked services in Azure Data Factory and Azure Synapse Analytics
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-github.md
Previously updated : 09/09/2021 Last updated : 10/24/2022
Use the following steps to create a linked service to GitHub in the Azure portal
1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
- # [Azure Data Factory](#tab/data-factory)
+ # [Azure Data Factory](#tab/data-factory)
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+ :::image type="content" source="media/connector-github/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
- # [Azure Synapse](#tab/synapse-analytics)
-
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+ # [Azure Synapse](#tab/synapse-analytics)
+ :::image type="content" source="media/connector-github/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
2. Search for GitHub and select the GitHub connector. :::image type="content" source="media/connector-github/github-connector.png" alt-text="Screenshot of the GitHub connector.":::
The following properties are supported for the GitHub linked service.
## Next steps
-Create a [source dataset](data-flow-source.md) in mapping data flow.
+Create a [source dataset](data-flow-source.md) in mapping data flow.
data-factory Connector Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from or to MongoDB using Azure Data Factory or Synapse Analytics
data-factory Connector Odbc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-odbc.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from and to ODBC data stores using Azure Data Factory or Synapse Analytics
data-factory Connector Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-postgresql.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from PostgreSQL using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Business Warehouse Open Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse-open-hub.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from SAP Business Warehouse via Open Hub using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Business Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-business-warehouse.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from SAP Business Warehouse using Azure Data Factory or Synapse Analytics
data-factory Connector Sap Ecc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sap-ecc.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy data from SAP ECC using Azure Data Factory or Synapse Analytics
data-factory Connector Troubleshoot Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-azure-files.md
Previously updated : 10/01/2021 Last updated : 10/23/2022
data-factory Continuous Integration Delivery Automate Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-automate-azure-pipelines.md
Previously updated : 09/24/2021 Last updated : 10/25/2022
The following is a guide for setting up an Azure Pipelines release that automate
## Requirements -- An Azure subscription linked to Visual Studio Team Foundation Server or Azure Repos that uses the [Azure Resource Manager service endpoint](/azure/devops/pipelines/library/service-endpoints#sep-azure-resource-manager).
+- An Azure subscription linked to Azure DevOps Server (formerly Visual Studio Team Foundation Server) or Azure Repos that uses the [Azure Resource Manager service endpoint](/azure/devops/pipelines/library/service-endpoints#sep-azure-resource-manager).
- A data factory configured with Azure Repos Git integration.
data-factory Continuous Integration Delivery Sample Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-sample-script.md
Previously updated : 09/24/2021 Last updated : 10/25/2022
data-factory Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery.md
If you're using Git integration with your data factory and have a CI/CD pipeline
- **Resource naming**. Due to ARM template constraints, issues in deployment may arise if your resources contain spaces in the name. The Azure Data Factory team recommends using '_' or '-' characters instead of spaces for resources. For example, 'Pipeline_1' would be a preferable name over 'Pipeline 1'.

-- **Exposure control and feature flags**. When working on a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
+- **Exposure control and feature flags**. When working in a team, there are instances where you may merge changes, but don't want them to be run in elevated environments such as PROD and QA. To handle this scenario, the ADF team recommends [the DevOps concept of using feature flags](/devops/operate/progressive-experimentation-feature-flags). In ADF, you can combine [global parameters](author-global-parameters.md) and the [if condition activity](control-flow-if-condition-activity.md) to hide sets of logic based upon these environment flags.
To learn how to set up a feature flag, see the below video tutorial:
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-pipeline-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Execute Pipeline activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Expression Language Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-expression-language-functions.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Expressions and functions in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Fail Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-fail-activity.md
Previously updated : 09/22/2021 Last updated : 10/25/2022 # Execute a Fail activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Filter Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-filter-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Filter activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Control Flow Set Variable Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md
Previously updated : 09/09/2021 Last updated : 10/24/2022 + # Set Variable Activity in Azure Data Factory and Azure Synapse Analytics [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Use the Set Variable activity to set the value of an existing variable of type S
## Create a Set Variable activity with UI

To use a Set Variable activity in a pipeline, complete the following steps:
1. Select the background of the pipeline canvas and use the Variables tab to add a variable:
- :::image type="content" source="media/control-flow-activities-common/add-pipeline-array-variable.png" alt-text="Shows an empty pipeline canvas with the Variables tab selected having an array type variable named TestVariable.":::
-
-2. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
-1. Select the Set Variable activity on the canvas if it is not already selected, and its **Variables** tab, to edit its details.
+1. Search for _Set Variable_ in the pipeline Activities pane, and drag a Set Variable activity to the pipeline canvas.
+1. Select the Set Variable activity on the canvas if it is not already selected, and its **Variables** tab, to edit its details.
1. Select the variable for the Name property.
-1. Enter an expression to set the value. This can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
-
- :::image type="content" source="media/control-flow-set-variable-activity/set-variable-activity.png" alt-text="Shows the UI for a Set Variable activity.":::
-
+1. Enter an expression to set the value for the variables. This expression can be a literal string expression, or any combination of dynamic [expressions, functions](control-flow-expression-language-functions.md), [system variables](control-flow-system-variables.md), or [outputs from other activities](how-to-expression-language-functions.md#examples-of-using-parameters-in-expressions).
## Type properties
variableName | Name of the variable that is set by this activity | yes
## Incrementing a variable
-A common scenario involving variables is using a variable as an iterator within an until or foreach activity. In a set variable activity you cannot reference the variable being set in the `value` field. To workaround this limitation, set a temporary variable and then create a second set variable activity. The second set variable activity sets the value of the iterator to the temporary variable.
-
+A common scenario involving variables is using a variable as an iterator within an until or foreach activity. In a set variable activity, you cannot reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second set variable activity. The second set variable activity sets the value of the iterator to the temporary variable.
Below is an example of this pattern:- ``` json {
Variables are currently scoped at the pipeline level. This means that they are n
Learn about another related control flow activity: - [Append Variable Activity](control-flow-append-variable-activity.md)+
data-factory Control Flow Switch Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-switch-activity.md
Previously updated : 06/23/2021 Last updated : 10/25/2022
data-factory Control Flow System Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-system-variables.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # System variables supported by Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Until Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-until-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022
data-factory Control Flow Validation Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-validation-activity.md
Title: Validation activity
-description: The Validation activity in Azure Data Factory and Synapse Analytics delays execution of the pipeline until it a dataset is validated with user-defined criteria.
+description: The Validation activity in Azure Data Factory and Synapse Analytics delays execution of the pipeline until a dataset is validated with user-defined criteria.
Previously updated : 09/09/2021 Last updated : 10/24/2022 # Validation activity in Azure Data Factory and Synapse Analytics pipelines
To use a Validation activity in a pipeline, complete the following steps:
1. Search for _Validation_ in the pipeline Activities pane, and drag a Validation activity to the pipeline canvas. 1. Select the new Validation activity on the canvas if it is not already selected, and its **Settings** tab, to edit its details.-
- :::image type="content" source="media/control-flow-validation-activity/validation-activity.png" alt-text="Shows the UI for a Validation activity.":::
1. Select a dataset, or define a new one by selecting the New button. For file based datasets like the delimited text example above, you can select either a specific file, or a folder. When a folder is selected, the Validation activity allows you to ignore validation of the existence of child items in the folder, or require whether child items exist or not.
1. The output of the Validation activity can be used as an input to any other activities, and referenced within those activities for any of their properties using dynamic expressions.

## Syntax

```json
{
- "name": "Validation_Activity",
- "type": "Validation",
- "typeProperties": {
- "dataset": {
- "referenceName": "Storage_File",
- "type": "DatasetReference"
- },
- "timeout": "7.00:00:00",
- "sleep": 10,
- "minimumSize": 20
- }
+"name": "Validation_Activity",
+"type": "Validation",
+"typeProperties": {
+"dataset": {
+"referenceName": "Storage_File",
+"type": "DatasetReference"
+},
+"timeout": "0.12:00:00",
+"sleep": 10,
+"minimumSize": 20
+}
}, {
- "name": "Validation_Activity_Folder",
- "type": "Validation",
- "typeProperties": {
- "dataset": {
- "referenceName": "Storage_Folder",
- "type": "DatasetReference"
- },
- "timeout": "7.00:00:00",
- "sleep": 10,
- "childItems": true
- }
+"name": "Validation_Activity_Folder",
+"type": "Validation",
+"typeProperties": {
+"dataset": {
+"referenceName": "Storage_Folder",
+"type": "DatasetReference"
+},
+"timeout": "0.12:00:00",
+"sleep": 10,
+"childItems": true
+}
}
```

## Type properties
-Property | Description | Allowed values | Required
| -- | -- | --
-name | Name of the 'Validation' activity | String | Yes |
-type | Must be set to **Validation**. | String | Yes |
-dataset | Activity will block execution until it has validated this dataset reference exists and that it meets the specified criteria, or timeout has been reached. Dataset provided should support "MinimumSize" or "ChildItems" property. | Dataset reference | Yes |
-timeout | Specifies the timeout for the activity to run. If no value is specified, default value is 7 days ("7.00:00:00"). Format is d.hh:mm:ss | String | No |
-sleep | A delay in seconds between validation attempts. If no value is specified, default value is 10 seconds. | Integer | No |
-childItems | Checks if the folder has child items. Can be set to-true : Validate that the folder exists and that it has items. Blocks until at least one item is present in the folder or timeout value is reached.-false: Validate that the folder exists and that it is empty. Blocks until folder is empty or until timeout value is reached. If no value is specified, activity will block until the folder exists or until timeout is reached. | Boolean | No |
-minimumSize | Minimum size of a file in bytes. If no value is specified, default value is 0 bytes | Integer | No |
+|Property | Description | Allowed values | Required|
+|-- | -- | -- | --|
+|name | Name of the 'Validation' activity | String | Yes |
+|type | Must be set to **Validation**. | String | Yes |
+|dataset | Activity will block execution until it has validated this dataset reference exists and that it meets the specified criteria, or timeout has been reached. Dataset provided should support "MinimumSize" or "ChildItems" property. | Dataset reference | Yes |
+|timeout | Specifies the timeout for the activity to run. If no value is specified, default value is 12 hours ("0.12:00:00"). Format is d.hh:mm:ss | String | No |
+|sleep | A delay in seconds between validation attempts. If no value is specified, default value is 10 seconds. | Integer | No |
+|childItems | Checks if the folder has child items. Can be set to true: validate that the folder exists and that it has items (blocks until at least one item is present in the folder or the timeout value is reached), or false: validate that the folder exists and that it is empty (blocks until the folder is empty or until the timeout value is reached). If no value is specified, the activity blocks until the folder exists or until timeout is reached. | Boolean | No |
+|minimumSize | Minimum size of a file in bytes. If no value is specified, default value is 0 bytes | Integer | No |
## Next steps
See other supported control flow activities:
- [Lookup Activity](control-flow-lookup-activity.md) - [Web Activity](control-flow-web-activity.md) - [Until Activity](control-flow-until-activity.md)+
data-factory Control Flow Wait Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-wait-activity.md
Previously updated : 09/09/2021 Last updated : 10/24/2022 # Execute Wait activity in Azure Data Factory and Synapse Analytics
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Web activity in Azure Data Factory and Azure Synapse Analytics
data-factory Control Flow Webhook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-webhook-activity.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Webhook activity in Azure Data Factory
See the following supported control flow activities:
- [Get Metadata Activity](control-flow-get-metadata-activity.md) - [Lookup Activity](control-flow-lookup-activity.md) - [Web Activity](control-flow-web-activity.md)-- [Until Activity](control-flow-until-activity.md)
+- [Until Activity](control-flow-until-activity.md)
data-factory Copy Activity Fault Tolerance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-fault-tolerance.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Fault tolerance of copy activity in Azure Data Factory and Synapse Analytics pipelines
data-factory Copy Activity Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-monitoring.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Monitor copy activity
data-factory Copy Activity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-overview.md
To copy data from a source to a sink, the service that runs the Copy activity pe
:::image type="content" source="media/copy-activity-overview/copy-activity-overview.png" alt-text="Copy activity overview":::
+> [!NOTE]
+> If a self-hosted integration runtime is used for either the source or sink data store within a copy activity, both the source and sink must be accessible from the server hosting the integration runtime for the copy activity to succeed.
+ ## Supported data stores and formats [!INCLUDE [data-factory-v2-supported-data-stores](includes/data-factory-v2-supported-data-stores.md)]
data-factory Copy Activity Performance Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance-troubleshooting.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Troubleshoot copy activity performance
data-factory Copy Activity Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-performance.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Copy activity performance and scalability guide
data-factory Copy Activity Schema And Type Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-activity-schema-and-type-mapping.md
Previously updated : 09/09/2021 Last updated : 10/25/2022 # Schema and data type mapping in copy activity
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool.md
Previously updated : 09/09/2021 Last updated : 10/25/2022
data-factory Create Azure Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-integration-runtime.md
description: Learn how to create Azure integration runtime in Azure Data Factory
Previously updated : 09/09/2021 Last updated : 10/24/2022 + # How to create and configure Azure Integration Runtime [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
Integration Runtime can be created using the **Set-AzDataFactoryV2IntegrationRun
```powershell
Set-AzDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" -Name "MySampleAzureIR" -ResourceGroupName "ADFV2SampleRG" -Type Managed -Location "West Europe"
-```
+```
For Azure IR, the type must be set to **Managed**. You do not need to specify compute details because it is fully managed elastically in cloud. Specify compute details like node size and node count when you would like to create Azure-SSIS IR. For more information, see [Create and Configure Azure-SSIS IR](create-azure-ssis-integration-runtime.md). You can configure an existing Azure IR to change its location using the Set-AzDataFactoryV2IntegrationRuntime PowerShell cmdlet. For more information about the location of an Azure IR, see [Introduction to integration runtime](concepts-integration-runtime.md).
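As a hedged illustration of the location change described above, the same cmdlet can be rerun with a different `-Location` value; the resource names below simply reuse the sample values from the earlier command:

```powershell
# A minimal sketch: re-point the existing Azure IR at a different region.
# Names reuse the sample values above; substitute your own resources.
Set-AzDataFactoryV2IntegrationRuntime -DataFactoryName "SampleV2DataFactory1" `
    -Name "MySampleAzureIR" `
    -ResourceGroupName "ADFV2SampleRG" `
    -Type Managed `
    -Location "North Europe"
```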
Use the following steps to create an Azure IR using UI.
1. On the home page for the service, select the [Manage tab](./author-management-hub.md) from the leftmost pane. # [Azure Data Factory](#tab/data-factory)
-
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="The home page Manage button":::
+
+ :::image type="content" source="media/create-azure-integration-runtime/get-started-page-manage-button.png" alt-text="Screenshot showing the home page Manage button.":::
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button-synapse.png" alt-text="The home page Manage button":::
+ :::image type="content" source="media/doc-common-process/get-started-page-manage-button-synapse.png" alt-text="Screenshot showing the home page Manage button.":::
-2. Select **Integration runtimes** on the left pane, and then select **+New**.
+
+
+1. Select **Integration runtimes** on the left pane, and then select **+New**.
# [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/create-azure-integration-runtime/manage-new-integration-runtime.png" alt-text="Screenshot that highlights integration runtimes in the left pane and the +New button.":::
- :::image type="content" source="media/doc-common-process/manage-new-integration-runtime.png" alt-text="Screenshot that highlights Integration runtimes in the left pane and the +New button.":::
-
# [Azure Synapse](#tab/synapse-analytics)
- :::image type="content" source="media/doc-common-process/manage-new-integration-runtime-synapse.png" alt-text="Screenshot that highlights Integration runtimes in the left pane and the +New button.":::
-
-3. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
+ :::image type="content" source="media/doc-common-process/manage-new-integration-runtime-synapse.png" alt-text="Screenshot that highlights integration runtimes in the left pane and the +New button.":::
+
+
+1. On the **Integration runtime setup** page, select **Azure, Self-Hosted**, and then select **Continue**.
+ :::image type="content" source="media/create-azure-integration-runtime/integration-runtime-setup.png" alt-text="Screenshot showing the Azure self-hosted integration runtime option.":::
1. On the following page, select **Azure** to create an Azure IR, and then select **Continue**.
- :::image type="content" source="media/create-azure-integration-runtime/new-azure-integration-runtime.png" alt-text="Create an integration runtime":::
-
+ :::image type="content" source="media/create-azure-integration-runtime/new-azure-integration-runtime.png" alt-text="Screenshot that shows create an Azure integration runtime.":::
1. Enter a name for your Azure IR, and select **Create**.
- :::image type="content" source="media/create-azure-integration-runtime/create-azure-integration-runtime.png" alt-text="Create an Azure IR":::
-
+ :::image type="content" source="media/create-azure-integration-runtime/create-azure-integration-runtime.png" alt-text="Screenshot that shows the final step to create the Azure integration runtime.":::
1. You'll see a pop-up notification when the creation completes. On the **Integration runtimes** page, make sure that you see the newly created IR in the list.-
+ :::image type="content" source="media/create-azure-integration-runtime/integration-runtime-in-the-list.png" alt-text="Screenshot showing the Azure integration runtime in the list.":::
+
> [!NOTE] > If you want to enable managed virtual network on Azure IR, please see [How to enable managed virtual network](managed-virtual-network-private-endpoint.md)
See the following articles on how to create other types of integration runtimes:
- [Create self-hosted integration runtime](create-self-hosted-integration-runtime.md) - [Create Azure-SSIS integration runtime](create-azure-ssis-integration-runtime.md)+
data-factory Create Shared Self Hosted Integration Runtime Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-shared-self-hosted-integration-runtime-powershell.md
Previously updated : 01/26/2022 Last updated : 09/22/2022 # Create a shared self-hosted integration runtime in Azure Data Factory
data-factory Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/credentials.md
Previously updated : 07/19/2021 Last updated : 10/25/2022
data-factory How To Create Event Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-event-trigger.md
This section shows you how to create a storage event trigger within the Azure Da
5. Select your storage account from the Azure subscription dropdown or manually using its Storage account resource ID. Choose which container you wish the events to occur on. Container selection is required, but be mindful that selecting all containers can lead to a large number of events. > [!NOTE]
- > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account. If you hit the limit, please contact support for recommendations and increasing the limit upon evaluation by Event Grid team.
+ > The Storage Event Trigger currently supports only Azure Data Lake Storage Gen2 and General-purpose version 2 storage accounts. If you're working with SFTP storage events, you also need to specify the SFTP Data API under the filtering section. Due to an Azure Event Grid limitation, Azure Data Factory only supports a maximum of 500 storage event triggers per storage account. If you hit the limit, contact support; the limit can be raised after evaluation by the Event Grid team.
> [!NOTE] > To create a new or modify an existing Storage Event Trigger, the Azure account used to log into the service and publish the storage event trigger must have appropriate role based access control (Azure RBAC) permission on the storage account. No additional permission is required: Service Principal for the Azure Data Factory and Azure Synapse does _not_ need special permission to either the Storage account or Event Grid. For more information about access control, see [Role based access control](#role-based-access-control) section.
data-factory Load Azure Data Lake Storage Gen2 From Gen1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/load-azure-data-lake-storage-gen2-from-gen1.md
Previously updated : 08/06/2021 Last updated : 10/25/2022 # Copy data from Azure Data Lake Storage Gen1 to Gen2 with Azure Data Factory
ADF offers a serverless architecture that allows parallelism at different levels
Customers have successfully migrated petabytes of data consisting of hundreds of millions of files from Data Lake Storage Gen1 to Gen2, with a sustained throughput of 2 GBps and higher.
-you can achieve great data movement speeds through different levels of parallelism:
+You can achieve greater data movement speeds by applying different levels of parallelism:
- A single copy activity can take advantage of scalable compute resources: when using Azure Integration Runtime, you can specify up to 256 [data integration units (DIUs)](copy-activity-performance-features.md#data-integration-units) for each copy activity in a serverless manner (see the sketch after this list); when using self-hosted Integration Runtime, you can manually scale up the machine or scale out to multiple machines (up to 4 nodes), and a single copy activity will partition its file set across all nodes. - A single copy activity reads from and writes to the data store using multiple threads.
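As a hedged sketch of the first bullet, the DIU and parallel-copy settings live on the copy activity's `typeProperties` in the pipeline JSON. The activity name and the exact source/sink types here are illustrative assumptions for a Gen1-to-Gen2 copy:

```json
{
    "name": "CopyFromGen1ToGen2",
    "type": "Copy",
    "typeProperties": {
        "source": { "type": "AzureDataLakeStoreSource", "recursive": true },
        "sink": { "type": "AzureBlobFSSink" },
        "dataIntegrationUnits": 256,
        "parallelCopies": 32
    }
}
```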
You can also enable [fault tolerance](copy-activity-fault-tolerance.md) in copy
### Permissions
-In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md) supports service principal and managed identity for Azure resource authentications. The [Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md) supports account key, service principal, and managed identity for Azure resource authentications. To make Data Factory able to navigate and copy all the files or access control lists (ACLs) you need, grant high enough permissions for the account you provide to access, read, or write all files and set ACLs if you choose to. Grant it a super-user or owner role during the migration period.
+In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md) supports service principal and managed identity for Azure resource authentications. The [Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md) supports account key, service principal, and managed identity for Azure resource authentications. To let Data Factory navigate and copy all the files and access control lists (ACLs) you need, grant the account high enough permissions to access, read, or write all files, and to set ACLs if you choose to. Grant the account a super-user or owner role during the migration period, and remove the elevated permissions once the migration is complete.
## Next steps
In Data Factory, the [Data Lake Storage Gen1 connector](connector-azure-data-lak
> [!div class="nextstepaction"] > [Copy activity overview](copy-activity-overview.md) > [Azure Data Lake Storage Gen1 connector](connector-azure-data-lake-store.md)
-> [Azure Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md)
+> [Azure Data Lake Storage Gen2 connector](connector-azure-data-lake-storage.md)
data-factory Monitor Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-metrics-alerts.md
Previously updated : 09/02/2021 Last updated : 10/25/2022 # Data Factory metrics and alerts
data-factory Monitor Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-programmatically.md
Title: Programmatically monitor an Azure data factory
+ Title: Programmatically monitor an Azure Data Factory
description: Learn how to monitor a pipeline in a data factory by using different software development kits (SDKs).
-# Programmatically monitor an Azure data factory
+# Programmatically monitor an Azure Data Factory
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
data-factory Monitor Schema Logs Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-schema-logs-events.md
Previously updated : 09/02/2021 Last updated : 10/25/2022 # Schema of logs and events
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameters-data-flow.md
For example, if you wanted to map a string column based upon a parameter `column
:::image type="content" source="media/data-flow/parameterize-column-name.png" alt-text="Passing in a column name as a parameter":::
+> [!NOTE]
+> In data flow expressions, string interpolation (substituting variables inside the string) isn't supported. Instead, concatenate variables with string values. For example, `'string part 1' + $variable + 'string part 2'`
+ ## Next steps * [Execute data flow activity](control-flow-execute-data-flow-activity.md) * [Control flow expressions](control-flow-expression-language-functions.md)
data-factory Pricing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-concepts.md
The prices used in these examples below are hypothetical and are not intended to
- [Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md) - [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md) + ## Next steps Now that you understand the pricing for Azure Data Factory, you can get started!
data-factory Quickstart Create Data Factory Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-resource-manager-template.md
Previously updated : 07/05/2021 Last updated : 10/25/2022 # Quickstart: Create an Azure Data Factory using ARM template
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-rest-api.md
Title: Create an Azure data factory using REST API
-description: Create an Azure data factory pipeline to copy data from one location in Azure Blob storage to another location.
+ Title: Create an Azure Data Factory using REST API
+description: Create an Azure Data Factory pipeline to copy data from one location in Azure Blob storage to another location.
-# Quickstart: Create an Azure data factory and pipeline by using the REST API
+# Quickstart: Create an Azure Data Factory and pipeline by using the REST API
> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"] > * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Using Azure Data Factory, you can create and schedule data-driven workflows (called pipelines) that can ingest data from disparate data stores, process/transform the data by using compute services such as Azure HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning, and publish output data to data stores such as Azure Synapse Analytics for business intelligence (BI) applications to consume.
-This quickstart describes how to use REST API to create an Azure data factory. The pipeline in this data factory copies data from one location to another location in an Azure blob storage.
+This quickstart describes how to use REST API to create an Azure Data Factory. The pipeline in this data factory copies data from one location to another location in an Azure blob storage.
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin.
$response.Content
Note the following points:
-* The name of the Azure data factory must be globally unique. If you receive the following error, change the name and try again.
+* The name of the Azure Data Factory must be globally unique. If you receive the following error, change the name and try again.
``` Data factory name "ADFv2QuickStartDataFactory" is not available.
data-factory Quickstart Create Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory.md
A quick creation experience provided in the Azure Data Factory Studio to enable
> If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again. ## Next steps
-Learn how to use Azure Data Factory to copy data from one location to another with the [Hello World - How to copy data](tutorial-copy-data-portal.md) tutorial.
+Learn how to use Azure Data Factory to copy data from one location to another with the [Hello World - How to copy data](quickstart-hello-world-copy-data-tool.md) tutorial.
Learn how to create a data flow with Azure Data Factory in [Create a data flow](data-flow-create.md).
data-factory Quickstart Hello World Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-hello-world-copy-data-tool.md
Previously updated : 07/05/2021 Last updated : 10/24/2022
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow-portal.md
Previously updated : 10/04/2022 Last updated : 10/25/2022 # Branching and chaining activities in an Azure Data Factory pipeline using the Azure portal
data-factory Tutorial Incremental Copy Multiple Tables Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-multiple-tables-portal.md
Last updated 09/26/2022
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-In this tutorial, you create an Azure data factory with a pipeline that loads delta data from multiple tables in a SQL Server database to a database in Azure SQL Database.
+In this tutorial, you create an Azure Data Factory with a pipeline that loads delta data from multiple tables in a SQL Server database to a database in Azure SQL Database.
You perform the following steps in this tutorial:
END
3. In the **New data factory** page, enter **ADFMultiIncCopyTutorialDF** for the **name**.
- The name of the Azure data factory must be **globally unique**. If you see a red exclamation mark with the following error, change the name of the data factory (for example, yournameADFIncCopyTutorialDF) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
+ The name of the Azure Data Factory must be **globally unique**. If you see a red exclamation mark with the following error, change the name of the data factory (for example, yournameADFIncCopyTutorialDF) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
`Data factory name "ADFIncCopyTutorialDF" is not available`
You performed the following steps in this tutorial:
Advance to the following tutorial to learn about transforming data by using a Spark cluster on Azure: > [!div class="nextstepaction"]
->[Incrementally load data from Azure SQL Database to Azure Blob storage by using Change Tracking technology](tutorial-incremental-copy-change-tracking-feature-portal.md)
+>[Incrementally load data from Azure SQL Database to Azure Blob storage by using Change Tracking technology](tutorial-incremental-copy-change-tracking-feature-portal.md)
data-factory Wrangling Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-functions.md
Keep and Remove Top, Keep Range (corresponding M functions,
| -- | -- | | Table.PromoteHeaders | Not supported. The same result can be achieved by setting "First row as header" in the dataset. | | Table.CombineColumns | This is a common scenario that isn't directly supported but can be achieved by adding a new column that concatenates two given columns. For example, Table.AddColumn(RemoveEmailColumn, "Name", each [FirstName] & " " & [LastName]) |
-| Table.TransformColumnTypes | This is supported in most cases. The following scenarios are unsupported: transforming string to currency type, transforming string to time type, transforming string to Percentage type. |
+| Table.TransformColumnTypes | This is supported in most cases. The following scenarios are unsupported: transforming string to currency type, transforming string to time type, transforming string to Percentage type, and transforming with locale. |
| Table.NestedJoin | Just doing a join will result in a validation error. The columns must be expanded for it to work. | | Table.RemoveLastN | Remove bottom rows isn't supported. | | Table.RowCount | Not supported, but can be achieved by adding a custom column containing the value 1, then aggregating that column with List.Sum (see the sketch after this table). Table.Group is supported. |
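To make the `Table.RowCount` workaround above concrete, here's a minimal Power Query M sketch; `Source` is a placeholder for whatever table your previous step produced:

```powerquery-m
// Hypothetical sketch: emulate Table.RowCount by adding a constant column,
// then aggregating it with List.Sum via a Table.Group over an empty key set.
let
    WithOnes = Table.AddColumn(Source, "One", each 1),
    RowCount = Table.Group(WithOnes, {}, {{"RowCount", each List.Sum([One]), type number}})
in
    RowCount
```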
databox-online Azure Stack Edge Gpu Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md
The Azure Stack Edge Pro device has the following specifications for compute and
| CPU: usable | 40 vCPUs | | Memory type | Dell Compatible 16 GB PC4-23400 DDR4-2933Mhz 2Rx8 1.2v ECC Registered RDIMM | | Memory: raw | 128 GB RAM (8 x 16 GB) |
-| Memory: usable | 102 GB RAM |
+| Memory: usable | 96 GB RAM |
## Compute acceleration specifications
databox Data Box Disk Deploy Copy Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md
Previously updated : 11/09/2021 Last updated : 10/12/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
Perform the following steps to connect and copy data from your computer to the D
A container is created in the Azure storage account for each subfolder under BlockBlob and PageBlob folders. All files under BlockBlob and PageBlob folders are copied into a default container `$root` under the Azure Storage account. Any files in the `$root` container are always uploaded as block blobs.
- Copy files to a folder within *AzureFile* folder. A sub-folder within *AzureFile* folder creates a fileshare. Files copied directly to *AzureFile* folder fail and are uploaded as block blobs.
+ Copy files to a folder within the *AzureFile* folder. All files under the *AzureFile* folder are uploaded as files to a default container of type "databox-format-Guid" (for example, databox-azurefile-7ee19cfb3304122d940461783e97bf7b4290a1d7).
If files and folders exist in the root directory, then you must move those to a different folder before you begin data copy.
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
Previously updated : 10/07/2022 Last updated : 10/21/2022 # Azure Data Box system requirements
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Review the findings from these vulnerability scanners and respond to them all fr
Learn more on the following pages: - [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)-- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-va-acr.md#identify-vulnerabilities-in-images-in-other-container-registries)
+- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-va-acr.md)
+- [Identify vulnerabilities in images in AWS Elastic Container Registry](defender-for-containers-va-ecr.md)
## Enforce your security policy from the top down
defender-for-cloud Defender For Containers Va Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-va-acr.md
Title: Identify vulnerabilities in Azure Container Registry with Microsoft Defen
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities. Previously updated : 09/11/2022 Last updated : 10/24/2022 # Use Defender for Containers to scan your Azure Container Registry images for vulnerabilities
-This page explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
+This article explains how to use Defender for Containers to scan the container images stored in your Azure Resource Manager-based Azure Container Registry, as part of the protections provided within Microsoft Defender for Cloud.
To enable scanning of vulnerabilities in containers, you have to [enable Defender for Containers](defender-for-containers-enable.md). When the scanner, powered by Qualys, reports vulnerabilities, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
The triggers for an image scan are:
- (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the above mode when the Defender profile, or extension is running on the cluster.
- > [!NOTE]
- > **Windows containers**: There is no Defender agent for Windows containers, the Defender agent is deployed to a Linux node running in the cluster, to retrieve the running container inventory for your Windows nodes.
- >
- > Images that aren't pulled from ACR for deployment in AKS won't be checked and will appear under the **Not applicable** tab.
- >
- > Images that have been deleted from their ACR registry, but are still running, won't be reported on only 30 days after their last scan occurred in ACR.
+When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
-This scan typically completes within 2 minutes, but it might take up to 40 minutes.
+Also, check out the ability to scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Defender for DevOps](defender-for-devops-introduction.md).
-Also, check out the ability scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
+## Prerequisites
-## Identify vulnerabilities in images in Azure container registries
+Before you can scan your ACR images:
-To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
-
-1. [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
+- [Enable Defender for Containers](defender-for-containers-enable.md) for your subscription. Defender for Containers is now ready to scan images in your registries.
>[!NOTE] > This feature is charged per image.
- When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
-
-1. [View and remediate findings as explained below](#view-and-remediate-findings).
-
-## Identify vulnerabilities in images in other container registries
-
-If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
+- If you want to find vulnerabilities in images stored in other container registries, you can import the images into ACR and scan them.
-You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-va-ecr.md) directly from the Azure portal.
-
-1. Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+ Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry. When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
- When the scan completes (typically after approximately 2 minutes, but can be up to 15 minutes), findings are available as Defender for Cloud recommendations.
+ You can also [scan images in Amazon AWS Elastic Container Registry](defender-for-containers-va-ecr.md) directly from the Azure portal.
-1. [View and remediate findings as explained below](#view-and-remediate-findings).
+For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).
## View and remediate findings
Defender for Cloud filters and classifies findings from the scanner. When an ima
Yes. The results are under [Sub-Assessments REST API](/rest/api/defenderforcloud/sub-assessments/list). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
-### What registry types are scanned? What types are billed?
-
-For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md#additional-information). Defender for Containers doesn't scan unsupported registries that you connect to your Azure subscription.
- ### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry? Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it will expose security vulnerabilities.
defender-for-cloud Defender For Containers Va Ecr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-va-ecr.md
Defender for Containers lets you scan the container images stored in your Amazon AWS Elastic Container Registry (ECR) as part of the protections provided within Microsoft Defender for Cloud.
-To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities. Defender for Containers creates resources in your AWS account, such as an ECS cluster in a dedicated VPC, internet gateway and an S3 bucket, so that images stay within your account for privacy and intellectual property protection. Resources are created in two AWS regions: us-east-1 and eu-central-1.
+To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities.
-Defender for Cloud filters and classifies findings from the scanner. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
+Defender for Containers creates resources in your AWS account to build an inventory of the software in your images. The scan then sends only the software inventory to Defender for Cloud. This architecture protects your information privacy and intellectual property, and also keeps the outbound network traffic to a minimum. Defender for Containers creates an ECS cluster in a dedicated VPC, an internet gateway, and an S3 bucket in the us-east-1 and eu-central-1 regions to build the software inventory.
+
+Defender for Cloud filters and classifies findings from the software inventory that the scanner creates. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
The triggers for an image scan are:
The triggers for an image scan are:
Before you can scan your ECR images: - [Connect your AWS account to Defender for Cloud and enable Defender for Containers](quickstart-onboard-aws.md)-- You must have at least one free VPC in us-east-1 and eu-central-1.
+- You must have at least one free VPC in the `us-east-1` and `eu-central-1` regions to host the AWS resources that build the software inventory.
-> [!NOTE]
-> - Images that have at least one layer over 2GB are not scanned.
-> - Public repositories and manifest lists are not supported.
+For a list of the types of images not supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=aws-eks#images).
## Enable vulnerability assessment
To enable vulnerability assessment:
:::image type="content" source="media/defender-for-containers-va-ecr/aws-containers-enable-va.png" alt-text="Screenshot of the toggle to turn on vulnerability assessment for ECR images.":::
-1. Select **Next: Configure access**.
+1. Select **Save** > **Next: Configure access**.
1. Download the CloudFormation template.
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
There are some limitations to Defender for Cloud's identity and access protectio
- Identity recommendations aren't available for subscriptions with more than 6,000 accounts. In these cases, these types of subscriptions will be listed under Not applicable tab. - Identity recommendations aren't available for Cloud Solution Provider (CSP) partner's admin agents. - Identity recommendations don't identify accounts that are managed with a privileged identity management (PIM) system. If you're using a PIM tool, you might see inaccurate results in the **Manage access and permissions** control.
+- Identity recommendations don't support Azure AD Conditional Access policies that include directory roles rather than users and groups.
## Next steps To learn more about recommendations that apply to other Azure resource types, see the following article: -- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
+- [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md)
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
Title: Microsoft Defender for Containers feature availability description: Learn about the availability of Microsoft Defender for Cloud containers features according to OS, machine type, and cloud deployment. Previously updated : 08/10/2022 Last updated : 10/24/2022
The **tabs** below show the features that are available, by environment, for Mic
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6,6,7,8 <br> • Amazon Linux 1,2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11,12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go | ### Kubernetes distributions and configurations
The **tabs** below show the features that are available, by environment, for Mic
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
The **tabs** below show the features that are available, by environment, for Mic
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through Private Link Scope on your workspace Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
<sup><a name="footnote1"></a>1</sup> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Additional information
+## Additional environment information
+
+### Images
+
+| Aspect | Details |
+|--|--|
+| Registries and images | **Unsupported** <br>• Images that have at least one layer over 2 GB<br> • Public repositories and manifest lists <br>• Images in the AWS management account aren't scanned so that we don't create resources in the management account. |
### Kubernetes distributions and configurations
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through Private Link Scope on your workspace Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Outbound proxy without authentication and outbound proxy with basic authenticati
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through Private Link Scope on your workspace Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
Outbound proxy without authentication and outbound proxy with basic authenticati
| Aspect | Details | |--|--| | Registries and images | **Supported**<br> • [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries require access to Trusted Services) <br> • Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> • Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> • "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> • Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) |
-| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6,6,7,8 <br> • Amazon Linux 1,2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11,12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35 |
+| OS Packages | **Supported** <br> • Alpine Linux 3.12-3.15 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35|
| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go | ### Kubernetes distributions and configurations
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you will need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
> [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
Ensure your Kubernetes node is running on one of the verified supported operatin
#### Private link
-Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that no machine can send data to that workstation except those that are configured to send traffic through Azure Monitor Private Link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
+Defender for Containers relies on the Defender profile\extension for several features. The Defender profile\extension doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workspace. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
Allowing data ingestion to occur only through Private Link Scope on your workspace Network Isolation settings can result in communication failures and partial coverage of the Defender for Containers feature set.
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-explorer-plugin.md
Once the target table is created, you can use the Azure Digital Twins plugin to
#### Example schema
-Here's an example of a schema that might be used to represent shared data.
+Here's an example of a schema that might be used to represent shared data. The example follows the Azure Data Explorer [data history schema](concepts-data-history.md#data-schema). A hypothetical table-creation sketch follows the schema.
| `TimeStamp` | `SourceTimeStamp` | `TwinId` | `ModelId` | `Name` | `Value` | `RelationshipTarget` | `RelationshipID` | | | | | | | | | |
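As a hedged sketch, a target table matching this schema could be created in Azure Data Explorer with a KQL control command like the following; the table name and the column types (notably `dynamic` for `Value`) are assumptions to adjust for your data:

```kusto
// Hypothetical sketch: create a target table matching the schema above.
// The table name and column types are assumptions; align them with your data.
.create table SharedTwinData (
    TimeStamp: datetime,
    SourceTimeStamp: datetime,
    TwinId: string,
    ModelId: string,
    Name: string,
    Value: dynamic,
    RelationshipTarget: string,
    RelationshipID: string
)
```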
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
description: Understand data history for Azure Digital Twins. Previously updated : 03/28/2022 Last updated : 10/25/2022
For more of an introduction to data history, including a quick demo, watch the f
<iframe src="https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15" width="1080" height="530"></iframe>
-## Required resources and data flow
+## Resources and data flow
Data history requires the following resources: * Azure Digital Twins instance, with a [managed identity](concepts-security.md#managed-identity-for-accessing-other-resources) enabled
Data moves through these resources in this order:
When working with data history, use the [2022-05-31](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins/stable/2022-05-31) version of the APIs.
-### Required permissions
+### History from multiple Azure Digital Twins instances
+
+If you'd like, you can have multiple Azure Digital Twins instances historize twin property updates to the same Azure Data Explorer cluster.
+
+Each Azure Digital Twins instance will have its own data history connection targeting the same Azure Data Explorer cluster. Within the cluster, instances can send their twin data to either...
+* **different tables** in the Azure Data Explorer cluster.
+* **the same table** in the Azure Data Explorer cluster. To do this, specify the same Azure Data Explorer table name while [creating the data history connections](how-to-use-data-history.md#set-up-data-history-connection). In the [data history table schema](#data-schema), the `ServiceId` column will contain the URL of the source Azure Digital Twins instance, so you can use this field to resolve which Azure Digital Twins instance emitted each record (see the query sketch after this list).
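Here's a hedged query sketch of that resolution step; the table name and instance URL are hypothetical placeholders:

```kusto
// Hypothetical sketch: filter a shared data history table down to the records
// emitted by one Azure Digital Twins instance, using the ServiceId column.
adt_dh_mydatabase_mysharedtable
| where ServiceId == "https://my-instance.api.wcus.digitaltwins.azure.net"
| take 100
```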
+
+## Required permissions
In order to set up a data history connection, your Azure Digital Twins instance must have the following permissions to access the Event Hubs and Azure Data Explorer resources. These roles enable Azure Digital Twins to configure the event hub and Azure Data Explorer database on your behalf (for example, creating a table in the database). These permissions can optionally be removed after data history is set up. * Event Hubs resource: **Azure Event Hubs Data Owner**
Later, your Azure Digital Twins instance must have the following permission on t
## Creating a data history connection
-Once all the [required resources](#required-resources-and-data-flow) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command is part of the [az iot](/cli/azure/iot?view=azure-cli-latest&preserve-view=true) extension.
+Once all the [resources](#resources-and-data-flow) and [permissions](#required-permissions) are set up, you can use the [Azure CLI](/cli/azure/what-is-azure-cli), [Azure portal](https://portal.azure.com), or the [Azure Digital Twins SDK](concepts-apis-sdks.md) to create the data history connection between them. The CLI command set is [az dt data-history](/cli/azure/dt/data-history).
For instructions on how to set up a data history connection, see [Use data history with Azure Data Explorer](how-to-use-data-history.md).
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
Now that you've created the required resources, use the command below to create
# [CLI](#tab/cli)
-Use the following command to create a data history connection. By default, this command assumes all resources are in the same resource group as the Azure Digital Twins instance. You can also specify resources that are in different resource groups using the parameter options for this command, which can be displayed by running `az dt data-history connection create adx -h`.
-The command uses several local variables (`$connectionname`, `$dtname`, `$clustername`, `$databasename`, `$eventhub`, and `$eventhubnamespace`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
+Use the command in this section to create a data history connection.
+
+By default, this command assumes all resources are in the same resource group as the Azure Digital Twins instance. You can specify resources that are in different resource groups using the parameter options for this command, which can be displayed by running `az dt data-history connection create adx -h`. You can also see the full list of optional parameters, including how to specify a table name and more, in its reference documentation: [az dt data-history connection create adx](/cli/azure/dt/data-history/connection/create#az-dt-data-history-connection-create-adx).
+
+The command below uses several local variables (`$connectionname`, `$dtname`, `$clustername`, `$databasename`, `$eventhub`, and `$eventhubnamespace`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
```azurecli-interactive
az dt data-history connection create adx --cn $connectionname --dt-name $dtname --adx-cluster-name $clustername --adx-database-name $databasename --eventhub $eventhub --eventhub-namespace $eventhubnamespace
```
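As a hedged variation, the same command can target an explicit table name (useful when several instances share one table). The `--adx-table-name` option is taken from the optional-parameter list mentioned above, so treat it as an assumption to confirm in the command's reference page:

```azurecli-interactive
# Hypothetical sketch: create the connection against a named, potentially shared
# table. Verify the --adx-table-name option in the command's reference docs.
az dt data-history connection create adx --cn $connectionname --dt-name $dtname --adx-cluster-name $clustername --adx-database-name $databasename --eventhub $eventhub --eventhub-namespace $eventhubnamespace --adx-table-name mysharedtable
```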
energy-data-services How To Add More Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-add-more-data-partitions.md
Each partition provides the highest level of data isolation within a single depl
## Create a data partition
-1. **Open the "Data Partitions" menu-item from left-panel of MEDS overview page.**
+1. Open the "Data Partitions" menu-item from left-panel of MEDS overview page.
[![Screenshot for dynamic data partitions feature discovery from MEDS overview page. Find it under the 'advanced' section in menu-items.](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png)](media/how-to-add-more-data-partitions/dynamic-data-partitions-discovery-meds-overview-page.png#lightbox)
-2. **Select "Create"**
+2. Select "Create".
The page shows a table of all data partitions in your MEDS instance with the status of each data partition next to it. Selecting the "Create" option at the top opens a right pane for the next steps. [![Screenshot to help you locate the create button on the data partitions page. The 'create' button to add a new data partition is highlighted.](media/how-to-add-more-data-partitions/start-create-data-partition.png)](media/how-to-add-more-data-partitions/start-create-data-partition.png#lightbox)
-3. **Choose a name for your data partition**
+3. Choose a name for your data partition.
Each data partition name needs to be 1-10 characters long and a combination of lowercase letters, numbers, and hyphens only (for example, `partition-1`); a quick client-side check of this rule is sketched below. The data partition name will be prepended with the name of the MEDS instance. Choose a name for your data partition and select create. As soon as you select create, the deployment of the underlying data partition resources, such as Azure Cosmos DB and Azure Storage accounts, is started.
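To illustrate the naming rule, here's a hedged client-side check you could run before submitting a name; the regex simply encodes the rule quoted above and isn't an official validation tool.

```azurecli-interactive
# Sketch: client-side check of the partition naming rule
# (1-10 characters; lowercase letters, numbers, and hyphens only).
name="partition-1"
if [[ "$name" =~ ^[a-z0-9-]{1,10}$ ]]; then
  echo "valid partition name"
else
  echo "invalid partition name"
fi
```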
energy-data-services How To Integrate Airflow Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-airflow-logs-with-azure-monitor.md
# Integrate airflow logs with Azure Monitor
-This article describes how you can start collecting Airflow Logs for your Microsoft Energy Data Services instances into Azure Monitor. This integration feature helps you debug Airflow DAG run failures.
+In this article, you'll learn how to start collecting Airflow Logs for your Microsoft Energy Data Services instances into Azure Monitor. This integration feature helps you debug Airflow DAG ([Directed Acyclic Graph](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dags.html)) run failures.
## Prerequisites
Every Microsoft Energy Data Services instance comes inbuilt with an Azure Data Factory
To access logs via either of the two options above, you need to create a Diagnostic Setting. Each Diagnostic Setting has three basic parts:
-| Title | Description |
+| Part | Description |
|-|-|
| Name | This is the name of the diagnostic log. Ensure a unique name is set for each log. |
| Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) |
To access logs via either of the two options above, you need to create a Diagnostic Setting.
Follow these steps to set up Diagnostic Settings:
-1. Open Microsoft Energy Data Services' "**Overview**" page
-1. Select "**Diagnostic Settings**" from the left panel
+1. Open Microsoft Energy Data Services' *Overview* page
+1. Select *Diagnostic Settings* from the left panel
[![Screenshot for Azure monitor diagnostic setting overview page. The page shows a list of existing diagnostic settings and the option to add a new one.](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-diagnostic-settings-overview-page.png#lightbox)
-1. Select "**Add diagnostic setting**"
+1. Select *Add diagnostic setting*
-1. Select "**Airflow Task Logs**" under Logs
+1. Select *Airflow Task Logs* under Logs
-1. Select "**Archive to a storage account**"
+1. Select *Archive to a storage account*
[![Screenshot for creating a diagnostic setting to archive logs to a storage account. The image shows the subscription and the storage account chosen for a diagnostic setting.](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-destination-storage-account.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/creating-diagnostic-setting-destination-storage-account.png#lightbox)
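If you prefer scripting this setup over the portal, the same diagnostic setting can be sketched with the Azure CLI. The resource ID format and the `AirflowTaskLogs` category identifier below are assumptions to verify against your instance (the portal displays the category as *Airflow Task Logs*).

```azurecli-interactive
# Sketch only: the resource IDs and the category identifier
# "AirflowTaskLogs" are assumptions -- verify them against the
# categories listed for your Microsoft Energy Data Services instance.
az monitor diagnostic-settings create \
  --name airflow-task-logs-to-storage \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OpenEnergyPlatform/energyServices/<meds-instance>" \
  --storage-account "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<storage-account>" \
  --logs '[{"category": "AirflowTaskLogs", "enabled": true}]'
```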
Follow these steps to set up Diagnostic Settings:
After a diagnostic setting is created for archiving Airflow task logs into a storage account, you can navigate to the storage account **overview** page. You can then use the "Storage Browser" on the left panel to find the right JSON file that you want to investigate. Browsing through different directories is intuitive as you move from a year to a month to a day.
-1. Navigate through **Containers**, available on the left panel.
+1. Navigate through *Containers*, available on the left panel.
[![Screenshot for exploring archived logs in the containers of the Storage Account. The container will show logs from all the sources set up.](media/how-to-integrate-airflow-logs-with-azure-monitor/storage-account-containers-page-showing-collected-logs-explorer.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/storage-account-containers-page-showing-collected-logs-explorer.png#lightbox)
You can integrate Airflow logs with Log Analytics Workspace by using **Diagnostic Settings**.
## Working with the integrated Airflow Logs in Log Analytics Workspace
-Data is retrieved from a Log Analytics Workspace using a query written in Kusto Query Language (KQL). A set of precreated queries is available for many Azure services (not available for Airflow at the moment) so that you don't require knowledge of KQL to get started.
+Use Kusto Query Language (KQL) to retrieve the data you want from the collected Airflow logs in your Log Analytics Workspace; a sample query follows the screenshot below.
[![Screenshot for Azure Monitor Log Analytics page for viewing collected logs. Under log management, tables from all sources will be visible.](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-log-analytics-page-viewing-collected-logs.png)](media/how-to-integrate-airflow-logs-with-azure-monitor/azure-monitor-log-analytics-page-viewing-collected-logs.png#lightbox)
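For illustration, here's one way to run such a query from the command line; the workspace GUID is a placeholder, the `OEPAirFlowTask` table name is an assumption to verify under **Log Management** in your workspace, and the command may require the `log-analytics` CLI extension.

```azurecli-interactive
# Sketch: query the collected Airflow task logs from the CLI.
# The table name OEPAirFlowTask is an assumption -- check the tables
# shown under Log Management in your Log Analytics workspace.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "OEPAirFlowTask | take 20"
```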
energy-data-services How To Integrate Elastic Logs With Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-integrate-elastic-logs-with-azure-monitor.md
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
-This article describes how you can start collecting Elasticsearch logs for your Microsoft Energy Data Services instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor.
+In this article, you'll learn how to start collecting Elasticsearch logs for your Microsoft Energy Data Services instances in Azure Monitor. This integration feature is developed to help you debug Elasticsearch related issues inside Azure Monitor.
## Prerequisites
-- You need to have a Log Analytics workspace. It will be used to query the Elasticsearch logs dataset using the Kusto Query Language (KQL) query editor in the Log Analytics workspace. Useful Resource: [Create a log Analytics workspace in Azure portal](../azure-monitor/logs/quick-create-workspace.md)
+- You need to have a Log Analytics workspace. It will be used to query the Elasticsearch logs dataset using the Kusto Query Language (KQL) query editor in the Log Analytics workspace. [Create a log Analytics workspace in Azure portal](../azure-monitor/logs/quick-create-workspace.md).
- You need to have a storage account. It will be used to store JSON dumps of Elasticsearch & Elasticsearch Operator logs. The storage account doesn't have to be in the same subscription as your Log Analytics workspace.
Every Microsoft Energy Data Services instance comes inbuilt with a managed Elasticsearch
Each diagnostic setting has three basic parts:
-| Title | Description |
+| Part | Description |
|-|-|
| Name | This is the name of the diagnostic log. Ensure a unique name is set for each log. |
| Categories | Category of logs to send to each of the destinations. The set of categories will vary for each Azure service. Visit: [Supported Resource Log Categories](../azure-monitor/essentials/resource-logs-categories.md) |
We support two destinations for your Elasticsearch logs from Microsoft Energy Data Services:
1. Select *Send to a Log Analytics workspace*
-1. Choose Subscription and the Log Analytics workspace Name. You would have created it already as a prerequisite.
+1. Choose Subscription and the Log Analytics workspace name. You created it earlier as a prerequisite.
[![Screenshot for choosing destination settings for Log Analytics workspace. The image shows the subscription and Log Analytics workspace chosen.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-log-analytics-workspace.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-log-analytics-workspace.png#lightbox) 1. Select *Archive to storage account*
-1. Choose Subscription and storage account Name. You would have created it already as a prerequisite.
+1. Choose Subscription and storage account name. You created it earlier as a prerequisite.
[![Screenshot that shows choosing destination settings for storage account. Required fields include regions, subscription and storage account.](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-archive-storage-account.png)](media/how-to-integrate-elastic-logs-with-azure-monitor/diagnostic-setting-archive-storage-account.png#lightbox) 1. Select *Save*.
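As with the Airflow logs earlier, this setup can also be sketched with the Azure CLI; the resource ID format and the `ElasticsearchLogs` category identifier here are assumptions to confirm against the categories your instance actually exposes.

```azurecli-interactive
# Sketch only: the category identifier "ElasticsearchLogs" and the
# resource ID formats are assumptions -- confirm both in the portal.
az monitor diagnostic-settings create \
  --name elastic-logs-to-workspace \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OpenEnergyPlatform/energyServices/<meds-instance>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>" \
  --logs '[{"category": "ElasticsearchLogs", "enabled": true}]'
```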
energy-data-services Overview Ddms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md
Domain data management services (DDMS) store, access, and retrieve metadata and
### Frictionless Exploration and Production (E&P)
-The Microsoft Energy Data Services Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they'll achieve unparalleled streaming performance and use the standards and output from OSDU&trade;. The Azure DDMS service will onboard the OSDU&trade; DDMS and Schlumberger proprietary DMS. Microsoft also continues to contribute to the OSDU&trade; community DDMS to ensure compatibility and architectural alignment.
+The Microsoft Energy Data Services Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they'll achieve unparalleled streaming performance and use the standards and output from OSDU&trade;. The Azure DDMS service will onboard the OSDU&trade; DDMS and SLB proprietary DMS. Microsoft also continues to contribute to the OSDU&trade; community DDMS to ensure compatibility and architectural alignment.
### Seamless connection between applications and data
The seismic DMS is part of the OSDU&trade; platform and enables users to connect
## OSDU&trade; - Wellbore DMS
-Well Logs are measurements taken while drilling, which tells energy companies information about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir – how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU&trade; compliant application. The Wellbore DMS was contributed by Schlumberger to OSDU&trade;.
+Well Logs are measurements taken while drilling, which give energy companies information about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir – how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU&trade; compliant application. The Wellbore DMS was contributed by SLB to OSDU&trade;.
Well Log data can come in different formats. It's most often indexed by depth or time, and the increment of these measurements can vary. Well Logs typically contain multiple attributes for each vertical measurement. Well Logs can therefore be small or, for more modern Well Logs that use high-frequency data, larger than 1 GB. Well Log data is smaller than seismic; however, users will want to look at upwards of hundreds of wells at a time. This scenario is common in mature areas that have been heavily drilled, such as the Permian Basin in West Texas.
energy-data-services Overview Microsoft Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-microsoft-energy-data-services.md
# What is Microsoft Energy Data Services Preview?
-Microsoft Energy Data Services Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of OSDU&trade; Data Platform, Microsoft's secure and trusted Azure cloud platform, and Schlumberger's extensive domain expertise. It allows customers to free data from silos, provides strong data management, storage, and federation strategy. Microsoft Energy Data Services ensures compatibility with evolving community standards like OSDU&trade; and enables value addition through interoperability with both first-party and third-party solutions.
+Microsoft Energy Data Services Preview is a secure, reliable, hyperscale, fully managed cloud-based data platform solution for the energy industry. It is an enterprise-grade data platform that brings together the capabilities of the OSDU&trade; Data Platform, Microsoft's secure and trusted Azure cloud platform, and SLB's extensive domain expertise. It allows customers to free data from silos and provides a strong data management, storage, and federation strategy. Microsoft Energy Data Services ensures compatibility with evolving community standards like OSDU&trade; and enables value addition through interoperability with both first-party and third-party solutions.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
Furthermore, Microsoft Energy Data Services Preview provides security capabilities
Microsoft Energy Data Services Preview also supports multiple data partitions for every platform instance. More data partitions can also be created after creating an instance, as needed.
-As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from Schlumberger, including Petrel to provide quick time to value.
+As an Azure-based service, it also provides elasticity with auto-scaling to handle dynamically varying workload requirements. The service provides out-of-the-box compatibility and built-in integration with industry-leading applications from SLB, including Petrel, to provide quick time to value.
Microsoft will provide support for the platform to enable our customers' use cases.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
Here's the list of partners and a link to submit a request to enable events flow
- [Auth0](auth0-how-to.md)
- [Microsoft Graph API](subscribe-to-graph-api-events.md)
+- [SAP](subscribe-to-sap-events.md)
## Activate a partner topic
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
+
+ Title: Azure Event Grid - Subscribe to SAP events
+description: This article explains how to subscribe to events published by SAP.
+ Last updated : 10/25/2022++
+# Subscribe to events published by SAP
+This article describes steps to subscribe to events published by an SAP S/4HANA system.
+
+## High-level steps
+
+The common steps to subscribe to events published by any partner, including SAP, are described in [subscribe to partner events](subscribe-to-partner-events.md). For your quick reference, the steps are provided again here with the addition of a step to make sure that your SAP system has the required components. This article deals with steps 1 and 4.
+
+1. [Ensure you meet all prerequisites](#prerequisites).
+1. Register the Event Grid resource provider with your Azure subscription (see the CLI sketch after this list).
+1. Authorize partner to create a partner topic in your resource group.
+1. [Enable SAP S/4HANA events to flow to a partner topic](#enable-events-to-flow-to-your-partner-topic).
+1. Activate partner topic so that your events start flowing to your partner topic.
+1. Subscribe to events.
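For example, step 2 above can be done from the Azure CLI. These are standard resource provider commands; make sure the right subscription is selected first.

```azurecli-interactive
# Register the Event Grid resource provider in the current subscription.
az provider register --namespace Microsoft.EventGrid

# Optionally confirm the registration state afterwards.
az provider show --namespace Microsoft.EventGrid --query registrationState
```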
+
+## Prerequisites
+
+Here are the prerequisites that your SAP system needs to meet before you configure it to send events to Azure Event Grid.
+
+1. SAP S/4HANA system (on-premises) version 2020 or later.
+1. SAP's [Business Technology Platform](https://www.sap.com/products/technology-platform.html) (BTP).
+1. On the Business Technology Platform, [SAP Event Mesh](https://help.sap.com/docs/SAP_EM/bf82e6b26456494cbdd197057c09979f/df532e8735eb4322b00bfc7e42f84e8d.html) is enabled.
+
+If you have any questions, contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>.
+
+## Enable events to flow to your partner topic
+
+SAP's capability to send events to Azure Event Grid is available through SAP's [beta program](https://influence.sap.com/sap/ino/#campaign/3314). Using this program, you can let SAP know about your desire to have your S/4HANA events available on Azure. You can find SAP's announcement of this new feature [here](https://blogs.sap.com/2022/10/11/sap-event-mesh-event-bridge-to-microsoft-azure-to-go-beta/). Through SAP's beta program, you'll be provided with the documentation on how to configure your SAP S/4HANA system to flow events to Event Grid. At that point, you may proceed with the next step in the process described in the [High-level steps](#high-level-steps) section.
+
+SAP's beta program started in October 2022 and will last a couple of months. Thereafter, SAP will release the feature as a generally available (GA) capability. Event Grid's capability to receive events from a partner, like SAP, is already a GA feature.
+
+If you have any questions, you can contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>.
+
+## Next steps
+See [subscribe to partner events](subscribe-to-partner-events.md).
expressroute Quickstart Create Expressroute Vnet Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/quickstart-create-expressroute-vnet-bicep.md
Title: 'Quickstart: Create an Azure ExpressRoute circuit using Bicep' description: This quickstart shows you how to create an ExpressRoute circuit using Bicep. --++ Last updated 03/24/2022
firewall-manager Quick Firewall Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-firewall-policy-bicep.md
Title: 'Quickstart: Create an Azure Firewall and a firewall policy - Bicep' description: In this quickstart, you deploy an Azure Firewall and a firewall policy using Bicep. --++ Last updated 07/05/2022
firewall-manager Quick Secure Virtual Hub Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/quick-secure-virtual-hub-bicep.md
Title: 'Quickstart: Secure virtual hub using Azure Firewall Manager - Bicep' description: In this quickstart, you learn how to secure your virtual hub using Azure Firewall Manager and Bicep. --++ Last updated 06/28/2022
firewall Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-bicep.md
Title: 'Quickstart: Create an Azure Firewall with Availability Zones - Bicep' description: In this quickstart, you deploy Azure Firewall using Bicep. The virtual network has one VNet with three subnets. Two Windows Server virtual machines, a jump box, and a server are deployed. -+ Last updated 06/28/2022-+ # Quickstart: Deploy Azure Firewall with Availability Zones - Bicep
firewall Policy Rule Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-rule-sets.md
Previously updated : 09/12/2022 Last updated : 10/25/2022
A rule belongs to a rule collection, and it specifies which traffic is allowed or denied in your network.
For application rules, the traffic is processed by our built-in [infrastructure rule collection](infrastructure-fqdns.md) before it's denied by default.
+### Inbound vs. outbound
+
+An **inbound** firewall rule protects your network from threats that originate from outside your network (traffic sourced from the Internet) and attempt to infiltrate your network inwardly.
+
+An **outbound** firewall rule protects against nefarious traffic that originates internally (traffic sourced from a private IP address within Azure) and travels outwardly. This is usually traffic from Azure resources that's routed through the firewall before it reaches a destination.
+
+### Rule types
+
There are three types of rules:

- DNAT
- Network
- Application
-### DNAT rules
+#### DNAT rules
DNAT rules allow or deny inbound traffic through the firewall public IP address(es). You can use a DNAT rule when you want a public IP address to be translated into a private IP address. The Azure Firewall public IP addresses can be used to listen to inbound traffic from the Internet, filter the traffic and translate this traffic to internal resources in Azure.
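To make this concrete, here's a hedged Azure CLI sketch of a DNAT rule; all resource names and IP addresses are placeholders, and the command requires the `azure-firewall` CLI extension.

```azurecli-interactive
# Sketch: DNAT rule translating inbound traffic that arrives on the
# firewall's public IP (placeholder 203.0.113.10) on port 443 to a
# private address inside the virtual network. Names/IPs are placeholders.
az network firewall nat-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name MyNatRuleCollection \
  --name inbound-https \
  --priority 200 \
  --action Dnat \
  --protocols TCP \
  --source-addresses '*' \
  --destination-addresses 203.0.113.10 \
  --destination-ports 443 \
  --translated-address 10.0.1.4 \
  --translated-port 443
```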
-### Network rules
+#### Network rules
Network rules allow or deny inbound, outbound, and east-west traffic based on the network layer (L3) and transport layer (L4). You can use a network rule when you want to filter traffic based on IP addresses, any ports, and any protocols.
-### Application rules
+#### Application rules
Application rules allow or deny outbound and east-west traffic based on the application layer (L7). You can use an application rule when you want to filter traffic based on fully qualified domain names (FQDNs), URLs, and HTTP/HTTPS protocols.
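Similarly, here's a hedged sketch of an application rule that allows outbound HTTPS to a single FQDN; the names below are placeholders, and the command also comes from the `azure-firewall` CLI extension.

```azurecli-interactive
# Sketch: application rule allowing outbound HTTPS to one FQDN.
# Resource names and the source address range are placeholders.
az network firewall application-rule create \
  --resource-group MyResourceGroup \
  --firewall-name MyFirewall \
  --collection-name MyAppRuleCollection \
  --name allow-contoso \
  --priority 300 \
  --action Allow \
  --protocols Https=443 \
  --source-addresses 10.0.0.0/24 \
  --target-fqdns www.contoso.com
```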
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
Title: 'Quickstart: Create an Azure Front Door Standard/Premium using Bicep' description: This quickstart describes how to create an Azure Front Door Standard/Premium using Bicep. --++ Last updated 07/08/2022
frontdoor Quickstart Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-bicep.md
Title: 'Quickstart: Create an Azure Front Door Service using Bicep'
description: This quickstart describes how to create an Azure Front Door Service using Bicep. documentationcenter: --++ Last updated 03/30/2022
governance Definition Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md
The following Resource Provider modes are fully supported:
The following Resource Provider modes are currently supported as a **[preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)**:
- `Microsoft.Network.Data` for managing [Azure Virtual Network Manager](../../../virtual-network-manager/overview.md) custom membership policies using Azure Policy.
-- `Microsoft.Kubernetes.Data` for Azure Policy components that target [Azure Arc-enabled Kubernetes clusters](../../../aks/intro-kubernetes.md) resources such as pods, containers, and ingresses.

> [!NOTE]
> Unless explicitly stated, Resource Provider modes only support built-in policy definitions, and exemptions are not supported at the component-level.
hdinsight Apache Hadoop Mahout Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-mahout-linux-mac.md
description: Learn how to use the Apache Mahout machine learning library to gene
Previously updated : 06/29/2022 Last updated : 10/25/2022 # Generate recommendations using Apache Mahout in Azure HDInsight
Learn how to use the [Apache Mahout](https://mahout.apache.org) machine learning
Mahout is a [machine learning](https://en.wikipedia.org/wiki/Machine_learning) library for Apache Hadoop. Mahout contains algorithms for processing data, such as filtering, classification, and clustering. In this article, you use a recommendation engine to generate movie recommendations that are based on movies your friends have seen.
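As a taste of what that looks like, here's a hedged sketch of invoking Mahout's item-based recommender from an SSH session on the cluster head node; the HDFS input/output paths are hypothetical placeholders, and the article's own walkthrough uses its specific data and options.

```bash
# Sketch: run Mahout's item-based recommender over a ratings file in HDFS.
# The paths below are hypothetical placeholders.
mahout recommenditembased \
    -s SIMILARITY_COOCCURRENCE \
    --input /example/data/ratings.txt \
    --output /example/out \
    --tempDir /temp/mahouttemp
```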
-Mahout is available in HDInsight 3.6, and is not available in HDInsight 4.0. For more information about the version of Mahout in HDInsight, see [HDInsight 3.6 component versions](../hdinsight-36-component-versioning.md).
- ## Prerequisites An Apache Hadoop cluster on HDInsight. See [Get Started with HDInsight on Linux](./apache-hadoop-linux-tutorial-get-started.md).
hdinsight Hdinsight 36 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-36-component-versioning.md
- Title: Apache Hadoop components and versions - Azure HDInsight 3.6
-description: Learn about the Apache Hadoop components and versions in Azure HDInsight 3.6.
-- Previously updated : 08/05/2022--
-# HDInsight 3.6 component versions
-
-In this article, you learn about the Apache Hadoop environment components and versions in Azure HDInsight 3.6.
-
-## Support for HDInsight 3.6
-
-Starting July 1st, 2021 Microsoft will offer Basic support for certain HDI 3.6 cluster types.
-The table below lists the support timeframe for HDInsight 3.6 cluster types.
-
-| Cluster Type | Framework version | Standard support expiration | Basic support expiration date | Retirement date |
-||-|--|-|-|
-| HDInsight 3.6 Hadoop | 2.7.3 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 Spark | 2.3 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 Kafka | 1.1 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 HBase | 1.1 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 Interactive Query | 2.1 | June 30, 2021 | September 30, 2022 | October 1, 2022 |
-| HDInsight 3.6 ML Services | 9.3 | - | - | December 31, 2020 |
-| HDInsight 3.6 Spark | 2.2 | - | - | June 30, 2020 |
-| HDInsight 3.6 Spark | 2.1 | - | - | June 30, 2020 |
-| HDInsight 3.6 Kafka | 1.0 | - | - | June 30, 2020 |
-
-## Apache components available with HDInsight version 3.6
-
-The OSS component versions associated with HDInsight 3.6 are listed in the following table.
-
-| Component | HDInsight 3.6 (default) |
-||--|
-| Apache Hadoop and YARN | 2.7.3 |
-| Apache Tez | 0.7.0 |
-| Apache Pig | 0.16.0 |
-| Apache Hive | (2.1.0 on ESP Interactive Query) |
-| Apache Tez Hive2 | 0.8.4 |
-| Apache Ranger | 0.7.0 |
-| Apache HBase | 1.1.2 |
-| Apache Sqoop | 1.4.6 |
-| Apache Oozie | 4.2.0 |
-| Apache Zookeeper | 3.4.6 |
-| Apache Mahout | 0.9.0+ |
-| Apache Phoenix | 4.7.0 |
-| Apache Spark | 2.3.2. |
-| Apache Livy | 0.4. |
-| Apache Kafka | 1.1 |
-| Apache Ambari | 2.6.0 |
-| Apache Zeppelin | 0.7.3 |
-| Mono | 4.2.1 |
-
-## HDInsight 3.6 to 4.0 Migration Guides
-- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).-- [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md).-- [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk).-- [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).-
-## Next steps
--- [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md)-- [Enterprise Security Package](./enterprise-security-package.md)-- [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
Title: Open-source components and versions - Azure HDInsight
description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 08/25/2022 Last updated : 10/25/2022 # Azure HDInsight versions
This table lists the versions of HDInsight that are available in the Azure portal
| | | | | | | |
| [HDInsight 5.0](hdinsight-50-component-versioning.md) |Ubuntu 18.0.4 LTS |July 01, 2022 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | See [HDInsight 5.0](hdinsight-50-component-versioning.md) for date details. | See [HDInsight 5.0](hdinsight-50-component-versioning.md) for date details. |Yes |
| [HDInsight 4.0](hdinsight-40-component-versioning.md) |Ubuntu 18.0.4 LTS |September 24, 2018 | [Standard](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | See [HDInsight 4.0](hdinsight-40-component-versioning.md) for date details. | See [HDInsight 4.0](hdinsight-40-component-versioning.md) for date details. |Yes |
-| [HDInsight 3.6](hdinsight-36-component-versioning.md) |Ubuntu 16.0.4 LTS |April 4, 2017 | [Basic](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) | Standard support expired on June 30, 2021 for all cluster types.<br> Basic support expires on September 30, 2022. See [HDInsight 3.6 component versions](hdinsight-36-component-versioning.md) for cluster type details. |October 1, 2022 |Yes |
**Support expiration** means that Microsoft no longer provides support for the specific HDInsight version. You may not be able to create clusters from the Azure portal.
Basic support doesn't include
Microsoft doesn't encourage creating analytics pipelines or solutions on clusters in basic support. We recommend migrating existing clusters to the most recent fully supported version.
## HDInsight 3.6 to 4.0 Migration Guides
-- [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](spark/migrate-versions.md).
- [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](interactive-query/apache-hive-migrate-workloads.md).
- [Migrate Apache Kafka workloads to Azure HDInsight 4.0](kafk).
- [Migrate an Apache HBase cluster to a new version](hbase/apache-hbase-migrate-new-version.md).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 08/12/2022 Last updated : 10/25/2022 # Archived release notes
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Schema tool enhancements to support mergeCatalog|[HIVE-22498](https://issues.apache.org/jira/browse/HIVE-22498)| | Hive with TEZ UNION ALL and UDTF results in data loss|[HIVE-21915](https://issues.apache.org/jira/browse/HIVE-21915)| | Split text files even if header/footer exists|[HIVE-21924](https://issues.apache.org/jira/browse/HIVE-21924)|
-| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the once are present in table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)|
+| MultiDelimitSerDe returns wrong results in last column when the loaded file has more columns than the ones present in the table schema|[HIVE-22360](https://issues.apache.org/jira/browse/HIVE-22360)|
| LLAP external client - Need to reduce LlapBaseInputFormat#getSplits() footprint|[HIVE-22221](https://issues.apache.org/jira/browse/HIVE-22221)| | Column name with reserved keyword is unescaped when query including join on table with mask column is rewritten (Zoltan Matyus via Zoltan Haindrich)|[HIVE-22208](https://issues.apache.org/jira/browse/HIVE-22208)| |Prevent LLAP shutdown on AMReporter related RuntimeException|[HIVE-22113](https://issues.apache.org/jira/browse/HIVE-22113)|
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
| Parsing time can be high if there's deeply nested subqueries|[HIVE-21980](https://issues.apache.org/jira/browse/HIVE-21980)| | For ALTER TABLE t SET TBLPROPERTIES ('EXTERNAL'='TRUE'); `TBL_TYPE` attribute changes not reflecting for non-CAPS|[HIVE-20057](https://issues.apache.org/jira/browse/HIVE-20057)| | JDBC: HiveConnection shades log4j interfaces|[HIVE-18874](https://issues.apache.org/jira/browse/HIVE-18874)|
-| Update repo URLs in poms - branh 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
+| Update repo URLs in poms - branch 3.1 version|[HIVE-21786](https://issues.apache.org/jira/browse/HIVE-21786)|
| DBInstall tests broken on master and branch-3.1|[HIVE-21758](https://issues.apache.org/jira/browse/HIVE-21758)| | Load data into a bucketed table is ignoring partitions specs and loads data into default partition|[HIVE-21564](https://issues.apache.org/jira/browse/HIVE-21564)| | Queries with join condition having timestamp or timestamp with local time zone literal throw SemanticException|[HIVE-21613](https://issues.apache.org/jira/browse/HIVE-21613)|
Learn more at [enable private link](./hdinsight-private-link.md).
The new Azure Monitor integration experience will be in Preview in East US and West Europe with this release. Learn more details about the new Azure Monitor experience [here](./log-analytics-migration.md#migrate-to-the-new-azure-monitor-integration).
### Deprecation
-#### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft offers [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You are automatically enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
-
+HDInsight 3.6 version is deprecated effective October 1, 2022.
### Behavior changes #### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable load-based autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Here are the back ported Apache JIRAs for this release:
### Price Correction for HDInsight Dv2 Virtual Machines
-A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25th, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
+A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
- Canada Central - Canada East
The OS versions for this release are:
#### OS version upgrade As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We started rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 with this release. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic support (from Standard support) beginning 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will change to Basic support (from Standard support) beginning 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
You need to drop and recreate your clusters if you'd like to move existing HDInsight 4.0 clusters to Ubuntu 18.04. Plan to create or recreate your clusters after Ubuntu 18.04 support becomes available.
No deprecation in this release.
### Behavior changes
#### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
-HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from this release, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other two-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A four-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
+HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from this release, customers won't be able to create new clusters with Standard_A5 VM size as Head Node. You can use other two-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A four-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above. Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
-#### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
- #### VM host naming will be changed on July 1, 2021
-HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN name format, and the numbers in the host name will not be guarantee in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).
+HDInsight now uses Azure virtual machines to provision the cluster. The service is gradually migrating to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). This migration will change the cluster host name FQDN name format, and the numbers in the host name won't be guaranteed to be in sequence. If you want to get the FQDN names for each node, refer to [Find the Host names of Cluster Nodes](./find-host-name.md).
#### Move to Azure virtual machine scale sets HDInsight now uses Azure virtual machines to provision the cluster. The service will gradually migrate to [Azure virtual machine scale sets](../virtual-machine-scale-sets/overview.md). The entire process may take months. After your regions and subscriptions are migrated, newly created HDInsight clusters will run on virtual machine scale sets without customer actions. No breaking change is expected.
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we've identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You
#### OS version upgrade
HDInsight clusters are currently running on Ubuntu 16.04 LTS. As referenced in [Ubuntu's release cycle](https://ubuntu.com/about/release-cycle), the Ubuntu 16.04 kernel will reach End of Life (EOL) in April 2021. We'll start rolling out the new HDInsight 4.0 cluster image running on Ubuntu 18.04 in May 2021. Newly created HDInsight 4.0 clusters will run on Ubuntu 18.04 by default once available. Existing clusters on Ubuntu 16.04 will run as is with full support.
-HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If youΓÇÖd like to use Ubuntu 18.04, youΓÇÖll need to migrate your clusters to HDInsight 4.0.
+HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 won't be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
You need to drop and recreate your clusters if you'd like to move existing clusters to Ubuntu 18.04. Plan to create or recreate your cluster after Ubuntu 18.04 support becomes available. We'll send another notification after the new image becomes available in all regions.
-It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create a simple Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/), then create and use a [secure shell (SSH) key pair](../virtual-machines/linux/mac-create-ssh-keys.md#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
+It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create an Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/), then create and use a [secure shell (SSH) key pair](../virtual-machines/linux/mac-create-ssh-keys.md#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
#### Disable Standard_A5 VM size as Head Node for HDInsight 4.0
-HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from the next release in May 2021, customers will not be able to create new clusters with Standard_A5 VM size as Head Node. You can use other 2-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A 4-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
-
-#### Basic support for HDInsight 3.6 starting July 1, 2021
-Starting July 1, 2021, Microsoft will offer [Basic support](hdinsight-component-versioning.md#support-options-for-hdinsight-versions) for certain HDInsight 3.6 cluster types. The Basic support plan will be available until 3 April 2022. You'll automatically be enrolled in Basic support starting July 1, 2021. No action is required by you to opt in. See [our documentation](hdinsight-36-component-versioning.md) for which cluster types are included under Basic support.
-
-We don't recommend building any new solutions on HDInsight 3.6, freeze changes on existing 3.6 environments. We recommend that you [migrate your clusters to HDInsight 4.0](hdinsight-version-release.md#how-to-upgrade-to-hdinsight-40). Learn more about [what's new in HDInsight 4.0](hdinsight-version-release.md#whats-new-in-hdinsight-40).
+HDInsight cluster Head Node is responsible for initializing and managing the cluster. Standard_A5 VM size has reliability issues as Head Node for HDInsight 4.0. Starting from the next release in May 2021, customers won't be able to create new clusters with Standard_A5 VM size as Head Node. You can use other 2-core VMs like E2_v3 or E2s_v3. Existing clusters will run as is. A 4-core VM is highly recommended for Head Node to ensure the high availability and reliability of your production HDInsight clusters.
### Bug fixes HDInsight continues to make cluster reliability and performance improvements.
Default cluster VM sizes will be changed from D-series to Ev3-series. This chang
#### Network interface resource not visible for clusters running on Azure virtual machine scale sets HDInsight is gradually migrating to Azure virtual machine scale sets. Network interfaces for virtual machines are no longer visible to customers for clusters that use Azure virtual machine scale sets.
-#### Breaking change for .NET for Apache Spark 1.0.0
-With the latest release, HDInsight introduces the first official version v1.0.0 of the [".NET for Apache Spark"](https://github.com/dotnet/spark) library. It provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with a host of [other features](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md). There will be breaking changes for this major version, refer to [the .NET for Apache Spark migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand steps needed to update your code and pipelines. To learn more, refer to this [.NET for Apache Spark v1.0 on Azure HDInsight guide](./spark/spark-dotnet-version-update.md#using-net-for-apache-spark-v10-in-hdinsight).
- ### Upcoming changes The following changes will happen in upcoming releases.
Apache Tez View is used to track and debug the execution of Hive Tez job. Tez Vi
### Deprecation #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without the support from Microsoft. Consider to move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1, 2020, customers can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers can't create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting from July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
### Behavior changes #### Ambari stack version change
Customers can now use Service Endpoint Policies (SEP) on the HDInsight cluster s
### Deprecation #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without the support from Microsoft. Consider to move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1, 2020, customers can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers can't create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting from July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
### Behavior changes No behavior changes you need to pay attention to.
In this release, we support rebooting VMs in HDInsight cluster to reboot unrespo
### Deprecation #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without the support from Microsoft. Consider to move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1, 2020, customers can't create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting from July 1 2020, customers cannot create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers can't create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting from July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
+Starting from July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without the support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
### Behavior changes #### ESP Spark cluster head node size change
When 80% of the worker nodes are ready, the cluster enters the **operational** stage.
After the **operational** stage, the cluster waits another 60 minutes for the remaining 20% of worker nodes. At the end of these 60 minutes, the cluster moves to the **running** stage, even if some worker nodes are still unavailable. Once a cluster enters the **running** stage, you can use it as normal. Both control plane operations, like scaling up/down, and data plane operations, like running scripts and jobs, are accepted. If some of the requested worker nodes are not available, the cluster will be marked as partial success. You are charged for the nodes that were deployed successfully.
#### Create new service principal through HDInsight
-Previously, with cluster creation, customers can create a new service principal to access the connected ADLS Gen 1 account in Azure portal. Starting June 15 2020, customers cannot create new service principal in HDInsight creation workflow, only existing service principal is supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
+Previously, with cluster creation, customers could create a new service principal to access the connected ADLS Gen 1 account in the Azure portal. Starting June 15 2020, customers can't create new service principals in the HDInsight creation workflow; only existing service principals are supported. See [Create Service Principal and Certificates using Azure Active Directory](../active-directory/develop/howto-create-service-principal-portal.md).
#### Time out for script actions with cluster creation HDInsight supports running script actions with cluster creation. From this release, all script actions with cluster creation must finish within **60 minutes**, or they time out. Script actions submitted to running clusters are not impacted. For more details, see [Script action in the cluster creation process](./hdinsight-hadoop-customize-cluster-linux.md#script-action-in-the-cluster-creation-process).
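For illustration, here's a minimal sketch of attaching a script action at cluster creation time with the Az.HDInsight PowerShell module; the script name and URI are placeholders, and the script itself must finish within the 60-minute window described above.

```azurepowershell-interactive
# Minimal sketch (assumes the Az.HDInsight module is installed and you're signed in).
# The script referenced by -Uri must complete within the 60-minute creation window.
$config = New-AzHDInsightClusterConfig

# Placeholder script name and URI; replace with your own script location.
$config = Add-AzHDInsightScriptAction `
    -Config $config `
    -Name "install-custom-tools" `
    -Uri "https://example.blob.core.windows.net/scripts/install-custom-tools.sh" `
    -NodeType WorkerNode

# Pass $config to New-AzHDInsightCluster along with your other cluster parameters.
```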
You can find the current component versions for HDInsight 4.0 and HDInsight 3.6 i
### Known issues #### Hive Warehouse Connector issue
-There is an issue for Hive Warehouse Connector in this release. The fix will be included in the next release. Existing clusters created before this release are not impacted. Avoid dropping and recreating the cluster if possible. Open support ticket if you need further help on this.
+There's an issue with the Hive Warehouse Connector in this release. The fix will be included in the next release. Existing clusters created before this release aren't impacted. Avoid dropping and recreating the cluster if possible. Open a support ticket if you need further help on this.
## Release date: 01/09/2020
No behavior changes for this release. To get ready for upcoming changes, see [Up
The following changes will happen in upcoming releases. #### Deprecation of Spark 2.1 and 2.2 in HDInsight 3.6 Spark cluster
-Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](./spark/migrate-versions.md).
+Starting July 1, 2020, customers won't be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.3 on HDInsight 3.6 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Spark 2.3 in HDInsight 4.0 Spark cluster
-Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4](./spark/migrate-versions.md).
+Starting July 1, 2020, customers won't be able to create new Spark clusters with Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Spark 2.4 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
#### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
-Starting July 1 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Kafka workloads to Azure HDInsight 4.0](./kafk).
+Starting July 1 2020, customers won't be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption. For more information, see [Migrate Apache Kafka workloads to Azure HDInsight 4.0](./kafk).
#### HBase 2.0 to 2.1.6 In the upcoming HDInsight 4.0 release, HBase version will be upgraded from version 2.0 to 2.1.6
F-series virtual machines (VMs) are a good choice to get started with HDInsight wi
From this release, G-series VMs are no longer offered in HDInsight. #### Dv1 virtual machine deprecation
-From this release, the use of Dv1 VMs with HDInsight is deprecated. Any customer request for Dv1 will be served with Dv2 automatically. There is no price difference between Dv1 and Dv2 VMs.
+From this release, the use of Dv1 VMs with HDInsight is deprecated. Any customer request for Dv1 will be served with Dv2 automatically. There's no price difference between Dv1 and Dv2 VMs.
### Behavior changes
A-series VMs could cause ESP cluster issues due to relatively low CPU and memory
HDInsight continues to make cluster reliability and performance improvements. ### Component version change
-There is no component version change for this release. You could find the current component versions for HDInsight 4.0 and HDInsight 3.6 [here](./hdinsight-component-versioning.md).
+There's no component version change for this release. You could find the current component versions for HDInsight 4.0 and HDInsight 3.6 [here](./hdinsight-component-versioning.md).
## Release Date: 08/07/2019
This release provides Hadoop Common 2.7.3 and the following Apache patches:
- [HDFS-11689](https://issues.apache.org/jira/browse/HDFS-11689): New exception thrown by DFSClient%isHDFSEncryptionEnabled broke hacky hive code. -- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN should not delete the block On "Too many open files" Exception.
+- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN shouldn't delete the block On "Too many open files" Exception.
- [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347): TestBalancerRPCDelay\#testBalancerRPCDelay fails frequently.
This release provides Kafka 1.0.0 and the following Apache patches.
- [KAFKA-6179](https://issues.apache.org/jira/browse/KAFKA-6179): RecordQueue.clear() does not clear MinTimestampTracker's maintained list. -- [KAFKA-6185](https://issues.apache.org/jira/browse/KAFKA-6185): Selector memory leak with high likelihood of OOM if there is a down conversion.
+- [KAFKA-6185](https://issues.apache.org/jira/browse/KAFKA-6185): Selector memory leak with high likelihood of OOM if there's a down conversion.
- [KAFKA-6190](https://issues.apache.org/jira/browse/KAFKA-6190): GlobalKTable never finishes restoring when consuming transactional messages.
In HDP-2.5.x and 2.6.x, we removed the "commons-httpclient" library from Mahout
- Previously compiled Mahout jobs will need to be recompiled in the HDP-2.5 or 2.6 environment. -- There is a small possibility that some Mahout jobs may encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, you may consider whether to manually install the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment.
+- There's a small possibility that some Mahout jobs may encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, consider manually installing the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment.
-- There is an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there is no way to resolve this issue except revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
+- There's an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there's no way to resolve this issue except to revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
#### Oozie
This release provides Phoenix 4.7.0 and the following Apache patches:
- [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240): ClassCastException from Pig loader. -- [PHOENIX-3452](https://issues.apache.org/jira/browse/PHOENIX-3452): NULLS FIRST/NULL LAST should not impact whether GROUP BY is order preserving.
+- [PHOENIX-3452](https://issues.apache.org/jira/browse/PHOENIX-3452): NULLS FIRST/NULL LAST shouldn't impact whether GROUP BY is order preserving.
- [PHOENIX-3469](https://issues.apache.org/jira/browse/PHOENIX-3469): Incorrect sort order for DESC primary key for NULLS LAST/NULLS FIRST.
This release provides Phoenix 4.7.0 and the following Apache patches:
- [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525): Integer overflow in GroupBy execution. -- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there is WHERE on pk column.
+- [PHOENIX-4560](https://issues.apache.org/jira/browse/PHOENIX-4560): ORDER BY with GROUP BY doesn't work if there's WHERE on pk column.
- [PHOENIX-4586](https://issues.apache.org/jira/browse/PHOENIX-4586): UPSERT SELECT doesn't take in account comparison operators for subqueries.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23406](https://issues.apache.org/jira/browse/SPARK-23406): Enable stream-stream self-joins for branch-2.3. -- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark should not warn \`metadata directory\` for a HDFS file path.
+- [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434): Spark shouldn't warn \`metadata directory\` for a HDFS file path.
- [SPARK-23436](https://issues.apache.org/jira/browse/SPARK-23436): Infer partition as Date only if it can be cast to Date.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23490](https://issues.apache.org/jira/browse/SPARK-23490): Check storage.locationUri with existing table in CreateTable. -- [SPARK-23524](https://issues.apache.org/jira/browse/SPARK-23524): Big local shuffle blocks should not be checked for corruption.
+- [SPARK-23524](https://issues.apache.org/jira/browse/SPARK-23524): Big local shuffle blocks shouldn't be checked for corruption.
- [SPARK-23525](https://issues.apache.org/jira/browse/SPARK-23525): Support ALTER TABLE CHANGE COLUMN COMMENT for external hive table. -- [SPARK-23553](https://issues.apache.org/jira/browse/SPARK-23553): Tests should not assume the default value of \`spark.sql.sources.default\`.
+- [SPARK-23553](https://issues.apache.org/jira/browse/SPARK-23553): Tests shouldn't assume the default value of \`spark.sql.sources.default\`.
- [SPARK-23569](https://issues.apache.org/jira/browse/SPARK-23569): Allow pandas\_udf to work with python3 style type-annotated functions.
This release provides Spark 2.3.0 and the following Apache patches:
- [SPARK-23624](https://issues.apache.org/jira/browse/SPARK-23624): Revise doc of method pushFilters in Datasource V2. -- [SPARK-23628](https://issues.apache.org/jira/browse/SPARK-23628): calculateParamLength should not return 1 + num of expressions.
+- [SPARK-23628](https://issues.apache.org/jira/browse/SPARK-23628): calculateParamLength shouldn't return 1 + num of expressions.
- [SPARK-23630](https://issues.apache.org/jira/browse/SPARK-23630): Allow user's hadoop conf customizations to take effect.
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93159 | [OOZIE-3139](https://issues.apache.org/jira/browse/OOZIE-3139) | Oozie validates workflow incorrectly | | BUG-93936 | [ATLAS-2289](https://issues.apache.org/jira/browse/ATLAS-2289) | Embedded kafka/zookeeper server start/stop code to be moved out of KafkaNotification implementation | | BUG-93942 | [ATLAS-2312](https://issues.apache.org/jira/browse/ATLAS-2312) | Use ThreadLocal DateFormat objects to avoid simultaneous use from multiple threads |
-| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag which at 25+ position in the tag list in both Flat and Tree structure needs a refresh to remove the tag from the list. |
+| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag that is at position 25+ in the tag list, in both flat and tree structure, needs a refresh to remove the tag from the list. |
| BUG-94618 | [YARN-5037](https://issues.apache.org/jira/browse/YARN-5037), [YARN-7274](https://issues.apache.org/jira/browse/YARN-7274) | Ability to disable elasticity at leaf queue level | | BUG-94901 | [HBASE-19285](https://issues.apache.org/jira/browse/HBASE-19285) | Add per-table latency histograms | | BUG-95259 | [HADOOP-15185](https://issues.apache.org/jira/browse/HADOOP-15185), [HADOOP-15186](https://issues.apache.org/jira/browse/HADOOP-15186) | Update adls connector to use the current version of ADLS SDK | | BUG-95619 | [HIVE-18551](https://issues.apache.org/jira/browse/HIVE-18551) | Vectorization: VectorMapOperator tries to write too many vector columns for Hybrid Grace |
-| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark should not warn \`metadata directory\` for a HDFS file path |
+| BUG-97223 | [SPARK-23434](https://issues.apache.org/jira/browse/SPARK-23434) | Spark shouldn't warn \`metadata directory\` for a HDFS file path |
**Performance**
Fixed issues represent selected issues that were previously logged via Hortonwor
||-|--| | BUG-100180 | [CALCITE-2232](https://issues.apache.org/jira/browse/CALCITE-2232) | Assertion error on AggregatePullUpConstantsRule while adjusting Aggregate indices | | BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to +ve |
-| BUG-100834 | [PHOENIX-4658](https://issues.apache.org/jira/browse/PHOENIX-4658) | IllegalStateException: requestSeek cannot be called on ReversedKeyValueHeap |
+| BUG-100834 | [PHOENIX-4658](https://issues.apache.org/jira/browse/PHOENIX-4658) | IllegalStateException: requestSeek can't be called on ReversedKeyValueHeap |
| BUG-102078 | [HIVE-17978](https://issues.apache.org/jira/browse/HIVE-17978) | TPCDS queries 58 and 83 generate exceptions in vectorization. | | BUG-92483 | [HIVE-17900](https://issues.apache.org/jira/browse/HIVE-17900) | analyze stats on columns triggered by Compactor generates malformed SQL with &gt; 1 partition column | | BUG-93135 | [HIVE-15874](https://issues.apache.org/jira/browse/HIVE-15874), [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Hive query returning wrong results when set hive.groupby.orderby.position.alias to true |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-97178 | [ATLAS-2467](https://issues.apache.org/jira/browse/ATLAS-2467) | Dependency upgrade for Spring and nimbus-jose-jwt | | BUG-97180 | N/A | Upgrade Nimbus-jose-jwt | | BUG-98038 | [HIVE-18788](https://issues.apache.org/jira/browse/HIVE-18788) | Clean up inputs in JDBC PreparedStatement |
-| BUG-98353 | [HADOOP-13707](https://issues.apache.org/jira/browse/HADOOP-13707) | Revert of "If kerberos is enabled while HTTP SPNEGO isn't configured, some links cannot be accessed" |
+| BUG-98353 | [HADOOP-13707](https://issues.apache.org/jira/browse/HADOOP-13707) | Revert of "If kerberos is enabled while HTTP SPNEGO isn't configured, some links can't be accessed" |
| BUG-98372 | [HBASE-13848](https://issues.apache.org/jira/browse/HBASE-13848) | Access InfoServer SSL passwords through Credential Provider API | | BUG-98385 | [ATLAS-2500](https://issues.apache.org/jira/browse/ATLAS-2500) | Add more headers to Atlas response. | | BUG-98564 | [HADOOP-14651](https://issues.apache.org/jira/browse/HADOOP-14651) | Update okhttp version to 2.7.5 |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93361 | [HIVE-12360](https://issues.apache.org/jira/browse/HIVE-12360) | Bad seek in uncompressed ORC with predicate pushdown | | BUG-93426 | [CALCITE-2086](https://issues.apache.org/jira/browse/CALCITE-2086) | HTTP/413 in certain circumstances due to large Authorization headers | | BUG-93429 | [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240) | ClassCastException from Pig loader |
-| BUG-93485 | N/A | Cannot get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
+| BUG-93485 | N/A | Can't get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP |
| BUG-93512 | [PHOENIX-4466](https://issues.apache.org/jira/browse/PHOENIX-4466) | java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data | | BUG-93550 | N/A | Zeppelin %spark.r does not work with spark1 due to scala version mismatch | | BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-95013 | [HIVE-18488](https://issues.apache.org/jira/browse/HIVE-18488) | LLAP ORC readers are missing some null checks | | BUG-95077 | [HIVE-14205](https://issues.apache.org/jira/browse/HIVE-14205) | Hive doesn't support union type with AVRO file format |
-| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend should not trust a partially trusted channel |
+| BUG-95200 | [HDFS-13061](https://issues.apache.org/jira/browse/HDFS-13061) | SaslDataTransferClient\#checkTrustAndSend shouldn't trust a partially trusted channel |
| BUG-95201 | [HDFS-13060](https://issues.apache.org/jira/browse/HDFS-13060) | Adding a BlacklistBasedTrustedChannelResolver for TrustedChannelResolver | | BUG-95284 | [HBASE-19395](https://issues.apache.org/jira/browse/HBASE-19395) | \[branch-1\] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE | | BUG-95301 | [HIVE-18517](https://issues.apache.org/jira/browse/HIVE-18517) | Vectorization: Fix VectorMapOperator to accept VRBs and check vectorized flag correctly to support LLAP Caching |
Fixed issues represent selected issues that were previously logged via Hortonwor
|**Apache Component**|**Apache JIRA**|**Summary**|**Details**| |--|--|--|--| |**Spark 2.3** |**N/A** |**Changes as documented in the Apache Spark release notes** |- There's a "Deprecation" document and a "Change of behavior" guide, https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations<br /><br />- For SQL part, there's another detailed "Migration" guide (from 2.2 to 2.3), https://spark.apache.org/docs/latest/sql-programming-guide.html#upgrading-from-spark-sql-22-to-23|
-|Spark |[**HIVE-12505**](https://issues.apache.org/jira/browse/HIVE-12505) |Spark job completes successfully but there is an HDFS disk quota full error |**Scenario:** Running **insert overwrite** when a quota is set on the Trash folder of the user who runs the command.<br /><br />**Previous Behavior:** The job succeeds even though it fails to move the data to the Trash. The result can wrongly contain some of the data previously present in the table.<br /><br />**New Behavior:** When the move to the Trash folder fails, the files are permanently deleted.|
+|Spark |[**HIVE-12505**](https://issues.apache.org/jira/browse/HIVE-12505) |Spark job completes successfully but there's an HDFS disk quota full error |**Scenario:** Running **insert overwrite** when a quota is set on the Trash folder of the user who runs the command.<br /><br />**Previous Behavior:** The job succeeds even though it fails to move the data to the Trash. The result can wrongly contain some of the data previously present in the table.<br /><br />**New Behavior:** When the move to the Trash folder fails, the files are permanently deleted.|
|**Kafka 1.0**|**N/A**|**Changes as documented in the Apache Kafka release notes** |https://kafka.apache.org/10/documentation.html#upgrade_100_notable| |**Hive/ Ranger** | |Additional Ranger Hive policies required for INSERT OVERWRITE |**Scenario:** Additional Ranger Hive policies are required for **INSERT OVERWRITE**<br /><br />**Previous behavior:** Hive **INSERT OVERWRITE** queries succeed as usual.<br /><br />**New behavior:** Hive **INSERT OVERWRITE** queries are unexpectedly failing after upgrading to HDP-2.6.x with the error:<br /><br />Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user jdoe does not have WRITE privilege on /tmp/\*(state=42000,code=40000)<br /><br />As of HDP-2.6.0, Hive **INSERT OVERWRITE** queries require a Ranger URI policy to allow write operations, even if the user has write privilege granted through HDFS policy.<br /><br />**Workaround/Expected Customer Action:**<br /><br />1. Create a new policy under the Hive repository.<br />2. In the dropdown where you see Database, select URI.<br />3. Update the path (Example: /tmp/*)<br />4. Add the users and group and save.<br />5. Retry the insert query.| |**HDFS**|**N/A** |HDFS should support multiple KMS URIs |**Previous Behavior:** dfs.encryption.key.provider.uri property was used to configure the KMS provider path.<br /><br />**New Behavior:** dfs.encryption.key.provider.uri is now deprecated in favor of hadoop.security.key.provider.path to configure the KMS provider path.|
Fixed issues represent selected issues that were previously logged via Hortonwor
1. Find out PermissionList.js file under /usr/hdp/current/ranger-admin
- 2. Find out definition of renderPolicyCondtion function (line no:404).
+ 2. Find out definition of renderPolicyCondtion function (line no: 404).
- 3. Remove following line from that function i.e under display function(line no:434)
+ 3. Remove the following line from that function, i.e., under the display function (line no: 434)
val = \_.escape(val);//Line No:460
Fixed issues represent selected issues that were previously logged via Hortonwor
**HDInsight Integration with ADLS Gen 2: User directories and permissions issue with ESP clusters** 1. Home directories for users are not getting created on Head Node 1. Workaround is to create these manually and change ownership to the respective user's UPN.
- 2. Permissions on /hdp is currently not set to 751. This needs to be set to
+ 2. Permissions on /hdp are currently not set to 751. This needs to be set to
a. chmod 751 /hdp b. chmod -R 755 /hdp/apps ### Deprecation -- **OMS Portal:** We have removed the link from HDInsight resource page that was pointing to OMS portal. Azure Monitor logs initially used its own portal called the OMS portal to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal where it will continue to be developed. HDInsight has deprecated the support for OMS portal. Customers will use HDInsight Azure Monitor logs integration in Azure portal.
+- **OMS Portal:** We've removed the link from the HDInsight resource page that pointed to the OMS portal. Azure Monitor logs initially used its own portal, called the OMS portal, to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal, where it will continue to be developed. HDInsight has deprecated support for the OMS portal. Customers will use the HDInsight Azure Monitor logs integration in the Azure portal.
- **Spark 2.3:** [Spark Release 2.3.0 deprecations](https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations)
hdinsight Hdinsight Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-upgrade-cluster.md
description: Learn guidelines to migrate your Azure HDInsight cluster to a newer
Previously updated : 09/19/2022 Last updated : 10/25/2022 # Migrate HDInsight cluster to a newer version
For more information about database backup and restore, see [Recover a database
As mentioned above, Microsoft recommends that HDInsight clusters be regularly migrated to the latest version in order to take advantage of new features and fixes. See the following list of reasons we would request that a cluster be deleted and redeployed:
-* The cluster version is [Retired](hdinsight-retired-versions.md) or in [Basic support](hdinsight-36-component-versioning.md) and you're having a cluster issue that would be resolved with a newer version.
+* The cluster version is [Retired](hdinsight-retired-versions.md), or you're having a cluster issue that would be resolved with a newer version.
* The root cause of a cluster issue is determined to relate to an undersized VM. [View Microsoft's recommended node configuration](hdinsight-supported-node-configuration.md). * A customer opens a support case and the Microsoft engineering team determines the issue has already been fixed in a newer cluster version. * A default metastore database (Ambari, Hive, Oozie, Ranger) has reached its utilization limit. Microsoft will ask you to recreate the cluster using a [custom metastore](hdinsight-use-external-metadata-stores.md#custom-metastore) database.
hdinsight Hdinsight Version Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-version-release.md
Title: HDInsight 4.0 overview - Azure
description: Compare HDInsight 3.6 to HDInsight 4.0 features, limitations, and upgrade recommendations. Previously updated : 10/13/2022 Last updated : 10/25/2022 # Azure HDInsight 4.0 overview
There's no supported upgrade path from previous versions of HDInsight to HDInsig
* [HBase migration guide](./hbase/apache-hbase-migrate-new-version.md) * [Hive migration guide](./interactive-query/apache-hive-migrate-workloads.md) * [Kafka migration guide](./kafk)
-* [Spark migration guide](./spark/migrate-versions.md)
* [Azure HDInsight Documentation](index.yml) * [Release Notes](hdinsight-release-notes.md)
hdinsight Apache Kafka Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-quickstart-powershell.md
New-AzStorageContainer -Name $containerName -Context $storageContext
Create an Apache Kafka on HDInsight cluster with [New-AzHDInsightCluster](/powershell/module/az.HDInsight/New-azHDInsightCluster). ```azurepowershell-interactive
-# Create a Kafka 1.1 cluster
+# Create a Kafka 2.4.0 cluster
$clusterName = Read-Host -Prompt "Enter the name of the Kafka cluster" $httpCredential = Get-Credential -Message "Enter the cluster login credentials" -UserName "admin" $sshCredentials = Get-Credential -Message "Enter the SSH user credentials" -UserName "sshuser"
$clusterType="Kafka"
$disksPerNode=2 $kafkaConfig = New-Object "System.Collections.Generic.Dictionary``2[System.String,System.String]"
-$kafkaConfig.Add("kafka", "1.1")
+$kafkaConfig.Add("kafka", "2.4.0")
New-AzHDInsightCluster ` -ResourceGroupName $resourceGroup `
hdinsight Migrate Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/migrate-versions.md
- Title: Migrate Apache Spark 2.1 or 2.2 workloads to 2.3 or 2.4 - Azure HDInsight
-description: Learn how to migrate Apache Spark 2.1 and 2.2 to 2.3 or 2.4.
-- Previously updated : 08/28/2022--
-# Migrate Apache Spark 2.1 and 2.2 workloads to 2.3 and 2.4
-
-This document explains how to migrate Apache Spark workloads on Spark 2.1 and 2.2 to 2.3 or 2.4.
-
-As discussed in the [Release Notes](../hdinsight-release-notes-archive.md), starting July 1, 2020, the following cluster configurations will not be supported and customers will not be able to create new clusters with these configurations:
-
-Existing clusters in these configurations will run as-is without support from Microsoft. If you are on Spark 2.1 or 2.2 on HDInsight 3.6, move to Spark 2.3 on HDInsight 3.6 by June 30 2020 to avoid potential system/support interruption. If you are on Spark 2.3 on an HDInsight 4.0 cluster, move to Spark 2.4 on HDInsight 4.0 by June 30 2020 to avoid potential system/support interruption.
-
-For general information about migrating an HDInsight cluster from 3.6 to 4.0, see [Migrate HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md). For general information about migrating to a newer version of Apache Spark, see [Apache Spark: Versioning Policy](https://spark.apache.org/versioning-policy.html).
-
-## Guidance on Spark version upgrades on HDInsight
-
-| Upgrade scenario | Mechanism | Things to consider | Spark/Hive integration |
-||--|--||
-|HDInsight 3.6 Spark 2.1 to HDInsight 3.6 Spark 2.3| Recreate clusters with HDInsight Spark 2.3 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) <br><br> [Apache Spark: Upgrading From Spark SQL 2.1 to 2.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-21-to-22) | No Change |
-|HDInsight 3.6 Spark 2.2 to HDInsight 3.6 Spark 2.3 | Recreate clusters with HDInsight Spark 2.3 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) | No Change |
-| HDInsight 3.6 Spark 2.1 to HDInsight 4.0 Spark 2.4 | Recreate clusters with HDInsight 4.0 Spark 2.4 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.3 to 2.4](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-23-to-24) <br><br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) <br><br> [Apache Spark: Upgrading From Spark SQL 2.1 to 2.2](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-21-to-22) | Spark and Hive integration has changed in HDInsight 4.0. <br><br> In HDInsight 4.0, Spark and Hive use independent catalogs for accessing SparkSQL or Hive tables. A table created by Spark lives in the Spark catalog. A table created by Hive lives in the Hive catalog. This behavior is different than HDInsight 3.6 where Hive and Spark shared common catalog. Hive and Spark Integration in HDInsight 4.0 relies on Hive Warehouse Connector (HWC). HWC works as a bridge between Spark and Hive. Learn about Hive Warehouse Connector. <br> In HDInsight 4.0 if you would like to Share the metastore between Hive and Spark, you can do so by changing the property metastore.catalog.default to hive in your Spark cluster. You can find this property in Ambari Advanced spark2-hive-site-override. It's important to understand that sharing of metastore only works for external hive tables, this will not work if you have internal/managed hive tables or ACID tables. <br><br>Read [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](../interactive-query/apache-hive-migrate-workloads.md) for more information.<br><br> |
-| HDInsight 3.6 Spark 2.2 to HDInsight 4.0 Spark 2.4 | Recreate clusters with HDInsight 4.0 Spark 2.4 | Review the following articles: <br> [Apache Spark: Upgrading From Spark SQL 2.3 to 2.4](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-23-to-24) <br><br> [Apache Spark: Upgrading From Spark SQL 2.2 to 2.3](https://spark.apache.org/docs/latest/sql-migration-guide.html#upgrading-from-spark-sql-22-to-23) | Spark and Hive integration has changed in HDInsight 4.0. <br><br> In HDInsight 4.0, Spark and Hive use independent catalogs for accessing SparkSQL or Hive tables. A table created by Spark lives in the Spark catalog. A table created by Hive lives in the Hive catalog. This behavior is different than HDInsight 3.6 where Hive and Spark shared common catalog. Hive and Spark Integration in HDInsight 4.0 relies on Hive Warehouse Connector (HWC). HWC works as a bridge between Spark and Hive. Learn about Hive Warehouse Connector. <br> In HDInsight 4.0 if you would like to Share the metastore between Hive and Spark, you can do so by changing the property metastore.catalog.default to hive in your Spark cluster. You can find this property in Ambari Advanced spark2-hive-site-override. It's important to understand that sharing of metastore only works for external hive tables, this will not work if you have internal/managed hive tables or ACID tables. <br><br>Read [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](../interactive-query/apache-hive-migrate-workloads.md) for more information.|
-
-## Next steps
-
-* [Migrate HDInsight cluster to a newer version](../hdinsight-upgrade-cluster.md)
-* [Migrate Azure HDInsight 3.6 Hive workloads to HDInsight 4.0](../interactive-query/apache-hive-migrate-workloads.md)
hdinsight Spark Dotnet Version Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/spark-dotnet-version-update.md
- Title: Updating .NET for Apache Spark to version v1.0 in HDI
-description: Learn about updating .NET for Apache Spark version to 1.0 in HDI and how that affects your existing code and clusters.
---- Previously updated : 07/22/2022--
-# Updating .NET for Apache Spark to version v1.0 in HDInsight
-
-This document talks about the first major version of [.NET for Apache Spark](https://github.com/dotnet/spark), and how it might impact your current production pipelines in HDInsight clusters.
-
-## About .NET for Apache Spark version 1.0.0
-
-This is the first [major official release](https://github.com/dotnet/spark/releases/tag/v1.0.0) of .NET for Apache Spark and provides DataFrame API completeness for Spark 2.4.x and Spark 3.0.x along with other features. For a complete list of all features, improvements and bug fixes, see the official [v1.0.0 release notes](https://github.com/dotnet/spark/blob/master/docs/release-notes/1.0.0/release-1.0.0.md).
-Another important thing to note is that this version is **not** compatible with prior versions of `Microsoft.Spark` and `Microsoft.Spark.Worker`. Check out the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) if you're planning to upgrade your .NET for Apache Spark application to be compatible with v1.0.0.
-
-## Using .NET for Apache Spark v1.0 in HDInsight
-
-While current HDI clusters won't be affected (that is, they'll still have the same version as before), newly created HDI clusters will carry this latest v1.0.0 version of .NET for Apache Spark. What this means if:
--- **You have an older HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), you'll have to update the `Microsoft.Spark.Worker` version on your HDI cluster. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](#changing-net-for-apache-spark-version-on-hdinsight).
-If you don't want to update the current version of .NET for Apache Spark in your application, no further steps are necessary.
--- **You have a new HDI cluster**: If you want to upgrade your Spark .NET application to v1.0.0 (recommended), no steps are needed to change the worker on HDI, however you'll have to refer to the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) to understand the steps needed to update your code and pipelines.
-If you don't want to change the current version of .NET for Apache Spark in your application, you need to change the version on your HDI cluster from v1.0 (default on new clusters) to whichever version you're using. For more information, see the [changing versions of .NET for Apache Spark on HDI cluster section](spark-dotnet-version-update.md#changing-net-for-apache-spark-version-on-hdinsight).
-
-## Changing .NET for Apache Spark version on HDInsight
-
-### Deploy Microsoft.Spark.Worker
-
-`Microsoft.Spark.Worker` is a backend component that lives on the individual worker nodes of your Spark cluster. When you want to execute a C# UDF (user-defined function), Spark needs to understand how to launch the .NET CLR to execute this UDF. `Microsoft.Spark.Worker` provides a collection of classes to Spark that enable this functionality. Select the worker version depending on the version of .NET for Apache Spark you want to deploy on the HDI cluster.
-
-1. Download the Microsoft.Spark.Worker Linux release of your particular version. For example, if you want `.NET for Apache Spark v1.0.0`, you'd download [Microsoft.Spark.Worker.netcoreapp3.1.linux-x64-1.0.0.tar.gz](https://github.com/dotnet/spark/releases/tag/v1.0.0).
-
-2. Download [install-worker.sh](https://github.com/dotnet/spark/blob/master/deployment/install-worker.sh) script to install the worker binaries downloaded in Step 1 to all the worker nodes of your HDI cluster.
-
-3. Upload the above mentioned files to the Azure Storage account your cluster has access to. For more information, see [.NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#upload-files-to-azure) for more details.
-
-4. Run the `install-worker.sh` script on all worker nodes of your cluster, using Script actions. For more information, see [.NET for Apache Spark HDI deployment article](/dotnet/spark/tutorials/hdinsight-deployment#run-the-hdinsight-script-action).
-
-### Update your application to use specific version
-
-You can update your .NET for Apache Spark application to use a specific version by choosing the required version of the [Microsoft.Spark NuGet package](https://www.nuget.org/packages/Microsoft.Spark/) in your project. Be sure to check out the release notes of the particular version and the [migration guide](https://github.com/dotnet/spark/blob/master/docs/migration-guide.md#upgrading-from-microsoftspark-0x-to-10) as mentioned above, if choosing to update your application to v1.0.0.
-
-## FAQs
-
-### Will my existing HDI cluster with version < 1.0.0 start failing with the new release?
-
-Existing HDI clusters will continue to have the same previous version for .NET for Apache Spark and your existing application (having previous version of Spark .NET) won't be affected.
-
-## Next steps
-
-[Deploy your .NET for Apache Spark application on HDInsight](/dotnet/spark/tutorials/hdinsight-deployment)
healthcare-apis Export Dicom Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-dicom-files.md
The only setting is the list of identifiers to export.
| :- | :- | : | :- | | `Values` | Yes | | A list of one or more DICOM studies, series, and/or SOP instances identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"`. |
-### Destination Settings
+### Destination settings
The connection to the Azure Blob storage account is specified with a `BlobContainerUri`.
Content-Type: application/json
"lastUpdatedTime": "2022-09-08T16:41:01.2776644Z", "status": "completed", "results": {
- "errorHref": "<container uri>/4853cda8c05c44e497d2bc071f8e92c4/errors.log",
+ "errorHref": "https://dicomexport.blob.core.windows.net/export/4853cda8c05c44e497d2bc071f8e92c4/errors.log",
"exported": 1000, "skipped": 3 }
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standa
| Enhancements/Improvements | Related information | | : | :- |
-| Export is GA |The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](https://github.com/microsoft/dicom-server/blob/main/docs/how-to-guides/export-data.md). |
+| Export is GA |The export feature for the DICOM service is now generally available. Export enables a user-supplied list of studies, series, and/or instances to be exported in bulk to an Azure Storage account. Learn more about the [export feature](dicom/export-dicom-files.md). |
|Improved deployment performance |Performance improvements have cut the time to deploy new instances of the DICOM service by more than 55% at the 50th percentile. | | Reduced strictness when validating STOW requests |Some customers have run into issues storing DICOM files that do not perfectly conform to the specification. To enable those files to be stored in the DICOM service, we have reduced the strictness of the validation performed on STOW. <p>The service will now accept the following: <p><ul><li>DICOM UIDs that contain trailing whitespace <li>IS, DS, SV, and UV VRs that are not valid numbers<li>Invalid private creator tags |
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/about-iot-dps.md
Title: Overview of the Microsoft Azure IoT Hub Device Provisioning Service
-description: Describes device provisioning in Azure with the Device Provisioning Service (DPS) and IoT Hub
+ Title: Overview of Azure IoT Hub Device Provisioning Service
+description: Describes production scale device provisioning in Azure with the Device Provisioning Service (DPS) and IoT Hub
Previously updated : 11/22/2021 Last updated : 10/14/2022
Microsoft Azure provides a rich set of integrated public cloud services for all your IoT solution needs. The IoT Hub Device Provisioning Service (DPS) is a helper service for IoT Hub that enables zero-touch, just-in-time provisioning to the right IoT hub without requiring human intervention. DPS enables the provisioning of millions of devices in a secure and scalable manner.
+Many of the manual steps traditionally involved in provisioning are automated with DPS to reduce the time to deploy IoT devices and lower the risk of manual error. The following diagram describes what goes on behind the scenes to get a device provisioned. The first steps are manual; all of the following steps are automated.
++
+Before the device provisioning flow begins, there are two manual steps to prepare. On the device side, the device manufacturer prepares the device for provisioning by preconfiguring it with its authentication credentials and assigned Device Provisioning Service ID and endpoint. On the cloud side, you or the device manufacturer prepares the Device Provisioning Service instance with individual enrollments and enrollment groups that identify valid devices and define how they should be provisioned.
+
+Once the device and cloud are set up for provisioning, the following steps kick off automatically as soon as the device powers on for the first time:
+
+1. When the device first powers on, it connects to the DPS endpoint and presents its authentication credentials.
+1. The DPS instance checks the identity of the device against its enrollment list. Once the device identity is verified, DPS assigns the device to an IoT hub and registers it in the hub.
+1. The DPS instance receives the device ID and registration information from the assigned hub and passes that information back to the device.
+1. The device uses its registration information to connect directly to its assigned IoT hub and authenticate.
+1. Once authenticated, the device and IoT hub begin communicating directly. The DPS instance has no further role as an intermediary unless the device needs to reprovision.
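For illustration, the following sketch walks the same flow using the DPS device REST API with symmetric key attestation. The ID scope, registration ID, and device key are placeholders, and production devices would typically use one of the Azure IoT device SDKs instead.

```powershell
# Hypothetical sketch of the flow above via the DPS REST API (symmetric key attestation).
$idScope        = "0ne00000000"                      # placeholder ID scope
$registrationId = "my-device-001"                    # placeholder registration ID
$deviceKey      = "<base64 device symmetric key>"    # placeholder key

# Build a SAS token scoped to this registration (the credentials presented in step 1).
$resource = "$idScope/registrations/$registrationId"
$expiry   = [DateTimeOffset]::UtcNow.AddHours(1).ToUnixTimeSeconds()
$encoded  = [uri]::EscapeDataString($resource)
$hmac     = [System.Security.Cryptography.HMACSHA256]::new([Convert]::FromBase64String($deviceKey))
$sig      = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes("$encoded`n$expiry")))
$auth     = "SharedAccessSignature sr=$encoded&sig=$([uri]::EscapeDataString($sig))&se=$expiry&skn=registration"

# Steps 1-2: connect to the DPS endpoint and start registration.
$reg = Invoke-RestMethod -Method Put `
    -Uri "https://global.azure-devices-provisioning.net/$idScope/registrations/$registrationId/register?api-version=2019-03-31" `
    -Headers @{ Authorization = $auth } -ContentType "application/json" `
    -Body (@{ registrationId = $registrationId } | ConvertTo-Json)

# Step 3: poll until DPS reports the assigned IoT hub and registered device ID.
do {
    Start-Sleep -Seconds 3
    $reg = Invoke-RestMethod -Method Get `
        -Uri "https://global.azure-devices-provisioning.net/$idScope/registrations/$registrationId/operations/$($reg.operationId)?api-version=2019-03-31" `
        -Headers @{ Authorization = $auth }
} while ($reg.status -eq "assigning")

# Steps 4-5: the device would now connect directly to this hub with this device ID.
$reg.registrationState.assignedHub
$reg.registrationState.deviceId
```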
+ ## When to use Device Provisioning Service There are many provisioning scenarios in which DPS is an excellent choice for getting devices connected and configured to IoT Hub, such as:
There are many provisioning scenarios in which DPS is an excellent choice for ge
* Reprovisioning based on a change in the device * Rolling the keys used by the device to connect to IoT Hub (when not using X.509 certificates to connect)
-Provisioning of nested edge devices (parent/child hierarchies) is not currently supported by DPS.
-
-## Behind the scenes
-
-All the scenarios listed in the previous section can be done using DPS for zero-touch provisioning with the same flow. Many of the manual steps traditionally involved in provisioning are automated with DPS to reduce the time to deploy IoT devices and lower the risk of manual error. The following section describes what goes on behind the scenes to get a device provisioned. The first step is manual, all of the following steps are automated.
-
-![Basic provisioning flow](./media/about-iot-dps/dps-provisioning-flow.png)
-
-1. Device manufacturer adds the device registration information to the enrollment list in the Azure portal.
-2. Device contacts the DPS endpoint set at the factory. The device passes the identifying information to DPS to prove its identity.
-3. DPS validates the identity of the device by validating the registration ID and key against the enrollment list entry using either a nonce challenge ([Trusted Platform Module](https://trustedcomputinggroup.org/work-groups/trusted-platform-module/)) or standard X.509 verification (X.509).
-4. DPS registers the device with an IoT hub and populates the device's [desired twin state](../iot-hub/iot-hub-devguide-device-twins.md).
-5. The IoT hub returns device ID information to DPS.
-6. DPS returns the IoT hub connection information to the device. The device can now start sending data directly to the IoT hub.
-7. The device connects to IoT hub.
-8. The device gets the desired state from its device twin in IoT hub.
+Provisioning of nested IoT Edge devices (parent/child hierarchies) is not currently supported by DPS.
## Provisioning process
This step is about configuring the cloud for proper automatic provisioning. Gene
There is a one-time initial setup of the provisioning that must occur, which is usually handled by the solution operator. Once the provisioning service is configured, it does not have to be modified unless the use case changes.
-After the service has been configured for automatic provisioning, it must be prepared to enroll devices. This step is done by the device operator, who knows the desired configuration of the device(s) and is in charge of making sure the provisioning service can properly attest to the device's identity when it comes looking for its IoT hub. The device operator takes the identifying key information from the manufacturer and adds it to the enrollment list. There can be subsequent updates to the enrollment list as new entries are added or existing entries are updated with the latest information about the devices.
+After the service has been configured for automatic provisioning, it must be prepared to enroll devices. This step is done by the device operator, who knows the desired configuration of the device(s) and is in charge of making sure the provisioning service can properly attest to the device's identity when it looks for its IoT hub. The device operator takes the identifying key information from the manufacturer and adds it to the enrollment list. There can be subsequent updates to the enrollment list as new entries are added or existing entries are updated with the latest information about the devices.
## Registration and provisioning
DPS has many features, making it ideal for provisioning devices.
* **Secure attestation** support for both X.509 and TPM-based identities. * **Enrollment list** containing the complete record of devices/groups of devices that may at some point register. The enrollment list contains information about the desired configuration of the device once it registers, and it can be updated at any time.
-* **Multiple allocation policies** to control how DPS assigns devices to IoT hubs in support of your scenarios: Lowest latency, evenly weighted distribution (default), and static configuration via the enrollment list. Latency is determined using the same method as [Traffic Manager](../traffic-manager/traffic-manager-routing-methods.md#performance).
+* **Multiple allocation policies** to control how DPS assigns devices to IoT hubs in support of your scenarios: Lowest latency, evenly weighted distribution (default), and static configuration. Latency is determined using the same method as [Traffic Manager](../traffic-manager/traffic-manager-routing-methods.md#performance). Custom allocation, which lets you implement your own allocation policies via webhooks hosted in Azure Functions, is also supported.
* **Monitoring and diagnostics logging** to make sure everything is working properly. * **Multi-hub support** allows DPS to assign devices to more than one IoT hub. DPS can talk to hubs across multiple Azure subscriptions. * **Cross-region support** allows DPS to assign devices to IoT hubs in other regions. * **Encryption for data at rest** allows data in DPS to be encrypted and decrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant.
-You can learn more about the concepts and features involved in device provisioning by reviewing the [DPS terminology](concepts-service.md) topic along with the other conceptual topics in the same section.
+You can learn more about the concepts and features involved in device provisioning by reviewing the [DPS terminology](concepts-service.md) article along with the other conceptual articles in the same section.
## Cross-platform support
-Just like all Azure IoT services, DPS works cross-platform with a variety of operating systems. Azure offers open-source SDKs in a variety of [languages](https://github.com/Azure/azure-iot-sdks) to facilitate connecting devices and managing the service. DPS supports the following protocols for connecting devices:
+Just like all Azure IoT services, DPS works cross-platform with various operating systems. Azure offers open-source SDKs in various [languages](https://github.com/Azure/azure-iot-sdks) to facilitate connecting devices and managing the service. DPS supports the following protocols for connecting devices:
* HTTPS * AMQP
iot-dps Concepts Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-deploy-at-scale.md
An important part of the overall deployment is monitoring the solution end-to-en
## Next steps -- [Provision devices across load-balanced IoT Hubs](tutorial-provision-multiple-hubs.md)
+- [Provision devices across IoT Hubs](how-to-use-allocation-policies.md)
- [Retry timing](https://github.com/Azure/azure-sdk-for-c/blob/main/sdk/docs/iot/mqtt_state_machine.md#retry-timing) when retrying operations
iot-dps Concepts Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-service.md
Title: Terminology used with Azure IoT Hub Device Provisioning Service | Microso
description: Describes common terminology used with the Device Provisioning Service (DPS) and IoT Hub Previously updated : 09/18/2019 Last updated : 10/24/2022
This article gives an overview of the provisioning concepts most applicable to m
## Service operations endpoint
-The service operations endpoint is the endpoint for managing the service settings and maintaining the enrollment list. This endpoint is only used by the service administrator; it is not used by devices.
+The service operations endpoint is the endpoint for managing the service settings and maintaining the enrollment list. This endpoint is only used by the service administrator; it isn't used by devices.
## Device provisioning endpoint
The device provisioning endpoint is the single endpoint all devices use for auto
## Linked IoT hubs
-The Device Provisioning Service can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to an instance of the Device Provisioning Service gives the service read/write permissions to the IoT hub's device registry; with the link, a Device Provisioning Service can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your provisioning service.
+DPS can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance gives the service read/write permissions to the IoT hub's device registry. With these permissions, DPS can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your DPS instance. Settings on a linked IoT hub, for example, the allocation weight setting, determine how it participates in allocation policies.
## Allocation policy
-The service-level setting that determines how Device Provisioning Service assigns devices to an IoT hub. There are four supported allocation policies:
+Allocation policies determine how DPS assigns devices to an IoT hub. Each DPS instance has a default allocation policy, but this policy can be overridden by an allocation policy set on an enrollment. Only IoT hubs [linked](#linked-iot-hubs) to the DPS instance can participate in allocation.
-* **Evenly weighted distribution**: linked IoT hubs are equally likely to have devices provisioned to them. The default setting. If you are provisioning devices to only one IoT hub, you can keep this setting.
+There are four supported allocation policies:
-* **Lowest latency**: devices are provisioned to an IoT hub with the lowest latency to the device. If multiple linked IoT hubs would provide the same lowest latency, the provisioning service hashes devices across those hubs
+* **Evenly weighted distribution**: devices are provisioned to an IoT hub using a weighted hash. By default, linked IoT hubs have the same allocation weight setting, so they're equally likely to have devices provisioned to them. The allocation weight of an IoT hub may be adjusted to increase or decrease its likelihood of being assigned. This is the default allocation policy for a DPS instance. If you're provisioning devices to only one IoT Hub, we recommend using this policy.
-* **Static configuration via the enrollment list**: specification of the desired IoT hub in the enrollment list takes priority over the service-level allocation policy.
+* **Lowest latency**: devices are provisioned to an IoT hub with the lowest latency to the device. If multiple linked IoT hubs would provide the same lowest latency, DPS hashes devices across those IoT hubs based on their configured allocation weight.
-* **Custom (Use Azure Function)**: A [custom allocation policy](concepts-custom-allocation.md) gives you more control over how devices are assigned to an IoT hub. This is accomplished by using custom code in an Azure Function to assign devices to an IoT hub. The device provisioning service calls your Azure Function code providing all relevant information about the device and the enrollment to your code. Your function code is executed and returns the IoT hub information used to provisioning the device.
+* **Static configuration**: devices are provisioned to a single IoT hub, which must be specified on the enrollment.
+
+* **Custom (Use Azure Function)**: A [custom allocation policy](concepts-custom-allocation.md) gives you more control over how devices are assigned to an IoT hub. This is accomplished by using a custom webhook hosted in Azure Functions to assign devices to an IoT hub. DPS calls your webhook providing all relevant information about the device and the enrollment. Your webhook returns the IoT hub and initial device twin (optional) used to provision the device. It can't be set as the DPS instance default policy.
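As a sketch of how these settings surface in tooling, the instance default policy and a linked hub's allocation weight can be set with the Az.DeviceProvisioningServices PowerShell module; the resource names and connection string below are placeholders.

```azurepowershell-interactive
# Placeholder resource names; replace with your own
# (assumes the Az.DeviceProvisioningServices module is installed).
$rg  = "MyResourceGroup"
$dps = "MyProvisioningService"

# Set the instance-level default policy: Hashed (evenly weighted),
# GeoLatency (lowest latency), or Static (static configuration).
Update-AzIoTDeviceProvisioningService -ResourceGroupName $rg -Name $dps -AllocationPolicy "GeoLatency"

# Link an IoT hub and give it twice the default weight for evenly weighted distribution.
Add-AzIoTDeviceProvisioningServiceLinkedHub -ResourceGroupName $rg -Name $dps `
    -IotHubConnectionString "<iot hub connection string>" -IotHubLocation "eastus" `
    -AllocationWeight 2
```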
## Enrollment An enrollment is the record of devices or groups of devices that may register through auto-provisioning. The enrollment record contains information about the device or group of devices, including:
-- the [attestation mechanism](#attestation-mechanism) used by the device
-- the optional initial desired configuration
-- desired IoT hub
-- the desired device ID
+
+* the [attestation mechanism](#attestation-mechanism) used by the device
+* the optional initial desired configuration
+* the [allocation policy](#allocation-policy) to use to assign devices to an IoT hub; if not specified on the enrollment, the DPS instance default allocation policy is used.
+* the [linked IoT hub(s)](#linked-iot-hubs) to apply the allocation policy to. For the *Static configuration* allocation policy, a single IoT hub must be specified. For all other allocation policies, one or more IoT hubs may be specified; if no IoT hubs are specified on the enrollment, all the IoT hubs linked to the DPS instance are used.
+* the desired device ID (individual enrollments only)
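
For illustration, the enrollment fields above map to Azure CLI parameters. The following is a minimal sketch that creates a symmetric key individual enrollment and sets the desired device ID and an allocation policy; the instance, resource group, and device names are placeholders:

```azurecli
# Create a symmetric key individual enrollment (keys are auto-generated)
# and set the desired device ID and the lowest-latency allocation policy.
az iot dps enrollment create --dps-name MyExampleDps --resource-group MyResourceGroup \
  --enrollment-id my-device-001 --attestation-type symmetrickey \
  --device-id my-device-001 --allocation-policy geolatency
```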
There are two types of enrollments supported by Device Provisioning Service:
An enrollment group is a group of devices that share a specific attestation mechanism. Enrollment groups support X.509 certificate or symmetric key attestation. Devices in an X.509 enrollment group present X.509 certificates that have been signed by the same root or intermediate Certificate Authority (CA). The subject common name (CN) of each device's end-entity (leaf) certificate becomes the registration ID for that device. Devices in a symmetric key enrollment group present SAS tokens derived from the group symmetric key.
-The name of the enrollment group as well as the registration IDs presented by devices must be case-insensitive strings of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The enrollment group name can be up to 128 characters long. In symmetric key enrollment groups, the registration IDs presented by devices can be up to 128 characters long. However, in X.509 enrollment groups, because the maximum length of the subject common name in an X.509 certificate is 64 characters, the registration IDs are limited to 64 characters.
+The name of the enrollment group and the registration IDs presented by devices must be case-insensitive strings of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The enrollment group name can be up to 128 characters long. In symmetric key enrollment groups, the registration IDs presented by devices can be up to 128 characters long. However, in X.509 enrollment groups, because the maximum length of the subject common name in an X.509 certificate is 64 characters, the registration IDs are limited to 64 characters.
For devices in an enrollment group, the registration ID is also used as the device ID that is registered to IoT Hub.
An attestation mechanism is the method used for confirming a device's identity.
The Device Provisioning Service supports the following forms of attestation:

* **X.509 certificates** based on the standard X.509 certificate authentication flow. For more information, see [X.509 attestation](concepts-x509-attestation.md).
-* **Trusted Platform Module (TPM)** based on a nonce challenge, using the TPM standard for keys to present a signed Shared Access Signature (SAS) token. This does not require a physical TPM on the device, but the service expects to attest using the endorsement key per the [TPM spec](https://trustedcomputinggroup.org/work-groups/trusted-platform-module/). For more information, see [TPM attestation](concepts-tpm-attestation.md).
+* **Trusted Platform Module (TPM)** based on a nonce challenge, using the TPM standard for keys to present a signed Shared Access Signature (SAS) token. This doesn't require a physical TPM on the device, but the service expects to attest using the endorsement key per the [TPM spec](https://trustedcomputinggroup.org/work-groups/trusted-platform-module/). For more information, see [TPM attestation](concepts-tpm-attestation.md).
* **Symmetric Key** based on [shared access signature (SAS) tokens](../iot-hub/iot-hub-dev-guide-sas.md#sas-tokens), which include a hashed signature and an embedded expiration. For more information, see [Symmetric key attestation](concepts-symmetric-key-attestation.md).

## Hardware security module
The hardware security module, or HSM, is used for secure, hardware-based storage of device secrets, and is the most secure form of secret storage.
> [!TIP]
> We strongly recommend using an HSM with devices to securely store secrets on your devices.
-Device secrets may also be stored in software (memory), but it is a less secure form of storage than an HSM.
+Device secrets may also be stored in software (memory), but it's a less secure form of storage than an HSM.
## ID scope
-The ID scope is assigned to a Device Provisioning Service when it is created by the user and is used to uniquely identify the specific provisioning service the device will register through. The ID scope is generated by the service and is immutable, which guarantees uniqueness.
+The ID scope is assigned to a Device Provisioning Service when it's created by the user and is used to uniquely identify the specific provisioning service the device will register through. The ID scope is generated by the service and is immutable, which guarantees uniqueness.
> [!NOTE]
> Uniqueness is important for long-running deployment operations and merger and acquisition scenarios.

## Registration
-A registration is the record of a device successfully registering/provisioning to an IoT Hub via the Device Provisioning Service. Registration records are created automatically; they can be deleted, but they cannot be updated.
+A registration is the record of a device successfully registering/provisioning to an IoT Hub via the Device Provisioning Service. Registration records are created automatically; they can be deleted, but they can't be updated.
## Registration ID

The registration ID is used to uniquely identify a device registration with the Device Provisioning Service. The registration ID must be unique in the provisioning service [ID scope](#id-scope). Each device must have a registration ID. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). DPS supports registration IDs up to 128 characters long.
-* In the case of TPM, the registration ID is provided by the TPM itself.
-* In the case of X.509-based attestation, the registration ID is set to the subject common name (CN) of the device certificate. For this reason, the common name must adhere to the registration ID string format. However, the registration ID is limited to 64 characters because that's the maximum length of the subject common name in an X.509 certificate.
+* With TPM attestation, the registration ID is provided by the TPM itself.
+* With X.509-based attestation, the registration ID is set to the subject common name (CN) of the device certificate. For this reason, the common name must adhere to the registration ID string format. However, the registration ID is limited to 64 characters because that's the maximum length of the subject common name in an X.509 certificate.
## Device ID
-The device ID is the ID as it appears in IoT Hub. The desired device ID may be set in the enrollment entry, but it is not required to be set. Setting the desired device ID is only supported in individual enrollments. If no desired device ID is specified in the enrollment list, the registration ID is used as the device ID when registering the device. Learn more about [device IDs in IoT Hub](../iot-hub/iot-hub-devguide-identity-registry.md).
+The device ID is the ID as it appears in IoT Hub. The desired device ID may be set in the enrollment entry, but it isn't required to be set. Setting the desired device ID is only supported in individual enrollments. If no desired device ID is specified in the enrollment list, the registration ID is used as the device ID when registering the device. Learn more about [device IDs in IoT Hub](../iot-hub/iot-hub-devguide-identity-registry.md).
## Operations
iot-dps How To Manage Enrollments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-enrollments.md
# How to manage device enrollments with Azure portal
-A *device enrollment* creates a record of a single device or a group of devices that may at some point register with the Azure IoT Hub Device Provisioning Service (DPS). The enrollment record contains the initial configuration for the device(s) as part of that enrollment. Included in the configuration is either the IoT hub to which a device will be assigned, or an allocation policy that configures the hub from a set of hubs. This article shows you how to manage device enrollments for your provisioning service.
+A *device enrollment* creates a record of a single device or a group of devices that may at some point register with the Azure IoT Hub Device Provisioning Service (DPS). The enrollment record contains the initial configuration for the device(s) as part of that enrollment. Included in the configuration is either the IoT hub to which a device will be assigned, or an allocation policy that selects the IoT hub from a set of IoT hubs. This article shows you how to manage device enrollments for your provisioning service.
The Azure IoT Device Provisioning Service supports two types of enrollments:
To create a symmetric key enrollment group:
| **Group name** | The name of the group of devices. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).|
| **Attestation Type** |Select **Symmetric Key**.|
| **Auto Generate Keys** |Check this box.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
Leave the rest of the fields at their default values.
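
If you prefer scripting, a roughly equivalent Azure CLI sketch creates a symmetric key enrollment group assigned to a single IoT hub with the *Static configuration* policy; the instance, group, and hub names are placeholders:

```azurecli
# Symmetric keys are auto-generated when no attestation arguments are supplied.
az iot dps enrollment-group create --dps-name MyExampleDps --resource-group MyResourceGroup \
  --enrollment-id MyEnrollmentGroup --allocation-policy static \
  --iot-hubs "MyExampleHub.azure-devices.net"
```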
To create a symmetric key individual enrollment:
| **Auto Generate Keys** |Check this box. |
| **Registration ID** | Type in a unique registration ID.|
| **IoT Hub Device ID** | This ID will represent your device. It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md). If the device ID is left unspecified, then the registration ID will be used.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
:::image type="content" source="./media/how-to-manage-enrollments/add-individual-enrollment-symm-key.png" alt-text="Add individual enrollment for symmetric key attestation.":::
To create an X.509 certificate individual enrollment:
| **Mechanism** | Select *X.509* |
| **Primary Certificate .pem or .cer file** | Upload a certificate from which you may generate leaf certificates. If choosing .cer file, only base-64 encoded certificate is accepted. |
| **IoT Hub Device ID** | This ID will represent your device. It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md). The device ID must be the subject name on the device certificate that you upload for the enrollment. That subject name must conform to the rules for a device ID.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
:::image type="content" source="./media/how-to-manage-enrollments/add-individual-enrollment-cert.png" alt-text="Add individual enrollment for X.509 certificate attestation.":::
To create a TPM individual enrollment:
| **Endorsement Key** | The unique endorsement key of the TPM device. |
| **Registration ID** | Type in a unique registration ID.|
| **IoT Hub Device ID** | This ID will represent your device. It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md). If the device ID is left unspecified, then the registration ID will be used.|
- | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific hub|
- | **Select the IoT hubs this group can be assigned to** |Select one of your hubs.|
+ | **Select how you want to assign devices to hubs** |Select *Static configuration* so that you can assign to a specific IoT hub. To learn more about allocation policies, see [How to use allocation policies](how-to-use-allocation-policies.md).|
+ | **Select the IoT hubs this group can be assigned to** |Select one of your linked IoT hubs. To learn more about linking IoT hubs to your DPS instance, see [How to link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).|
:::image type="content" source="./media/how-to-manage-enrollments/add-individual-enrollment-tpm.png" alt-text="Add individual enrollment for TPM attestation.":::
iot-dps How To Manage Linked Iot Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-linked-iot-hubs.md
+
+ Title: How to manage linked IoT hubs with Device Provisioning Service (DPS)
+description: This article shows how to link and manage IoT hubs with the Device Provisioning Service (DPS).
++ Last updated : 10/24/2022++++++
+# How to link and manage IoT hubs
+
+Azure IoT Hub Device Provisioning Service (DPS) can provision devices across one or more IoT hubs. Before DPS can provision devices to an IoT hub, the hub must be linked to your DPS instance. Once linked, an IoT hub can be used in an allocation policy. Allocation policies determine how devices are assigned to IoT hubs by DPS. This article provides instructions on how to link IoT hubs and manage them in your DPS instance.
+
+## Linked IoT hubs and allocation policies
+
+DPS can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance gives the service read/write permissions to the IoT hub's device registry. With these permissions, DPS can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your DPS instance.
+
+After an IoT hub is linked to DPS, it's eligible to participate in allocation. Whether and how it will participate in allocation depends on settings in the enrollment that a device provisions through and settings on the linked IoT hub itself.
+
+The following settings control how DPS uses linked IoT hubs:
+
+* **Connection string**: Sets the IoT Hub connection string that DPS uses to connect to the linked IoT hub. The connection string is based on one of the IoT hub's shared access policies. DPS needs the following permissions on the IoT hub: *RegistryWrite* and *ServiceConnect*. The connection string must be for a shared access policy that has these permissions. To learn more about IoT Hub shared access policies, see [IoT Hub access control and permissions](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions).
+
+* **Allocation weight**: Determines the likelihood of an IoT hub being selected when DPS hashes device assignment across a set of IoT hubs. The value can be between one and 1000. The default is one (or **null**). Higher values increase the IoT hub's probability of being selected.
+
+* **Apply allocation policy**: Sets whether the IoT hub participates in allocation policy. The default is **Yes** (true). If set to **No** (false), devices won't be assigned to the IoT hub. The IoT hub can still be selected on an enrollment, but it won't participate in allocation. You can use this setting to temporarily or permanently remove an IoT hub from participating in allocation; for example, if it's approaching the allowed number of devices.
+
+To learn about DPS allocation policies and how linked IoT hubs participate in them, see [Manage allocation policies](how-to-use-allocation-policies.md).
+
+## Add a linked IoT hub
+
+When you link an IoT hub to your DPS instance, it becomes available to participate in allocation. You can add IoT hubs that are inside or outside of your subscription. When you link an IoT hub, it may or may not be available for allocations in existing enrollments:
+
+* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a newly linked IoT hub immediately begins participating in allocation.
+
+* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically add the new IoT hub to the enrollment settings for it to participate in allocation.
+
+### Use the Azure portal to link an IoT hub
+
+In the Azure portal, you can link an IoT hub either from the left menu of your DPS instance or from the enrollment when creating or updating an enrollment. In both cases, the IoT hub is scoped to the DPS instance (not just the enrollment).
+
+To link an IoT hub to your DPS instance in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Linked IoT hubs**.
+
+1. At the top of the page, select **+ Add**.
+
+1. On the **Add link to IoT hub** page, select the subscription that contains the IoT hub and then choose the name of the IoT hub from the **IoT hub** list.
+
+1. After you select the IoT hub, choose an access policy that DPS will use to connect to the IoT hub. The **Access Policy** list shows all shared access policies defined on the selected IoT Hub that have both *RegistryWrite* and *ServiceConnect* permissions defined. The default is the *iothubowner* policy. Select the policy you want to use.
+
+1. Select **Save**.
+
+When you're creating or updating an enrollment, you can use the **Link a new IoT hub** button on the enrollment. You'll be presented with the same page and choices as above. After you save the linked IoT hub, it will be available on your DPS instance and can be selected from your enrollment.
+
+> [!NOTE]
+>
+> In the Azure portal, you can't set the *Allocation weight* and *Apply allocation policy* settings when you add a linked IoT hub. Instead, you can update these settings after the IoT hub is linked. To learn more, see [Update a linked IoT hub](#update-a-linked-iot-hub).
+
+### Use the Azure CLI to link an IoT hub
+
+Use the [az iot dps linked-hub create](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-create) Azure CLI command to link an IoT hub to your DPS instance.
+
+For example, the following command links an IoT hub named *MyExampleHub* using a connection string for its *iothubowner* shared access policy. This command leaves the *Allocation weight* and *Apply allocation policy* settings at their defaults, but you can specify values for these settings if you want to.
+
+```azurecli
+az iot dps linked-hub create --dps-name MyExampleDps --resource-group MyResourceGroup --connection-string "HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XNBhoasdfhqRlgGnasdfhivtshcwh4bJwe7c0RIGuWsirW0=" --location westus
+```
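
As a sketch of the same command with both optional settings specified explicitly (the shared access key is a placeholder):

```azurecli
# Link the IoT hub with an allocation weight of 2 and allocation policy enabled.
az iot dps linked-hub create --dps-name MyExampleDps --resource-group MyResourceGroup \
  --connection-string "HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=<key>" \
  --location westus --allocation-weight 2 --apply-allocation-policy true
```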
+
+DPS also supports linking IoT Hubs using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Update a linked IoT hub
+
+You can update the settings on a linked IoT hub to change its allocation weight, whether it can have allocation policies applied to it, and the connection string that DPS uses to connect to it. When you update the settings for an IoT hub, the changes take effect immediately, whether the IoT hub is specified on an enrollment or used by default.
+
+### Use the Azure portal to update a linked IoT hub
+
+In the Azure portal, you can update the *Allocation weight* and *Apply allocation policy* settings.
+
+To update the settings for a linked IoT hub using the Azure portal:
+
+1. On the left menu of your DPS instance, select **Linked IoT hubs**, then select the IoT hub from the list.
+
+1. On the **Linked IoT hub details** page:
+
+   :::image type="content" source="media/how-to-manage-linked-iot-hubs/set-linked-iot-hub-properties.png" alt-text="Screenshot that shows the linked IoT hub details page.":::
+
+ * Use the **Allocation weight** slider or text box to choose a weight between one and 1000. The default is one.
+
+ * Set the **Apply allocation policy** switch to specify whether the linked IoT hub should be included in allocation.
+
+1. Save your settings.
+
+> [!NOTE]
+>
+> You can't update the connection string that DPS uses to connect to the IoT hub from the Azure portal. Instead, you can use the Azure CLI to update the connection string, or you can delete the linked IoT hub from your DPS instance and relink it. To learn more, see [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs).
+
+### Use the Azure CLI to update a linked IoT hub
+
+With the Azure CLI, you can update the *Allocation weight*, *Apply allocation policy*, and *Connection string* settings.
+
+Use the [az iot dps linked-hub update](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-update) command to update the *Allocation weight* or *Apply allocation policy* settings. For example, the following command sets the allocation weight and apply allocation policy for a linked IoT hub:
+
+```azurecli
+az iot dps linked-hub update --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub --allocation-weight 2 --apply-allocation-policy true
+```
+
+Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) command to update the connection string for a linked IoT hub. You can use the `--set` parameter along with the connection string for the IoT hub shared access policy you want to use. For details, see [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs).
+
+DPS also supports updating linked IoT Hubs using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Delete a linked IoT hub
+
+When you delete a linked IoT hub from your DPS instance, it will no longer be available to set in future enrollments. However, it may or may not be removed from allocations in existing enrollments:
+
+* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a deleted linked IoT hub is no longer available for allocation.
+
+* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically remove the IoT hub from the enrollment settings for it to be removed from participation in allocation, as shown in the sketch below. Failure to do so may result in an error when a device tries to provision through the enrollment.
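
One way to do this programmatically is sketched here; it assumes an enrollment group named *MyEnrollmentGroup* that should keep only the still-linked hub *MyExampleHub-2*:

```azurecli
# Replace the enrollment group's IoT hub list with the hubs that remain linked.
az iot dps enrollment-group update --dps-name MyExampleDps \
  --enrollment-id MyEnrollmentGroup --iot-hubs "MyExampleHub-2.azure-devices.net"
```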
+
+### Use the Azure portal to delete a linked IoT hub
+
+To delete a linked IoT hub from your DPS instance in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Linked IoT hubs**.
+
+1. From the list of IoT hubs, select the check box next to the IoT hub or IoT hubs you want to delete. Then select **Delete** at the top of the page and confirm your choice when prompted.
+
+### Use the Azure CLI to delete a linked IoT hub
+
+Use the [az iot dps linked-hub delete](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-delete) command to remove a linked IoT hub from the DPS instance. For example, the following command removes the IoT hub named MyExampleHub:
+
+```azurecli
+az iot dps linked-hub delete --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub
+```
+
+DPS also supports deleting linked IoT Hubs from the DPS instance using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Update keys for linked IoT hubs
+
+It may become necessary to either rotate or update the symmetric keys for an IoT hub that's been linked to DPS. In this case, you'll also need to update the connection string setting in DPS for the linked IoT hub. Note that provisioning to an IoT hub will fail during the interim between updating a key on the IoT hub and updating your DPS instance with the new connection string based on that key.
+
+### Use the Azure portal to update keys
+
+You can't update the connection string setting for a linked IoT Hub when using Azure portal. Instead, you need to delete the linked IoT hub from your DPS instance and then re-add it.
+
+To update symmetric keys for a linked IoT hub in the Azure portal:
+
+1. On the left menu of your DPS instance in the Azure portal, select **Linked IoT hubs**, then select the IoT hub that you want to update the key(s) for.
+
+1. On the **Linked IoT hub details** page, note down the values for *Allocation weight* and *Apply allocation policy*; you'll need these values when you relink the IoT hub to your DPS instance later. Then, select **Manage Resource** to go to the IoT hub.
+
+1. On the left menu of the IoT hub, under **Security settings**, select **Shared access policies**.
+
+1. On **Shared access policies**, under **Manage shared access policies**, select the policy that your DPS instance uses to connect to the linked IoT hub.
+
+1. At the top of the page, select **Regenerate primary key**, **Regenerate secondary key**, or **Swap keys**, and confirm your choice when prompted.
+
+1. Navigate back to your DPS instance.
+
+1. Follow the steps in [Delete an IoT hub](#use-the-azure-portal-to-delete-a-linked-iot-hub) to delete the IoT hub from your DPS instance.
+
+1. Follow the steps in [Link an IoT hub](#use-the-azure-portal-to-link-an-iot-hub) to relink the IoT hub to your DPS instance with the new connection string for the policy.
+
+1. If you need to restore the allocation weight and apply allocation policy settings, follow the steps in [Update a linked IoT hub](#use-the-azure-portal-to-update-a-linked-iot-hub) using the values you saved in step 2.
+
+### Use the Azure CLI to update keys
+
+To update symmetric keys for a linked IoT hub with the Azure CLI:
+
+1. Use the [az iot hub policy renew-key](/cli/azure/iot/hub/policy#az-iot-hub-policy-renew-key) command to swap or regenerate the symmetric keys for the shared access policy on the IoT hub. For example, the following command renews the primary key for the *iothubowner* shared access policy on an IoT hub:
+
+ ```azurecli
+   az iot hub policy renew-key --hub-name MyExampleHub --name iothubowner --rk primary
+ ```
+
+1. Use the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the new connection string for the shared access policy. For example, the following command gets the primary connection string for the *iothubowner* shared access policy that the primary key was regenerated for in the previous command:
+
+ ```azurecli
+   az iot hub connection-string show --hub-name MyExampleHub --policy-name iothubowner --key-type primary
+ ```
+
+1. Use the [az iot dps linked-hub list](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-list) command to find the position of the IoT hub in the collection of linked IoT hubs for your DPS instance. For example, the following command lists the linked IoT hubs for the DPS instance named *MyExampleDps*:
+
+ ```azurecli
+   az iot dps linked-hub list --dps-name MyExampleDps
+ ```
+
+ The output will show the position of the linked IoT hub you want to update the connection string for in the table of linked IoT hubs maintained by your DPS instance. In this case, it's the first IoT hub in the list, *MyExampleHub*.
+
+ ```json
+ [
+ {
+ "allocationWeight": null,
+ "applyAllocationPolicy": null,
+ "connectionString": "HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=****",
+ "location": "centralus",
+ "name": "MyExampleHub.azure-devices.net"
+ },
+ {
+ "allocationWeight": null,
+ "applyAllocationPolicy": null,
+ "connectionString": "HostName=MyExampleHub-2.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=****",
+ "location": "centralus",
+      "name": "MyExampleHub-2.azure-devices.net"
+ }
+ ]
+ ```
+
+1. Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) command to update the connection string for the linked IoT hub. You use the `--set` parameter and the position of the linked IoT hub in the `properties.iotHubs[]` table to target the IoT hub. For example, the following command updates the connection string for *MyExampleHub*, the first IoT hub returned by the previous command:
+
+ ```azurecli
+   az iot dps update --name MyExampleDps --set properties.iotHubs[0].connectionString="HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=NewTokenValue"
+ ```
+
+## Limitations
+
+There are some limitations when working with linked IoT hubs and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
+
+## Next steps
+
+* To learn more about allocation policies, see [Manage allocation policies](how-to-use-allocation-policies.md).
iot-dps How To Reprovision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-reprovision.md
# How to reprovision devices
-During the lifecycle of an IoT solution, it is common to move devices between IoT hubs. This topic is written to assist solution operators configuring reprovisioning policies.
+During the lifecycle of an IoT solution, it's common to move devices between IoT hubs. This topic is written to assist solution operators configuring reprovisioning policies.
For a more detailed overview of reprovisioning scenarios, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md).

## Configure the enrollment allocation policy
-The allocation policy determines how the devices associated with the enrollment will be allocated, or assigned, to an IoT hub once reprovisioned.
+The allocation policy determines how the devices associated with the enrollment will be allocated, or assigned, to an IoT hub once reprovisioned. To learn more about allocation polices, see [How to use allocation policies](how-to-use-allocation-policies.md).
The following steps configure the allocation policy for a device's enrollment:

1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Device Provisioning Service instance.
-2. Click **Manage enrollments**, and click the enrollment group or individual enrollment that you want to configure for reprovisioning.
+2. Select **Manage enrollments**, and then select the enrollment group or individual enrollment that you want to configure for reprovisioning.
3. Under **Select how you want to assign devices to hubs**, select one of the following allocation policies:
- * **Lowest latency**: This policy assigns devices to the linked IoT Hub that will result in the lowest latency communications between device and IoT Hub. This option enables the device to communicate with the closest IoT hub based on location.
-
- * **Evenly weighted distribution**: This policy distributes devices across the linked IoT Hubs based on the allocation weight assigned to each linked IoT hub. This policy allows you to load balance devices across a group of linked hubs based on the allocation weights set on those hubs. If you are provisioning devices to only one IoT Hub, we recommend this setting. This setting is the default.
-
- * **Static configuration**: This policy requires a desired IoT Hub be listed in the enrollment entry for a device to be provisioned. This policy allows you to designate a single specific IoT hub that you want to assign devices to.
+ * **Lowest latency**: This policy assigns devices to the IoT hub that will result in the lowest latency communications between the device and IoT Hub. This option enables the device to communicate with the closest IoT hub based on location.
+
+ * **Evenly weighted distribution**: This policy distributes devices across IoT hubs based on the allocation weight configured on each IoT hub. IoT hubs with a higher allocation weight are more likely to be assigned. If you're provisioning devices to only one IoT Hub, we recommend this setting. This setting is the default.
+
+ * **Static configuration**: This policy requires a desired IoT hub be listed in the enrollment entry for a device to be provisioned. This policy allows you to designate a single IoT hub that you want to assign devices to.
-4. Under **Select the IoT hubs this group can be assigned to**, select the linked IoT hubs that you want included with your allocation policy. Optionally, add a new linked Iot hub using the **Link a new IoT Hub** button.
+ * **Custom (Use Azure Function)**: This policy uses a custom webhook hosted in Azure Functions to assign devices to one or more IoT hubs. Custom allocation policies give you more control over how devices are assigned to your IoT hubs. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md).
- With the **Lowest latency** allocation policy, the hubs you select will be included in the latency evaluation to determine the closest hub for device assignment.
+4. Under **Select the IoT hubs this group can be assigned to**, select the linked IoT hubs that you want included in your allocation policy. Optionally, add a new linked IoT hub using the **Link a new IoT Hub** button.
- With the **Evenly weighted distribution** allocation policy, devices will be load balanced across the hubs you select based on their configured allocation weights and their current device load.
+ * With the **Lowest latency** allocation policy, the IoT hubs you select will be included in the latency evaluation to determine the closest IoT hub for device assignment.
- With the **Static configuration** allocation policy, select the IoT hub you want devices assigned to.
+ * With the **Evenly weighted distribution** allocation policy, devices will be hashed across the IoT hubs you select based on their configured allocation weights.
-4. Click **Save**, or proceed to the next section to set the reprovisioning policy.
+ * With the **Static configuration** allocation policy, select the IoT hub you want devices assigned to.
- ![Select enrollment allocation policy](./media/how-to-reprovision/enrollment-allocation-policy.png)
+ * With the **Custom** allocation policy, select the IoT hubs you want evaluated for assignment by your custom allocation webhook.
+5. Select **Save**, or proceed to the next section to set the reprovisioning policy.
+ ![Screenshot that shows setting the enrollment allocation policy and IoT hubs in the Azure portal.](./media/how-to-reprovision/enrollment-allocation-policy.png)
## Set the reprovisioning policy

1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Device Provisioning Service instance.
-2. Click **Manage enrollments**, and click the enrollment group or individual enrollment that you want to configure for reprovisioning.
+2. Select **Manage enrollments**, and then select the enrollment group or individual enrollment that you want to configure for reprovisioning.
3. Under **Select how you want device data to be handled on re-provision to a different IoT hub**, choose one of the following reprovisioning policies:
The following steps configure the allocation policy for a device's enrollment:
* **Re-provision and reset to initial config**: This policy takes action when devices associated with the enrollment entry submit a new provisioning request. Depending on the enrollment entry configuration, the device may be reassigned to another IoT hub. If the device is changing IoT hubs, the device registration with the initial IoT hub will be removed. The initial configuration data that the provisioning service instance received when the device was provisioned is provided to the new IoT hub. During migration, the device's status will be reported as **Assigning**.
-4. Click **Save** to enable the reprovisioning of the device based on your changes.
-
- ![Screenshot that highlights the changes you've made and the Save button.](./media/how-to-reprovision/reprovisioning-policy.png)
-
+4. Select **Save** to enable the reprovisioning of the device based on your changes.
+ ![Screenshot that shows setting the enrollment reprovisioning policy in the Azure portal.](./media/how-to-reprovision/reprovisioning-policy.png)
## Send a provisioning request from the device
How often a device submits a provisioning request depends on the scenario. When
> [!TIP]
> We recommend not provisioning on every reboot of the device, as this could hit the service throttling limits especially when reprovisioning several thousands or millions of devices at once. Instead you should attempt to use the [Device Registration Status Lookup](/rest/api/iot-dps/device/runtime-registration/device-registration-status-lookup) API and try to connect with that information to IoT Hub. If that fails, then try to reprovision as the IoT Hub information might have changed. Keep in mind that querying for the registration state will count as a new device registration, so you should consider the [Device registration limit](about-iot-dps.md#quotas-and-limits). Also consider implementing an appropriate retry logic, such as exponential back-off with randomization, as described in the [Retry general guidance](/azure/architecture/best-practices/transient-faults).
>
> In some cases, depending on the device capabilities, it's possible to save the IoT Hub information directly on the device to connect directly to IoT Hub after the first-time provisioning using DPS occurred. If you choose to do this, make sure you implement a fallback mechanism in case you get specific [errors from the hub](../iot-hub/troubleshoot-message-routing.md#common-error-codes); for example, consider the following scenarios:
+>
> * Retry the Hub operation if the result code is 429 (Too Many Requests) or an error in the 5xx range. Do not retry for any other errors.
> * For 429 errors, only retry after the time indicated in the Retry-After header.
> * For 5xx errors, use exponential back-off, with the first retry at least 5 seconds after the response.
> * On errors other than 429 and 5xx, re-register through DPS.
> * Ideally you should also support a [method](../iot-hub/iot-hub-devguide-direct-methods.md) to manually trigger provisioning on demand.
->
+>
> We also recommend taking into account the service limits when planning activities like pushing updates to your fleet. For example, updating the fleet all at once could cause all devices to re-register through DPS (which could easily be above the registration quota limit) - For such scenarios, consider planning for device updates in phases instead of updating your entire fleet at the same time.

## Next steps

-- To learn more Reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md)
-- To learn more Deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md)
+* To learn more about reprovisioning, see [IoT Hub Device reprovisioning concepts](concepts-device-reprovision.md).
+* To learn more about deprovisioning, see [How to deprovision devices that were previously auto-provisioned](how-to-unprovision-devices.md).
iot-dps How To Use Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-allocation-policies.md
+
+ Title: How to use allocation policies with Device Provisioning Service (DPS)
+description: This article shows how to use the Device Provisioning Service (DPS) allocation policies to automatically provision devices across one or more IoT hubs.
++ Last updated : 10/24/2022++++++
+# How to use allocation policies to provision devices across IoT hubs
+
+Azure IoT Hub Device Provisioning Service (DPS) supports several built-in allocation policies that determine how it assigns devices across one or more IoT hubs. DPS also includes support for custom allocation policies, which let you create and use your own allocation policies when your IoT scenario requires functionality not provided by the built-in policies.
+
+This article helps you understand how to use and manage DPS allocation policies.
+
+## Understand allocation policies
+
+Allocation policies determine how DPS assigns devices to an IoT hub. Each DPS instance has a default allocation policy, but this policy can be overridden by an allocation policy set on an enrollment. Only IoT hubs that have been linked to the DPS instance can participate in allocation. Whether a linked IoT hub will participate in allocation depends on settings on the enrollment that a device provisions through.
+
+DPS supports four allocation policies:
+
+* **Evenly weighted distribution**: devices are provisioned to an IoT hub using a weighted hash. By default, linked IoT hubs have the same allocation weight setting, so they're equally likely to have devices provisioned to them. The allocation weight of an IoT hub may be adjusted to increase or decrease its likelihood of being assigned. *Evenly weighted distribution* is the default allocation policy for a DPS instance. If you're provisioning devices to only one IoT hub, we recommend using this policy.
+
+* **Lowest latency**: devices are provisioned to the IoT hub with the lowest latency to the device. If multiple IoT hubs would provide the lowest latency, DPS hashes devices across those hubs based on their configured allocation weight.
+
+* **Static configuration**: devices are provisioned to a single IoT hub, which must be specified on the enrollment.
+
+* **Custom (Use Azure Function)**: A custom allocation policy gives you more control over how devices are assigned to an IoT hub. This is accomplished by using a custom webhook hosted in Azure Functions to assign devices to an IoT hub. DPS calls your webhook providing all relevant information about the device and the enrollment. Your webhook returns the IoT hub and initial device twin (optional) used to provision the device. Custom payloads can also be passed to and from the device. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md). A custom policy can't be set as the DPS instance default policy.
+
+> [!NOTE]
+> The preceding list shows the names of the allocation policies as they appear in the Azure portal. When setting the allocation policy using the DPS REST API, Azure CLI, and DPS service SDKs, they are referred to as follows: **hashed**, **geolatency**, **static**, and **custom**.
+
+There are two settings on a linked IoT hub that control how it participates in allocation:
+
+* **Allocation weight**: sets the weight that the IoT hub will have when participating in allocation policies that involve multiple IoT hubs. It can be a value between one and 1000. The default is one (or **null**).
+
+ * With the *Evenly weighted distribution* allocation policy, IoT hubs with higher allocation weight values have a greater likelihood of being selected compared to those with lower weight values.
+
+ * With the *Lowest latency* allocation policy, the allocation weight value will affect the probability of an IoT hub being selected when more than one IoT hub satisfies the lowest latency requirement.
+
+ * With a *Custom* allocation policy, whether and how the allocation weight value is used will depend on the webhook logic.
+
+* **Apply allocation policy**: specifies whether the IoT hub participates in allocation policy. The default is **Yes** (true). If set to **No** (false), devices won't be assigned to the IoT hub. The IoT hub can still be selected on an enrollment, but it won't participate in allocation. You can use this setting to temporarily or permanently remove an IoT hub from participating in allocation; for example, if it's approaching the allowed number of devices.
+
+To learn more about linking and managing IoT hubs in your DPS instance, see [Link and manage IoT hubs](how-to-manage-linked-iot-hubs.md).
+
+When a device provisions through DPS, the service assigns it to an IoT hub according to the following guidelines:
+
+* If the enrollment specifies an allocation policy, use that policy; otherwise, use the default allocation policy for the DPS instance.
+
+* If the enrollment specifies one or more IoT hubs, apply the allocation policy across those IoT hubs; otherwise, apply the allocation policy across all of the IoT hubs linked to the DPS instance. Note that if the allocation policy is *Static configuration*, the enrollment *must* specify an IoT hub.
+
+> [!IMPORTANT]
+> When you change an allocation policy or the IoT hubs it applies to, the changes only affect subsequent device registrations. Devices already provisioned to an IoT hub won't be affected. If you want your changes to apply retroactively to these devices, you'll need to reprovision them. To learn more, see [How to reprovision devices](how-to-reprovision.md).
+
+## Set the default allocation policy for the DPS instance
+
+The default allocation policy for the DPS instance is used when an allocation policy isn't specified on an enrollment. Only *Evenly weighted distribution*, *Lowest latency*, and *Static configuration* are supported for the default allocation policy. *Custom* allocation isn't supported. When a DPS instance is created, its default policy is automatically set to *Evenly weighted distribution*. However, you can update your DPS instance to set a different allocation policy.
+
+> [!NOTE]
+> If you set *Static configuration* as the default allocation policy for a DPS instance, a linked IoT hub *must* be specified in enrollments that rely on the default policy.
+
+### Use the Azure portal to set the default allocation policy
+
+To set the default allocation policy for the DPS instance in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Manage allocation policy**.
+
+2. Select the button for the allocation policy you want to set: **Lowest latency**, **Evenly weighted distribution**, or **Static configuration**. (Custom allocation isn't supported for the default allocation policy.)
+
+3. Select **Save**.
+
+### Use the Azure CLI to set the default allocation policy
+
+Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) Azure CLI command to set the default allocation policy for the DPS instance. You use `--set properties.allocationPolicy` to specify the policy. For example, the following command sets the allocation policy to *Evenly weighted distribution* (the default):
+
+```azurecli
+az iot dps update --name MyExampleDps --set properties.allocationPolicy=hashed
+```
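
The other supported default policies use the REST API values noted earlier in this article; for example, assuming the same instance name, the following sketch switches the default to *Lowest latency*:

```azurecli
az iot dps update --name MyExampleDps --set properties.allocationPolicy=geolatency
```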
+
+DPS also supports setting the default allocation policy using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).
+
+## Set allocation policy and IoT hubs for enrollments
+
+Individual enrollments and enrollment groups can specify an allocation policy and the linked IoT hubs that it should apply to. If no allocation policy is specified by the enrollment, then the default allocation policy for the DPS instance is used.
+
+In either case, the following conditions apply:
+
+* For *Evenly weighted distribution*, *Lowest latency*, and *Custom* allocation policies, the enrollment *may* specify which linked IoT hubs should be used. If no IoT hubs are selected in the enrollment, then all of the linked IoT hubs in the DPS instance will be used.
+
+* For *Static configuration*, the enrollment *must* specify a single IoT hub from the list of linked IoT hubs.
+
+For both individual enrollments and enrollment groups, you can specify an allocation policy and the linked IoT hubs to apply it to when you create or update an enrollment.
+
+### Use the Azure portal to manage enrollment allocation policy and IoT hubs
+
+To set allocation policy and select IoT hubs on an enrollment in the Azure portal:
+
+1. On the left menu of your DPS instance, select **Manage enrollments**.
+
+1. On the **Manage enrollments** page:
+
+ * To create a new enrollment, select either **+ Add enrollment group** or **+ Add individual enrollment** at the top of the page.
+
+ * To update an existing enrollment, select it from the list under either the **Enrollment Groups** or **Individual Enrollments** tab.
+
+1. On the **Add Enrollment** page (on create) or the **Enrollment details** page (on update), you can select the allocation policy you want applied to the enrollment and select the IoT hubs that should be used:
+
+   :::image type="content" source="media/how-to-use-allocation-policies/select-enrollment-policy-and-hubs.png" alt-text="Screenshot that shows the allocation policy and selected hubs settings on Add Enrollment page.":::
+
+ * Select the allocation policy you want to apply from the drop-down. The default allocation policy for the DPS instance is selected by default. For custom allocation, you'll also need to specify a custom allocation policy webhook in Azure Functions. For details, see the [Use custom allocation policies](tutorial-custom-allocation-policies.md) tutorial.
+
+ * Select the IoT hubs that devices can be assigned to. If you've selected the *Static configuration* allocation policy, you'll be limited to selecting a single linked IoT hub. For all other allocation policies, all the linked IoT hubs will be selected by default, but you can modify this selection using the drop-down. To have the enrollment automatically use linked IoT hubs as they're added to (or deleted from) the DPS instance, unselect all IoT hubs.
+
+ * Optionally, you can select the **Link a new IoT hub** button to link a new IoT hub to the DPS instance and make it available in the list of IoT hubs that can be selected. For details about linking an IoT hub, see [Link an IoT Hub](how-to-manage-linked-iot-hubs.md#use-the-azure-portal-to-link-an-iot-hub).
+
+1. Set any other properties needed for the enrollment and then save your settings.
+
+### Use the Azure CLI to manage enrollment allocation policy and IoT hubs
+
+Use the [az iot dps enrollment create](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-create), [az iot dps enrollment update](/cli/azure/iot/dps/enrollment#az-iot-dps-enrollment-update), [az iot dps enrollment-group create](/cli/azure/iot/dps/enrollment-group#az-iot-dps-enrollment-group-create), and [az iot dps enrollment-group update](/cli/azure/iot/dps/enrollment-group#az-iot-dps-enrollment-group-update) Azure CLI commands to create or update individual enrollments or enrollment groups.
+
+For example, the following command creates a symmetric key enrollment group that defaults to using the default allocation policy set on the DPS instance and all the IoT hubs linked to the DPS instance:
+
+```azurecli
+az iot dps enrollment-group create --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup
+```
+
+The following command updates the same enrollment group to use the *Lowest latency* allocation policy with IoT hubs named *MyExampleHub* and *MyExampleHub-2*:
+
+```azurecli
+az iot dps enrollment-group update --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup --allocation-policy geolatency --iot-hubs "MyExampleHub.azure-devices.net MyExampleHub-2.azure-devices.net"
+```
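
As one more sketch with the same placeholder names, the following command switches the group to *Static configuration*, which requires exactly one IoT hub:

```azurecli
# Static configuration requires a single IoT hub on the enrollment.
az iot dps enrollment-group update --dps-name MyExampleDps --enrollment-id MyEnrollmentGroup \
  --allocation-policy static --iot-hubs "MyExampleHub.azure-devices.net"
```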
+
+DPS also supports setting allocation policy and selected IoT hubs on the enrollment using the [Create or Update individual enrollment](/rest/api/iot-dps/service/individual-enrollment/create-or-update) and [Create or Update enrollment group](/rest/api/iot-dps/service/enrollment-group/create-or-update) REST APIs, and the [DPS service SDKs](libraries-sdks.md#service-sdks).
+
+## Allocation behavior
+
+Note the following behavior when using allocation policies with IoT hubs:
+
+* With the Azure CLI, the REST API, and the DPS service SDKs, you can create enrollments with no allocation policy. In this case, DPS uses the default policy for the DPS instance when a device provisions through the enrollment. Changing the default policy setting on the DPS instance will change how devices are provisioned through the enrollment.
+
+* With the Azure portal, the allocation policy setting for the enrollment is pre-populated with the default allocation policy. You can keep this setting or change it to another policy, but when you save the enrollment, the allocation policy is set on the enrollment. Subsequent changes to the service default allocation policy won't change how devices are provisioned through the enrollment.
+
+* For the *Evenly weighted distribution*, *Lowest latency*, and *Custom* allocation policies, you can configure the enrollment to use all the IoT hubs linked to the DPS instance:
+
+ * With the Azure CLI and the DPS service SDKs, create the enrollment without specifying any IoT hubs.
+
+ * With the Azure portal, the enrollment is pre-populated with all the IoT hubs linked to the DPS instance selected; unselect all the IoT hubs before you save the enrollment.
+
+ If no IoT hubs are selected on the enrollment, then whenever a new IoT hub is linked to the DPS instance, it will participate in allocation; and vice-versa for an IoT hub that is removed from the DPS instance.
+
+* If IoT hubs are specified on an enrollment, the IoT hubs setting on the enrollment must be manually or programmatically updated for a newly linked IoT hub to be added or a deleted IoT hub to be removed from allocation.
+
+* Changing the allocation policy or IoT hubs used for an enrollment only affects subsequent registrations through that enrollment. If you want the changes to affect prior registrations, you'll need to reprovision all previously registered devices.
+
+## Limitations
+
+There are some limitations when working with allocation policies and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
+
+## Next steps
+
+* To learn more about linking and managing linked IoT hubs, see [Manage linked IoT hubs](how-to-manage-linked-iot-hubs.md).
+
+* To learn more about custom allocation policies, see [Understand custom allocation policies](concepts-custom-allocation.md).
+
+* For an end-to-end example using the lowest latency allocation policy, see the [Provision for geolatency](how-to-provision-multitenant.md) tutorial.
+
+* For an end-to-end example using a custom allocation policy, see the [Use custom allocation policies](tutorial-custom-allocation-policies.md) tutorial.
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
When you're finished testing and exploring this device client sample, use the fo
## Next steps
-In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices to multiple hubs continue to the next tutorial.
+In this tutorial, you provisioned an X.509 device to your IoT hub by using a custom HSM. To learn how to provision IoT devices across multiple IoT hubs, see:
> [!div class="nextstepaction"]
-> [Tutorial: Provision devices across load-balanced IoT hubs](tutorial-provision-multiple-hubs.md)
+> [How to use allocation policies](how-to-use-allocation-policies.md)
iot-dps Tutorial Provision Multiple Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-provision-multiple-hubs.md
- Title: Tutorial - Provision devices across load balanced hubs using Azure IoT Hub Device Provisioning Service
-description: This tutorial demonstrates how Device Provisioning Service (DPS) enables automatic device provisioning across load balanced IoT hubs in the Azure portal.
-- Previously updated : 10/18/2021------
-# Tutorial: Provision devices across load-balanced IoT hubs
-
-This tutorial shows how to provision devices for multiple, load-balanced IoT hubs using the Device Provisioning Service. In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Use the Azure portal to provision a second device to a second IoT hub
-> * Add an enrollment list entry to the second device
-> * Set the Device Provisioning Service allocation policy to **even distribution**
-> * Link the new IoT hub to the Device Provisioning Service
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-
-## Use the Azure portal to provision a second device to a second IoT hub
-
-Follow the steps in the quickstarts to link a second IoT hub to your DPS instance and provision a device to that hub:
-
-* [Set up the Device Provisioning Service](quick-setup-auto-provision.md)
-* [Provision a simulated symmetric key device](quick-create-simulated-device-symm-key.md)
-
-## Add an enrollment list entry to the second device
-
-The enrollment list tells the Device Provisioning Service which method of attestation (the method for confirming a device identity) it is using with the device. The next step is to add an enrollment list entry for the second device.
-
-1. In the page for your Device Provisioning Service, click **Manage enrollments**. The **Add enrollment list entry** page appears.
-2. At the top of the page, click **Add**.
-3. Complete the fields and then click **Save**.
-
-## Set the Device Provisioning Service allocation policy
-
-The allocation policy is a Device Provisioning Service setting that determines how devices are assigned to an IoT hub. There are three supported allocation policies: 
-
-1. **Lowest latency**: Devices are provisioned to an IoT hub based on the hub with the lowest latency to the device.
-2. **Evenly weighted distribution** (default): Linked IoT hubs are equally likely to have devices provisioned to them. This is the default setting. If you are provisioning devices to only one IoT hub, you can keep this setting. If you plan to use one IoT hub, but expect to increase the number of hubs as the number of devices increases, it's important to note that, when assigning to an IoT Hub, the policy doesn't take into account previously registered devices. All linked hubs hold an equal chance of getting a device registration based on the weight of the linked IoT Hub. However, if an IoT hub has reached its device capacity limit, it will no longer receive device registrations. You can, however, adjust the weight of allocation for each linked IoT Hub.
-
-3. **Static configuration via the enrollment list**: Specification of the desired IoT hub in the enrollment list takes priority over the Device Provisioning Service-level allocation policy.
-
-### How the allocation policy assigns devices to IoT Hubs
-
-It may be desirable to use only one IoT Hub, until a specific number of devices is reached. In that scenario, it's important to note that, once a new IoT Hub is added, a new device has the potential to be provisioned to any one of the IoT Hubs. If you wish to balance all devices, registered and unregistered, then you'll need to re-provision all devices.
-
-Follow these steps to set the allocation policy:
-
-1. To set the allocation policy, in the Device Provisioning Service page click **Manage allocation policy**.
-2. Set the allocation policy to **Evenly weighted distribution**.
-3. Click **Save**.
-
-## Link the new IoT hub to the Device Provisioning Service
-
-Link the Device Provisioning Service and IoT hub so that the Device Provisioning Service can register devices to that hub.
-
-1. In the **All resources** page, click the Device Provisioning Service you created previously.
-2. In the Device Provisioning Service page, click **Linked IoT hubs**.
-3. Click **Add**.
-4. In the **Add link to IoT hub** page, use the radio buttons to specify whether the linked IoT hub is located in the current subscription, or in a different subscription. Then, choose the name of the IoT hub from the **IoT hub** box.
-5. Click **Save**.
-
-In this tutorial, you learned how to:
-
-> [!div class="checklist"]
-> * Use the Azure portal to provision a second device to a second IoT hub
-> * Add an enrollment list entry to the second device
-> * Set the Device Provisioning Service allocation policy to **even distribution**
-> * Link the new IoT hub to the Device Provisioning Service
-
-## Next steps
-
-<!-- Advance to the next tutorial to learn how to
- Replace this .md
-> [!div class="nextstepaction"]
-> [Bind an existing custom SSL certificate to Azure Web Apps]()
>
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
Using single device provisioning, you'll need to manually enter provisioning inf
Provisioning devices at-scale refers to provisioning one or more IoT Edge devices with the assistance of the [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md). You'll see provisioning at-scale also referred to as **autoprovisioning**.
-If your IoT Edge solution requires more than one device, autoprovisioning using DPS saves you the effort of manually entering provisioning information into the configuration files of each device. This automated model can be scaled to millions of IoT Edge devices. You can see the automated provisioning flow in the [Behind the scenes section of IoT Hub DPS overview page](../iot-dps/about-iot-dps.md#behind-the-scenes).
+If your IoT Edge solution requires more than one device, autoprovisioning using DPS saves you the effort of manually entering provisioning information into the configuration files of each device. This automated model can be scaled to millions of IoT Edge devices.
You can secure your IoT Edge solution with the authentication method of your choice. **Symmetric key**, **X.509 certificates**, and **trusted platform module (TPM) attestation** authentication methods are available for provisioning devices at-scale. You can read more about those options in the [Choose an authentication method section](#choose-an-authentication-method).
iot-edge Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/version-history.md
This table provides recent version history for IoT Edge package releases, and hi
| Release notes and assets | Type | Release Date | End of Support Date | Highlights |
| --- | --- | --- | --- | --- |
-| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2022 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
+| [1.4](https://github.com/Azure/azure-iotedge/releases/tag/1.4.0) | Long-term support (LTS) | August 2022 | November 12, 2024 | IoT Edge 1.4 LTS is supported through November 12, 2024 to match the [.NET 6 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle). <br> Automatic image clean-up of unused Docker images <br> Ability to pass a [custom JSON payload to DPS on provisioning](../iot-dps/how-to-send-additional-data.md#iot-edge-support) <br> Ability to require all modules in a deployment be downloaded before restart <br> Use of the TCG TPM2 Software Stack which enables TPM hierarchy authorization values, specifying the TPM index at which to persist the DPS authentication key, and accommodating more [TPM configurations](http://github.com/Azure/iotedge/blob/897aed8c5573e8cad4b602e5a1298bdc64cd28b4/edgelet/contrib/config/linux/template.toml#L262-L288) |
| [1.3](https://github.com/Azure/azure-iotedge/releases/tag/1.3.0) | Stable | June 2022 | August 2022 | Support for Red Hat Enterprise Linux 8 on AMD and Intel 64-bit architectures.<br>Edge Hub now enforces that inbound/outbound communication uses minimum TLS version 1.2 by default<br>Updated runtime modules (edgeAgent, edgeHub) based on .NET 6 |
| [1.2](https://github.com/Azure/azure-iotedge/releases/tag/1.2.0) | Stable | April 2021 | June 2022 | [IoT Edge devices behind gateways](how-to-connect-downstream-iot-edge-device.md?view=iotedge-2020-11&preserve-view=true)<br>[IoT Edge MQTT broker (preview)](how-to-publish-subscribe.md?view=iotedge-2020-11&preserve-view=true)<br>New IoT Edge packages introduced, with new installation and configuration steps. For more information, see [Update from 1.0 or 1.1 to latest release](how-to-update-iot-edge.md#special-case-update-from-10-or-11-to-latest-release).<br>Includes [Microsoft Defender for IoT micro-agent for Edge](../defender-for-iot/device-builders/overview.md).<br> Integration with Device Update. For more information, see [Update IoT Edge](how-to-update-iot-edge.md). |
| [1.1](https://github.com/Azure/azure-iotedge/releases/tag/1.1.0) | Long-term support (LTS) | February 2021 | December 13, 2022 | IoT Edge 1.1 LTS is supported through December 13, 2022 to match the [.NET Core 3.1 release lifecycle](https://dotnet.microsoft.com/platform/support/policy/dotnet-core). <br> [Long-term support plan and supported systems updates](support.md) |
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
The configuration of a load test consists of:
- [Environment variables](./how-to-parameterize-load-tests.md).
- [Secret parameters](./how-to-parameterize-load-tests.md).
- The number of [test engines](#test-engine) to run the test script on.
-- The [pass/fail criteria](./how-to-define-test-criteria.md) for the test.
+- The [fail criteria](./how-to-define-test-criteria.md) for the test.
- The list of [app components and resource metrics to monitor](./how-to-monitor-server-side-metrics.md) during the test execution.

When you run a test, a [test run](#test-run) instance is created.
When you create or update a load test, you can configure the list of app compone
During a load test, Azure Load Testing collects metrics about the test execution. There are two types of metrics:

-- *Client-side metrics* give you details reported by the test engine. These metrics include the number of virtual users, the request response time, the number of failed requests, or the number of requests per second. You can [define pass/fail criteria](./how-to-define-test-criteria.md) based on client-side metrics to specify when a test passes or fails.
+- *Client-side metrics* give you telemetry reported by the test engine. These metrics include the number of virtual users, the request response time, the number of failed requests, or the number of requests per second. You can [define test fail criteria](./how-to-define-test-criteria.md) based on these client-side metrics.
- *Server-side metrics* are available for Azure-hosted applications and provide information about your Azure [application components](#app-component). Azure Load Testing integrates with Azure Monitor, including Application Insights and Container insights, to capture details from the Azure services. Depending on the type of service, different metrics are available. For example, metrics can be for the number of database reads, the type of HTTP responses, or container resource consumption.
load-testing How To Appservice Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-appservice-insights.md
Title: Get more insights from App Service diagnostics
+ Title: Get load test insights from App Service diagnostics
-description: 'Learn how to get detailed insights from App Service diagnostics and Azure Load Testing for App Service workloads.'
+description: 'Learn how to get detailed application performance insights from App Service diagnostics and Azure Load Testing.'
Previously updated : 11/30/2021 Last updated : 10/24/2022
-# Get detailed insights from App Service diagnostics and Azure Load Testing Preview for Azure App Service workloads
+# Get performance insights from App Service diagnostics and Azure Load Testing Preview
-In this article, you'll learn how to gain more insights from Azure App Service workloads by using Azure Load Testing Preview and Azure App Service diagnostics.
+Azure Load Testing Preview collects detailed resource metrics across your Azure app components to help identify performance bottlenecks. In this article, you learn how to use App Service diagnostics to get additional insights when load testing Azure App Service workloads.
-[App Service diagnostics](../app-service/overview-diagnostics.md) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
-
-You can take advantage of App Service diagnostics when you run load tests on applications that run on App Service.
+[App Service diagnostics](/azure/app-service/overview-diagnostics) is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
You can take advantage of App Service diagnostics when you run load tests on app
- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
- An Azure App Service workload that you're running a load test against and that you've added to the app components to monitor during the load test.
-## Get more insights when you test an App Service workload
-
-In this section, you use [App Service diagnostics](../app-service/overview-diagnostics.md) to get more insights from load testing an Azure App Service workload.
+## Use App Service diagnostics for your load test
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+Azure Load Testing lets you monitor server-side metrics for your Azure app components during a load test. You can then visualize and analyze these metrics in the Azure Load Testing dashboard.
-1. On the left pane, select **Tests** to view the list of tests, and then select your test.
+When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics).
-1. On the test runs page, select **Configure**, and then select **App Components** to add or remove Azure resources to monitor during the load test.
+To view the App Service diagnostics information for your application under load test:
- :::image type="content" source="media/how-to-appservice-insights/configure-app-components.png" alt-text="Screenshot that shows the 'Configure' and 'App Components' buttons for configuring the load test.":::
+1. Go to the [Azure portal](https://portal.azure.com).
-1. Select the **Monitoring** tab, and then add your app service to the list of app components to monitor.
+1. Add your App Service resource to the load test app components. Follow the steps in [monitor server-side metrics](./how-to-monitor-server-side-metrics.md) to add your app service.
- :::image type="content" source="media/how-to-appservice-insights/test-monitoring-app-service.png" alt-text="Screenshot of the 'Edit test' pane for selecting and app service resource to monitor.":::
+ :::image type="content" source="media/how-to-appservice-insights/test-monitoring-app-service.png" alt-text="Screenshot of the Monitoring tab when editing a load test in the Azure portal, highlighting the App Service resource.":::
-1. Select **Run** to execute the load test.
+1. Select **Run** to run the load test.
After the test finishes, you'll notice a section about App Service on the test result dashboard.
-1. Select the **here** link in the App Service message.
+ :::image type="content" source="media/how-to-appservice-insights/test-result-app-service-diagnostics.png" alt-text="Screenshot that shows the 'App Service' section on the load testing dashboard in the Azure portal.":::
+
+1. Select the link in **Additional insights** to view the App Service diagnostics information.
- :::image type="content" source="media/how-to-appservice-insights/test-result-app-service-diagnostics.png" alt-text="Screenshot that shows the 'App Service' section on the test result dashboard.":::
+ App Service diagnostics enables you to view in-depth information and dashboards about the performance, resource usage, and stability of your app service.
- Your Azure App Service **Availability and Performance** page opens, which displays your App Service diagnostics.
+ In the following screenshot, you can see that there are concerns about the CPU usage, app performance, and failed requests.
:::image type="content" source="media/how-to-appservice-insights/app-diagnostics-overview.png" alt-text="Screenshot that shows the App Service diagnostics overview page, with a list of interactive reports on the left pane.":::
-1. On the left pane, select any of the various interactive reports that are available in App Service diagnostics.
+ On the left pane, you can drill deeper into specific issues by selecting one of the diagnostics reports. For example, the following screenshot shows the **High CPU Analysis** report.
:::image type="content" source="media/how-to-appservice-insights/app-diagnostics-high-cpu.png" alt-text="Screenshot that shows the App Service diagnostics CPU usage report.":::
- > [!IMPORTANT]
+ The following screenshot shows the **Web App Slow** report, which gives details and recommendations about application performance.
+
+ :::image type="content" source="media/how-to-appservice-insights/app-diagnostics-web-app-slow.png" alt-text="Screenshot that shows the App Service diagnostics slow application report.":::
+
+ > [!NOTE]
> It can take up to 45 minutes for the insights data to be displayed on this page.

## Next steps

-- Learn how to [parameterize a load test](./how-to-parameterize-load-tests.md) with secrets.
-
-- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn how to [parameterize a load test with secrets and environment variables](./how-to-parameterize-load-tests.md).
+- Learn how to [identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md) for Azure applications.
+- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
Title: Define load test pass/fail criteria
+ Title: Define load test fail criteria
-description: 'Learn how to configure pass/fail criteria for load tests with Azure Load Testing.'
+description: 'Learn how to configure fail criteria for load tests with Azure Load Testing. Fail criteria let you define conditions that your load test results should meet.'
Previously updated : 11/30/2021 Last updated : 10/19/2022
-# Define pass/fail criteria for load tests by using Azure Load Testing Preview
+# Define fail criteria for load tests by using Azure Load Testing Preview
-In this article, you'll learn how to define pass/fail criteria for your load tests with Azure Load Testing Preview.
-
-By defining test criteria, you can specify the performance expectations of your application under test. By using the Azure Load Testing service, you can set failure criteria for various test metrics.
+In this article, you'll learn how to define test fail criteria for your load tests with Azure Load Testing Preview. Fail criteria let you define performance and quality expectations for your application under load. Azure Load Testing supports various client metrics for defining fail criteria. Criteria can apply to the entire load test, or to an individual request in the JMeter script.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
By defining test criteria, you can specify the performance expectations of your
## Prerequisites

- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

-- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure load testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+
+## Load test fail criteria
+
+Load test fail criteria are conditions for client-side metrics that your test should meet. You define test criteria at the load test level in Azure Load Testing. A load test can have one or more test criteria. When at least one of the test criteria evaluates to true, the load test gets the *failed* status.
+
+You can define test criteria at two levels. A load test can combine criteria at both levels.
+
+- At the load test level. For example, to ensure that the total error percentage doesn't exceed a threshold.
+- At the JMeter request level (JMeter sampler). For example, you could specify a threshold for the response time of the *getProducts* request, but disregard the response time of the *sign in* request.
-## Load test pass/fail criteria
+You can define a maximum of 10 test criteria for a load test. If there are multiple criteria for the same client metric, the criterion with the lowest threshold value is used.
-This section discusses the syntax of Azure Load Testing pass/fail criteria. When a criterion evaluates to `true`, the load test gets the *failed* status.
+### Fail criteria structure
-The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`.
+The format of fail criteria in Azure Load Testing follows that of a conditional statement for a [supported metric](#supported-client-metrics-for-fail-criteria). For example, ensure that the average number of requests per second is greater than 500.
+
+Fail criteria have the following structure:
+
+- Test criteria at the load test level: `Aggregate_function (client_metric) condition threshold`.
+- Test criteria applied to specific JMeter requests: `Request: Aggregate_function (client_metric) condition threshold`.
The following table describes the different components:
-|Parameter |Description |
-|||
-|`Request` | *Optional.* Name of the sampler in the JMeter script to which the criterion applies. If you don't specify a request name, the criterion applies to the aggregate of all the requests in the script. |
-|`Client metric` | *Required.* The client metric on which the criteria should be applied. |
-|`Aggregate function` | *Required.* The aggregate function to be applied on the client metric. |
-|`Condition` | *Required.* The comparison operator. |
-|`Threshold` | *Required.* The numeric value to compare with the client metric. |
+|Parameter |Description |
+| --- | --- |
+|`Client metric` | *Required.* The client metric on which the condition should be applied. |
+|`Aggregate function` | *Required.* The aggregate function to be applied on the client metric. |
+|`Condition` | *Required.* The comparison operator, such as `greater than`, or `less than`. |
+|`Threshold` | *Required.* The numeric value to compare with the client metric. |
+|`Request` | *Optional.* Name of the sampler in the JMeter script to which the criterion applies. If you don't specify a request name, the criterion applies to the aggregate of all the requests in the script. |
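+
+For example (an illustrative sketch; `GetCustomerDetails` is a hypothetical JMeter sampler name), the first criterion applies to the entire load test, and the second only to the `GetCustomerDetails` request:
+
+```
+percentage(error) > 5
+GetCustomerDetails: avg(response_time_ms) > 300
+```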
+
+### Supported client metrics for fail criteria
+
+Azure Load Testing supports the following client metrics:
-Azure Load Testing supports the following metrics:
+|Metric |Aggregate function |Threshold |Condition | Description |
+| --- | --- | --- | --- | --- |
+|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Response time or elapsed time, in milliseconds. Learn more about [elapsed time in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
+|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Latency, in milliseconds. Learn more about [latency in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
+|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) | Percentage of failed requests. |
+|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) | Number of requests per second. |
+|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) | Total number of requests. |
-|Metric |Aggregate function |Threshold |Condition |
-|||||
-|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) |
-|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) |
-|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) |
-|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) |
-|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) |
+## Define load test fail criteria
-## Define test pass/fail criteria in the Azure portal
+# [Azure portal](#tab/portal)
In this section, you configure test criteria for a load test in the Azure portal.

1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
-1. On the left pane, select **Tests** to view the list of load tests, and then select the test you're working with.
+1. On the left pane, select **Tests** to view the list of load tests.
- :::image type="content" source="media/how-to-define-test-criteria/configure-test.png" alt-text="Screenshot of the 'Configure' and 'Test' buttons and a list of load tests.":::
+1. Select your load test from the list, and then select **Edit**.
-1. Select the **Test criteria** tab.
+ :::image type="content" source="media/how-to-define-test-criteria/edit-test.png" alt-text="Screenshot of the list of tests for an Azure load testing resource in the Azure portal, highlighting the 'Edit' button.":::
- :::image type="content" source="media/how-to-define-test-criteria/configure-test-test-criteria.png" alt-text="Screenshot that shows the 'Test criteria' tab and the pane for configuring the criteria.":::
+1. On the **Test criteria** pane, fill in the **Metric**, **Aggregate function**, **Condition**, and **Threshold** values for your test.
-1. On the **Test criteria** pane, use the dropdown lists to select the **Metric**, **Aggregate function**, **Condition**, and **Threshold** values for your test.
+ :::image type="content" source="media/how-to-define-test-criteria/test-creation-criteria.png" alt-text="Screenshot of the 'Test criteria' pane for a load test in the Azure portal and highlights the fields for adding a test criterion.":::
- :::image type="content" source="media/how-to-define-test-criteria/test-creation-criteria.png" alt-text="Screenshot of the 'Test criteria' pane and the dropdown controls for adding test criteria to a load test.":::
+ Optionally, enter the **Request name** information to add a test criterion for a specific JMeter request. The value should match the name of the JMeter sampler in the JMX file.
- You can define a maximum of 10 test criteria for a load test. If there are multiple criteria for the same client metric, the criterion with the lowest threshold value is used.
+ :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
1. Select **Apply** to save the changes.
-When you run the load test, Azure Load Testing uses the updated test configuration. The test run dashboard shows the test criteria and indicates whether the test results pass or fail the criteria.
+ When you now run the load test, Azure Load Testing uses the test criteria to determine the status of the load test run.
+1. Run the test and view the status in the load test dashboard.
+
+ The dashboard shows each of the test criteria and their status. The overall test status is *failed* if at least one of the criteria was met.
+
+ :::image type="content" source="media/how-to-define-test-criteria/test-criteria-dashboard.png" alt-text="Screenshot that shows the test criteria on the load test dashboard.":::
-## Define test pass/fail criteria in CI/CD workflows
+# [Azure Pipelines](#tab/pipelines)
+
+In this section, you configure test criteria for a load test, as part of an Azure Pipelines CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+
+For CI/CD workflows, you configure the load test settings in a [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository.
+
+To specify fail criteria in the YAML configuration file:
+
+1. Open the YAML test configuration file for your load test in your editor of choice.
+
+1. Add your test criteria in the `failureCriteria` setting.
+
+ Use the [fail criteria format](#fail-criteria-structure), as described earlier. You can add multiple fail criteria for a load test.
-In this section, you learn how to define load test pass/fail criteria for continuous integration and continuous delivery (CI/CD) workflows. To run a load test in your CI/CD workflow, you use a [YAML test configuration file](./reference-test-config-yaml.md).
+ The following example defines three fail criteria. The first two criteria apply to the overall load test, and the last one specifies a condition for the `GetCustomerDetails` request.
-1. Open the YAML test configuration file.
+ ```yaml
+ version: v0.1
+ testName: SampleTest
+ testPlan: SampleTest.jmx
+ description: Load test website home page
+ engineInstances: 1
+ failureCriteria:
+ - avg(response_time_ms) > 300
+ - percentage(error) > 50
+ - GetCustomerDetails: avg(latency_ms) >200
+ ```
+
+ When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file.
+
+ :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
+
+1. Save the YAML configuration file, and commit the changes to source control.
+
+1. After the CI/CD workflow runs, verify the test status in the CI/CD log.
+
+ The log shows the overall test status, and the status of each of the test criteria. The status of the CI/CD workflow run also reflects the test run status.
+
+ :::image type="content" source="media/how-to-define-test-criteria/azure-pipelines-log.png" alt-text="Screenshot that shows the test criteria in the CI/CD workflow log.":::
+
+# [GitHub Actions](#tab/github)
+
+In this section, you configure test criteria for a load test, as part of a GitHub Actions CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+
+For CI/CD workflows, you configure the load test settings in a [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository.
-1. Add the test criteria to the configuration file. For more information about YAML syntax, see [test configuration YAML reference](./reference-test-config-yaml.md).
+To specify fail criteria in the YAML configuration file:
- ```yml
- failureCriteria:
-     - avg(response_time_ms) > 300
-     - percentage(error) > 20
- - GetCustomerDetails: avg(latency_ms) >200
+1. Open the YAML test configuration file for your load test in your editor of choice.
+
+1. Add your test criteria in the `failureCriteria` setting.
+
+ Use the [fail criteria format](#fail-criteria-structure), as described earlier. You can add multiple fail criteria for a load test.
+
+ The following example defines three fail criteria. The first two criteria apply to the overall load test, and the last one specifies a condition for the `GetCustomerDetails` request.
+
+ ```yaml
+ version: v0.1
+ testName: SampleTest
+ testPlan: SampleTest.jmx
+ description: Load test website home page
+ engineInstances: 1
+ failureCriteria:
+ - avg(response_time_ms) > 300
+ - percentage(error) > 50
+ - GetCustomerDetails: avg(latency_ms) >200
```
+
+ When you define a test criterion for a specific JMeter request, the request name should match the name of the JMeter sampler in the JMX file.
-1. Save the YAML configuration file.
+ :::image type="content" source="media/how-to-define-test-criteria/jmeter-request-name.png" alt-text="Screenshot of the JMeter user interface, highlighting the request name.":::
-When the CI/CD workflow runs the load test, the workflow status reflects the status of the pass/fail criteria. The CI/CD logging information shows the status of each of the test criteria.
+1. Save the YAML configuration file, and commit the changes to source control.
+1. After the CI/CD workflow runs, verify the test status in the CI/CD log.
+
+ The log shows the overall test status, and the status of each of the test criteria. The status of the CI/CD workflow run also reflects the test run status.
+
+ :::image type="content" source="media/how-to-define-test-criteria/github-actions-log.png" alt-text="Screenshot that shows the test criteria in the CI/CD workflow log.":::
+
+
## Next steps
load-testing How To Use A Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-use-a-managed-identity.md
Previously updated : 11/30/2021 Last updated : 10/20/2022

# Use managed identities for Azure Load Testing Preview
-This article shows how you can create a managed identity for an Azure Load Testing Preview resource and how to use it to read secrets from your Azure key vault.
+This article shows how to create a managed identity for Azure Load Testing Preview. You can use a managed identity to authenticate with and read secrets from Azure Key Vault.
-A managed identity in Azure Active Directory (Azure AD) allows your resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity from Azure Active Directory (Azure AD) allows your load testing resource to easily access other Azure AD-protected resources, such as Azure Key Vault. The identity is managed by the Azure platform and doesn't require you to manage or rotate any secrets. For more information about managed identities in Azure AD, see [Managed identities for Azure resources](/azure/active-directory/managed-identities-azure-resources/overview).
Azure Load Testing supports two types of identities:

-- A **system-assigned identity** is associated with your Azure Load Testing resource and is removed when your resource is deleted. A resource can have only one system-assigned identity.
-
-- A **user-assigned identity** is a standalone Azure resource that you can assign to your Azure Load Testing resource. When you delete the Load Testing resource, the identity is not removed. You can assign multiple user-assigned identities to the Load Testing resource.
+- A **system-assigned identity** is associated with your load testing resource and is deleted when your resource is deleted. A resource can only have one system-assigned identity.
+- A **user-assigned identity** is a standalone Azure resource that you can assign to your load testing resource. When you delete the load testing resource, the managed identity remains available. You can assign multiple user-assigned identities to the load testing resource.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Azure Load Testing supports two types of identities:
## Prerequisites

- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure load testing resource. If you need to create an Azure load testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- To create a user-assigned managed identity, your account needs the [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) role assignment.
-- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
-
-## Set a system-assigned identity
+## Assign a system-assigned identity to a load testing resource
-To add a system-assigned identity for your Azure Load Testing resource, you need to enable a property on the resource. You can set this property by using the Azure portal or by using an Azure Resource Manager (ARM) template.
+To assign a system-assigned identity for your Azure load testing resource, enable a property on the resource. You can set this property by using the Azure portal or by using an Azure Resource Manager (ARM) template.
# [Portal](#tab/azure-portal)
-To set up a managed identity in the portal, you first create an Azure Load Testing resource and then enable the feature.
+To set up a managed identity in the portal, you first create an Azure load testing resource and then enable the feature.
-1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
1. On the left pane, select **Identity**.
-1. Switch the system-assigned identity status to **On**, and then select **Save**.
+1. Select the **System assigned** tab.
+
+1. Switch the **Status** to **On**, and then select **Save**.
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity.png" alt-text="Screenshot that shows how to assign a system-assigned managed identity for Azure Load Testing in the Azure portal.":::
+
+1. On the confirmation window, select **Yes** to confirm the assignment of the managed identity.
- :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on system-assigned managed identity for Azure Load Testing.":::
+1. After the managed identity is assigned, the page shows the **Object ID** of the managed identity and lets you assign permissions to it.
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/system-assigned-managed-identity-completed.png" alt-text="Screenshot that shows the system-assigned managed identity information for a load testing resource in the Azure portal.":::
+
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
# [ARM template](#tab/arm)
-You can use an ARM template to automate the deployment of your Azure resources. You can create any resource of type `Microsoft.LoadTestService/loadtests` with an identity by including the following property in the resource definition:
+You can use an ARM template to automate the deployment of your Azure resources. For more information about using ARM templates with Azure Load Testing, see the [Azure Load Testing ARM reference documentation](/azure/templates/microsoft.loadtestservice/allversions).
+
+You can assign a system-assigned managed identity when you create a resource of type `Microsoft.LoadTestService/loadtests`. Configure the `identity` property with the `SystemAssigned` value in the resource definition:
```json "identity": {
You can use an ARM template to automate the deployment of your Azure resources.
}
```
-By adding the system-assigned type, you're telling Azure to create and manage the identity for your resource. For example, an Azure Load Testing resource might look like the following:
+By adding the system-assigned identity type, you're telling Azure to create and manage the identity for your resource. For example, an Azure load testing resource might look like the following:
```json
{
By adding the system-assigned type, you're telling Azure to create and manage th
}
```
-When the resource is created, it gets the following additional properties:
+After the resource creation finishes, the following properties are configured for the resource:
-```json
+```output
"identity": { "type": "SystemAssigned",
- "tenantId": "<TENANTID>",
- "principalId": "<PRINCIPALID>"
+ "tenantId": "00000000-0000-0000-0000-000000000000",
+ "principalId": "00000000-0000-0000-0000-000000000000"
}
```
-The `tenantId` property identifies which Azure AD tenant the identity belongs to. The `principalId` is a unique identifier for the resource's new identity. Within Azure AD, the service principal has the same name as the Azure Load Testing resource.
+The `tenantId` property identifies which Azure AD tenant the managed identity belongs to. The `principalId` is a unique identifier for the resource's new identity. Within Azure AD, the service principal has the same name as the Azure load testing resource.
+
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
-## Set a user-assigned identity
+## Assign a user-assigned identity to a load testing resource
-Before you can add a user-assigned identity to an Azure Load Testing resource, you must first create this identity. You can then add the identity by using its resource identifier.
+Before you can add a user-assigned managed identity to an Azure load testing resource, you must first create this identity in Azure AD. Then, you can assign the identity by using its resource identifier.
+
+You can add multiple user-assigned managed identities to your resource. For example, if you need to access multiple Azure resources, you can grant different permissions to each of these identities.
# [Portal](#tab/azure-portal)
-1. Create a user-assigned managed identity by following the instructions mentioned [here](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+1. Create a user-assigned managed identity by following the instructions mentioned in [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity).
+
+ :::image type="content" source="media/how-to-use-a-managed-identity/create-user-assigned-managed-identity.png" alt-text="Screenshot that shows how to create a user-assigned managed identity in the Azure portal.":::
-1. In the [Azure portal](https://portal.azure.com/), go to your Azure Load Testing resource.
+1. In the [Azure portal](https://portal.azure.com/), go to your Azure load testing resource.
1. On the left pane, select **Identity**.
-1. Select **User assigned** tab and click **Add**.
+1. Select the **User assigned** tab, and select **Add**.
-1. Search and select the identity you created previously. Then select **Add** to add it to the Azure Load Testing resource.
+1. Search for and select the managed identity you created previously. Then, select **Add** to add it to the Azure load testing resource.
:::image type="content" source="media/how-to-use-a-managed-identity/user-assigned-managed-identity.png" alt-text="Screenshot that shows how to turn on user-assigned managed identity for Azure Load Testing.":::
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
+ # [ARM template](#tab/arm)
-You can create an Azure Load Testing resource by using an ARM template and the resource type `Microsoft.LoadTestService/loadtests`. You can specify a user-assigned identity in the `identity` section of the resource definition. Replace the `<RESOURCEID>` text placeholder with the resource ID of your user-assigned identity:
+You can create an Azure load testing resource by using an ARM template and the resource type `Microsoft.LoadTestService/loadtests`. For more information about using ARM templates with Azure Load Testing, see the [Azure Load Testing ARM reference documentation](/azure/templates/microsoft.loadtestservice/allversions).
-```json
-"identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<RESOURCEID>": {}
- }
-}
-```
+1. Create a user-assigned managed identity by following the instructions mentioned in [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-arm#create-a-user-assigned-managed-identity-3).
-The following code snippet shows an example of an Azure Load Testing ARM resource definition with a user-assigned identity:
+
+1. Specify the user-assigned managed identity in the `identity` section of the resource definition.
-```json
-{
- "type": "Microsoft.LoadTestService/loadtests",
- "apiVersion": "2021-09-01-preview",
- "name": "[parameters('name')]",
- "location": "[parameters('location')]",
- "tags": "[parameters('tags')]",
+ Replace the `<RESOURCEID>` text placeholder with the resource ID of your user-assigned identity:
+
+ ```json
"identity": { "type": "UserAssigned", "userAssignedIdentities": { "<RESOURCEID>": {} }
-}
-```
+ }
+ ```
+
+ The following code snippet shows an example of an Azure Load Testing ARM resource definition with a user-assigned identity:
+
+ ```json
+ {
+ "type": "Microsoft.LoadTestService/loadtests",
+ "apiVersion": "2021-09-01-preview",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {}
+ }
+ }
+ }
+ ```
-After the Load Testing resource is created, Azure provides the `principalId` and `clientId` properties:
+ After the load testing resource is created, Azure provides the `principalId` and `clientId` properties in the output:
-```json
-"identity": {
- "type": "UserAssigned",
- "userAssignedIdentities": {
- "<RESOURCEID>": {
- "principalId": "<PRINCIPALID>",
- "clientId": "<CLIENTID>"
+ ```output
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<RESOURCEID>": {
+ "principalId": "00000000-0000-0000-0000-000000000000",
+ "clientId": "00000000-0000-0000-0000-000000000000"
+ }
}
}
-}
-```
+ ```
+
+ The `principalId` is a unique identifier for the identity that's used for Azure AD administration. The `clientId` is a unique identifier for the resource's new identity that's used for specifying which identity to use during runtime calls.
-The `principalId` is a unique identifier for the identity that's used for Azure AD administration. The `clientId` is a unique identifier for the resource's new identity that's used for specifying which identity to use during runtime calls.
+You can now [grant your load testing resource access to your Azure key vault](#grant-access-to-your-azure-key-vault).
## Grant access to your Azure key vault
-A managed identity allows the Azure Load testing resource to access other Azure resources. In this section, you grant the Azure Load Testing service access to read secret values from your key vault.
+Using managed identities for Azure resources, your Azure load testing resource can obtain Azure AD tokens to authenticate to your Azure key vault. Grant the managed identity access by assigning the [appropriate role](/azure/role-based-access-control/built-in-roles) to the managed identity.
+
+To grant your Azure load testing resource permissions to read secrets from your Azure key vault:
+
-If you don't already have a key vault, follow the instructions in [Azure Key Vault quickstart](../key-vault/secrets/quick-create-cli.md) to create it.
+1. In the [Azure portal](https://portal.azure.com/), go to your Azure key vault resource.
-1. In the Azure portal, go to your Azure Key Vault resource.
+ If you don't have a key vault, follow the instructions in [Azure Key Vault quickstart](/azure/key-vault/secrets/quick-create-cli) to create one.
1. On the left pane, under **Settings**, select **Access Policies**, and then **Add Access Policy**.
If you don't already have a key vault, follow the instructions in [Azure Key Vau
:::image type="content" source="media/how-to-use-a-managed-identity/key-vault-add-policy.png" alt-text="Screenshot that shows how to add an access policy to your Azure key vault.":::
-1. Select **Select principal**, and then select the system-assigned or user-assigned principal for your Azure Load Testing resource.
+1. Select **Select principal**, and then select the system-assigned or user-assigned principal for your Azure load testing resource.
- The name of the system-assigned principal is the same name as the Azure Load Testing resource.
+ If you're using a system-assigned managed identity, the name matches that of your Azure load testing resource.
1. Select **Add**.
-You've now granted access to your Azure Load Testing resource to read the secret values from your Azure key vault.
+You've now granted access to your Azure load testing resource to read the secret values from your Azure key vault.
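+
+Alternatively, you can grant the access policy from an ARM template. The following snippet is a minimal sketch under some assumptions: the API version and the `keyVaultName` parameter are illustrative, and `<PRINCIPALID>` stands for the principal ID of the managed identity from the previous sections.
+
+```json
+{
+  "type": "Microsoft.KeyVault/vaults/accessPolicies",
+  "apiVersion": "2022-07-01",
+  "name": "[concat(parameters('keyVaultName'), '/add')]",
+  "properties": {
+    "accessPolicies": [
+      {
+        "tenantId": "[subscription().tenantId]",
+        "objectId": "<PRINCIPALID>",
+        "permissions": {
+          "secrets": [ "get", "list" ]
+        }
+      }
+    ]
+  }
+}
+```
+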
## Next steps
-* To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md).
-* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
+* Learn how to [Parameterize a load test with secrets](./how-to-parameterize-load-tests.md).
+* Learn how to [Manage users and roles in Azure Load Testing](./how-to-assign-roles.md).
+* [What are managed identities for Azure resources?](/azure/active-directory/managed-identities-azure-resources/overview)
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
You can integrate Azure Load Testing in your CI/CD pipeline at meaningful points
Get started with [adding load testing to your CI/CD workflow](./tutorial-identify-performance-regression-with-cicd.md) to quickly identify performance degradation of your application under load.
-In the test configuration, you [specify pass/fail rules](./how-to-define-test-criteria.md) to catch performance regressions early in the development cycle. For example, when the average response time exceeds a threshold, the test should fail.
+In the test configuration, [specify test fail criteria](./how-to-define-test-criteria.md) to catch application performance or stability regressions early in the development cycle. For example, get alerted when the average response time or the number of errors exceed a specific threshold.
Azure Load Testing will automatically stop an automated load test in response to specific error conditions. You can also use the AutoStop listener in your Apache JMeter script. Automatically stopping safeguards you against failing tests further incurring costs, for example, because of an incorrectly configured endpoint URL.
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. |
| `description` | string | | Short description of the test run. |
| `subnetId` | string | | Resource ID of the subnet for testing privately hosted endpoints (VNET injection). This subnet will host the injected test engine VMs. For more information, see [how to load test privately hosted endpoints](./how-to-test-private-endpoint.md). |
-| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Load test pass/fail criteria](./how-to-define-test-criteria.md#load-test-passfail-criteria). |
+| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Define load test fail criteria](./how-to-define-test-criteria.md#load-test-fail-criteria). |
| `properties` | object | | List of properties to configure the load test. |
| `properties.userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
| `splitAllCSVs` | boolean | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). |
load-testing Tutorial Identify Performance Regression With Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-performance-regression-with-cicd.md
You can specify load test fail criteria for Azure Load Testing in the test confi
- percentage(error) > 20 ```
- You've now specified pass/fail criteria for your load test. The test will fail if at least one of these conditions is met:
+ You've now specified fail criteria for your load test based on the average response time and the error rate. The test will fail if at least one of these conditions is met:
- The aggregate average response time is greater than 100 ms.
- The aggregate percentage of errors is greater than 20%.
You can specify load test fail criteria for Azure Load Testing in the test confi
1. After the test finishes, notice that the CI/CD pipeline run has failed.
- In the CI/CD output log, you find that the test failed because one of the fail criteria was met. The load test average response time was higher than the value that you specified in the pass/fail criteria.
+ In the CI/CD output log, you find that the test failed because one of the fail criteria was met. The load test average response time was higher than the value that you specified in the fail criteria.
:::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/test-criteria-failed.png" alt-text="Screenshot that shows pipeline logs after failed test criteria."::: The Azure Load Testing service evaluates the criteria during the test run. If any of these conditions fails, Azure Load Testing service returns a nonzero exit code. This code informs the CI/CD workflow that the test has failed.
-1. Edit the *SampleApp.yml* file and change the test's pass/fail criteria to increase the criterion for average response time:
+1. Edit the *SampleApp.yml* file and change the test's fail criteria to increase the criterion for average response time:
```yaml failureCriteria:
You've now created a CI/CD workflow that uses Azure Load Testing to automate run
* Learn more about [Configuring server-side monitoring](./how-to-monitor-server-side-metrics.md).
* Learn more about [Comparing results across multiple test runs](./how-to-compare-multiple-test-runs.md).
* Learn more about [Parameterizing a load test](./how-to-parameterize-load-tests.md).
-* Learn more about [Defining test pass/fail criteria](./how-to-define-test-criteria.md).
+* Learn more about [Defining test fail criteria](./how-to-define-test-criteria.md).
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
from azure.identity import ManagedIdentityCredential
subscription_id = "<subscription>"
resource_group = "<resource-group>"
workspace = "<workspace>"
+resource_id = "<resource-id>"
-ml_client = MLClient(ManagedIdentityCredential("<resource-id>"), subscription_id, resource_group, workspace)
+ml_client = MLClient(ManagedIdentityCredential(identity_config={"resource_id": resource_id}), subscription_id, resource_group, workspace)
```

Once authenticated, use the following command to run a batch deployment job:
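A hedged sketch of what that invocation can look like with the v2 Python SDK (the endpoint name and input path are placeholders, and parameter names can vary slightly across `azure-ai-ml` versions):

```python
from azure.ai.ml import Input

# Placeholder names; substitute your own endpoint and data asset.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="my-batch-endpoint",
    input=Input(type="uri_folder", path="azureml:my-dataset@latest"),
)
ml_client.jobs.stream(job.name)  # optionally follow the job logs
```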
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Azure Machine Learning lets you bring data from a local machine or existing cloud-based storage. In this article, you'll learn the main data concepts in Azure Machine Learning, including:

> [!div class="checklist"]
-> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that is a reference to a storage location on your local computer or in the cloud that makes it very easy to access data in your jobs.
-> - [**Data asset**](#data-asset) - Create data assets in your workspace to share with team members, version, and track data lineage.
-> - [**Datastore**](#datastore) - Azure Machine Learning Datastores securely keep the connection information to your data storage on Azure, so you don't have to code it in your scripts.
-> - [**MLTable**](#mltable) - a method to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
+> - [**URIs**](#uris) - A **U**niform **R**esource **I**dentifier that references a storage location on your local computer or in the cloud, making it easy to access data in your jobs. Azure Machine Learning distinguishes two types of URIs: `uri_file` and `uri_folder`. To consume a file as a job input, define the input with `type` set to `uri_file` and `path` set to the file's location.
+> - [**MLTable**](#mltable) - `MLTable` helps you abstract the schema definition for tabular data, which makes it better suited to complex or frequently changing schemas and to AutoML jobs. If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you can use `uri_file` or `uri_folder` instead.
+> - [**Data asset**](#data-asset) - If you plan to share your data (URIs or MLTables) with team members in your workspace, or you want to track data versions or lineage, create data assets from the URIs or MLTables you have. If you don't create a data asset, you can still consume the data in jobs, just without lineage tracking, version management, and so on.
+> - [**Datastore**](#datastore) - Azure Machine Learning datastores securely keep the connection information (storage container name, credentials) for your data storage on Azure, so you don't have to code it in your scripts. You can point to your data by using an Azure Machine Learning datastore URI together with a relative path. You can also register files or folders in a datastore as data assets.
+
## URIs

A URI (uniform resource identifier) represents a storage location on your local computer, an attached Datastore, blob/ADLS storage, or a publicly available http(s) location. In addition to local paths (for example: `./path_to_my_data/`), several different protocols are supported for cloud storage locations:
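For instance, here's a hedged sketch of job inputs that use different protocols (the account, container, datastore, and file names are placeholders):

```yaml
inputs:
  local_file:
    type: uri_file
    path: ./path_to_my_data/sample.csv
  datastore_folder:
    type: uri_folder
    path: azureml://datastores/my_datastore/paths/my_folder/
  blob_file:
    type: uri_file
    path: https://mystorageaccount.blob.core.windows.net/mycontainer/sample.csv
```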
az ml data create --file data-example.yml --version 1
# [Consume data asset](#tab/cli-data-consume-example)
-To consume a data asset in a job, define your job specification in a YAML file the path to be `azureml:<NAME_OF_DATA_ASSET>:<VERSION>`, for example:
+To consume a registered data asset in a job, define your job specification in a YAML file. Specify the type of your data asset (the type defaults to `uri_folder` if you don't provide a value), and set the path to `azureml:<NAME_OF_DATA_ASSET>:<VERSION>` so that you don't have to look up the datastore URI or storage URI (those two path forms are also supported).
+For example:
```yml # hello-data-uri-file.yml
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Learn how to deploy a custom container as an online endpoint in Azure Machine Le
Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication.
+This article focuses on serving a TensorFlow model with TensorFlow Serving. You can find [various examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) for TorchServe, Triton Inference Server, the Plumber R package, and the AzureML Inference Minimal image.
+
> [!WARNING]
> Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
machine-learning Migrate To V2 Assets Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-data.md
This article gives a comparison of data scenario(s) in SDK v1 and SDK v2.
|Functionality in SDK v1|Rough mapping in SDK v2|
|-|-|
-|[Method/API in SDK v1](/python/api/azurzeml-core/azureml.datadisplayname: migration, v1, v2)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
+|[Method/API in SDK v1](/python/api/azureml-core/azureml.data)|[Method/API in SDK v2](/python/api/azure-ai-ml/azure.ai.ml.entities)|
## Next steps
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
+
+ Title: 'CLI (v2) mltable YAML schema'
+
+description: Reference documentation for the CLI (v2) MLTable YAML schema.
++++++++ Last updated : 09/15/2022+++
+# CLI (v2) mltable YAML schema
++
+`MLTable` is a way to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe.
+
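As a hedged illustration (the folder path is a placeholder), you can materialize an `MLTable` from Python with the `mltable` package:

```python
import mltable

# Load the MLTable definition from a folder that contains an MLTable file.
tbl = mltable.load("./my_data_folder")

# Materialize the table into a Pandas dataframe.
df = tbl.to_pandas_dataframe()
print(df.head())
```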
+The ideal scenarios for using `mltable` are:
+
+- The schema of your data is complex and/or changes frequently.
+- You only need a subset of the data (for example, a sample of rows or files, or specific columns).
+- AutoML jobs requiring tabular data.
+
+If your scenario doesn't fit the above, then [URIs](reference-yaml-data.md) are likely a more suitable type.
+
+The source JSON schema can be found at https://azuremlschemas.azureedge.net/latest/MLTable.schema.json.
++
+> [!Note]
+> If you just want to create a data asset for a job, or you want to write your own parsing logic in Python, you can just use `uri_file` or `uri_folder` as mentioned in [CLI (v2) data YAML schema](reference-yaml-data.md).
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |
+| `type` | const | `mltable` to abstract the schema definition for tabular data so that it is easier for consumers of the data to materialize the table into a Pandas/Dask/Spark dataframe | `mltable` | `mltable`|
+| `paths` | array | Paths can be a `file` path, `folder` path, or `pattern` for paths. `pattern` specifies a search pattern to allow globbing (`*` and `**`) of files and folders containing data. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. |`file`, `folder`, `pattern` | |
+| `transformations`| array | Defined sequence of transformations that are applied to data loaded from the defined paths. |`read_delimited`, `read_parquet`, `read_json_lines`, `read_delta_lake`, `take` (take the first N rows from the dataset), `take_random_sample` (take a random sample of records, each selected approximately with the specified probability), `drop_columns`, `keep_columns`, ... ||
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/assets/data). Several are also shown below.
+
+## MLTable paths: file
+```yaml
+type: mltable
+paths:
+ - file: https://dprepdata.blob.core.windows.net/demo/Titanic2.csv
+transformations:
+ - take: 1
+```
+
+## MLTable paths: pattern
+```yaml
+type: mltable
+paths:
+ - pattern: ./*.txt
+transformations:
+ - read_delimited:
+ delimiter: ,
+ encoding: ascii
+ header: all_files_same_headers
+  - convert_column_types:
+      - columns: [Trip_Pickup_DateTime, Trip_Dropoff_DateTime]
+        column_type:
+          datetime:
+            formats: ['%Y-%m-%d %H:%M:%S']
+```
+
+## MLTable transformations
+These transformations apply to all mltable-artifact files:
+
+- `take`: Takes the first *n* records of the table
+- `take_random_sample`: Takes a random sample of the table where each record has a *probability* of being selected. The user can also include a *seed*.
+- `drop_columns`: Drops the specified columns from the table. This transform supports regex so that users can drop columns matching a particular pattern.
+- `keep_columns`: Keeps only the specified columns in the table. This transform supports regex so that users can keep columns matching a particular pattern.
+- `convert_column_types`
+  - `columns`: The name of the column (or columns) whose type you want to convert.
+ - `column_type`: The type you want to convert the column to. For example: string, float, int, or datetime with specified formats.
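Combining a few of these transformations, here's a hedged sketch (the path and column names are illustrative):

```yaml
type: mltable
paths:
  - file: ./sample.csv
transformations:
  - read_delimited:
      delimiter: ','
  - keep_columns: [Age, Fare, Survived]
  - take_random_sample:
      probability: 0.1
      seed: 7
```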
+
+## MLTable transformations: read_delimited
+
+```yaml
+paths:
+ - file: https://dprepdata.blob.core.windows.net/demo/Titanic2.csv
+transformations:
+ - read_delimited:
+ infer_column_types: false
+ delimiter: ','
+ encoding: 'ascii'
+ empty_as_string: false
+ - take: 10
+```
+
+## Delimited files transformations
+The following transformations are specific to delimited files.
+- `infer_column_types`: Boolean to infer column data types. Defaults to `True`. Type inference requires that the data source is accessible from the current compute. Currently, type inference pulls only the first 200 rows.
+- `encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Defaults to `utf8`.
+- `header`: How column headers are handled. You can choose one of the following options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`.
+- `delimiter`: The separator used to split columns.
+- `empty_as_string`: Specify whether empty field values should be loaded as empty strings. The default (`False`) reads empty field values as nulls. Passing `True` reads empty field values as empty strings. If the values are converted to numeric or datetime, this setting has no effect, because empty values are converted to nulls.
+- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to `False`. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or to keep useful information that's encoded in the file path.
+- `support_multi_line`: By default (`support_multi_line=False`), all line breaks, including those in quoted field values, are interpreted as a record break. Reading data this way is faster and more optimized for parallel execution on multiple CPU cores. However, it may silently produce more records with misaligned field values. Set this to `True` when the delimited files are known to contain quoted line breaks.
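Here's a hedged sketch that combines several of these delimited-file options (the path is illustrative):

```yaml
type: mltable
paths:
  - pattern: ./logs/*.csv
transformations:
  - read_delimited:
      delimiter: ','
      header: from_first_file
      include_path_column: true
      support_multi_line: true
```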
+
+## MLTable transformations: read_json_lines
+```yaml
+paths:
+ - file: ./order_invalid.jsonl
+transformations:
+ - read_json_lines:
+ encoding: utf8
+ invalid_lines: drop
+ include_path_column: false
+```
+
+## MLTable transformations: read_json_lines, convert_column_types
+```yaml
+paths:
+ - file: ./train_annotations.jsonl
+transformations:
+ - read_json_lines:
+ encoding: utf8
+ invalid_lines: error
+ include_path_column: false
+ - convert_column_types:
+ - columns: image_url
+ column_type: stream_info
+```
+
+### JSON lines transformations
+Only flat JSON files are supported.
+Below are the supported transformations that are specific to JSON lines:
+
+- `include_path_column`: Boolean to keep path information as a column in the MLTable. Defaults to `False`. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or to keep useful information that's encoded in the file path.
+- `invalid_lines`: How to handle lines that are invalid JSON. Supported values are `error` and `drop`. Defaults to `error`.
+- `encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom`, and `windows1252`. Defaults to `utf8`.
++
+## MLTable transformations: read_parquet
+```yaml
+type: mltable
+traits:
+ index_columns: ID
+paths:
+ - file: ./crime.parquet
+transformations:
+ - read_parquet
+```
+### Parquet files transformations
+If the user doesn't define options for the `read_parquet` transformation, default options are selected (see below).
+
+- `include_path_column`: Boolean to keep path information as a column in the table. Defaults to `False`. This setting is useful when you're reading multiple files and want to know which file a particular record originated from, or to keep useful information that's encoded in the file path.
+
+## MLTable transformations: read_delta_lake
+```yaml
+type: mltable
+
+paths:
+- folder: abfss://my_delta_files
+
+transformations:
+ - read_delta_lake:
+ timestamp_as_of: '2022-08-26T00:00:00Z'
+```
+
+### Delta lake transformations
+
+- `timestamp_as_of`: Timestamp to be specified for time-travel on the specific Delta Lake data.
+- `version_as_of`: Version to be specified for time-travel on the specific Delta Lake data.
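For example, a hedged sketch that uses `version_as_of` instead of a timestamp (the path and version number are illustrative):

```yaml
type: mltable
paths:
- folder: abfss://my_delta_files
transformations:
  - read_delta_lake:
      version_as_of: 1
```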
+
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
migrate How To Create Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md
Run an assessment as follows:
1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
- ![Screenshot of Get started screen.](./media/tutorial-assess-vmware-azure-vm/assess.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess.png" alt-text="Screenshot of Get started screen.":::
2. In **Azure Migrate: Discovery and assessment**, select **Assess** and select **Azure VM**.
- ![Screenshot of Assess VM selection.](./media/tutorial-assess-vmware-azure-vm/assess-servers.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-servers.png" alt-text="Screenshot of Assess VM selection.":::
3. The **Create assessment** wizard appears with **Azure VM** as the **Assessment type**.
4. In **Discovery source**:
Run an assessment as follows:
1. Select **Edit** to review the assessment properties.
- ![Screenshot of View all button to review assessment properties](./media/tutorial-assess-vmware-azure-vm/assessment-name.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-name.png" alt-text="Screenshot of View all button to review assessment properties.":::
+
1. In **Assessment properties** > **Target Properties**:
   - In **Target location**, specify the Azure region to which you want to migrate.
- - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you will be prompted to specify **Reserved Instances** and **VM series**.
+ - Size and cost recommendations are based on the location that you specify. Once you change the target location from default, you'll be prompted to specify **Reserved Instances** and **VM series**.
   - In Azure Government, you can target assessments in [these regions](migrate-support-matrix.md#azure-government).
   - In **Storage type**:
     - If you want to use performance-based data in the assessment, select **Automatic** for Azure Migrate to recommend a storage type, based on disk IOPS and throughput.
     - Alternatively, select the storage type you want to use for the VM when you migrate it.
   - In **Reserved Instances**, specify whether you want to use reserved instances for the VM when you migrate it.
- - If you select to use a reserved instance, you can't specify '**Discount (%)**, or **VM uptime**.
- - [Learn more](https://aka.ms/azurereservedinstances).
+ - If you select to use a reserved instance, you can't specify **Discount (%)** or **VM uptime**. [Learn more](https://aka.ms/azurereservedinstances).
1. In **VM Size**:
   - In **Sizing criterion**, select whether you want to base the assessment on server configuration data/metadata or on performance-based data. If you use performance data:
     - In **Performance history**, indicate the data duration on which you want to base the assessment.
Run an assessment as follows:
1. Select **Save** if you make changes.
- ![Screenshot of Assessment properties.](./media/tutorial-assess-vmware-azure-vm/assessment-properties.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assessment-properties.png" alt-text="Screenshot of Assessment properties.":::
1. In **Assess Servers**, select **Next**.
Run an assessment as follows:
1. In **Select or create a group** > select **Create New** and specify a group name.
- ![Screenshot of adding VMs to a group.](./media/tutorial-assess-vmware-azure-vm/assess-group.png)
+ :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-group.png" alt-text="Screenshot of adding VMs to a group.":::
1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**.
An Azure VM assessment describes:
### View an Azure VM assessment

1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VM**.
-2. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only):
2. In **Assessments**, select an assessment to open it. As an example (the estimations and costs shown are examples only):
- ![Screenshot of an Assessment summary.](./media/how-to-create-assessment/assessment-summary.png)
+ :::image type="content" source="./media/how-to-create-assessment/assessment-summary.png" alt-text="Screenshot of an Assessment summary.":::
### Review Azure readiness
This view shows the estimated compute and storage cost of running VMs in Azure.
When you run performance-based assessments, a confidence rating is assigned to the assessment.
-![Screenshot of Confidence rating.](./media/how-to-create-assessment/confidence-rating.png)
- A rating from 1-star (lowest) to 5-star (highest) is awarded.
- The confidence rating helps you estimate the reliability of the size recommendations provided by the assessment.
migrate How To Modify Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-modify-assessment.md
ms. Previously updated : 07/15/2019 Last updated : 10/25/2022+
This article describes how to customize assessments created by Azure Migrate Dis
[Azure Migrate](migrate-services-overview.md) provides a central hub to track discovery, assessment, and migration of your on-premises apps and workloads, and private/public cloud VMs, to Azure. The hub provides Azure Migrate tools for assessment and migration, as well as third-party independent software vendor (ISV) offerings.
-You can use the Azure Migrate Discovery and assessment tool to create assessments for on-premises VMware VMs and Hyper-V VMs, in preparation for migration to Azure. Discovery and assessment tool assesses on-premises servers for migration to Azure IaaS virtual machines and Azure VMware Solution (AVS).
+You can use the Azure Migrate Discovery and assessment tool to create assessments for on-premises VMware VMs and Hyper-V VMs, in preparation for migration to Azure. The Discovery and assessment tool assesses on-premises servers for migration to Azure IaaS virtual machines and Azure VMware Solution (AVS).
## About assessments
-Assessments you create with Discovery and assessment tool are a point-in-time snapshot of data. There are two types of assessments you can create using Azure Migrate: Discovery and assessment.
+Assessments that you create with the Discovery and assessment tool are a point-in-time snapshot of data. There are two types of assessments that you can create using Azure Migrate: Discovery and assessment.
**Assessment Type** | **Details**
--- | ---
-**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md), [Hyper-V VMs](how-to-set-up-appliance-hyper-v.md), and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure using this assessment type.(concepts-assessment-calculation.md)
-**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type.[Learn more](concepts-azure-vmware-solution-assessment-calculation.md)
+**Azure VM** | Assessments to migrate your on-premises servers to Azure virtual machines. <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md), [Hyper-V VMs](how-to-set-up-appliance-hyper-v.md), and [physical servers](how-to-set-up-appliance-physical.md) for migration to Azure using this assessment type. [Learn more](concepts-assessment-calculation.md).
+**Azure VMware Solution (AVS)** | Assessments to migrate your on-premises servers to [Azure VMware Solution (AVS)](../azure-vmware/introduction.md). <br/><br/> You can assess your on-premises [VMware VMs](how-to-set-up-appliance-vmware.md) for migration to Azure VMware Solution (AVS) using this assessment type. [Learn more](concepts-azure-vmware-solution-assessment-calculation.md).
Sizing criteria options in Azure Migrate assessments:

**Sizing criteria** | **Details** | **Data**
--- | --- | ---
-**Performance-based** | Assessments that make recommendations based on collected performance data | **Azure VM assessment**: VM size recommendation is based on CPU and memory utilization data.<br/><br/> Disk type recommendation (standard HDD/SSD or premium-managed disks) is based on the IOPS and throughput of the on-premises disks.<br/><br/>**Azure SQL assessment**: The Azure SQL configuration is based on performance data of SQL instances and databases, which includes: CPU utilization, Memory utilization, IOPS (Data and Log files), throughput and latency of IO operations<br/><br/>**Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on CPU and memory utilization data.
+**Performance-based** | Assessments that make recommendations based on collected performance data. | **Azure VM assessment**: VM size recommendation is based on CPU and memory utilization data.<br/><br/> Disk type recommendation (standard HDD/SSD or premium-managed disks) is based on the IOPS and throughput of the on-premises disks.<br/><br/>**Azure SQL assessment**: The Azure SQL configuration is based on performance data of SQL instances and databases, which includes CPU utilization, Memory utilization, IOPS (Data and Log files), throughput, and latency of IO operations<br/><br/>**Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on CPU and memory utilization data.
**As-is on-premises** | Assessments that don't use performance data to make recommendations. | **Azure VM assessment**: VM size recommendation is based on the on-premises VM size.<br/><br/> The recommended disk type is based on what you select in the storage type setting for the assessment.<br/><br/> **Azure VMware Solution (AVS) assessment**: AVS nodes recommendation is based on the on-premises VM size.

## How is an assessment done?
-An assessment done in Azure Migrate Discovery and assessment has three stages. Assessment starts with a suitability analysis, followed by sizing, and lastly, a monthly cost estimation. A machine only moves along to a later stage if it passes the previous one. For example, if a machine fails the Azure suitability check, itΓÇÖs marked as unsuitable for Azure, and sizing and costing won't be done. [Learn more.](./concepts-assessment-calculation.md)
+An assessment done in Azure Migrate Discovery and assessment has three stages. Assessment starts with a suitability analysis, followed by sizing, and lastly, a monthly cost estimation. A machine only moves along to a later stage if it passes the previous one. For example, if a machine fails the Azure suitability check, it's marked as unsuitable for Azure, and sizing and costing won't be done. [Learn more](./concepts-assessment-calculation.md).
## What's in an Azure VM assessment?
An assessment done in Azure Migrate Discovery and assessment has three stages. A
**Currency** | Billing currency.
**Discount (%)** | Any subscription-specific discount you receive on top of the Azure offer.<br/> The default setting is 0%.
**VM uptime** | If your VMs are not going to be running 24x7 in Azure, you can specify the duration (number of days per month and number of hours per day) for which they would be running, and the cost estimations are done accordingly.<br/> The default value is 31 days per month and 24 hours per day.
-**Azure Hybrid Benefit** | Specify whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). If set to Yes, non-Windows Azure prices are considered for Windows VMs. Default is Yes.
+**Azure Hybrid Benefit** | Specify whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). If set to Yes, non-Windows Azure prices are considered for Windows VMs. By default, Azure Hybrid Benefit is set to Yes.
## What's in an Azure VMware Solution (AVS) assessment?
Here's what's included in an AVS assessment:
| **Property** | **Details** |
| - | - |
-| **Target location** | Specifies the AVS private cloud location to which you want to migrate.<br/><br/> AVS Assessment currently supports these target regions: East US, West Europe, West US. |
-| **Storage type** | Specifies the storage engine to be used in AVS.<br/><br/> Note that AVS assessments only supports vSAN as a default storage type. |
+| **Target location** | Specifies the AVS private cloud location to which you want to migrate.<br/><br/> AVS Assessment currently supports these target regions: East US, West Europe, and West US. |
+| **Storage type** | Specifies the storage engine to be used in AVS.<br/><br/> Note that AVS assessments only support vSAN as a default storage type. |
**Reserved Instances (RIs)** | This property helps you specify Reserved Instances in AVS. RIs are currently not supported for AVS nodes. |
**Node type** | Specifies the [AVS Node type](../azure-vmware/concepts-private-clouds-clusters.md) used to map the on-premises VMs. Note that the default node type is AV36. <br/><br/> Azure Migrate will recommend a required number of nodes for the VMs to be migrated to AVS. |
-**FTT Setting, RAID Level** | Specifies the applicable Failure to Tolerate and Raid combinations. The selected FTT option combined with the on-premises VM disk requirement will determine the total vSAN storage required in AVS. |
+**FTT Setting, RAID Level** | Specifies the applicable Failure to Tolerate (FTT) and Raid combinations. The selected FTT option combined with the on-premises VM disk requirement will determine the total vSAN storage required in AVS. |
**Sizing criterion** | Sets the criteria to be used to _right-size_ VMs for AVS. You can opt for _performance-based_ sizing or _as on-premises_ without considering the performance history. |
**Performance history** | Sets the duration to consider in evaluating the performance data of machines. This property is applicable only when the sizing criteria is _performance-based_. |
**Percentile utilization** | Specifies the percentile value of the performance sample set to be considered for right-sizing. This property is applicable only when the sizing is performance-based.|
Here's what's included in an AVS assessment:
**Offer** | Displays the [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) you're enrolled in. Azure Migrate estimates the cost accordingly.|
**Currency** | Shows the billing currency for your account. |
**Discount (%)** | Lists any subscription-specific discount you receive on top of the Azure offer. The default setting is 0%. |
-**Azure Hybrid Benefit** | Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). Although this has no impact on Azure VMware solutions pricing due to the node based price, customers can still apply their on-premises OS licenses (Microsoft based) in AVS using Azure Hybrid Benefits. Other software OS vendors will have to provide their own licensing terms such as RHEL for example. |
+**Azure Hybrid Benefit** | Specifies whether you have software assurance and are eligible for [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-use-benefit/). Although this has no impact on the Azure VMware solution's pricing due to the node-based price, customers can still apply their on-premises OS licenses (Microsoft-based) in AVS using Azure Hybrid Benefits. Other software OS vendors such as RHEL, for example, will have to provide their own licensing terms. |
**vCPU Oversubscription** | Specifies the ratio of number of virtual cores tied to 1 physical core in the AVS node. The default value in the calculations is 4 vCPU : 1 physical core in AVS. <br/><br/> API users can set this value as an integer. Note that vCPU Oversubscription > 4:1 may begin to cause performance degradation but can be used for web server type workloads. |

## What properties are used to create and customize an Azure SQL assessment?
Here's what's included in Azure SQL assessment properties:
**Property** | **Details**
--- | ---
**Target location** | The Azure region to which you want to migrate. Azure SQL configuration and cost recommendations are based on the location that you specify.
-**Target deployment type** | The target deployment type you want to run the assessment on: <br/><br/> Select **Recommended**, if you want Azure Migrate to assess the readiness of your SQL servers for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration and monthly estimates.<br/><br/>Select **Azure SQL DB**, if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL DB configuration and monthly estimates.<br/><br/>Select **Azure SQL MI**, if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL MI configuration and monthly estimates.
+**Target deployment type** | The target deployment type you want to run the assessment on: <br/><br/> Select **Recommended** if you want Azure Migrate to assess the readiness of your SQL servers for migrating to Azure SQL MI and Azure SQL DB, and recommend the best suited target deployment option, target tier, Azure SQL configuration, and monthly estimates.<br/><br/>Select **Azure SQL DB** if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL DB configuration, and monthly estimates.<br/><br/>Select **Azure SQL MI** if you want to assess your SQL servers for migrating to Azure SQL Databases only and review the target tier, Azure SQL MI configuration, and monthly estimates.
**Reserved capacity** | Specifies reserved capacity so that cost estimations in the assessment take them into account.<br/><br/> If you select a reserved capacity option, you can't specify "Discount (%)".
-**Sizing criteria** | This property is used to right-size the Azure SQL configuration. <br/><br/> It is defaulted to **Performance-based** which means the assessment will collect the SQL Server instances and databases performance metrics to recommend an optimal-sized Azure SQL Managed Instance and/or Azure SQL Database tier/configuration recommendation.
-**Performance history** | Performance history specifies the duration used when performance data is evaluated.
+**Sizing criteria** | This property is used to right-size the Azure SQL configuration. <br/><br/> It defaults to **Performance-based**, which means the assessment will collect performance metrics of the SQL Server instances and databases to recommend an optimally sized Azure SQL Managed Instance and/or Azure SQL Database tier and configuration.
+**Performance history** | Performance history specifies the duration used when the performance data is evaluated.
**Percentile utilization** | Percentile utilization specifies the percentile value of the performance sample used for rightsizing.
**Comfort factor** | The buffer used during assessment. It accounts for issues like seasonal usage, short performance history, and likely increases in future usage.<br/><br/> For example, a 10-core instance with 20% utilization normally results in a two-core instance. With a comfort factor of 2.0, the result is a four-core instance instead.
-**Offer/Licensing program** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. Currently you can only choose from Pay-as-you-go and Pay-as-you-go Dev/Test. Note that you can avail additional discount by applying reserved capacity and Azure Hybrid Benefit on top of Pay-as-you-go offer.
+**Offer/Licensing program** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. Currently, you can only choose from **Pay-as-you-go** and **Pay-as-you-go Dev/Test**. Note that you can get an additional discount by applying reserved capacity and Azure Hybrid Benefit on top of the Pay-as-you-go offer.
**Service tier** | The most appropriate service tier option to accommodate your business needs for migration to Azure SQL Database and/or Azure SQL Managed Instance:<br/><br/>**Recommended** if you want Azure Migrate to recommend the best suited service tier for your servers. This can be General purpose or Business critical. <br/><br/> **General Purpose** if you want an Azure SQL configuration designed for budget-oriented workloads. [Learn More](/azure/azure-sql/database/service-tier-general-purpose) <br/><br/> **Business Critical** if you want an Azure SQL configuration designed for low-latency workloads with high resiliency to failures and fast failovers. [Learn More](/azure/azure-sql/database/service-tier-business-critical)
**Currency** | The billing currency for your account.
**Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
Here's what's included in Azure SQL assessment properties:
To edit assessment properties after creating an assessment, do the following:
-1. In the Azure Migrate project, click **Servers**.
-2. In **Azure Migrate: Discovery and assessment**, click the assessments count.
-3. In **Assessment**, click the relevant assessment > **Edit properties**.
+1. In the Azure Migrate project, select **Servers**.
+2. In **Azure Migrate: Discovery and assessment**, select the assessments count.
+3. In **Assessment**, select the relevant assessment > **Edit properties**.
5. Customize the assessment properties in accordance with the tables above.
-6. Click **Save** to update the assessment.
+6. Select **Save** to update the assessment.
-You can also edit assessment properties when you're creating an assessment.
+You can also edit the assessment properties when you're creating an assessment.
## Next steps
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
The estimated time to recover the server (recovery time objective, or RTO) depen
During the geo-restore, the server configurations that can be changed include virtual network settings and the ability to remove geo-redundant backup from the restored server. Changing other server configurations--such as compute, storage, or pricing tier (Burstable, General Purpose, or Memory Optimized)--during geo-restore is not supported.
-For more information about performing a geo-restore, see the [how-to guide](how-to-restore-server-portal.md#performing-geo-restore).
+For more information about performing a geo-restore, see the [how-to guide](how-to-restore-server-portal.md#perform-geo-restore).
> [!IMPORTANT]
> When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, because storage can't be provisioned in the primary region. Before you can provision geo-redundant servers in the geo-paired region, you must wait for the primary region to be up.
postgresql Concepts Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compliance.md
industry specific, and region/country specific. Compliance offerings are based o
Azure Database for PostgreSQL - Flexible Server has achieved a comprehensive set of national, regional, and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
-[!div class="mx-tableFixed"]
+> [!div class="mx-tableFixed"]
> | **Certification**| **Applicable To** |
> |||
> |HIPAA and HITECH Act (U.S.) | Healthcare|
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-firewall-rules.md
Last updated 11/30/2021
When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
-With public access, the Azure Database for PostgreSQL server is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request.
+With public access, the Azure Database for PostgreSQL server is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request. With [private access](concepts-networking.md#private-access-vnet-integration), no public endpoint is available, and only hosts on the same virtual network can access Azure Database for PostgreSQL - Flexible Server.
You can create firewall rules by using the Azure portal or by using Azure CLI commands. You must be the subscription owner or a subscription contributor.
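For example, here's a hedged Azure CLI sketch that allows a single client IP address (the server name, resource group, and IP address are placeholders):

```azurecli
az postgres flexible-server firewall-rule create \
  --resource-group my-resource-group \
  --name my-server \
  --rule-name AllowMyClientIP \
  --start-ip-address 203.0.113.5 \
  --end-ip-address 203.0.113.5
```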
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
Follow these steps to perform a planned failover from your primary to the standb
There are Azure regions that don't support availability zones. If you've already deployed non-HA servers, you can't directly enable zone-redundant HA on the server, but you can perform a restore and enable HA on the restored server. The following steps show how to enable zone-redundant HA for such a server.
-1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restoring-to-the-latest-restore-point). Choose **Latest restore point**.
+1. From the overview page of the server, click **Restore** to [perform a PITR](how-to-restore-server-portal.md#restore-to-the-latest-restore-point). Choose **Latest restore point**.
2. Choose a server name and availability zone.
3. Click **Review + Create**.
4. A new flexible server will be created from the backup.
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
Title: Restore - Azure portal - Azure Database for PostgreSQL - Flexible Server
+ Title: Point-in-time restore of a flexible server - Azure portal
description: This article describes how to perform restore operations in Azure Database for PostgreSQL Flexible Server through the Azure portal.
Last updated 11/30/2021
-# Point-in-time restore of a Flexible Server
+# Point-in-time restore of a flexible server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-This article provides step-by-step procedure to perform point-in-time recoveries in flexible server using backups. You can perform either to a latest restore point or a custom restore point within your retention period.
+This article provides a step-by-step procedure for using the Azure portal to perform point-in-time recoveries in a flexible server through backups. You can restore to the latest restore point or to a custom restore point within your retention period.
-## Pre-requisites
+## Prerequisites
-To complete this how-to guide, you need:
+To complete this how-to guide, you need an Azure Database for PostgreSQL flexible server. The procedure is also applicable to a flexible server that's configured with zone redundancy.
-- You must have an Azure Database for PostgreSQL - Flexible Server. The same procedure is also applicable for flexible server configured with zone redundancy.
+## Restore to the latest restore point
-## Restoring to the latest restore point
+Follow these steps to restore your flexible server to the latest restore point by using an existing backup:
-Follow these steps to restore your flexible server using an existing backup.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
-
-2. Click **Overview** from the left panel and click **Restore**
+2. Select **Overview** from the left pane, and then select **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
-3. Restore page will be shown with an option to choose between the latest restore point and Custom restore point.
+3. Under **Source details**, select **Latest restore point (Now)**.
-4. Select **Latest restore point** and provide a new server name in the **Restore to new server** field. You can optionally choose the Availability zone to restore to.
+4. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-latest.png" alt-text="Latest restore time":::
-
-5. Click **OK**.
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-latest.png" alt-text="Screenshot that shows selections for restoring to the latest restore point.":::
-6. A notification will be shown that the restore operation has been initiated.
+5. Select **OK**. A notification shows that the restore operation has started.
-## Restoring to a custom restore point
+## Restore to a custom restore point
-Follow these steps to restore your flexible server using an existing backup.
+Follow these steps to restore your flexible server to a custom restore point by using an existing backup:
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
-2. From the overview page, click **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+2. Select **Overview** from the left pane, and then select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
-3. Restore page will be shown with an option to choose between the latest restore point, custom restore point and fast restore point.
+4. Under **Source details**, choose **Select a custom restore point**.
-4. Choose **Custom restore point**.
-
-5. Select date and time and provide a new server name in the **Restore to new server** field. Provide a new server name and you can optionally choose the **Availability zone** to restore to.
+5. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-custom-2.png" alt-text="Screenshot that shows selections for restoring to a custom restore point.":::
-6. Click **OK**.
-
-7. A notification will be shown that the restore operation has been initiated.
+6. Select **OK**. A notification shows that the restore operation has started.
- ## Restoring using fast restore
+## Restore by using fast restore
-Follow these steps to restore your flexible server using a fast restore option.
+Follow these steps to restore your flexible server by using a fast restore option:
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to restore the backup from.
-2. Click **Overview** from the left panel and click **Restore**
+2. Select **Overview** from the left pane, and then select **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Restore overview":::
+ :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows a server overview and the Restore button.":::
-3. Restore page will be shown with an option to choose between the latest restore point, custom restore point and fast restore point.
+4. Under **Source details**, choose **Select Fast restore point (Restore using full backup only)**. For **Fast Restore point (UTC)**, select the full backup of your choice.
-4. Choose **Fast restore point (Restore using full backup only)**.
-
-5. Select full backup of your choice from the Fast Restore Point drop-down. Provide a **new server name** and you can optionally choose the **Availability zone** to restore to.
+5. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to.
+ :::image type="content" source="./media/how-to-restore-server-portal/fast-restore.png" alt-text="Screenshot that shows selections for a fast restore point.":::
-6. Click **OK**.
+6. Select **OK**. A notification shows that the restore operation has started.
-7. A notification will be shown that the restore operation has been initiated.
+## Perform geo-restore
-## Performing Geo-Restore
+If your source server is configured with geo-redundant backup, you can restore the servers in a paired region.
-If your source server is configured with geo-redundant backup, you can restore the servers in a paired region. Note that, for the first time restore, please wait at least 1 hour after the source server is created.
+> [!NOTE]
+> For the first time that you perform a geo-restore, wait at least one hour after you create the source server.
-1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to geo-restore the backup from.
+1. In the [Azure portal](https://portal.azure.com/), choose the flexible server that you want to geo-restore the backup from.
-2. From the overview page, click **Restore**.
- :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-click.png" alt-text="Restore click":::
+2. Select **Overview** from the left pane, and then select **Restore**.
+
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-click.png" alt-text="Screenshot that shows the Restore button.":::
-3. From the restore page, choose Geo-Redundant restore to restore to a paired region.
- :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-choose-checkbox.png" alt-text="Geo-restore select":::
+3. Under **Source details**, for **Geo-redundant restore (preview)**, select the **Restore to paired region** checkbox.
-4. The region and the database versions are pre-selected. It will be restored to the last available data at the paired region. You can choose the **Availability zone** in the region to restore to.
+ :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-choose-checkbox.png" alt-text="Screenshot that shows the option for restoring to a paired region for geo-redundant restore.":::
+
+4. Under **Server details**, the region and the database version are pre-selected. The server will be restored to the last available data at the paired region. For **Availability zone**, you can optionally choose an availability zone to restore to.
+
+5. Select **OK**. A notification shows that the restore operation has started.
-5. By default, the backups for the restored server are configured with Geo-redundant backup. If you do not want geo-redundant backup, you can click **Configure Server** and uncheck the Geo-redundant backup.
+By default, the backups for the restored server are configured with geo-redundant backup. If you don't want geo-redundant backup, you can select **Configure Server** and then clear the **Restore to paired region** checkbox.
-6. If the source server is configured with **private access**, you can only restore to another VNET in the remote region. You can either choose an existing VNET or create a new VNET and restore your server into that VNET.
+If the source server is configured with *private access*, you can restore only to another virtual network in the remote region. You can either choose an existing virtual network or create a new virtual network and restore your server into that network.
## Next steps

-- Learn about [business continuity](./concepts-business-continuity.md)
-- Learn about [zone redundant high availability](./concepts-high-availability.md)
-- Learn about [backup and recovery](./concepts-backup-restore.md)
+- Learn about [business continuity](./concepts-business-continuity.md).
+- Learn about [zone-redundant high availability](./concepts-high-availability.md).
+- Learn about [backup and recovery](./concepts-backup-restore.md).
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
Title: 'Quickstart: Create a private endpoint using Bicep' description: In this quickstart, you'll learn how to create a private endpoint using Bicep. -+ Last updated 05/02/2022-+ #Customer intent: As someone who has a basic network background but is new to Azure, I want to create a private endpoint using Bicep.
private-link Create Private Link Service Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-bicep.md
Title: 'Quickstart: Create a private link service in Azure Private Link using Bicep' description: In this quickstart, you use Bicep to create a private link service. -+ Last updated 04/29/2022-+ # Quickstart: Create a private link service using Bicep
purview Register Scan Power Bi Tenant Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant-cross-tenant.md
Previously updated : 09/22/2022 Last updated : 10/24/2022
Use either of the following deployment checklists during the setup, or for troub
1. In the Power BI Azure AD tenant, validate the following app registration settings:
   1. The app registration exists in your Azure AD tenant where the Power BI tenant is located.
- 2. Under **API permissions**, the following APIs are set up with **read** for **delegated permissions** and **grant admin consent for the tenant**:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If a service principal is used, under **API permissions**, the following **delegated permissions** are assigned with read access for the following APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** are set up with read access, and **grant admin consent for the tenant** is applied, for the following APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
3. Under **Authentication**:
   1. **Supported account types** > **Accounts in any organizational directory (Any Azure AD directory - Multitenant)** is selected.
   2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected.
Use either of the following deployment checklists during the setup, or for troub
1. In the Power BI Azure AD tenant, validate the following app registration settings: 1. The app registration exists in your Azure AD tenant where the Power BI tenant is located.
- 2. Under **API permissions**, the following APIs are set up with **read** for **delegated permissions** and **grant admin consent for the tenant**:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If a service principal is used, under **API permissions**, the following **delegated permissions** are assigned for the following APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** are assigned and **admin consent for the tenant** is granted for the following APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
3. Under **Authentication**: 1. **Supported account types** > **Accounts in any organizational directory (Any Azure AD directory - Multitenant)** is selected. 2. **Implicit grant and hybrid flows** > **ID tokens (used for implicit and hybrid flows)** is selected.
To create and run a new scan by using the Azure runtime, perform the following s
1. If your key vault isn't connected to Microsoft Purview yet, you need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account).
-1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**. Take note of the client ID (app ID).
+1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD for a cross-tenant scan.":::
+
+2. Take note of the client ID (app ID).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
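If you prefer to script this step, the app registration can also be created with the Azure CLI. A minimal sketch; the display name and redirect URI are placeholder assumptions:

```bash
# Create a multitenant app registration with a web redirect URI and
# capture its client ID (appId) for the steps that follow
APP_ID=$(az ad app create \
  --display-name "purview-powerbi-cross-tenant-scan" \
  --web-redirect-uris "https://www.example.com" \
  --sign-in-audience AzureADMultipleOrgs \
  --query appId --output tsv)
echo "$APP_ID"
```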
To create and run a new scan by using the Azure runtime, perform the following s
- Microsoft Graph openid - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions on Power BI and Microsoft Graph.":::
1. From the Azure AD dashboard, select the newly created application, and then select **Authentication**. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
To create and run a new scan by using the Azure runtime, perform the following s
To create and run a new scan by using the self-hosted integration runtime, perform the following steps:
-1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**. Take note of the client ID (app ID).
+1. Create an app registration in your Azure AD tenant where Power BI is located. Provide a web URL in the **Redirect URI**.
+
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-cross-tenant-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD for a cross-tenant scan.":::
+
+2. Take note of the client ID (app ID).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
-1. From the Azure AD dashboard, select the newly created application, and then select **App permissions**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
+1. From the Azure AD dashboard, select the newly created application, and then select **API permissions**. Assign the application the following delegated permissions:
- - Power BI Service Tenant.Read.All
- Microsoft Graph openid - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-spn-api-permissions.png" alt-text="Screenshot of delegated permissions on Microsoft Graph.":::
1. From the Azure AD dashboard, select the newly created application, and then select **Authentication**. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**.
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Previously updated : 10/19/2022 Last updated : 10/24/2022
Use any of the following deployment checklists during the setup or for troublesh
1. Validate App registration settings to make sure: 1. App registration exists in your Azure Active Directory tenant.
- 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If service principal is used, under **API permissions**, the following **delegated permissions** are assigned with read for the following APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
3. Under **Authentication**, **Allow public client flows** is enabled. 2. If delegated authentication is used, validate Power BI admin user settings to make sure:
Use any of the following deployment checklists during the setup or for troublesh
1. Validate App registration settings to make sure: 1. App registration exists in your Azure Active Directory tenant.
- 2. Under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
- 1. Power BI Service Tenant.Read.All
- 2. Microsoft Graph openid
- 3. Microsoft Graph User.Read
+
+ 2. If service principal is used, under **API permissions**, the following **delegated permissions** are assigned with read for the following APIs:
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
+ 3. If delegated authentication is used, under **API permissions**, the following **delegated permissions** and **grant admin consent for the tenant** is set up with read for the following APIs:
+ - Power BI Service Tenant.Read.All
+ - Microsoft Graph openid
+ - Microsoft Graph User.Read
+
3. Under **Authentication**, **Allow public client flows** is enabled. 2. Review network configuration and validate if:
To create and run a new scan, do the following:
1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. [For information about the Redirect URI, see this documentation from Azure Active Directory](/azure/active-directory/develop/reply-url).
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot how to create App in AAD.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD.":::
2. Take note of the client ID (app ID). :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
-1. From Azure Active Directory dashboard, select newly created application and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
+1. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions:
- - Power BI Service Tenant.Read.All
- Microsoft Graph openid - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-spn-api-permissions.png" alt-text="Screenshot of delegated permissions on Microsoft Graph.":::
1. Under **Advanced settings**, enable **Allow Public client flows**.
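If you're scripting the registration, the **Allow public client flows** toggle corresponds to the app's fallback public client setting. A minimal Azure CLI sketch, assuming `$APP_ID` holds the application (client) ID from the earlier step:

```bash
# Enable "Allow public client flows" on the app registration
az ad app update --id "$APP_ID" --is-fallback-public-client true
```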
To create and run a new scan, do the following:
1. Create an App Registration in your Azure Active Directory tenant. Provide a web URL in the **Redirect URI**.
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot how to create App in AAD.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot that shows how to create an app in Azure AD.":::
2. Take note of the client ID (app ID). :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-create-service-principle.png" alt-text="Screenshot that shows how to create a service principal.":::
-1. From Azure Active Directory dashboard, select newly created application and then select **App registration**. From **API Permissions**, assign the application the following delegated permissions and grant admin consent for the tenant:
+1. From the Azure Active Directory dashboard, select the newly created application, and then select **App registration**. Assign the application the following delegated permissions, and grant admin consent for the tenant:
- Power BI Service Tenant.Read.All - Microsoft Graph openid - Microsoft Graph User.Read
- :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions for Power BI Service and Microsoft Graph.":::
+ :::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-delegated-permissions.png" alt-text="Screenshot of delegated permissions on Power BI Service and Microsoft Graph.":::
1. Under **Advanced settings**, enable **Allow Public client flows**.
sentinel Data Transformation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-transformation.md
Ingestion-time transformations can also be used to mask or remove personal infor
## Data ingestion flow in Microsoft Sentinel
-The following image shows where ingestion-time data transformation enters the data ingestion flow into Microsoft Sentinel.
+The following image shows where ingestion-time data transformation enters the data ingestion flow in Microsoft Sentinel.
-Microsoft Sentinel collects data into the Log Analytics workspace from multiple sources. Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations, and data ingested directly into the logs ingestion API endpoint is , and then stored in either standard or custom tables.
+Microsoft Sentinel collects data into the Log Analytics workspace from multiple sources.
+- Data from built-in data connectors is processed in Log Analytics using some combination of hardcoded workflows and ingestion-time transformations in the workspace DCR. This data can be stored in standard tables or in a specific set of custom tables.
+- Data ingested directly into the Logs ingestion API endpoint is processed by a DCR that may include an ingestion-time transformation. This data can then be stored in either standard or custom tables of any kind.
:::image type="content" source="media/data-transformation/data-transformation-architecture.png" alt-text="Diagram of the Microsoft Sentinel data transformation architecture.":::
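As an illustration of the second path, a DCR used with the Logs ingestion API can carry its ingestion-time transformation in the `transformKql` property of a data flow. The following is a minimal sketch only; the endpoint and workspace resource IDs, stream name, destination table, and filter are placeholder assumptions:

```bash
# dcr.json: a trimmed DCR body whose dataFlows entry applies a KQL
# transformation before the data lands in a custom table
cat > dcr.json <<'EOF'
{
  "location": "eastus",
  "properties": {
    "dataCollectionEndpointId": "<data-collection-endpoint-resource-id>",
    "streamDeclarations": {
      "Custom-MyAppLogs": {
        "columns": [
          { "name": "TimeGenerated", "type": "datetime" },
          { "name": "RawData", "type": "string" }
        ]
      }
    },
    "destinations": {
      "logAnalytics": [
        {
          "workspaceResourceId": "<log-analytics-workspace-resource-id>",
          "name": "sentinelWorkspace"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-MyAppLogs" ],
        "destinations": [ "sentinelWorkspace" ],
        "transformKql": "source | where RawData !has 'healthcheck'",
        "outputStream": "Custom-MyAppLogs_CL"
      }
    ]
  }
}
EOF

# Create the DCR from the JSON body above
az monitor data-collection rule create \
  --resource-group myResourceGroup \
  --name sentinel-ingest-dcr \
  --rule-file dcr.json
```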
sentinel Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md
Last updated 11/09/2021 - # Detect threats out-of-the-box
This article helps you understand how to detect threats with Microsoft Sentinel:
## View built-in detections
-To view all analytics rules and detections in Microsoft Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Microsoft Sentinel built-in rules.
+To view all analytics rules and detections in Microsoft Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Microsoft Sentinel built-in rules, as well as the **Threat Intelligence** rule type.
Built-in detections include:
Built-in detections include:
| **Microsoft security** | Microsoft security templates automatically create Microsoft Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic. <br><br>For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). | | <a name="fusion"></a>**Fusion**<br>(some detections in Preview) | Microsoft Sentinel uses the Fusion correlation engine, with its scalable machine learning algorithms, to detect advanced multistage attacks by correlating many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template. <br><br>The Fusion engine can also correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result. | | **Machine learning (ML) behavioral analytics** | ML behavioral analytics templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. <br><br>Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type. |
+| **Threat Intelligence** | Take advantage of threat intelligence produced by Microsoft to generate high-fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This unique rule isn't customizable, but when enabled, it automatically matches Common Event Format (CEF) logs, Syslog data, or Windows DNS events with domain, IP, and URL threat indicators from Microsoft Threat Intelligence. Certain indicators contain additional context information through MDTI (**Microsoft Defender Threat Intelligence**).<br><br>For more information on how to enable this rule, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).<br>For more details on MDTI, see [What is Microsoft Defender Threat Intelligence](/../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti). |
| <a name="anomaly"></a>**Anomaly**<br>(Preview) | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). | | <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in synch with when your SOC analysts begin their workday, and enable the rules then.| | <a name="nrt"></a>**Near-real-time (NRT)**<br>(Preview) | NRT rules are limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). |
sentinel Investigate Large Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-large-datasets.md
Use a search job when you start an investigation to find specific events in logs
Search in Microsoft Sentinel is built on top of search jobs. Search jobs are asynchronous queries that fetch records. The results are returned to a search table that's created in your Log Analytics workspace after you start the search job. The search job uses parallel processing to run the search across long time spans, in extremely large datasets. So search jobs don't impact the workspace's performance or availability.
-Search results remain in a search results table that has a *_SRCH suffix.
+Search results are stored in a table that has a *_SRCH suffix.
The following image shows example search criteria for a search job.
The following image shows example search criteria for a search job.
Use search to find events in any of the following log types: - [Analytics logs](../azure-monitor/logs/data-platform-logs.md)-- [Basic logs (preview)](../azure-monitor/logs/basic-logs-configure.md)
+- [Basic logs](../azure-monitor/logs/basic-logs-configure.md)
-You can also search analytics or basic log data stored in [archived logs (preview)](../azure-monitor/logs/data-retention-archive.md).
+You can also search analytics or basic log data stored in [archived logs](../azure-monitor/logs/data-retention-archive.md).
### Limitations of a search job Before you start a search job, be aware of the following limitations: - Optimized to query one table at a time.-- Search date range is up to one year.
+- Search date range is up to seven years.
- Supports long running searches up to a 24-hour time-out. - Results are limited to one million records in the record set. - Concurrent execution is limited to five search jobs per workspace.
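A search job can also be started from the command line. The following Azure CLI sketch is illustrative; the workspace, query, and time range are placeholders, and the destination table name must end with the _SRCH suffix:

```bash
# Start an asynchronous search job; results land in the Syslog_SRCH table
az monitor log-analytics workspace table search-job create \
  --resource-group myResourceGroup \
  --workspace-name mySentinelWorkspace \
  --name Syslog_SRCH \
  --search-query "Syslog | where SyslogMessage has 'administrator'" \
  --start-search-time "2022-01-01T00:00:00Z" \
  --end-search-time "2022-01-31T00:00:00Z" \
  --limit 1000
```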
Similar to the [threat hunting dashboard](hunting.md#use-the-hunting-dashboard),
## Next steps -- [Search across long time spans in large datasets (preview)](search-jobs.md)-- [Restore archived logs from search (preview)](restore.md)
+- [Search across long time spans in large datasets](search-jobs.md)
+- [Restore archived logs from search](restore.md)
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
If you're not using SNC, then your SAP configuration and authentication secrets
1. **Sign in to the newly created machine** with a user with sudo privileges.
-1. **Create and configure a data disk** to be mounted at the Docker root directory.
- 1. **Download and run the deployment Kickstart script**: For public cloud, the command is: ```bash
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Title: Deploy SAP Change Requests (CRs) and configure authorization | Microsoft Docs
+ Title: Deploy SAP Change Requests (CRs) and configure authorization
description: This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.
Last updated 04/07/2022
-# Deploy SAP Change Requests (CRs) and configure authorization
+# Deploy SAP Change Requests and configure authorization
-This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.
+This article shows you how to deploy SAP Change Requests (CRs), which prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems.
+
+> [!IMPORTANT]
+> - This article presents a [**step-by-step guide**](#deploy-crs) to deploying the relevant CRs. It's recommended for SOC engineers or implementers who may not necessarily be SAP experts.
+> - Experienced SAP administrators who are familiar with the CR deployment process may prefer to get the appropriate CRs directly from the [**SAP environment validation steps**](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them. Note that the *NPLK900271* CR deploys a sample role, and the administrator may prefer to manually define the role according to the information in the [**Required ABAP authorizations**](#required-abap-authorizations) section below.
+
+## Required and optional CRs
+
+This article discusses the installation of the following CRs:
+
+|CR |Required/optional |Description |
+||||
+|NPLK900271 |Required |This CR creates and configures a role. Alternatively, you can load the authorizations directly from a file. [Review how to create and configure a role](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required). |
+|NPLK900201 or NPLK900202 |Optional |[Retrieves additional information from SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#retrieve-additional-information-from-sap-optional). You select one of these CRs according to your SAP version. |
+
+## Prerequisites
+
+1. Make sure you've copied the details of the **SAP system version**, **System ID (SID)**, **System number**, **Client number**, **IP address**, **administrative username** and **password** before beginning the deployment process. In the following example, these details are assumed:
+
+ - **SAP system version:** `SAP ABAP Platform 1909 Developer edition`
+ - **SID:** `A4H`
+ - **System number:** `00`
+ - **Client number:** `001`
+ - **IP address:** `192.168.136.4`
+ - **Administrator user:** `a4hadm`. Note, however, that the SSH connection to the SAP system is established with `root` user credentials.
+1. Review the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) to determine which CRs to install.
+1. If you plan to install the NPLK900202 [optional CR](#required-and-optional-crs) used to retrieve additional information, make sure you've installed the [relevant SAP note](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#deploy-sap-note-optional).
## Deployment milestones
Track your SAP solution deployment journey through this series of articles:
- [Configure auditing](configure-audit.md) - [Configure data connector to use SNC](configure-snc.md)
+To deploy the CRs, follow the steps outlined below. These steps may differ according to the version of the SAP system and should be considered for demonstration purposes only.
-> [!IMPORTANT]
-> - This article presents a [**step-by-step guide**](#deploy-change-requests) to deploying the required CRs. It's recommended for SOC engineers or implementers who may not necessarily be SAP experts.
-> - Experienced SAP administrators that are familiar with CR deployment process may prefer to get the appropriate CRs directly from the [**SAP environment validation steps**](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of the guide and deploy them. Note that the *NPLK900271* CR deploys a sample role, and the administrator may prefer to manually define the role according to the information in the [**Required ABAP authorizations**](#required-abap-authorizations) section below.
+## Deploy CRs
> [!NOTE] > > It is *strongly recommended* that the deployment of SAP CRs be carried out by an experienced SAP system administrator.
->
-> The steps below may differ according to the version of the SAP system and should be considered for demonstration purposes only.
->
-> Make sure you've copied the details of the **SAP system version**, **System ID (SID)**, **System number**, **Client number**, **IP address**, **administrative username** and **password** before beginning the deployment process.
->
-> For the following example, the following details are assumed:
-> - **SAP system version:** `SAP ABAP Platform 1909 Developer edition`
-> - **SID:** `A4H`
-> - **System number:** `00`
-> - **Client number:** `001`
-> - **IP address:** `192.168.136.4`
-> - **Administrator user:** `a4hadm`, however, the SSH connection to the SAP system is established with `root` user credentials.
-The deployment of the Microsoft Sentinel Solution for SAP requires the installation of several CRs. More details about the required CRs can be found in the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of this guide.
+### Set up the files
-To deploy the CRs, follow the steps outlined below:
+1. Sign in to the SAP system using SSH.
-## Deploy change requests
+1. Transfer the CR files to the SAP system. Learn more about [the CRs in this step](#required-and-optional-crs).
-### Set up the files
+ Alternatively, you can download the files directly onto the SAP system from the SSH prompt. Use the following commands:
-1. Sign in to the SAP system using SSH.
+ - Download NPLK900271 (required)
+ ```bash
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL
+ ```
-1. Transfer the CR files to the SAP system.
- Alternatively, you can download the files directly onto the SAP system from the SSH prompt. Use the following commands:
- - Download NPLK900202
+ Alternatively, you can [load these authorizations directly from a file](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#create-and-configure-a-role-required).
+
+ - Download NPLK900202 (optional)
```bash wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL ```
- - Download NPLK900201
+ - Download NPLK900201 (optional)
```bash wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL
- ```
-
- - Download NPLK900271
- ```bash
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL
- wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL
- ```
+ ```
Note that each CR consists of two files, one beginning with K and one with R.
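If you transfer the files manually, the two files of each CR typically go into the SAP transport directory: the K file (cofile) under `cofiles` and the R file (data file) under `data`. A minimal sketch, assuming the default `/usr/sap/trans` transport directory and the required NPLK900271 CR:

```bash
# Copy the cofile and data file into the transport directory,
# then make them readable by the SAP system users
cp K900271.NPL /usr/sap/trans/cofiles/
cp R900271.NPL /usr/sap/trans/data/
chmod 644 /usr/sap/trans/cofiles/K900271.NPL /usr/sap/trans/data/R900271.NPL
```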
To deploy the CRs, follow the steps outlined below:
1. In the **Add Transport Request** confirmation dialog, select **Yes**.
-1. Repeat the procedure in the preceding 5 steps to add the remaining Change Requests to be deployed.
+1. If you plan to deploy more CRs, repeat the procedure in the preceding five steps for the remaining CRs.
1. In the **Import Queue** window, select the relevant Transport Request once, and then select **F9** or **Select/Deselect Request** icon.
-1. To add the remaining Transport Requests to the deployment, repeat step 9.
+1. If you have remaining Transport Requests to add to the deployment, repeat step 9.
1. Select the Import Requests icon:
To deploy the CRs, follow the steps outlined below:
:::image type="content" source="media/preparing-sap/import-history.png" alt-text="Screenshot of import history.":::
-1. The *NPLK900202* change request is expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
+1. If you deployed the *NPLK900202* CR, it is expected to display a **Warning**. Select the entry to verify that the warnings displayed are of type "Table \<tablename\> was activated."
+
+ The CRs and versions in the screenshots below may differ according to your installed CR version.
:::image type="content" source="media/preparing-sap/import-status.png" alt-text="Screenshot of import status display." lightbox="media/preparing-sap/import-status-lightbox.png":::
To deploy the CRs, follow the steps outlined below:
## Configure Sentinel role
-After the *NPLK900271* change request is deployed, a **/MSFTSEN/SENTINEL_CONNECTOR** role is created in SAP. If the role is created manually, it may bear a different name.
+After the *NPLK900271* CR is deployed, a **/MSFTSEN/SENTINEL_CONNECTOR** role is created in SAP. If the role is created manually, it may bear a different name.
In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
The following table lists the ABAP authorizations required to ensure that SAP lo
The required authorizations are listed here by log type. Only the authorizations listed for the types of logs you plan to ingest into Microsoft Sentinel are required. > [!TIP]
-> To create a role with all the required authorizations, deploy the SAP change request *NPLK900271* on the SAP system, or load the role authorizations from the [MSFTSEN_SENTINEL_CONNECTOR_ROLE_V0.0.27.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file. This change request creates the **/MSFTSEN/SENTINEL_CONNECTOR** role that has all the necessary permissions for the data connector to operate.
-> Alternatively, you can create a role that has minimal permissions by deploying change request *NPLK900268*, or loading the role authorizations from the [MSFTSEN_SENTINEL_AGENT_BASIC_ROLE_V0.0.1.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file. This change request or authorizations file creates the **/MSFTSEN/SENTINEL_AGENT_BASIC** role. This role has the minimal required permissions for the data connector to operate. Note that if you choose to deploy this role, you might need to update it frequently.
+> To create a role with all the required authorizations, deploy the SAP *NPLK900271* CR on the SAP system, or load the role authorizations from the [MSFTSEN_SENTINEL_CONNECTOR_ROLE_V0.0.27.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file. This CR creates the **/MSFTSEN/SENTINEL_CONNECTOR** role that has all the necessary permissions for the data connector to operate.
+> Alternatively, you can create a role that has minimal permissions by deploying the *NPLK900268* CR, or loading the role authorizations from the [MSFTSEN_SENTINEL_AGENT_BASIC_ROLE_V0.0.1.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file. This CR or authorizations file creates the **/MSFTSEN/SENTINEL_AGENT_BASIC** role. This role has the minimal required permissions for the data connector to operate. Note that if you choose to deploy this role, you might need to update it frequently.
| Authorization Object | Field | Value | | -- | -- | -- |
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
To successfully deploy the Microsoft Sentinel Solution for SAP, you must meet th
| Prerequisite | Description | | - | -- |
-| **Supported SAP versions** | The SAP data connector agent works best with [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
+| **Supported SAP versions** | The SAP data connector agent supports SAP NetWeaver systems and was tested on [SAP_BASIS versions 731](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) and later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sentinel4sapsdk))<br>Make sure that you also have an SAP user account in order to access the SAP software download page. | | **SAP system details** | Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address and FQDN hostname<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as `001` | | **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires additional configuration steps) |
-## SAP environment validation steps
-
-### Deploy SAP notes
-
-Ensure the following SAP notes are deployed in your SAP system, according to its version:
+## SAP environment validation steps
> [!NOTE] >
-> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Determine which CRs need to be deployed, retrieve the required CRs from the links in the tables below, and proceed to the step-by-step guide.
-
-| SAP BASIS versions | Required note |
-| | |
-| - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | [2641084 - Standardized read access to data of Security Audit Log](https://launchpad.support.sap.com/#/notes/2641084)* |
-| - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 | [2173545: CD: CHANGEDOCUMENT_READ_ALL](https://launchpad.support.sap.com/#/notes/2173545)* |
-| - 700 to 702<br>- 710 to 711<br>- 730<br>- 731<br>- 740<br>- 750 to 752 | [2502336 - CD: RSSCD100 - read only from archive, not from database](https://launchpad.support.sap.com/#/notes/2502336)* |
-| | * An SAP account is required to access SAP notes |
+> Step-by-step instructions for deploying a CR and assigning the required role are available in the [**Deploying SAP CRs and configuring authorization**](preparing-sap.md) guide. Determine which CRs need to be deployed, retrieve the relevant CRs from the links in the tables below, and proceed to the step-by-step guide.
-### Retrieve additional information from SAP
-
-To enable the SAP data connector to retrieve certain information from your SAP system, you must deploy additional CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
-- **SAP BASIS 7.5 SP12 and above**: Client IP Address information from security audit log-- **ANY SAP BASIS version**: DB Table logs-
-| SAP BASIS versions | Recommended CR |
-| | |
-| - 750 and later | *NPLK900202*: [K900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL), [R900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL) |
-| - 740 | *NPLK900201*: [K900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL), [R900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL) |
+- [Create and configure a role (required)](#create-and-configure-a-role-required)
+- [Retrieve additional information from SAP (optional)](#retrieve-additional-information-from-sap-optional)
-### Create and configure a role
+### Create and configure a role (required)
-To allow the SAP data connector to connect to your SAP system, you must create a role. Create the role by deploying CR **NPLK900271** or by loading the role authorizations from the [MSFTSEN_SENTINEL_CONNECTOR_ROLE_V0.0.27.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file..
+To allow the SAP data connector to connect to your SAP system, you must create a role. Create the role by deploying CR **NPLK900271** or by loading the role authorizations from the [MSFTSEN_SENTINEL_CONNECTOR_ROLE_V0.0.27.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file.
> [!NOTE] > Alternatively, you can create a role that has minimal permissions by deploying change request *NPLK900268*, or loading the role authorizations from the [MSFTSEN_SENTINEL_AGENT_BASIC_ROLE_V0.0.1.SAP](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Sample%20Authorizations%20Role%20File) file.
Experienced SAP administrators may choose to create the role manually and assign
| SAP BASIS versions | Sample CR | | | |
-| Any version | *NPLK900271*: [K900271.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL), [R900271.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL) |
+| Any version | *NPLK900271*: [K900271.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900271.NPL), [R900271.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900271.NPL) |
+
+### Retrieve additional information from SAP (optional)
+
+You can deploy additional CRs from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR) to enable the SAP data connector to retrieve certain information from your SAP system.
+- **SAP BASIS 7.5 SP12 and above**: Client IP Address information from security audit log
+- **ANY SAP BASIS version**: DB Table logs, Spool Output log
+
+| SAP BASIS versions | Recommended CR |Notes |
+| | | |
+| - 750 and later | *NPLK900202*: [K900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900202.NPL), [R900202.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900202.NPL) | Deploy the relevant [SAP note](#deploy-sap-note-optional). |
+| - 740 | *NPLK900201*: [K900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/K900201.NPL), [R900201.NPL](https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/CR/R900201.NPL) | |
+
+#### Deploy SAP note (optional)
+
+If you choose to retrieve additional information with the [NPLK900202 optional CR](#retrieve-additional-information-from-sap-optional), ensure that the following SAP note is deployed in your SAP system, according to its version:
+
+| SAP BASIS versions | Notes |
+| | |
+| - 750 SP01 to SP12<br>- 751 SP01 to SP06<br>- 752 SP01 to SP03 | [2641084 - Standardized read access to data of Security Audit Log](https://launchpad.support.sap.com/#/notes/2641084)* |
## Next steps
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
This functionality is heavily used in the Deterministic and Anomalous Audit Log
| The "SAP User Config" watchlist | SearchKey | Search Key | | The "SAP User Config" watchlist | SAPUser | The SAP User | OSS, DDIC | The "SAP User Config" watchlist | Tags | string of tags assigned to user | RunObsoleteProgOK
-| The "SAP User Config" watchlist | User AAD Object ID | Azure AD Object ID |
+| The "SAP User Config" watchlist | User's Microsoft Azure Active Directory (Azure AD) Object ID | Azure AD Object ID |
| The "SAP User Config" watchlist | User Identifier | AD User Identifier |
-| The "SAP User Config" watchlist | User On-Premises Sid | |
+| The "SAP User Config" watchlist | User on-premises Sid | |
| The "SAP User Config" watchlist | User Principal Name | | | The "SAP User Config" watchlist | TagsList | A list of tags assigned to user | ChangeUserMasterDataOK;RunObsoleteProgOK | Logic | TagsIntersect | A set of tags that matched SearchForTags | ["ChangeUserMasterDataOK","RunObsoleteProgOK"]
For best results, use the Microsoft Sentinel functions listed below to visualize
- **Log purpose**: Records the progress of an application execution so that you can reconstruct it later as needed.
- Available by using RFC with a custom service based on standard services of XBP interface. This log is generated per client.
Available by using RFC based on a standard SAP table and standard services of the XBP interface. This log is generated per client.
#### ABAPAppLog_CL log schema
For best results, use the Microsoft Sentinel functions listed below to visualize
- Other entities in the SAP system, such as user data, roles, addresses.
- Available by using RFC with a custom service based on standard services. This log is generated per client.
+ Available by using RFC based on standard SAP tables. This log is generated per client.
#### ABAPChangeDocsLog_CL log schema
For best results, use the Microsoft Sentinel functions listed below to visualize
- **Log purpose**: Includes the Change & Transport System (CTS) logs, including the directory objects and customizations where changes were made.
- Available by using RFC with a custom service based on standard tables and standard services. This log is generated with data across all clients.
+ Available by using RFC based on standard tables and standard SAP services. This log is generated with data across all clients.
> [!NOTE] > In addition to application logging, change documents, and table recording, all changes that you make to your production system using the Change & Transport System are documented in the CTS and TMS logs.
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
- **Log purpose**: Combines all background processing job logs (SM37).
- Available by using RFC with a custom service based on standard services of XBP interfaces. This log is generated with data across all clients.
Available by using RFC based on a standard SAP table and standard services of XBP interfaces. This log is generated with data across all clients.
#### ABAPJobLog_CL log schema
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
- **Log purpose**: Serves as the main log for SAP Printing with the history of spool requests. (SP01).
- Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
Available by using RFC based on a standard SAP table. This log is generated with data across all clients.
#### ABAPSpoolLog_CL log schema
To have this log sent to Microsoft Sentinel, you must [add it manually to the **
For example, unmapped business processes may be simple release or approval procedures, or more complex business processes such as creating base material and then coordinating the associated departments.
- Available by using RFC with a custom service based on standard tables and standard services. This log is generated per client.
+ Available by using RFC based on standard SAP tables. This log is generated per client.
#### ABAPWorkflowLog_CL log schema
sentinel Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/search-jobs.md
Go to **Search** in Microsoft Sentinel to enter your search criteria.
1. Select the **Table** menu and choose a table for your search. 1. In the **Search** box, enter a search term.
- :::image type="content" source="media/search-jobs/search-job-criteria.png" alt-text="Screenshot of search page with search criteria of administrator, time range last 90 days, and table selected.":::
+ :::image type="content" source="media/search-jobs/search-job-criteria.png" alt-text="Screenshot of search page with search criteria of administrator, time range last 90 days, and table selected." lightbox="media/search-jobs/search-job-criteria.png":::
1. Click the **Run search** link to open the advanced KQL editor and a preview of the results for a seven day time range.
- :::image type="content" source="media/search-jobs/search-job-advanced-kql.png" alt-text="Screenshot of KQL editor with the initial search and the results for a seven day period.":::
+ :::image type="content" source="media/search-jobs/search-job-advanced-kql.png" alt-text="Screenshot of KQL editor with the initial search and the results for a seven day period." lightbox="media/search-jobs/search-job-advanced-kql.png":::
1. You can modify the KQL and see an updated preview of the search results by selecting **Run**.
- :::image type="content" source="media/search-jobs/search-job-advanced-kql-revise.png" alt-text="Screenshot of KQL editor with revised search.":::
+ :::image type="content" source="media/search-jobs/search-job-advanced-kql-revise.png" alt-text="Screenshot of KQL editor with revised search." lightbox="media/search-jobs/search-job-advanced-kql-revise.png":::
1. Once you're satisfied with the query and the search results preview, select the ellipsis (**...**), toggle the **Search job mode** switch, and then select the **Search job** button.
- :::image type="content" source="media/search-jobs/search-job-advanced-kql-ellipsis.png" alt-text="Screenshot of KQL editor with revised search with ellipsis highlighted for Search job mode.":::
+ :::image type="content" source="media/search-jobs/search-job-advanced-kql-ellipsis.png" alt-text="Screenshot of KQL editor with revised search with ellipsis highlighted for Search job mode." lightbox="media/search-jobs/search-job-advanced-kql-ellipsis.png":::
1. Select the appropriate **Time range**.
- :::image type="content" source="media/search-jobs/search-job-advanced-kql-custom-time-range.png" alt-text="Screenshot of KQL editor with revised search, and custom time range.":::
+ :::image type="content" source="media/search-jobs/search-job-advanced-kql-custom-time-range.png" alt-text="Screenshot of KQL editor with revised search, and custom time range." lightbox="media/search-jobs/search-job-advanced-kql-custom-time-range.png":::
1. Make sure to resolve any KQL issues indicated by a squiggly red line in the editor. When you're ready to start the search job, select **Search**.
View the status and results of your search job by going to the **Saved Searches*
1. On the search card, select **View search results**.
- :::image type="content" source="media/search-jobs/view-search-results.png" alt-text="Screenshot that shows the link to view search results at the bottom of the search job card.":::
+ :::image type="content" source="media/search-jobs/view-search-results.png" alt-text="Screenshot that shows the link to view search results at the bottom of the search job card." lightbox="media/search-jobs/view-search-results.png":::
1. By default, you see all the results that match your original search criteria. 1. To refine the list of results returned from the search table, click the **Add filter** button.
- :::image type="content" source="media/search-jobs/search-results-filter.png" alt-text="Screenshot that shows search job results with added filters.":::
+ :::image type="content" source="media/search-jobs/search-results-filter.png" alt-text="Screenshot that shows search job results with added filters." lightbox="media/search-jobs/search-results-filter.png":::
1. As you're reviewing your search job results, click **Add bookmark**, or select the bookmark icon to preserve a row. Adding a bookmark allows you to tag events, add notes, and attach these events to an incident for later reference.
- :::image type="content" source="media/search-jobs/search-results-add-bookmark.png" alt-text="Screenshot that shows search job results with a bookmark in the process of being added.":::
+ :::image type="content" source="media/search-jobs/search-results-add-bookmark.png" alt-text="Screenshot that shows search job results with a bookmark in the process of being added." lightbox="media/search-jobs/search-results-add-bookmark.png":::
1. Click the **Columns** button and select the checkbox next to columns you'd like to add to the results view.
sentinel Understand Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/understand-threat-intelligence.md
For more information, see [Connect your threat intelligence platform to Microsof
### Add threat indicators to Microsoft Sentinel with the Threat Intelligence - TAXII data connector
-The most widely-adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization obtains threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), you can use the **Threat Intelligence - TAXII** data connector to bring your threat indicators into Microsoft Sentinel. The Threat Intelligence - TAXII data connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers.
+The most widely adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization obtains threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), you can use the **Threat Intelligence - TAXII** data connector to bring your threat indicators into Microsoft Sentinel. The Threat Intelligence - TAXII data connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers.
:::image type="content" source="media/understand-threat-intelligence/threat-intel-taxii-import-path.png" alt-text="TAXII import path":::
By default, when these built-in rules are triggered, an alert will be created. I
For more details on using threat indicators in your analytics rules, see [Use threat intelligence to detect threats](use-threat-indicators-in-analytics-rules.md).
+Microsoft provides access to its threat intelligence through the **Microsoft Threat Intelligence Analytics** rule. For more information on how to take advantage of this rule, which generates high-fidelity alerts and incidents, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).
++
## Workbooks provide insights about your threat intelligence

Workbooks provide powerful interactive dashboards that give you insights into all aspects of Microsoft Sentinel, and threat intelligence is no exception. You can use the built-in **Threat Intelligence workbook** to visualize key information about your threat intelligence, and you can easily customize the workbook according to your business needs. You can even create new dashboards combining many different data sources so you can visualize your data in unique ways. Since Microsoft Sentinel workbooks are based on Azure Monitor workbooks, there is already extensive documentation available, and many more templates. A great place to start is this article on how to [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
In this document, you learned about the threat intelligence capabilities of Micr
- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel. - [Work with threat indicators](work-with-threat-indicators.md) throughout the Microsoft Sentinel experience. - Detect threats with [built-in](./detect-threats-built-in.md) or [custom](./detect-threats-custom.md) analytics rules in Microsoft Sentinel-- [Investigate incidents](./investigate-cases.md) in Microsoft Sentinel.
+- [Investigate incidents](./investigate-cases.md) in Microsoft Sentinel.
service-bus-messaging Service Bus Dotnet Get Started With Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-dotnet-get-started-with-queues.md
In this section, you'll add code to retrieve messages from the queue.
// the client that owns the connection and can be used to create senders and receivers ServiceBusClient client;
- // the sender used to publish messages to the queue
- ServiceBusSender sender;
+ // the processor that reads and processes messages from the queue
+ ServiceBusProcessor processor;
// The Service Bus client types are safe to cache and use as a singleton for the lifetime // of the application, which is best practice when messages are being published or read
service-fabric Service Fabric Reliable Services Reliable Collections Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md
The guidelines are organized as simple recommendations prefixed with the terms *
* Consider using backup and restore functionality to have disaster recovery. * Avoid mixing single entity operations and multi-entity operations (for example, `GetCountAsync`, `CreateEnumerableAsync`) in the same transaction due to the different isolation levels. * Do handle InvalidOperationException. User transactions can be aborted by the system for a variety of reasons, for example, when the Reliable State Manager is changing its role out of Primary, or when a long-running transaction is blocking truncation of the transactional log. In such cases, the user may receive an InvalidOperationException indicating that their transaction has already been terminated. Assuming the termination of the transaction was not requested by the user, the best way to handle this exception is to dispose the transaction, check whether the cancellation token has been signaled (or the role of the replica has been changed), and if not, create a new transaction and retry.
+* Do not apply any parallelism within a transaction.
+* Consider disposing the transaction as soon as possible after the commit completes (especially if you're using ConcurrentQueue).
+* Do not run blocking code inside a transaction.
Here are some things to keep in mind:
service-fabric Service Fabric Scale Up Non Primary Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-scale-up-non-primary-node-type.md
The following will walk you through the process for updating the VM size and ope
> Before attempting this procedure on a production cluster, we recommend that you study the sample templates and verify the process against a test cluster. > > Do not attempt a non-primary node type scale up procedure if the cluster status is unhealthy, as this will only destabilize the cluster further.
-We'll make use of the step-by-step Azure deployment templates used in the [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md) guide. However, we'll modify them so they aren't specific to primary node types. The templates are [available on GitHub](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary).
+We'll make use of the step-by-step Azure deployment templates used in the [Scale up a Service Fabric cluster primary node type](service-fabric-scale-up-primary-node-type.md) guide. However, we'll modify them so they aren't specific to primary node types. The templates are [available on GitHub](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade).
## Set up the test cluster
-Let's set up the initial Service Fabric test cluster. First, [download](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary) the Azure Resource Manager sample templates that we'll use to complete this scenario.
+Let's set up the initial Service Fabric test cluster. First, [download](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade) the Azure Resource Manager sample templates that we'll use to complete this scenario.
Next, sign in to your Azure account.
Login-AzAccount -SubscriptionId "<subscription ID>"
```
-Next open the [*parameters.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary/parameters.json) file and update the value for `clusterName` to something unique (within Azure).
+Next open the [*parameters.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/blob/master/templates/nodetype-upgrade/parameters.json) file and update the value for `clusterName` to something unique (within Azure).
The following commands will guide you through generating a new self-signed certificate and deploying the test cluster. If you already have a certificate you'd like to use, skip to [Use an existing certificate to deploy the cluster](#use-an-existing-certificate-to-deploy-the-cluster).
In order to upgrade (vertically scale) a node type, we'll first need to deploy a
Here are the section-by-section modifications of the original cluster deployment template for adding a new node type and supporting resources.
-Most of the required changes for this step have already been made for you in the [*Step1-AddPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary/Step1-AddPrimaryNodeType.json) template file. However, an additional change must be made so the template file works for non-primary node types. The following sections will explain these changes in detail, and call outs will be made when you must make a change.
+Most of the required changes for this step have already been made for you in the [*Step1-AddPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/blob/master/templates/nodetype-upgrade/Step1-AddPrimaryNodeType.json) template file. However, an additional change must be made so the template file works for non-primary node types. The following sections explain these changes in detail and call out where you must make a change.
> [!Note]
> Ensure that you use names that are unique from the original node type, scale set, load balancer, public IP, and subnet of the original non-primary node type, as these resources will be deleted at a later step in the process.
Service Fabric Explorer should now reflect only the five nodes of the new node t
### Update the deployment template to reflect the newly scaled-up non-primary node type
-Most of the required changes for this step have already been made for you in the [*Step3-CleanupOriginalPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/tree/master/templates/nodetype-upgrade-nonprimary/Step3-CleanupOriginalPrimaryNodeType.json) template file. However, an additional change must be made so the template file works for non-primary node types. The following sections will explain these changes in detail, and call outs will be made when you must make a change.
+Most of the required changes for this step have already been made for you in the [*Step3-CleanupOriginalPrimaryNodeType.json*](https://github.com/microsoft/service-fabric-scripts-and-templates/blob/master/templates/nodetype-upgrade/Step3-CleanupOriginalPrimaryNodeType.json) template file. However, an additional change must be made so the template file works for non-primary node types. The following sections explain these changes in detail and call out where you must make a change.
#### Update the cluster management endpoint
With that, you've vertically scaled a cluster non-primary node type!
* Learn about [application scalability](service-fabric-concepts-scalability.md).
* [Scale an Azure cluster in or out](service-fabric-tutorial-scale-cluster.md).
* [Scale an Azure cluster programmatically](service-fabric-cluster-programmatic-scaling.md) using the fluent Azure compute SDK.
-* [Scale a standalone cluster in or out](service-fabric-cluster-windows-server-add-remove-nodes.md).
+* [Scale a standalone cluster in or out](service-fabric-cluster-windows-server-add-remove-nodes.md).
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
Azure Spring Apps has the following known limitations:
Which one should I use and what are the limits within each tier?
-* Azure Spring Apps offers two pricing tiers: Basic and Standard. The Basic tier is targeted for Dev/Test and trying out Azure Spring Apps. The Standard tier is optimized to run general purpose production traffic. See [Azure Spring Apps pricing details](https://azure.microsoft.com/pricing/details/spring-apps/) for limits and feature level comparison.
+* Azure Spring Apps offers three pricing tiers: Basic, Standard, and Enterprise. The Basic tier is targeted for Dev/Test and trying out Azure Spring Apps. The Standard tier is optimized to run general purpose production traffic. The Enterprise tier is for production workloads with VMware Tanzu components. See [Azure Spring Apps pricing details](https://azure.microsoft.com/pricing/details/spring-apps/) for limits and feature level comparison.
### What's the difference between Service Binding and Service Connector?
Yes.
The number of outbound public IP addresses may vary according to the tiers and other factors.
-| Azure Spring Apps instance type | Default number of outbound public IP addresses |
-| -- | - |
-| Basic Tier instances | 1 |
-| Standard Tier instances | 2 |
-| VNet injection instances | 1 |
+| Azure Spring Apps instance type | Default number of outbound public IP addresses |
+|||
+| Basic tier instances | 1 |
+| Standard/Enterprise tier instances | 2 |
+| VNet injection instances | 1 |
### Can I increase the number of outbound public IP addresses?
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Previously updated : 10/19/2022 Last updated : 10/25/2022
To understand the role assignment condition format, see [Azure role assignment c
## Suboperations
-Multiple Storage service operations can be associated with a single permission or DataAction. However, each of these operations that are associated with the same permission might support different parameters. *Suboperations* enable you to differentiate between service operations that require the same permission but support different set of attributes for conditions. Thus, by using a suboperation, you can specify one condition for access to a subset of operations that support a given parameter. Then, you can use another access condition for operations with the same action that doesn't support that parameter.
+Multiple Storage service operations can be associated with a single permission or DataAction. However, each of these operations that are associated with the same permission might support different parameters. *Suboperations* enable you to differentiate between service operations that require the same permission but support a different set of attributes for conditions. Thus, by using a suboperation, you can specify one condition for access to a subset of operations that support a given parameter. Then, you can use another access condition for operations with the same action that don't support that parameter.
-For example, the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` action is required for over a dozen different service operations. Some of these operations can accept blob index tags as request parameter, while others don't. For operations that accept blob index tags as a parameter, you can use blob index tags in a Request condition. However, if such a condition is defined on the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` action, all operations that don't accept tags as a request parameter cannot evaluate this condition, and will fail the authorization access check.
+For example, the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` action is required for over a dozen different service operations. Some of these operations can accept blob index tags as a request parameter, while others don't. For operations that accept blob index tags as a parameter, you can use blob index tags in a Request condition. However, if such a condition is defined on the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write` action, all operations that don't accept tags as a request parameter cannot evaluate this condition, and will fail the authorization access check.
In this case, the optional suboperation `Blob.Write.WithTagHeaders` can be used to apply a condition to only those operations that support blob index tags as a request parameter.

> [!NOTE]
-> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md).
+> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md).
Storage accounts support the following suboperations:
This section lists the supported Azure Blob Storage actions and suboperations yo
### Read content from a blob with tag conditions
-> [!IMPORTANT]
-> Although `Read content from a blob with tag conditions` is currently supported for compatibility with conditions implemented during the ABAC feature preview, that suboperation has been deprecated and Microsoft recommends using the ["Read a blob"](#read-a-blob) action instead.
->
-> When configuring ABAC conditions in the Azure portal, you might see "DEPRECATED: Read content from a blob with tag conditions". Remove the operation and replace it with the "Read a blob" operation instead.
->
-> If you are authoring your own condition where you want to restrict read access by tag conditions, please refer to [Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag).
+The `Read content from a blob with tag conditions` suboperation has been deprecated. Although it is currently supported for compatibility with conditions implemented during the ABAC feature preview, Microsoft recommends using the [Read a blob](#read-a-blob) action instead.
+
+When configuring ABAC conditions in the Azure portal, you might see **DEPRECATED: Read content from a blob with tag conditions**. Microsoft recommends removing the operation and replacing it with the `Read a blob` action.
+
+If you are authoring your own condition where you want to restrict read access by tag conditions, please refer to [Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag).
### Read blob index tags
This section lists the supported Azure Blob Storage actions and suboperations yo
> | **Resource attributes** | [Account name](#account-name)<br/>[Is Current Version](#is-current-version)<br/>[Is hierarchical namespace enabled](#is-hierarchical-namespace-enabled)<br/>[Container name](#container-name)<br/>[Blob path](#blob-path)<br/>[Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
> | **Request attributes** | [Version ID](#version-id)<br/>[Snapshot](#snapshot) |
> | **Principal attributes support** | True |
-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) |
### Find blobs by tags
This section lists the supported Azure Blob Storage actions and suboperations yo
> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys) |
> | **Principal attributes support** | True |
> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})`<br/>`!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})`<br/>[Example: New blobs must include a blob index tag](storage-auth-abac-examples.md#example-new-blobs-must-include-a-blob-index-tag) |
-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) |
### Create a blob or snapshot, or append data
This section lists the supported Azure Blob Storage actions and suboperations yo
> | **Request attributes** | [Blob index tags [Values in key]](#blob-index-tags-values-in-key)<br/>[Blob index tags [Keys]](#blob-index-tags-keys)<br/>[Version ID](#version-id)<br/>[Snapshot](#snapshot) |
> | **Principal attributes support** | True |
> | **Examples** | `!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) |
### Write Blob legal hold and immutability policy
This section lists the supported Azure Blob Storage actions and suboperations yo
> | **Request attributes** | |
> | **Principal attributes support** | True |
> | **Examples** | [Example: Read, write, or delete blobs in named containers](storage-auth-abac-examples.md#example-read-write-or-delete-blobs-in-named-containers)<br/>[Example: Read blobs in named containers with a path](storage-auth-abac-examples.md#example-read-blobs-in-named-containers-with-a-path)<br/>[Example: Read or list blobs in named containers with a path](storage-auth-abac-examples.md#example-read-or-list-blobs-in-named-containers-with-a-path)<br/>[Example: Write blobs in named containers with a path](storage-auth-abac-examples.md#example-write-blobs-in-named-containers-with-a-path)<br/>[Example: Read only current blob versions](storage-auth-abac-examples.md#example-read-only-current-blob-versions)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots)<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) |
## Azure Blob Storage attributes
This section lists the Azure Blob Storage attributes you can use in your conditi
> | **Is key case sensitive** | True |
> | **Hierarchical namespace support** | False |
> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}`<br/>[Example: Existing blobs must have blob index tag keys](storage-auth-abac-examples.md#example-existing-blobs-must-have-blob-index-tag-keys) |
-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) |
### Blob index tags [Values in key]
This section lists the Azure Blob Storage attributes you can use in your conditi
> | **Is key case sensitive** | True |
> | **Hierarchical namespace support** | False |
> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:`*keyname*`<$key_case_sensitive$>`<br/>`@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'`<br/>[Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag) |
-> | **Learn more** | [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) |
### Blob path
This section lists the Azure Blob Storage attributes you can use in your conditi
> | **Attribute type** | String |
> | **Exists support** | True |
> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}`<br/>[Example: Read blobs with specific encryption scopes](storage-auth-abac-examples.md#example-read-blobs-with-specific-encryption-scopes) |
-> | **Learn more** | [Create and manage encryption scopes](../blobs/encryption-scope-manage.md) |
+> | **Learn more** | [Create and manage encryption scopes](encryption-scope-manage.md) |
### Is Current Version
This section lists the Azure Blob Storage attributes you can use in your conditi
> | **Attribute source** | Resource |
> | **Attribute type** | Boolean |
> | **Examples** | `@Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true`<br/>[Example: Read only storage accounts with hierarchical namespace enabled](storage-auth-abac-examples.md#example-read-only-storage-accounts-with-hierarchical-namespace-enabled) |
-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) |
### Snapshot
This section lists the Azure Blob Storage attributes you can use in your conditi
> | **Exists support** | True |
> | **Hierarchical namespace support** | False |
> | **Examples** | `Exists @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:snapshot]`<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
-> | **Learn more** | [Blob snapshots](../blobs/snapshots-overview.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Blob snapshots](snapshots-overview.md)<br/>[Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) |
### Version ID
This section lists the Azure Blob Storage attributes you can use in your conditi
> | **Exists support** | True |
> | **Hierarchical namespace support** | False |
> | **Examples** | `@Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'`<br/>[Example: Read current blob versions and a specific blob version](storage-auth-abac-examples.md#example-read-current-blob-versions-and-a-specific-blob-version)<br/>[Example: Read current blob versions and any blob snapshots](storage-auth-abac-examples.md#example-read-current-blob-versions-and-any-blob-snapshots) |
-> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](../blobs/data-lake-storage-namespace.md) |
+> | **Learn more** | [Azure Data Lake Storage Gen2 hierarchical namespace](data-lake-storage-namespace.md) |
## See also
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
Previously updated : 10/21/2022 Last updated : 10/25/2022 #Customer intent:
Last updated 10/21/2022
# Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI
-In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition.
+In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more granular access control by adding a role assignment condition.
In this tutorial, you learn how to:
Here is what the condition looks like in code:
## Step 3: Set up storage
-You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. This article shows how to authorize Blob storage operations using Azure AD. For more information, see [Quickstart: Create, download, and list blobs with Azure CLI](../blobs/storage-quickstart-blobs-cli.md)
+You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. This article shows how to authorize Blob storage operations using Azure AD. For more information, see [Quickstart: Create, download, and list blobs with Azure CLI](storage-quickstart-blobs-cli.md).
-1. Use [az storage account](/cli/azure/storage/account) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+1. Use [az storage account](/cli/azure/storage/account) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
1. Use [az storage container](/cli/azure/storage/container) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**.
1. Use [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) to upload a text file to the container.
-1. Add the following blob index tag to the text file. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+1. Add the following blob index tag to the text file. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md).
> [!NOTE]
> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Previously updated : 10/21/2022 Last updated : 10/25/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
For information about the prerequisites to add or edit role assignment condition
## Blob index tags
-> [!IMPORTANT]
-> Although "Read content from a blob with tag conditions" is currently supported for compatibility with conditions implemented during the ABAC feature preview, that suboperation has been deprecated and Microsoft recommends using the "Read a blob" suboperation instead.
+This section includes examples involving blob index tags.
+
+> [!NOTE]
+> Although the `Read content from a blob with tag conditions` suboperation is currently supported for compatibility with conditions implemented during the ABAC feature preview, it has been deprecated and Microsoft recommends using the [`Read a blob`](storage-auth-abac-attributes.md#read-a-blob) action instead.
>
-> When configuring ABAC conditions in the Azure portal, you might see "DEPRECATED: Read content from a blob with tag conditions". Remove the operation and replace it with the "Read a blob" suboperation instead.
+> When configuring ABAC conditions in the Azure portal, you might see **DEPRECATED: Read content from a blob with tag conditions**. Microsoft recommends removing the operation and replacing it with the `Read a blob` action.
>
-> If you are authoring your own condition where you want to restrict read access by tag conditions, please refer to [Example: Read blobs with a blob index tag](#example-read-blobs-with-a-blob-index-tag).
+> If you are authoring your own condition where you want to restrict read access by tag conditions, please refer to [Example: Read blobs with a blob index tag](storage-auth-abac-examples.md#example-read-blobs-with-a-blob-index-tag).
### Example: Read blobs with a blob index tag
-This condition allows users to read blobs with a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag will not be allowed.
+This condition allows users to read blobs with a [blob index tag](storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag will not be allowed.
You must add this condition to any role assignments that include the following action.
Get-AzStorageBlob -Container <containerName> -Blob <blobName> -Context $bearerCt
### Example: New blobs must include a blob index tag
-This condition requires that any new blobs must include a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and a value of Cascade.
+This condition requires that any new blobs must include a [blob index tag](storage-blob-index-how-to.md) key of Project and a value of Cascade.
There are two actions that allow you to create new blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blo
### Example: Existing blobs must have blob index tag keys
-This condition requires that any existing blobs be tagged with at least one of the allowed [blob index tag](../blobs/storage-blob-index-how-to.md) keys: Project or Program. This condition is useful for adding governance to existing blobs.
+This condition requires that any existing blobs be tagged with at least one of the allowed [blob index tag](storage-blob-index-how-to.md) keys: Project or Program. This condition is useful for adding governance to existing blobs.
There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blo
### Example: Existing blobs must have a blob index tag key and values
-This condition requires that any existing blobs to have a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Project and values of Cascade, Baker, or Skagit. This condition is useful for adding governance to existing blobs.
+This condition requires that any existing blobs have a [blob index tag](storage-blob-index-how-to.md) key of Project and values of Cascade, Baker, or Skagit. This condition is useful for adding governance to existing blobs.
There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "uploads/
### Example: Read blobs with a blob index tag and a path
-This condition allows a user to read blobs with a [blob index tag](../blobs/storage-blob-index-how-to.md) key of Program, a value of Alpine, and a blob path of logs*. The blob path of logs* also includes the blob name.
+This condition allows a user to read blobs with a [blob index tag](storage-blob-index-how-to.md) key of Program, a value of Alpine, and a blob path of logs*. The blob path of logs* also includes the blob name.
You must add this condition to any role assignments that include the following action.
Here are the settings to add this condition using the Azure portal.
### Example: Read only storage accounts with hierarchical namespace enabled
-This condition allows a user to only read blobs in storage accounts with [hierarchical namespace](../blobs/data-lake-storage-namespace.md) enabled. This condition is applicable only at resource group scope or above.
+This condition allows a user to only read blobs in storage accounts with [hierarchical namespace](data-lake-storage-namespace.md) enabled. This condition is applicable only at resource group scope or above.
You must add this condition to any role assignments that include the following actions.
Here are the settings to add this condition using the Azure portal.
### Example: Read or write blobs based on blob index tags and custom security attributes
-This condition allows read or write access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) that matches the [blob index tag](../blobs/storage-blob-index-how-to.md).
+This condition allows read or write access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) that matches the [blob index tag](storage-blob-index-how-to.md).
For example, if Brenda has the attribute `Project=Baker`, she can only read or write blobs with the `Project=Baker` blob index tag. Similarly, Chandra can only read or write blobs with `Project=Cascade`.
Here are the settings to add this condition using the Azure portal.
### Example: Read blobs based on blob index tags and multi-value custom security attributes
-This condition allows read access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) with any values that matches the [blob index tag](../blobs/storage-blob-index-how-to.md).
+This condition allows read access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) with any value that matches the [blob index tag](storage-blob-index-how-to.md).
For example, if Chandra has the Project attribute with the values Baker and Cascade, she can only read blobs with the `Project=Baker` or `Project=Cascade` blob index tag.
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
Previously updated : 10/21/2022 Last updated : 10/25/2022 #Customer intent:
Last updated 10/21/2022
# Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal
-In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition.
+In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more granular access control by adding a role assignment condition.
In this tutorial, you learn how to:
Here is what the condition looks like in code:
## Step 2: Set up storage
-1. Create a storage account that is compatible with the blob index tags feature. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+1. Create a storage account that is compatible with the blob index tags feature. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
1. Create a new container within the storage account and set the Public access level to **Private (no anonymous access)**.
Here is what the condition looks like in code:
1. In the **Blob index tags** section, add the following blob index tag to the text file.
- If you don't see the Blob index tags section and you just registered your subscription, you might need to wait a few minutes for changes to propagate. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+ If you don't see the Blob index tags section and you just registered your subscription, you might need to wait a few minutes for changes to propagate. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md).
> [!NOTE]
> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
Previously updated : 10/21/2022 Last updated : 10/25/2022 #Customer intent:
Last updated 10/21/2022
# Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell
-In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition.
+In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more granular access control by adding a role assignment condition.
In this tutorial, you learn how to:
Here is what the condition looks like in code:
## Step 4: Set up storage
-1. Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+1. Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
1. Use [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**.
1. Use [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) to upload a text file to the container.
-1. Add the following blob index tag to the text file. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+1. Add the following blob index tag to the text file. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md).
> [!NOTE]
> Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
Previously updated : 10/19/2022 Last updated : 10/25/2022
Role assignment conditions are only evaluated when using Azure RBAC for authoriz
- [Account shared access signature](/rest/api/storageservices/create-account-sas) (SAS)
- [Service SAS](/rest/api/storageservices/create-service-sas).
-Similarly, conditions are not evaluated when access is granted using [access control lists (ACLs)](../blobs/data-lake-storage-access-control.md) in storage accounts with a [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS).
+Similarly, conditions are not evaluated when access is granted using [access control lists (ACLs)](data-lake-storage-access-control.md) in storage accounts with a [hierarchical namespace](data-lake-storage-namespace.md) (HNS).
You can prevent shared key, account-level SAS, and service-level SAS authorization by [disabling shared key authorization](../common/shared-key-authorization-prevent.md) for your storage account. Since user delegation SAS depends on Azure RBAC, role-assignment conditions are evaluated when using this method of authorization.
When using blob path as a *@Resource* attribute for a condition, you should also
### Blob index tags
-[Blob index tags](../blobs/storage-manage-find-blobs.md) are used as free-form attributes for conditions in storage. If you author any access conditions by using these tags, you must also protect the tags themselves. Specifically, the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` DataAction allows users to modify the tags on a storage object. You can restrict this action to prevent users from manipulating a tag key or value to gain access to unauthorized objects.
+[Blob index tags](storage-manage-find-blobs.md) are used as free-form attributes for conditions in storage. If you author any access conditions by using these tags, you must also protect the tags themselves. Specifically, the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write` DataAction allows users to modify the tags on a storage object. You can restrict this action to prevent users from manipulating a tag key or value to gain access to unauthorized objects.
In addition, if blob index tags are used in conditions, data may be vulnerable if the data and the associated index tags are updated in separate operations. You can use `@Request` conditions on blob write operations to require that index tags be set in the same update operation. This approach can help secure data from the instant it's written to storage.
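Setting tags in the same request as the write is straightforward with the Blob Storage SDKs. The following is a minimal C# sketch using `Azure.Storage.Blobs` (not from the article itself); the account, container, file name, and tag values are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Azure.Identity;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Placeholder values for illustration.
var blobClient = new BlobClient(
    new Uri("https://<storage-account>.blob.core.windows.net/<container>/example.txt"),
    new DefaultAzureCredential());

var options = new BlobUploadOptions
{
    // Tags are written in the same request as the blob content, so a
    // @Request condition that requires them can be evaluated atomically.
    Tags = new Dictionary<string, string> { ["Project"] = "Cascade" }
};

using var stream = File.OpenRead("example.txt");
await blobClient.UploadAsync(stream, options);
```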
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Previously updated : 10/24/2022 Last updated : 10/25/2022
# Authorize access to Azure Blob Storage using Azure role assignment conditions
-Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes associated with security principals, resources, requests, and the environment. With ABAC, you can grant a security principal access to a resource based on a condition expressed as a predicate using these attributes.
+Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes associated with security principals, resources, and requests. With ABAC, you can grant a security principal access to a resource based on a condition expressed as a predicate using these attributes.
Azure ABAC builds on Azure role-based access control (Azure RBAC) by adding [conditions to Azure role assignments](../../role-based-access-control/conditions-overview.md). It enables you to author role-assignment conditions based on principal, resource and request attributes.
Azure ABAC builds on Azure role-based access control (Azure RBAC) by adding [con
## Overview of conditions in Azure Storage
-You can [use of Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to Azure storage resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access Azure storage data. You can also define custom roles with select sets of permissions. Azure Storage supports role assignments for both storage accounts and blob containers.
+You can [use Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to Azure storage resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access Azure storage data. You can also define custom roles with select sets of permissions. Azure Storage supports role assignments for both storage accounts and blob containers.
Azure ABAC builds on Azure RBAC by adding [role assignment conditions](../../role-based-access-control/conditions-overview.md) in the context of specific actions. A *role assignment condition* is an additional check that is evaluated when the action on the storage resource is being authorized. This condition is expressed as a predicate using attributes associated with any of the following:

- Security principal that is requesting authorization
- Resource to which access is being requested
- Parameters of the request
-- Environment from which the request originates
The benefits of using role assignment conditions are:
- **Reduce the number of role assignments you have to create and manage** - You can do this by using a generalized role assignment for a security group, and then restricting the access for individual members of the group using a condition that matches attributes of a principal with attributes of a specific resource being accessed (such as a blob or a container).
- **Express access control rules in terms of attributes with business meaning** - For example, you can express your conditions using attributes that represent a project name, business application, organization function, or classification level.
-The tradeoff of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect.
+The trade-off of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect.
-Role-assignment conditions in Azure Storage are supported for Azure blob storage. You can also use conditions with accounts that have the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS) feature enabled on them (Azure Data Lake Storage Gen2).
+Role-assignment conditions in Azure Storage are supported for Azure blob storage. You can also use conditions with accounts that have the [hierarchical namespace](data-lake-storage-namespace.md) (HNS) feature enabled on them (Data Lake Storage Gen2).
## Supported attributes and operations
You can add conditions to built-in roles or custom roles. The built-in roles on
- [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner).
-You can use conditions with custom roles so long as the role includes [actions that support conditions](storage-auth-abac-attributes.md#azure-blob-storage-actions-and-suboperations).
+You can use conditions with custom roles as long as the role includes [actions that support conditions](storage-auth-abac-attributes.md#azure-blob-storage-actions-and-suboperations).
-If you're working with conditions based on [blob index tags](../blobs/storage-manage-find-blobs.md), you should use the *Storage Blob Data Owner* since permissions for tag operations are included in this role.
+If you're working with conditions based on [blob index tags](storage-manage-find-blobs.md), you should use the *Storage Blob Data Owner* since permissions for tag operations are included in this role.
> [!NOTE]
> Blob index tags are not supported for Data Lake Storage Gen2 storage accounts, which use a hierarchical namespace. You should not author role-assignment conditions using index tags on storage accounts that have HNS enabled.
-The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows use of `@Principal`, `@Resource` or `@Request` attributes in the conditions. A `@Principal` attribute is a custom security attribute on a principal, such as a user, enterprise application (service principal), or managed identity. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account, a container, or a blob. A `@Request` attribute refers to an attribute or parameter included in a storage operation request.
+The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows the use of `@Principal`, `@Resource` or `@Request` attributes in the conditions. A `@Principal` attribute is a custom security attribute on a principal, such as a user, enterprise application (service principal), or managed identity. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account, a container, or a blob. A `@Request` attribute refers to an attribute or parameter included in a storage operation request.
-Azure RBAC currently supports 4,000 role assignments in a subscription. If you need to create thousands of Azure role assignments, you may encounter this limit. Managing hundreds or thousands of role assignments can be difficult. In some cases, you can use conditions to reduce the number of role assignments on your storage account and make them easier to manage. You can [scale the management of role assignments](../../role-based-access-control/conditions-custom-security-attributes-example.md) using conditions and [Azure AD custom security attributes]() for principals.
+[Azure RBAC supports a limited number of role assignments per subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-rbac-limits). If you need to create thousands of Azure role assignments, you may encounter this limit. Managing hundreds or thousands of role assignments can be difficult. In some cases, you can use conditions to reduce the number of role assignments on your storage account and make them easier to manage. You can [scale the management of role assignments](../../role-based-access-control/conditions-custom-security-attributes-example.md) using conditions and [Azure AD custom security attributes]() for principals.
## Next steps
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
You can authorize access to data in your storage account using the following ste
:::image type="content" source="./media/storage-quickstart-blobs-java/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name.":::

> [!NOTE]
- > When deployed to Azure, this same code can be used to authorize requests to Azure Storage from an application running in Azure. However, you'll need to enable managed identity on your app in Azure. Then configure your storage account to allow that managed identity to connect. For detailed instructions on configuring this connection between Azure services, see the [Auth from Azure-hosted apps](/dotnet/azure/sdk/authentication-azure-hosted-apps) tutorial.
+ > When deployed to Azure, this same code can be used to authorize requests to Azure Storage from an application running in Azure. However, you'll need to enable managed identity on your app in Azure. Then configure your storage account to allow that managed identity to connect. For detailed instructions on configuring this connection between Azure services, see the [Auth from Azure-hosted apps](/azure/developer/java/sdk/identity-azure-hosted-auth) tutorial.
### [Connection String](#tab/connection-string)
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
You can authorize access to data in your storage account using the following ste
:::image type="content" source="./media/storage-quickstart-blobs-python/storage-account-name.png" alt-text="A screenshot showing how to find the storage account name.":::

> [!NOTE]
- > When deployed to Azure, this same code can be used to authorize requests to Azure Storage from an application running in Azure. However, you'll need to enable managed identity on your app in Azure. Then configure your storage account to allow that managed identity to connect. For detailed instructions on configuring this connection between Azure services, see the [Auth from Azure-hosted apps](/dotnet/azure/sdk/authentication-azure-hosted-apps) tutorial.
+ > When deployed to Azure, this same code can be used to authorize requests to Azure Storage from an application running in Azure. However, you'll need to enable managed identity on your app in Azure. Then configure your storage account to allow that managed identity to connect. For detailed instructions on configuring this connection between Azure services, see the [Auth from Azure-hosted apps](/azure/developer/python/sdk/authentication-azure-hosted-apps) tutorial.
### [Connection String](#tab/connection-string)
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Previously updated : 10/21/2022 Last updated : 10/25/2022
Each authorization option is briefly described below:
>
> [The status of ABAC condition features in Azure Storage](#status-of-condition-features-in-azure-storage)
-<! After the core ABAC doc updates are published, change the heading in the second link above to: #status-of-condition-features >
-
-- **Azure Active Directory Domain Services (Azure AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over Server Message Block (SMB) through Azure AD DS. You can use Azure RBAC for fine-grained control over a client's access to Azure Files resources in a storage account. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md).
+- **Azure Active Directory Domain Services (Azure AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over Server Message Block (SMB) through Azure AD DS. You can use Azure RBAC for granular control over a client's access to Azure Files resources in a storage account. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md).
- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over SMB through AD DS. Your AD DS environment can be hosted in on-premises machines or in Azure VMs. SMB access to Files is supported using AD DS credentials from domain joined machines, either on-premises or in Azure. You can use a combination of Azure RBAC for share level access control and NTFS DACLs for directory/file level permission enforcement. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md).
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Previously updated : 12/28/2021 Last updated : 10/25/2022
The following recommendations for using shared access signatures can help mitiga
- **Configure a SAS expiration policy for the storage account.** A SAS expiration policy specifies a recommended interval over which the SAS is valid. SAS expiration policies apply to a service SAS or an account SAS. When a user generates a service SAS or an account SAS with a validity interval that is larger than the recommended interval, they'll see a warning. If Azure Storage logging with Azure Monitor is enabled, then an entry is written to the Azure Storage logs. To learn more, see [Create an expiration policy for shared access signatures](sas-expiration-policy.md).
-- **Define a stored access policy for a service SAS.** Stored access policies give you the option to revoke permissions for a service SAS without having to regenerate the storage account keys. Set the expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it farther into the future.
+- **Define a stored access policy for a service SAS.** Stored access policies give you the option to revoke permissions for a service SAS without having to regenerate the storage account keys. Set the expiration on these very far in the future (or infinite) and make sure it's regularly updated to move it farther into the future. There is a limit of five stored access policies per container.
- **Use near-term expiration times on an ad hoc SAS (service SAS or account SAS).** In this way, even if a SAS is compromised, it's valid only for a short time. This practice is especially important if you cannot reference a stored access policy. Near-term expiration times also limit the amount of data that can be written to a blob by limiting the time available to upload to it.
The following recommendations for using shared access signatures can help mitiga
- **Be specific with the resource to be accessed.** A security best practice is to provide a user with the minimum required privileges. If a user only needs read access to a single entity, then grant them read access to that single entity, and not read/write/delete access to all entities. This also helps lessen the damage if a SAS is compromised because the SAS has less power in the hands of an attacker.
+ There is no direct way to identify which clients have accessed a resource. However, you can use the unique fields in the SAS (the signed IP (`sip`), signed start (`st`), and signed expiry (`se`) fields) to track access. For example, you can generate a SAS token with a unique expiry time that you can then correlate with the client to whom it was issued. A sketch of generating such a SAS appears after this list.
+
- **Understand that your account will be billed for any usage, including via a SAS.** If you provide write access to a blob, a user may choose to upload a 200 GB blob. If you've given them read access as well, they may choose to download it 10 times, incurring 2 TB in egress costs for you. Again, provide limited permissions to help mitigate the potential actions of malicious users. Use short-lived SAS to reduce this threat (but be mindful of clock skew on the end time).
- **Validate data written using a SAS.** When a client application writes data to your storage account, keep in mind that there can be problems with that data. If you plan to validate data, perform that validation after the data is written and before it is used by your application. This practice also protects against corrupt or malicious data being written to your account, either by a user who properly acquired the SAS, or by a user exploiting a leaked SAS.
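As one way to apply the near-term expiration and tracking guidance in this list, here's a minimal C# sketch using `Azure.Storage.Blobs` and `Azure.Storage.Sas`. It assumes shared key authorization, and the account, container, and blob names are placeholders:

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

// Placeholder values for illustration.
var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
var blobClient = new BlobClient(
    new Uri("https://<account-name>.blob.core.windows.net/<container>/report.pdf"),
    credential);

// Near-term expiry limits the window of misuse if the SAS leaks; a unique
// expiry time (se) can also be correlated with the client it was issued to.
var sasBuilder = new BlobSasBuilder
{
    BlobContainerName = blobClient.BlobContainerName,
    BlobName = blobClient.Name,
    Resource = "b", // "b" = blob
    StartsOn = DateTimeOffset.UtcNow.AddMinutes(-5), // allow for clock skew
    ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
};
sasBuilder.SetPermissions(BlobSasPermissions.Read); // least privilege: read only

Uri sasUri = blobClient.GenerateSasUri(sasBuilder);
```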
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
You can manage virtual network rules for volume groups through the Azure portal,
> You can use the **subscription** parameter to retrieve the subnet ID for a virtual network belonging to another Azure AD tenant. ```azurecli
- az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls '{virtual-network-rules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default,action:Allow}]}'
+ az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtual-network-rules:[{id:'/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default',action:Allow}]}"
``` - Remove a network rule. The following command removes the first network rule; modify it to target the network rule you'd like to remove.
storage File Sync Troubleshoot Sync Group Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot-sync-group-management.md
Title: Troubleshoot Azure File Sync sync group management | Microsoft Docs
+ Title: Troubleshoot Azure File Sync sync group management
description: Troubleshoot common issues in managing Azure File Sync sync groups, including cloud endpoint creation and server endpoint creation, deletion, and health. Previously updated : 7/28/2022 Last updated : 10/25/2022
On the server that is showing as "Appears offline" in the portal, look at Event
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
+> [!Note]
+> Different Windows versions support different TLS cipher suites and priority orders. See [TLS Cipher Suites in Windows](/windows/win32/secauthn/cipher-suites-in-schannel) for the cipher suites that each Windows version supports and the default order in which the Microsoft Schannel Provider chooses them.
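As a quick cross-check while troubleshooting, you can see which TLS version and cipher suite a machine actually negotiates with an endpoint. The sketch below uses Python's standard `ssl` module, so it exercises the OpenSSL stack rather than Schannel, and the endpoint name is a placeholder; treat it as a rough probe, not a Schannel diagnostic.

```python
import socket
import ssl

# Placeholder endpoint; substitute the HTTPS endpoint you're troubleshooting.
host = "management.azure.com"

context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # cipher() returns (name, protocol, secret_bits), for example
        # ('ECDHE-RSA-AES256-GCM-SHA384', 'TLSv1.2', 256)
        print(tls.version(), tls.cipher())
```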
+ - If **GetNextJob completed with status: -2134347764** is logged, the server is unable to communicate with the Azure File Sync service due to an expired or deleted certificate. - Run the following PowerShell command on the server to reset the certificate used for authentication: ```powershell
storage Storage Files Quick Create Use Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md
description: This tutorial covers how to create an SMB Azure file share using th
Previously updated : 03/23/2022 Last updated : 10/24/2022
# Tutorial: Create an SMB Azure file share and connect it to a Windows VM using the Azure portal
-Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). In this tutorial, you will learn a few ways you can use an SMB Azure file share in a Windows virtual machine (VM).
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). In this tutorial, you'll learn a few ways you can use an SMB Azure file share in a Windows virtual machine (VM).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
If you don't have an Azure subscription, create a [free account](https://azure.m
### Create a storage account
-Before you can work with an Azure file share, you have to create an Azure storage account.
+Before you can work with an Azure file share, you must create an Azure storage account.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. On the Azure portal menu, select **All services**. In the list of resources, type **Storage Accounts**. As you begin typing, the list filters based on your input. Select **Storage Accounts**.
-1. On the **Storage Accounts** window that appears, choose **+ New**.
-1. On the **Basics** tab, select the subscription in which to create the storage account.
-1. Under the **Resource group** field, select your desired resource group, or create a new resource group.
-1. Next, enter a name for your storage account. The name you choose must be unique across Azure. The name also must be between 3 and 24 characters in length, and may include only numbers and lowercase letters.
-1. Select a region for your storage account, or use the default region.
-1. Select a performance tier. The default tier is *Standard*.
-1. Specify how the storage account will be replicated. The default redundancy option is *Geo-redundant storage (GRS)*.
-1. Select **Review + Create** to review your storage account settings and create the account.
-1. Select **Create**.
-
-The following image shows the settings on the **Basics** tab for a new storage account:
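If you'd rather script this step than click through the portal, a minimal sketch with the `azure-identity` and `azure-mgmt-storage` Python packages might look like the following. The subscription ID, resource group, account name, and region are placeholder assumptions; the settings mirror the defaults called out above (Standard performance, geo-redundant storage).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The account name must be globally unique: 3-24 characters,
# lowercase letters and numbers only.
poller = client.storage_accounts.begin_create(
    "myResourceGroup",
    "mystorageacct123",
    {
        "location": "eastus",
        "kind": "StorageV2",
        "sku": {"name": "Standard_GRS"},  # default redundancy noted above
    },
)
account = poller.result()
print(account.name, account.provisioning_state)
```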
- ### Create an Azure file share
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
Title: Create an SMB Azure file share
-description: How to create and delete an SMB Azure file share by using the Azure portal, PowerShell, or Azure CLI.
+description: How to create and delete an SMB Azure file share by using the Azure portal, Azure PowerShell, or Azure CLI.
Previously updated : 10/21/2022 Last updated : 10/24/2022
To delete an Azure file share, you can use the Azure portal, Azure PowerShell, o
:::image type="content" source="media/storage-how-to-create-file-share/delete-file-share.png" alt-text="Screen shot of the Azure portal procedure for deleting a file share." border="true" lightbox="media/storage-how-to-create-file-share/delete-file-share.png"::: # [PowerShell](#tab/azure-powershell)
-1. Log in to your Azure account and specify your tenant ID.
+1. Log in to your Azure account. To use multi-factor authentication, you'll need to supply your Azure tenant ID.
```azurepowershell Login-AzAccount -TenantId <YourTenantID>
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
This Quickstart only applies to SMB Azure file shares. Standard and premium SMB
| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-## Get started
+## Getting started
# [Portal](#tab/azure-portal)
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-performance.md
If the number of **DirectoryOpen/DirectoryClose** calls is among the top API cal
### Workaround - A fix for this issue is available in the [April Platform Update for Windows](https://support.microsoft.com/help/4052623/update-for-windows-defender-antimalware-platform).-
-## File creation is slower than expected
-
-### Cause
-
-Workloads that rely on creating a large number of files won't see a substantial difference in performance between premium file shares and standard file shares.
-
-### Workaround
--- None.- ## Slow performance from Windows 8.1 or Server 2012 R2 ### Cause
stream-analytics App Insights Export Sql Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/app-insights-export-sql-stream-analytics.md
Title: 'Export to SQL from Azure Application Insights | Microsoft Docs' description: Continuously export Application Insights data to SQL using Stream Analytics. Previously updated : 10/17/2022 Last updated : 10/24/2022
FROM [dbo].[PageViewsTable]
``` ## Next steps
-* [Export to Power BI using Stream Analytics](../azure-monitor/app/export-power-bi.md)
* [detailed data model reference for the property types and values.](../azure-monitor/app/export-data-model.md) * [Continuous Export in Application Insights](../azure-monitor/app/export-telemetry.md)
stream-analytics App Insights Export Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/app-insights-export-stream-analytics.md
Title: Export using Stream Analytics from Azure Application Insights | Microsoft Docs description: Stream Analytics can continuously transform, filter and route the data you export from Application Insights. Previously updated : 10/17/2022 Last updated : 10/24/2022
In this example, we'll create an adaptor that takes data from Application Insights using continuous export, renames and processes some of the fields, and pipes it into Power BI. > [!WARNING]
-> There are much better and easier [recommended ways to display Application Insights data in Power BI](../azure-monitor/app/export-power-bi.md). The path illustrated here is just an example to illustrate how to process exported data.
+> There are much better and easier [recommended ways to display Application Insights data in Power BI](../azure-monitor/logs/log-powerbi.md) if you've [migrated to a workspace-based resource](../azure-monitor/app/convert-classic-resource.md). The path illustrated here is just an example to illustrate how to process exported data.
> [!IMPORTANT] > Continuous export will be deprecated on February 29, 2024 and is only supported for classic Application Insights resources. Azure Stream Analytics does not support reading from AppInsights with diagnostic settings.
Wait until the job is Running.
## See results in Power BI > [!WARNING]
-> There are much better and easier [recommended ways to display Application Insights data in Power BI](../azure-monitor/app/export-power-bi.md). The path illustrated here is just an example to illustrate how to process exported data.
+> There are much better and easier [recommended ways to display Application Insights data in Power BI](../azure-monitor/logs/log-powerbi.md) if you've [migrated to a workspace-based resource](../azure-monitor/app/convert-classic-resource.md). The path illustrated here is just an example to illustrate how to process exported data.
Open Power BI with your work or school account, and select the dataset and table that you defined as the output of the Stream Analytics job.
synapse-analytics Apache Spark Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-sql-connector.md
Title: Azure SQL and SQL Server description: This article provides information on how to use the connector for moving data between Azure MS SQL and serverless Apache Spark pools. --++
dbname = "<< database name >>"
url = servername + ";" + "databaseName=" + dbname + ";" dbtable = "<< table name >> " user = "<< username >>"
+principal_client_id = "<< service principal client id >>"
+principal_secret = "<< service principal secret >>"
password = mssparkutils.credentials.getSecret('azure key vault name','secret name') ```
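With the placeholders above filled in, a basic read over the connector using SQL authentication might look like the following sketch; it reuses the `url`, `dbtable`, `user`, and `password` variables defined in this snippet.

```python
# Sketch: read a table through the Spark connector with SQL authentication.
jdbc_df = (
    spark.read.format("com.microsoft.sqlserver.jdbc.spark")
    .option("url", url)
    .option("dbtable", dbtable)
    .option("user", user)
    .option("password", password)
    .load()
)
jdbc_df.show(5)
```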
except ValueError as error :
### Python example with service principal ```python
-import adal
+import msal
# Located in App Registrations from Azure Portal tenant_id = "<< tenant id >> "
tenant_id = "<< tenant id >> "
# Located in App Registrations from Azure Portal resource_app_id_url = "https://database.windows.net/"
+# Define the scope of the service for the app registration before requesting a token from Azure AD
+scope = "https://database.windows.net/.default"
+ # Authority
-authority = "https://login.windows.net/" + tenant_id
+authority = "https://login.microsoftonline.com/" + tenant_id
+
+# Get service principal
+service_principal_id = mssparkutils.credentials.getSecret('azure key vault name','principal_client_id')
+service_principal_secret = mssparkutils.credentials.getSecret('azure key vault name','principal_secret')
++
+# Build a confidential client application for the client-credentials flow
+context = msal.ConfidentialClientApplication(
+    service_principal_id, client_credential=service_principal_secret, authority=authority
+)
-context = adal.AuthenticationContext(authority)
-token = context.acquire_token_with_client_credentials(resource_app_id_url, service_principal_id, service_principal_secret)
-access_token = token["accessToken"]
+token = context.acquire_token_for_client(scopes=[scope])
+access_token = token["access_token"]
jdbc_df = spark.read \ .format("com.microsoft.sqlserver.jdbc.spark") \
synapse-analytics Connect Synapse Link Sql Database Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database-vnet.md
Title: Configure Synapse link for Azure SQL Database with network security (Preview)
-description: Learn how to configure Synapse link for Azure SQL Database with network security (Preview).
+ Title: Configure Azure Synapse Link for Azure SQL Database with network security (preview)
+description: Learn how to configure Azure Synapse Link for Azure SQL Database with network security (preview).
-# Configure Synapse link for Azure SQL Database with network security (Preview)
+# Configure Azure Synapse Link for Azure SQL Database with network security (preview)
-This article provides a guide on configuring Azure Synapse Link for Azure SQL Database with network security. Before reading this documentation, You should have known how to create and start Synapse link for Azure SQL DB from [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md).
+This article is a guide for configuring Azure Synapse Link for Azure SQL Database with network security. Before you begin, you should know how to create and start Azure Synapse Link for Azure SQL Database from [Get started with Azure Synapse Link for Azure SQL Database](connect-synapse-link-sql-database.md).
> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in PREVIEW.
+> Azure Synapse Link for SQL is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Managed workspace Virtual Network without data exfiltration
+## Create a managed workspace virtual network without data exfiltration
-1. Create Synapse workspace with managed virtual network enabled. You will enable **managed virtual network** and select **No** to allow outbound traffic from the workspace to any target. You can learn more about managed virtual network from [this](../security/synapse-workspace-managed-vnet.md).
+In this section, you create an Azure Synapse workspace with a managed virtual network enabled. For **Managed virtual network**, you'll select **Enable**, and for **Allow outbound data traffic only to approved targets**, you'll select **No**. For an overview, see [Azure Synapse Analytics managed virtual network](../security/synapse-workspace-managed-vnet.md).
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-allow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace allow outbound traffic.":::
-1. Navigate to your Synapse workspace on Azure portal, go to **Networking** tab to enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+1. Go to your Azure Synapse workspace, select **Networking**, and then select the **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules** checkbox.
-1. Launch Synapse Studio, navigate to **Manage**, click **Integration runtimes** and select **AutoResolvingIntegrationRuntime**. On the pop-up slide, you can click **Virtual network** tab, and enable **Interactive authoring**.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot that shows how to enable bypassing firewall rules.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+1. Open Synapse Studio, go to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**.
-1. Now you can create a link connection from **Integrate** tab to replicate data from Azure SQL DB to Synapse SQL pool.
+1. In the pop-up window, select the **Virtual network** tab, and then enable **Interactive authoring**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot that shows how to enable interactive authoring.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-db.png" alt-text="Screenshot of creating link sql db.":::
+1. From the **Integrate** pane, create a link connection to replicate data from your Azure SQL database to an Azure Synapse SQL pool.
-1. Start your link connection
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot that shows how to create a link to an Azure Synapse SQL pool.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-db.png" alt-text="Screenshot that shows how to create a link connection from an Azure SQL database.":::
+1. Start your link connection.
-## Managed workspace Virtual Network with data exfiltration
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting a link connection.":::
-1. Create Synapse workspace with managed virtual network enabled. You will enable **managed virtual network** and select **Yes** to limit outbound traffic from the Managed workspace Virtual Network to targets through Managed private endpoints. You can learn more about managed virtual network from [this](../security/synapse-workspace-managed-vnet.md)
+## Create a managed workspace virtual network with data exfiltration
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-disallow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace disallow outbound traffic.":::
+In this section, you create an Azure Synapse workspace with managed virtual network enabled. You'll enable **Managed virtual network**, and you'll select **Yes** to limit outbound traffic from the managed workspace virtual network to targets through managed private endpoints. For an overview, see [Azure Synapse Analytics managed virtual network](../security/synapse-workspace-managed-vnet.md).
-1. Navigate to your Synapse workspace on Azure portal, go to **Networking** tab to enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Launch Synapse Studio, navigate to **Manage**, click **Integration runtimes** and select **AutoResolvingIntegrationRuntime**. On the pop-up slide, you can click **Virtual network** tab, and enable **Interactive authoring**.
+1. Go to your Azure Synapse workspace, select **Networking**, and then select the **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules** checkbox.
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot that shows how to enable bypassing firewall rules.":::
-1. Create a linked service connecting to Azure SQL DB with managed private endpoint enabled.
+1. Open Synapse Studio, go to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**.
- * Create a linked service connecting to Azure SQL DB.
+1. In the pop-up window, select the **Virtual network** tab, and then enable **Interactive authoring**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot that shows how to enable interactive authoring.":::
+
+1. Create a linked service that connects to your Azure SQL database with a managed private endpoint enabled.
+
+ a. Create a linked service that connects to your Azure SQL database.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe.png" alt-text="Screenshot of new sql db linked service pe.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe.png" alt-text="Screenshot of a new Azure SQL database linked service private endpoint.":::
- * Create a managed private endpoint in linked service for Azure SQL DB.
+ b. Create a managed private endpoint in a linked service for the Azure SQL database.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe1.png" alt-text="Screenshot of new sql db linked service pe1.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe1.png" alt-text="Screenshot of a new Azure SQL database linked service private endpoint 1.":::
- * Complete the managed private endpoint creation in the linked service for Azure SQL DB.
+ c. Complete the managed private endpoint creation in the linked service for the Azure SQL database.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe2.png" alt-text="Screenshot of new sql db linked service pe2.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe2.png" alt-text="Screenshot of a new Azure SQL database linked service private endpoint 2.":::
- * Go to Azure portal of your SQL Server hosting Azure SQL DB as source store, approve the Private endpoint connections.
+ d. Go to the Azure portal for your SQL Server instance that hosts an Azure SQL database as a source store, and then approve the private endpoint connections.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe3.png" alt-text="Screenshot of new sql db linked service pe3.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-db-linked-service-pe3.png" alt-text="Screenshot of a new Azure SQL database linked service private endpoint 3.":::
-1. Now you can create a link connection from **Integrate** tab to replicate data from Azure SQL DB to Synapse SQL pool.
+1. Now you can create a link connection from the **Integrate** pane to replicate data from your Azure SQL database to an Azure Synapse SQL pool.
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot that shows how to create a link.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-db.png" alt-text="Screenshot of creating link sqldb.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-db.png" alt-text="Screenshot of creating a link to the SQL database.":::
-1. Start your link connection
+1. Start your link connection.
- :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot that shows how to start the link connection.":::
## Next steps
-If you are using a different type of database, see how to:
+If you're using a database other than an Azure SQL database, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
Title: Get started with Azure Synapse Link for Azure SQL Database (Preview)
-description: Learn how to connect an Azure SQL database to an Azure Synapse workspace with Azure Synapse Link (Preview).
+ Title: Get started with Azure Synapse Link for Azure SQL Database (preview)
+description: Learn how to connect an Azure SQL database to an Azure Synapse workspace with Azure Synapse Link (preview).
-# Get started with Azure Synapse Link for Azure SQL Database (Preview)
+# Get started with Azure Synapse Link for Azure SQL Database (preview)
-This article provides a step-by-step guide for getting started with Azure Synapse Link for Azure SQL Database. For more information, see [Synapse Link for Azure SQL Database (Preview)](sql-database-synapse-link.md).
+This article is a step-by-step guide for getting started with Azure Synapse Link for Azure SQL Database. For an overview of this feature, see [Azure Synapse Link for Azure SQL Database (preview)](sql-database-synapse-link.md).
> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in PREVIEW.
+> Azure Synapse Link for SQL is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Prerequisites
-* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. The current tutorial is to create Synapse link for SQL in public network. The assumption is that you have checked "Disable Managed virtual network" and "Allow connections from all IP address" when creating Synapse workspace. If you want to configure Synapse link for Azure SQL Database with network security, please also refer to [Configure Synapse link for Azure SQL Database with network security](connect-synapse-link-sql-database-vnet.md).
+* To get Azure Synapse Link for SQL, see [Create a new Azure Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse). This tutorial creates Azure Synapse Link for SQL in a public network. This article assumes that you selected **Disable Managed virtual network** and **Allow connections from all IP addresses** when you created an Azure Synapse workspace. If you want to configure Azure Synapse Link for Azure SQL Database with network security, also see [Configure Azure Synapse Link for Azure SQL Database with network security](connect-synapse-link-sql-database-vnet.md).
-* For DTU-based provisioning, make sure your Azure SQL Database service is at least Standard tier with a minimum of 100 DTUs. Free, Basic, or Standard tiers with fewer than 100 DTUs provisioned are not supported.
+* For database transaction unit (DTU)-based provisioning, make sure that your Azure SQL Database service is at least Standard tier with a minimum of 100 DTUs. Free, Basic, or Standard tiers with fewer than 100 DTUs provisioned aren't supported.
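If you want to confirm the tier and DTU count before you start, one option is a quick check with the `azure-mgmt-sql` Python package. This is a sketch; the subscription, resource group, server, and database names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
db = client.databases.get("<resource-group>", "<logical-server-name>", "<database-name>")

# For DTU-based databases, sku.tier is the service tier and
# sku.capacity is the provisioned DTU count.
print(db.sku.tier, db.sku.capacity)
if db.sku.tier in ("Free", "Basic") or (db.sku.tier == "Standard" and db.sku.capacity < 100):
    print("Below the minimum supported provisioning for Azure Synapse Link for SQL.")
```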
-## Configure your source Azure SQL Database
+## Configure your source Azure SQL database
-1. Go to Azure portal, navigate to your Azure SQL Server, select **Identity**, and then set **System assigned managed identity** to **On**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="../media/connect-synapse-link-sql-database/set-identity-sql-database.png" alt-text="Screenshot of turning on system assigned managed identity.":::
+1. Go to your Azure SQL logical server, select **Identity**, and then set **System assigned managed identity** to **On**.
-1. Navigate to **Networking**, then check **Allow Azure services and resources to access this server**.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/set-identity-sql-database.png" alt-text="Screenshot of turning on the system-assigned managed identity.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/configure-network-firewall-sql-database.png" alt-text="Screenshot of configuring firewalls for your SQL DB using Azure portal.":::
+1. Go to **Networking**, and then select the **Allow Azure services and resources to access this server** checkbox.
-1. Using Microsoft SQL Server Management Studio (SSMS) or Azure Data Studio, connect to the Azure SQL Server. If you want to have your Synapse workspace connect to your Azure SQL Database using a managed identity, set the Azure Active Directory admin on Azure SQL Server, and use the same admin name to connect to Azure SQL Server with administrative privileges in order to have the privileges in step 5.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/configure-network-firewall-sql-database.png" alt-text="Screenshot that shows how to configure firewalls for your SQL database by using the Azure portal.":::
-1. Expand **Databases**, right select the database you created above, and select **New Query**.
+1. Using Microsoft SQL Server Management Studio (SSMS) or Azure Data Studio, connect to the logical server. If you want to have your Azure Synapse workspace connect to your Azure SQL database by using a managed identity, set the Azure Active Directory admin permissions on the logical server. To apply the privileges in step 6, use the same admin name to connect to the logical server with administrative privileges.
- :::image type="content" source="../media/connect-synapse-link-sql-database/ssms-new-query.png" alt-text="Select your database and create a new query.":::
+1. Expand **Databases**, right-click the database you've created, and then select **New Query**.
-1. If you want to have your Synapse workspace connect to your source Azure SQL Database using a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), run the following script to provide the managed identity permission to the source database.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/ssms-new-query.png" alt-text="Screenshot that shows how to select your database and create a new query.":::
- **You can skip this step** if you instead want to have your Synapse workspace connect to your source Azure SQL Database via SQL authentication.
+1. If you want to have your Azure Synapse workspace connect to your source Azure SQL database by using a [managed identity](../../active-directory/managed-identities-azure-resources/overview.md), run the following script to provide the managed identity permission to the source database.
+
+ **You can skip this step** if you instead want to have your Azure Synapse workspace connect to your source Azure SQL database via SQL authentication.
```sql CREATE USER <workspace name> FROM EXTERNAL PROVIDER; ALTER ROLE [db_owner] ADD MEMBER <workspace name>; ```
-1. You can create a table with your own schema; the following is just an example for a `CREATE TABLE` query. You can also insert some rows into this table to ensure there's data to be replicated.
+1. You can create a table with your own schema. The following code is just an example of a `CREATE TABLE` query. You can also insert some rows into this table to ensure that there's data to be replicated.
```sql CREATE TABLE myTestTable1 (c1 int primary key, c2 int, c3 nvarchar(50)) ```
-## Create your target Synapse SQL pool
+## Create your target Azure Synapse SQL pool
-1. Launch [Synapse Studio](https://web.azuresynapse.net/).
+1. Open [Synapse Studio](https://web.azuresynapse.net/).
-1. Open the **Manage** hub, navigate to **SQL pools**, and select **+ New**.
+1. Go to the **Manage** hub, select **SQL pools**, and then select **New**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-sql-pool.png" alt-text="Create a new SQL dedicated pool from Synapse Studio.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-sql-pool.png" alt-text="Screenshot that shows how to create a new SQL dedicated pool from Synapse Studio.":::
1. Enter a unique pool name, use the default settings, and create the dedicated pool.
-1. You need to create a schema if your expected schema is not available in target Synapse SQL database. If your schema is dbo, you can skip this step.
+1. You need to create a schema if your expected schema isn't available in the target Azure Synapse SQL database. If your schema is *database owner* (dbo), you can skip this step.
## Create the Azure Synapse Link connection
-1. Open the **Integrate** hub, and select **+ Link connection(Preview)**.
+1. On the left pane of Synapse Studio, select **Integrate**.
+
+1. On the **Integrate** pane, select the plus sign (**+**), and then select **Link connection (Preview)**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-link-connection.png" alt-text="Select a new link connection from Synapse Studio.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-link-connection.png" alt-text="Screenshot that shows how to select a new link connection from Synapse Studio.":::
1. Under **Source linked service**, select **New**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-linked-service-dropdown.png" alt-text="Select a new linked service.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-linked-service-dropdown.png" alt-text="Screenshot that shows how to select a new linked service.":::
-1. Enter the information for your source Azure SQL Database.
+1. Enter the information for your source Azure SQL database.
- * Select the subscription, server, and database corresponding to your Azure SQL Database.
- * If you wish to connect your Synapse workspace to the source DB using the workspace's managed identity, set **Authentication type** to **Managed Identity**.
- * If you wish to use SQL authentication instead and know the username/password to use, select **SQL Authentication** instead.
+ * Select the subscription, server, and database corresponding to your Azure SQL database.
+ * Do either of the following:
+ * To connect your Azure Synapse workspace to the source database by using the workspace's managed identity, set **Authentication type** to **Managed Identity**.
+ * To use SQL authentication instead, if you know the username and password to use, select **SQL Authentication**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-linked-service.png" alt-text="Enter the server, database details to create a new linked service.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-linked-service.png" alt-text="Screenshot that shows how to enter the server and database details to create a new linked service.":::
-1. Select **Test connection** to ensure the firewall rules are properly configured and the workspace can successfully connect to the source Azure SQL Database.
+1. Select **Test connection** to ensure that the firewall rules are properly configured and the workspace can successfully connect to the source Azure SQL database.
1. Select **Create**.+ > [!NOTE]
- > The linked service that you create here is not dedicated to Azure Synapse Link for SQL - it can be used by any workspace user that has the appropriate permissions. Please take time to understand the scope of users who may have access to this linked service and its credentials. For more information on permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](../security/synapse-workspace-access-control-overview.md).
+ > The linked service that you create here isn't dedicated to Azure Synapse Link for SQL. It can be used by any workspace user who has the appropriate permissions. Take time to understand the scope of users who might have access to this linked service and its credentials. For more information about permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](../security/synapse-workspace-access-control-overview.md).
-1. Select one or more source tables to replicate to your Synapse workspace and select **Continue**.
+1. Select one or more source tables to replicate to your Azure Synapse workspace, and then select **Continue**.
> [!NOTE]
- > A given source table can only be enabled in at most one link connection at a time.
+ > A specified source table can be enabled in only one link connection at a time.
-1. Select a target Synapse SQL database and pool.
+1. Select a target Azure Synapse SQL database and pool.
1. Provide a name for your Azure Synapse Link connection, and select the number of cores for the [link connection compute](sql-database-synapse-link.md#link-connection). These cores will be used for the movement of data from the source to the target. > [!NOTE]
- > We recommend starting low and increasing as needed.
+ > We recommend starting low and increasing the number of cores as needed.
1. Select **OK**.
-1. With the new Azure Synapse Link connection open, you can update the target table name, distribution type and structure type.
+1. With the new Azure Synapse Link connection open, you can update the target table name, distribution type, and structure type.
> [!NOTE]
- > * Consider heap table for structure type when your data contains varchar(max), nvarchar(max), and varbinary(max).
- > * Make sure the schema in your Synapse dedicated SQL pool has already been created before you start the link connection. Azure Synapse Link for SQL will create tables automatically under your schema in the Synapse dedicated SQL pool.
+ > * Consider using *heap table* for the structure type when your data contains varchar(max), nvarchar(max), and varbinary(max).
+ > * Make sure that the schema in your Azure Synapse SQL dedicated pool has already been created before you start the link connection. Azure Synapse Link for SQL will create tables automatically under your schema in the Azure Synapse SQL dedicated pool.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-edit-link.png" alt-text="Edit Azure Synapse Link connection from Synapse Studio.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-edit-link.png" alt-text="Screenshot that shows where to edit the Azure Synapse Link connection from Synapse Studio.":::
1. Select **Publish all** to save the new link connection to the service. ## Start the Azure Synapse Link connection
-1. Select **Start** and wait a few minutes for the data to be replicated.
+Select **Start**, and then wait a few minutes for the data to be replicated.
> [!NOTE]
- > When being started, a link connection will start from a full initial load from your source database followed by incremental change feeds via the change feed feature in Azure SQL database. For more information, see [Azure Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+ > A link connection will start from a full initial load from your source database, followed by incremental change feeds via the change feed feature in Azure SQL Database. For more information, see [Azure Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
## Monitor the status of the Azure Synapse Link connection
-You may monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (Snapshotting), and see which tables are in continuous replication mode (Replicating).
+You can monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (*snapshotting*), and see which tables are in continuous replication mode (*replicating*).
-1. Navigate to the **Monitor** hub, and select **Link connections**.
+1. Go to the **Monitor** hub, and then select **Link connections**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-link-connections.png" alt-text="Monitor the status of Azure Synapse Link connection from the monitor hub.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-monitor-link-connections.png" alt-text="Screenshot that shows how to monitor the status of the Azure Synapse Link connection from the monitor hub.":::
-1. Open the Azure Synapse Link connection you started and view the status of each table.
+1. Open the Azure Synapse Link connection that you started, and view the status of each table.
1. Select **Refresh** on the monitoring view for your connection to observe any updates to the status.
-## Query replicated data
+## Query the replicated data
+
+Wait for a few minutes, and then check to ensure that the target database has the expected table and data. You can also now explore the replicated tables in your target Azure Synapse SQL dedicated pool.
+
+1. In the **Data** hub, under **Workspace**, open your target database.
-Wait for a few minutes, then check the target database has the expected table and data. You can also now explore the replicated tables in your target Synapse dedicated SQL pool.
+1. Under **Tables**, right-click one of your target tables.
-1. In the **Data** hub, under **Workspace**, open your target database, and within **Tables**, right-click one of your target tables.
+1. Select **New SQL script**, and then select **Top 100 rows**.
-1. Choose **New SQL script**, then **Select TOP 100 rows**.
+1. Run this query to view the replicated data in your target Azure Synapse SQL dedicated pool.
-1. Run this query to view the replicated data in your target Synapse dedicated SQL pool.
+1. You can also query the target database by using SSMS or other tools. Use the SQL dedicated endpoint for your workspace as the server name. This name is usually `<workspacename>.sql.azuresynapse.net`. Add `Database=databasename@poolname` as an extra connection string parameter when you connect, as in the sketch that follows.
-1. You can also query the target database with SSMS (or other tools). Use the dedicated SQL endpoint for your workspace as the server name. This is typically `<workspacename>.sql.azuresynapse.net`. Add `Database=databasename@poolname` as another connection string parameter when connecting via SSMS (or other tools).
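As one illustration of the connection string convention described above, here's a minimal `pyodbc` sketch; the driver version, endpoint, credentials, and database names are placeholder assumptions.

```python
import pyodbc

# Connect to the workspace's dedicated SQL endpoint by using the
# Database=<databasename>@<poolname> convention described above.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<workspacename>.sql.azuresynapse.net;"
    "DATABASE=<databasename>@<poolname>;"
    "UID=<sql-user>;PWD=<password>;"
    "Encrypt=yes;"
)
for row in conn.cursor().execute("SELECT TOP 10 * FROM myTestTable1"):
    print(row)
```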
+## Add or remove a table in an existing Azure Synapse Link connection
-## Add/remove table in existing Azure Synapse Link connection
+To add or remove tables in Synapse Studio, do the following:
-You can add/remove tables on Synapse Studio as following:
+1. Open the **Integrate** hub.
-1. Open the **Integrate Hub**.
+1. Select the link connection that you want to edit, and then open it.
-1. Select the **Link connection** you want to edit and open it.
+1. Do either of the following:
-1. Select **+New** table to add tables on Synapse Studio or select the trash can icon to the right or a table to remove an existing table. You can add or remove tables when the link connection is running.
+ * To add a table, select **New table**.
+ * To remove a table, select the trash can icon next to it.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-add-remove-tables.png" alt-text="Screenshot of link connection to add table.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-add-remove-tables.png" alt-text="Screenshot of the link connection pane for adding or removing tables.":::
> [!NOTE] > You can directly add or remove tables when a link connection is running. ## Stop the Azure Synapse Link connection
-You can stop the Azure Synapse Link connection in Synapse Studio as follows:
+To stop the Azure Synapse Link connection in Synapse Studio, do the following:
-1. Open the **Integrate Hub** of your Synapse workspace.
+1. In your Azure Synapse workspace, open the **Integrate** hub.
-1. Select the **Link connection** you want to edit and open it.
+1. Select the link connection that you want to edit, and then open it.
1. Select **Stop** to stop the link connection, and it will stop replicating your data.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/stop-link-connection.png" alt-text="Screenshot of link connection to stop link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/stop-link-connection.png" alt-text="Screenshot of the pane for stopping a link connection.":::
> [!NOTE]
- > If you restart a link connection after stopping it, it will start from a full initial load from your source database followed by incremental change feeds.
+ > If you restart a link connection after stopping it, it will start from a full initial load from your source database, and incremental change feeds will follow.
## Next steps
-If you are using a different type of database, see how to:
+If you're using a database other than an Azure SQL database, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context) * [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
-* [Get or set a managed identity for an Azure SQL Database logical server or managed instance](/sql/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity.md#get-or-set-a-managed-identity-for-a-logical-server-or-managed-instance)
+* [Get or set a managed identity for an Azure SQL Database logical server or managed instance](/sql/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity.md#get-or-set-a-managed-identity-for-a-logical-server-or-managed-instance)
synapse-analytics Connect Synapse Link Sql Server 2022 Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022-vnet.md
Title: Configure Synapse link for SQL Server 2022 with network security (Preview)
-description: Learn how to configure Synapse link for SQL Server 2022 with network security (Preview).
+ Title: Configure Azure Synapse Link for SQL Server 2022 with network security (preview)
+description: Learn how to configure Azure Synapse Link for SQL Server 2022 with network security (preview).
-# Configure Synapse link for SQL Server 2022 with network security (Preview)
+# Configure Azure Synapse Link for SQL Server 2022 with network security (preview)
-This article provides a guide on configuring Azure Synapse Link for SQL Server 2022 with network security. Before reading this documentation, You should have known how to create and start Synapse link for SQL Server 2022 from [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md).
+This article is a guide for configuring Azure Synapse Link for SQL Server 2022 with network security. Before you begin this process, you should know how to create and start Azure Synapse Link for SQL Server 2022. For information, see [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md).
> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in PREVIEW.
+> Azure Synapse Link for SQL is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## Managed workspace Virtual Network without data exfiltration
+## Create a managed workspace virtual network without data exfiltration
-1. Create Synapse workspace with managed virtual network enabled. You will enable **managed virtual network** and select **No** to allow outbound traffic from the workspace to any target. You can learn more about managed virtual network from [this](../security/synapse-workspace-managed-vnet.md).
+In this section, you create an Azure Synapse workspace with a managed virtual network enabled. You'll enable **managed virtual network**, and then select **No** to allow outbound traffic from the workspace to any target. For an overview, see [Azure Synapse Analytics managed virtual network](../security/synapse-workspace-managed-vnet.md).
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-allow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace allow outbound traffic.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-allow-outbound-traffic.png" alt-text="Screenshot that shows how to create an Azure Synapse workspace that allows outbound traffic.":::
-1. Navigate to your Synapse workspace on Azure portal, go to **Networking** tab to enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+1. Go to your Azure Synapse workspace, select **Networking**, and then select the **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules** checkbox.
-1. Launch Synapse Studio, navigate to **Manage**, click **Integration runtimes** and select **AutoResolvingIntegrationRuntime**. On the pop-up slide, you can click **Virtual network** tab, and enable **Interactive authoring**.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot that shows how to enable bypassing firewall rules.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+1. Open Synapse Studio, go to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**.
-1. Now you can create a link connection from **Integrate** tab to replicate data from SQL Server 2022 to Synapse SQL pool.
+1. In the pop-up window, select the **Virtual network** tab, and then enable **Interactive authoring**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot that shows how to enable interactive authoring.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-server.png" alt-text="Screenshot of creating link sql server.":::
+1. On the **Integrate** pane, create a link connection to replicate data from your SQL Server 2022 instance to the Azure Synapse SQL pool.
-1. Start your link connection
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot that shows how to create a link to an Azure Synapse SQL pool.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting a link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-server.png" alt-text="Screenshot that shows how to create a link connection from an Azure SQL Server 2022 instance.":::
+1. Start your link connection.
-## Managed workspace Virtual Network with data exfiltration
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting a link connection.":::
-1. Create Synapse workspace with managed virtual network enabled. You will enable **managed virtual network** and select **Yes** to limit outbound traffic from the Managed workspace Virtual Network to targets through Managed private endpoints. You can learn more about managed virtual network from [this](../security/synapse-workspace-managed-vnet.md)
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-disallow-outbound-traffic.png" alt-text="Screenshot of creating synapse workspace disallow outbound traffic.":::
+## Create a managed workspace virtual network with data exfiltration
-1. Navigate to your Synapse workspace on Azure portal, go to **Networking** tab to enable **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules**.
+In this section, you create an Azure Synapse workspace with managed virtual network enabled. You'll enable **managed virtual network** and select **Yes** to limit outbound traffic from the managed workspace virtual network to targets through managed private endpoints. For an overview, see [Azure Synapse Analytics managed virtual network](../security/synapse-workspace-managed-vnet.md).
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot of enabling bypass firewall rules.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-synapse-workspace-disallow-outbound-traffic.png" alt-text="Screenshot that shows how to create an Azure Synapse workspace that disallows outbound traffic.":::
-1. Launch Synapse Studio, navigate to **Manage**, click **Integration runtimes** and select **AutoResolvingIntegrationRuntime**. On the pop-up slide, you can click **Virtual network** tab, and enable **Interactive authoring**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot of enabling interactive authoring.":::
+1. Go to your Azure Synapse workspace, select **Networking**, and then select the **Allow Azure Synapse Link for Azure SQL Database to bypass firewall rules** checkbox.
-1. Create a linked service connecting to SQL Server 2022. You can get more details from [this](connect-synapse-link-sql-server-2022.md#create-linked-service-for-your-source-sql-server-2022).
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-bypass-firewall-rules.png" alt-text="Screenshot that shows how to enable bypassing firewall rules.":::
-1. Add role assignment to make sure that you have granted your Synapse workspace managed identity permissions to ADLS Gen2 storage account used as the landing zone. You can get more details from [this](connect-synapse-link-sql-server-2022.md#create-linked-service-to-connect-to-your-landing-zone-on-azure-data-lake-storage-gen2).
+1. Open Synapse Studio, go to **Manage**, select **Integration runtimes**, and then select **AutoResolvingIntegrationRuntime**.
-1. Create a linked service connecting to ADLS Gen2 storage(landing zone) with managed private endpoint enabled.
+1. In the pop-up window, select the **Virtual network** tab, and then enable **Interactive authoring**.
- * Create a managed private endpoint in linked service for ADLS Gen2 storage.
+ :::image type="content" source="../media/connect-synapse-link-sql-database/enable-interactive-authoring.png" alt-text="Screenshot that shows how to enable interactive authoring.":::
+
+1. Create a linked service that connects to your SQL Server 2022 instance.
+
+ To learn how, see the "Create a linked service for your source SQL Server 2022 database" section of [Get started with Azure Synapse Link for SQL Server 2022 (preview)](connect-synapse-link-sql-server-2022.md#create-a-linked-service-for-your-source-sql-server-2022-database).
+
+1. Add a role assignment to ensure that you've granted your Azure Synapse workspace managed identity permissions to your Azure Data Lake Storage Gen2 storage account that's used as the landing zone.
+
+ To learn how, see the "Create a linked service to connect to your landing zone on Azure Data Lake Storage Gen2" section of [Get started with Azure Synapse Link for SQL Server 2022 (preview)](connect-synapse-link-sql-server-2022.md#create-a-linked-service-to-connect-to-your-landing-zone-on-azure-data-lake-storage-gen2).
+
+1. Create a linked service that connects to your Azure Data Lake Storage Gen2 storage (landing zone) with managed private endpoint enabled.
+
+ a. Create a managed private endpoint in the linked service for Azure Data Lake Storage Gen2 storage.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe1.png" alt-text="Screenshot of new sql db linked service pe1.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe1.png" alt-text="Screenshot that shows creating a managed private endpoint in the Data Lake Storage Gen2 linked service (step 1).":::
- * Complete the managed private endpoint creation in the linked service for ADLS Gen2 storage.
+ b. Complete the managed private endpoint creation in the linked service for Azure Data Lake Storage Gen2 storage.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe2.png" alt-text="Screenshot of new sql db linked service pe2.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe2.png" alt-text="Screenshot that shows completing the managed private endpoint creation in the Data Lake Storage Gen2 linked service (step 2).":::
- * Go to Azure portal of your ADLS Gen2 storage as landing zone, approve the Private endpoint connections.
+ c. In the Azure portal, go to the Azure Data Lake Storage Gen2 account that serves as your landing zone, and then approve the private endpoint connections.
- :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe3.png" alt-text="Screenshot of new sql db linked service pe3.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe3.png" alt-text="Screenshot that shows approving the private endpoint connections for the Data Lake Storage Gen2 landing zone (step 3).":::
- * Complete the creation of linked service for ADLS Gen2 storage.
+ d. Complete the creation of the linked service for Azure Data Lake Storage Gen2 storage.
:::image type="content" source="../media/connect-synapse-link-sql-database/new-sql-server-linked-service-pe4.png" alt-text="Screenshot of new sql db linked service pe4.":::
-1. Now you can create a link connection from **Integrate** tab to replicate data from SQL Server 2022 to Synapse SQL pool.
+1. Now you can create a link connection from the **Integrate** pane to replicate data from your SQL Server 2022 instance to an Azure Synapse SQL pool.
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot of creating a link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link.png" alt-text="Screenshot that shows how to create a link.":::
- :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-server.png" alt-text="Screenshot of creating link sqldb.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/create-link-sql-server.png" alt-text="Screenshot that shows how to create a link from the SQL Server 2022 instance.":::
-1. Start your link connection
+1. Start your link connection.
- :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot of starting link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/start-link.png" alt-text="Screenshot that shows how to start the link connection.":::
## Next steps
-If you are using a different type of database, see how to:
+If you're using a database other than a SQL Server 2022 instance, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
synapse-analytics Connect Synapse Link Sql Server 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-server-2022.md
- Title: Create Azure Synapse Link for SQL Server 2022 (Preview)
-description: Learn how to create and connect a SQL Server 2022 instance to an Azure Synapse workspace with Azure Synapse Link (Preview).
+ Title: Create Azure Synapse Link for SQL Server 2022 (preview)
+description: Learn how to create and connect a SQL Server 2022 instance to an Azure Synapse workspace by using Azure Synapse Link (preview).
-# Get started with Azure Synapse Link for SQL Server 2022 (Preview)
+# Get started with Azure Synapse Link for SQL Server 2022 (preview)
-This article provides a step-by-step guide for getting started with Azure Synapse Link for SQL Server 2022. For more information, see [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md).
+This article is a step-by-step guide for getting started with Azure Synapse Link for SQL Server 2022. For an overview, see [Azure Synapse Link for SQL Server 2022](sql-server-2022-synapse-link.md).
> [!IMPORTANT]
-> Azure Synapse Link for SQL is currently in PREVIEW.
+> Azure Synapse Link for SQL is currently in preview.
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

## Prerequisites
-* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. The current tutorial is to create Synapse link for SQL in public network. The assumption is that you have checked "Disable Managed virtual network" and "Allow connections from all IP address" when creating Synapse workspace. If you want to configure Synapse link for SQL Server 2022 with network security, please also refer to [this](connect-synapse-link-sql-server-2022-vnet.md).
+* Before you begin, see [Create a new Azure Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. This tutorial creates Azure Synapse Link for SQL in a public network. It assumes that you selected **Disable Managed virtual network** and **Allow connections from all IP addresses** when you created your Azure Synapse workspace. If you want to configure Azure Synapse Link for SQL Server 2022 with network security, also see [Configure Azure Synapse Link for SQL Server 2022 with network security (preview)](connect-synapse-link-sql-server-2022-vnet.md).
+* Create an Azure Data Lake Storage Gen2 account, which is separate from the account that's created with the Azure Synapse Analytics workspace. You'll use this account as the landing zone to stage the data submitted by SQL Server 2022. For more information, see [Create an Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md).
-* Create an Azure Data Lake Storage Gen2 account (different from the account created with the Azure Synapse Analytics workspace) used as the landing zone to stage the data submitted by SQL Server 2022. See [how to create a Azure Data Lake Storage Gen2 account](../../storage/blobs/create-data-lake-storage-account.md) article for more details.
-* Make sure your database in SQL Server 2022 has a master key created.
+* Make sure that your SQL Server 2022 database has a master key created.
```sql
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a new password>'
```
-## Create your target Synapse dedicated SQL pool
+## Create your target Azure Synapse SQL dedicated pool
-1. Launch [Synapse Studio](https://ms.web.azuresynapse.net/).
+1. Open [Synapse Studio](https://ms.web.azuresynapse.net/).
-1. Open the **Manage** hub, navigate to **SQL pools**, and select **+ New**.
+1. Open the **Manage** hub, go to **SQL pools**, and then select **New**.
- :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-sql-pool.png" alt-text="Screenshot of creating a new SQL dedicated pool from Synapse Studio.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-database/studio-new-sql-pool.png" alt-text="Screenshot that shows how to create a new Azure Synapse SQL dedicated pool from Synapse Studio.":::
1. Enter a unique pool name, use the default settings, and create the dedicated pool.
-1. From the **Data** hub, under **Workspace**, you should see your new Synapse SQL database listed under **Databases**. From your new Synapse SQL database, select **New SQL script**, then **Empty script**.
+1. From the **Data** hub, under **Workspace**, your new Azure Synapse SQL database should be listed under **Databases**. From your new Azure Synapse SQL database, select **New SQL script**, and then select **Empty script**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-new-empty-sql-script.png" alt-text="Screenshot of creating a new empty SQL script from Synapse Studio.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-new-empty-sql-script.png" alt-text="Screenshot that shows how to create a new empty SQL script from Synapse Studio.":::
-1. Paste the following script and select **Run** to create the master key for your target Synapse SQL database.
+1. To create the master key for your target Azure Synapse SQL database, paste the following script, and then select **Run**.
```sql
CREATE MASTER KEY
```
-## Create linked service for your source SQL Server 2022
+## Create a linked service for your source SQL Server 2022 database
-1. Open the **Manage** hub, and navigate to **Linked services**.
+1. Select the **Manage** hub button, and then select **Linked services**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-navigation.png" alt-text="Navigate to linked services from Synapse studio.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-navigation.png" alt-text="Screenshot that shows how to go to linked services from Synapse Studio.":::
-1. Press **+ New**, select **SQL Server** and select **Continue**.
+1. Select **New**, select **SQL Server**, and then select **Continue**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-select.png" alt-text="Create a SQL server linked service.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-select.png" alt-text="Screenshot that shows how to create a SQL Server linked service.":::
-1. Enter the **name** of linked service of SQL Server 2022.
+1. In the **Name** box, enter a name for the SQL Server 2022 linked service.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-new.png" alt-text="Enter server and database names to connect.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-new.png" alt-text="Screenshot that shows where to enter the server and database names to connect.":::
-1. When selecting the integration runtime, choose your **self-hosted integration runtime**. If your synapse workspace doesn't have self-hosted integration runtime available, create one.
+1. When you're choosing the integration runtime, select your self-hosted integration runtime. If your Azure Synapse workspace doesn't have an available self-hosted integration runtime, create one.
-1. Use the following steps to create a self-hosted integration runtime to connect to your source SQL Server 2022 (optional)
+1. (Optional) To create a self-hosted integration runtime to connect to your source SQL Server 2022 instance, do the following:
- * Select **+New**.
+ a. Select **New**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/create-new-integration-runtime.png" alt-text="Creating a new self-hosted integration runtime.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/create-new-integration-runtime.png" alt-text="Screenshot that shows how to create a new self-hosted integration runtime.":::
- * Select **Self-hosted** and select **continue**.
+ b. Select **Self-hosted**, and then select **Continue**.
- * Input the **name** of Self-hosted integration runtime and select **Create**.
+ c. In the **Name** box, enter the name of the self-hosted integration runtime, and then select **Create**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/input-name-integration-runtime.png" alt-text="Enter a name for the self-hosted integration runtime.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/input-name-integration-runtime.png" alt-text="Screenshot that shows where to enter a name for the self-hosted integration runtime.":::
- * Now a self-hosted integration runtime is available in your Synapse workspace. Follow the prompts in the UI to **download**, **install** and use the key to **register** your integration runtime agent on your windows machine, which has direct access on your SQL Server 2022. For more information, see [Create a self-hosted integration runtime - Azure Data Factory & Azure Synapse](../../data-factory/create-self-hosted-integration-runtime.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=synapse-analytics#install-and-register-a-self-hosted-ir-from-microsoft-download-center)
+ A self-hosted integration runtime is now available in your Azure Synapse workspace.
+
+ d. Follow the prompts to download, install, and use the key to register your integration runtime agent on your Windows machine, which has direct access to your SQL Server 2022 instance. For more information, see [Create a self-hosted integration runtime - Azure Data Factory and Azure Synapse](../../data-factory/create-self-hosted-integration-runtime.md?context=%2Fazure%2Fsynapse-analytics%2Fcontext%2Fcontext&tabs=synapse-analytics#install-and-register-a-self-hosted-ir-from-microsoft-download-center).
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/set-up-integration-runtime.png" alt-text="Download, install and register the integration runtime.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/set-up-integration-runtime.png" alt-text="Screenshot that shows where to download, install, and register the integration runtime.":::
- * Select **Close**, and go to monitoring page to make sure your self-hosted integration runtime is running by selecting **refresh** to get the latest status of integration runtime.
+ e. Select **Close**.
:::image type="content" source="../media/connect-synapse-link-sql-server-2022/integration-runtime-status.png" alt-text="Get the status of integration runtime.":::
-1. Continue to input the rest information on your linked service including **SQL Server name**, **Database name**, **Authentication type**, **User name** and **Password** to connect to your SQL Server 2022.
+ f. Go to the monitoring page, and then select **Refresh** to get the latest status of the integration runtime and confirm that it's running.
+
+1. Continue to enter the remaining information for your linked service, including **SQL Server name**, **Database name**, **Authentication type**, **User name**, and **Password** to connect to your SQL Server 2022 instance.
> [!NOTE]
- > We recommend that you enable encryption on this connection. To enable encryption, add the `Encrypt` property with a value of `true` as an Additional connection property, and also set the `Trust Server Certificate` property to either `true` or `false` - depending on your server configuration. For more information, see [Enable encrypted connections to the Database Engine](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
+ > We recommend that you enable encryption on this connection. To do so, add the `Encrypt` property with a value of `true` as an additional connection property. Also set the `Trust Server Certificate` property to either `true` or `false`, depending on your server configuration. For more information, see [Enable encrypted connections to the database engine](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
+
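For example, the resulting additional connection properties might look like the following sketch. The property names come from the note above; whether `Trust Server Certificate` should be `true` or `false` depends on whether your server presents a certificate that the client already trusts:

```
Encrypt = true
Trust Server Certificate = false
```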
+1. Select **Test Connection** to ensure that your self-hosted integration runtime can access your SQL Server instance.
-1. Select **Test Connection** to ensure your self-hosted integration runtime can access on your SQL Server.
+1. Select **Create**.
-1. Select **Create**, and you'll have your new linked service connecting to SQL Server 2022 available in your workspace.
+ Your new linked service, which connects to your SQL Server 2022 instance, is now available in your workspace.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/view-linked-service-connection.png" alt-text="View the linked service connection.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/view-linked-service-connection.png" alt-text="Screenshot that shows where to view the linked service connection.":::
> [!NOTE]
- > The linked service that you create here is not dedicated to Azure Synapse Link for SQL - it can be used by any workspace user that has the appropriate permissions. Please take time to understand the scope of users who may have access to this linked service and its credentials. For more information on permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](../security/synapse-workspace-access-control-overview.md).
+ > The linked service that you create here isn't dedicated to Azure Synapse Link for SQL. It can be used by any workspace user who has the appropriate permissions. Take time to understand the scope of users who might have access to this linked service and its credentials. For more information about permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](../security/synapse-workspace-access-control-overview.md).
-## Create linked service to connect to your landing zone on Azure Data Lake Storage Gen2
+## Create a linked service to connect to your landing zone on Azure Data Lake Storage Gen2
-1. Go to your created Azure Data Lake Storage Gen2 account, navigate to **Access Control (IAM)**, select **+Add**, and select **Add role assignment**.
+1. Go to your newly created Azure Data Lake Storage Gen2 account, select **Access Control (IAM)**, select **Add**, and then select **Add role assignment**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/adls-gen2-access-control.png" alt-text="Navigate to Access Control (IAM) of the Data Lake Storage Gen2 account.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/adls-gen2-access-control.png" alt-text="Screenshot of the 'Access Control (IAM)' pane of the Data Lake Storage Gen2 account.":::
-1. Select **Storage Blob Data Contributor** for the selected role, choose **Managed identity** in Managed identity, and select your Synapse workspace in **Members**. This may take a few minutes to take effect to add role assignment.
+1. Select **Storage Blob Data Contributor** as the role, select **Managed identity**, and then, under **Members**, select your Azure Synapse workspace. The role assignment might take a few minutes to take effect.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/adls-gen2-assign-blob-data-contributor-role.png" alt-text="Add a role assignment.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/adls-gen2-assign-blob-data-contributor-role.png" alt-text="Screenshot that shows how to add a role assignment.":::
> [!NOTE]
- > Make sure that you have granted your Synapse workspace managed identity permissions to ADLS Gen2 storage account used as the landing zone. For more information, see how to [Grant permissions to managed identity in Synapse workspace - Azure Synapse Analytics](../security/how-to-grant-workspace-managed-identity-permissions.md#grant-the-managed-identity-permissions-to-adls-gen2-storage-account)
+ > Make sure that you've granted your Azure Synapse workspace managed identity permissions to the Azure Data Lake Storage Gen2 storage account that's used as the landing zone. For more information, see [Grant permissions to a managed identity in an Azure Synapse workspace - Azure Synapse Analytics](../security/how-to-grant-workspace-managed-identity-permissions.md#grant-the-managed-identity-permissions-to-adls-gen2-storage-account).
+
+1. Open the **Manage** hub in your Azure Synapse workspace, and go to **Linked services**.
-1. Open the **Manage** hub in your Synapse workspace, and navigate to **Linked services**.
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-navigation.png" alt-text="Screenshot that shows how to go to the linked service.":::
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/studio-linked-service-navigation.png" alt-text="Navigate to the linked service.":::
+1. Select **New**, and then select **Azure Data Lake Storage Gen2**.
-1. Press **+ New** and select **Azure Data Lake Storage Gen2**.
+1. Do the following:
-1. Input the following settings:
+ a. In the **Name** box, enter the name of the linked service for your landing zone.
- * Enter the **name** of linked service for your landing zone.
+ b. For **Authentication method**, select **Managed Identity**.
- * Input **Authentication method**, and it must be **Managed Identity**.
+ c. For **Storage account name**, select the storage account that you created earlier.
- * Select the **Storage account name** which had already been created.
+1. Select **Test Connection** to ensure that you can access your Azure Data Lake Storage Gen2 account.
-1. Select **Test Connection** to ensure you get access on your Azure Data Lake Storage Gen2.
+1. Select **Create**.
-1. Select **Create** and you'll have your new linked service connecting to Azure Data Lake Storage Gen2.
+ Your new linked service, which connects to the Azure Data Lake Storage Gen2 account, is now available in your workspace.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/storage-gen2-linked-service-created.png" alt-text="New linked service to Azure Data Lake Storage Gen2.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/storage-gen2-linked-service-created.png" alt-text="Screenshot that shows the new linked service to Azure Data Lake Storage Gen2.":::
> [!NOTE]
- > The linked service that you create here is not dedicated to Azure Synapse Link for SQL - it can be used by any workspace user that has the appropriate permissions. Please take time to understand the scope of users who may have access to this linked service and its credentials. For more information on permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](../security/synapse-workspace-access-control-overview.md).
+ > The linked service that you create here isn't dedicated to Azure Synapse Link for SQL. It can be used by any workspace user who has the appropriate permissions. Take time to understand the scope of users who might have access to this linked service and its credentials. For more information about permissions in Azure Synapse workspaces, see [Azure Synapse workspace access control overview - Azure Synapse Analytics](../security/synapse-workspace-access-control-overview.md).
## Create the Azure Synapse Link connection
-1. From the Synapse studio, open the **Integrate** hub, and select **+Link connection(Preview)**.
+1. From Synapse Studio, open the **Integrate** hub.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/new-link-connection.png" alt-text="New link connection.":::
-1. Input your source database:
+1. On the **Integrate** pane, select the plus sign (**+**), and then select **Link connection (Preview)**.
- * Select Source type to **SQL Server**.
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/new-link-connection.png" alt-text="Screenshot that shows the 'Link connection (Preview)' button.":::
- * Select your source **linked service** to connect to your SQL Server 2022.
+1. Enter your source database:
- * Select **table names** from your SQL Server to be replicated to your Synapse SQL pool.
+ a. For **Source type**, select **SQL Server**.
- * Select **Continue**.
+ b. For your source **Linked service**, select the service that connects to your SQL Server 2022 instance.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/input-source-database-details-link-connection.png" alt-text="Input source database details.":::
+ c. For **Table names**, select the tables from your SQL Server instance to replicate to your Azure Synapse SQL pool.
-1. Select a target database name from **Synapse SQL Dedicated Pools**.
+ d. Select **Continue**.
+
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/input-source-database-details-link-connection.png" alt-text="Screenshot that shows where to enter source database details.":::
+
+1. From **Synapse SQL Dedicated Pools**, select a target database name.
1. Select **Continue**.
-1. Input your link connection settings:
+1. Enter your link connection settings:
- * Input your **link connection name**.
+ a. For **Link connection name**, enter the name.
- * Select your **Core count** for the [link connection compute](sql-server-2022-synapse-link.md#link-connection). These cores will be used for the movement of data from the source to the target. We recommend starting from small number and increasing as needed.
+ b. For **Core count** for the [link connection compute](sql-server-2022-synapse-link.md#link-connection), enter the number of cores. These cores will be used for the movement of data from the source to the target. We recommend that you start with a small number and increase the count as needed.
- * Configure your landing zone. Select your **linked service** connecting to your landing zone.
+ c. For **Linked service**, select the service that will connect to your landing zone.
- * Input your ADLS Gen2 **container name or container/folder name** as landing zone folder path for staging the data. The container is required to be created first.
+ d. Enter your Azure Data Lake Storage Gen2 **container name or container/folder name** as a landing zone folder path for staging the data. The container must be created first.
- * Input your ADLS Gen2 shared access signature (SAS) token. SAS token is required for SQL change feed to get access on landing zone. If your ADLS Gen2 doesn't have SAS token, you can create one by selecting **+Generate token**.
+ e. Enter your Azure Data Lake Storage Gen2 shared access signature token. The token is required for the SQL change feed to access the landing zone. If your Azure Data Lake Storage Gen2 account doesn't have a shared access signature token, you can create one by selecting **Generate token**.
- * Select **OK**.
+ f. Select **OK**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-compute-settings.png" alt-text="Input the link connection settings.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-compute-settings.png" alt-text="Screenshot that shows where to enter the link connection settings.":::
-1. With the new Azure Synapse Link connection open, you have chance to update the target table name, distribution type and structure type.
+1. With the new Azure Synapse Link connection open, you can now update the target table name, distribution type, and structure type.
> [!NOTE]
- > * Consider heap table for structure type when your data contains varchar(max), nvarchar(max), and varbinary(max).
- > * Make sure the schema in your Synapse SQL pool has already been created before you start the link connection. Azure Synapse Link will help you to create tables automatically under your schema in Azure Synapse SQL Pool.
+ > * Consider using *heap table* for the structure type when your data contains varchar(max), nvarchar(max), and varbinary(max).
+ > * Make sure that the schema in your Azure Synapse SQL dedicated pool has already been created before you start the link connection. Azure Synapse Link for SQL will create tables automatically under your schema in the Azure Synapse SQL pool.
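For example, if the tables you replicate live under a `sales` schema in the source database, you might create the matching schema in the dedicated SQL pool before starting the link connection. This is a minimal sketch; the schema name is a placeholder:

```sql
-- Run in the target Azure Synapse SQL dedicated pool before starting the link connection.
CREATE SCHEMA [sales];
```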
1. Select **Publish all** to save the new link connection to the service.

## Start the Azure Synapse Link connection
-1. Select **Start** and wait a few minutes for the data to be replicated.
+Select **Start**, and then wait a few minutes for the data to be replicated.
- > [!NOTE]
- > When being started, a link connection will start from a full initial load from your source database followed by incremental change feeds via the change feed feature in SQL Server 2022. For more information, see [Azure Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
+> [!NOTE]
+> A link connection will start from a full initial load from your source database, followed by incremental change feeds via the change feed feature in SQL Server 2022. For more information, see [Azure Synapse Link for SQL change feed](/sql/sql-server/synapse-link/synapse-link-sql-server-change-feed).
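If you want to inspect the change feed from the SQL Server 2022 side, the feature ships with monitoring objects that you can query. A minimal sketch, run in the source database (assuming the change feed monitoring objects available in SQL Server 2022):

```sql
-- Summarize the change feed configuration and state for this database.
EXEC sys.sp_help_change_feed;

-- Review recent change feed log scan activity.
SELECT * FROM sys.dm_change_feed_log_scan_sessions;
```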
## Monitor Azure Synapse Link for SQL Server 2022
-You may monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (Snapshotting), and see which tables are in continuous replication mode (Replicating).
+You can monitor the status of your Azure Synapse Link connection, see which tables are being initially copied over (*snapshotting*), and see which tables are in continuous replication mode (*replicating*).
-1. Navigate to the **Monitor hub** of your Synapse workspace.
+1. Go to the **Monitor hub** of your Azure Synapse workspace, and then select **Link connections**.
-1. Select **Link connections**.
-
-1. Open the link connection you started and view the status of each table.
+1. Open the link connection you started, and view the status of each table.
1. Select **Refresh** on the monitoring view for your connection to observe any updates to the status.

   :::image type="content" source="../media/connect-synapse-link-sql-server-2022/monitor-link-connection.png" alt-text="Monitor the linked connection.":::
-## Query replicated data
+## Query the replicated data
+
+Wait a few minutes, and then check that the target database has the expected tables and data. You can now explore the replicated tables in your target Azure Synapse SQL dedicated pool.
+
+1. In the **Data** hub, under **Workspace**, open your target database.
-Wait for a few minutes, then check the target database has the expected table and data. See the data available in your Synapse dedicated SQL pool destination store. You can also now explore the replicated tables in your target Synapse dedicated SQL pool.
+1. Under **Tables**, right-click one of your target tables.
-1. In the **Data** hub, under **Workspace**, open your target database, and within **Tables**, right-click one of your target tables.
+1. Select **New SQL script**, and then select **Top 100 rows**.
-1. Choose **New SQL script**, then **Select TOP 100 rows**.
+1. Run this query to view the replicated data in your target Azure Synapse SQL dedicated pool.
-1. Run this query to view the replicated data in your target Synapse dedicated SQL pool.
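   The generated script resembles the following sketch; the schema and table names are placeholders for your own replicated table:

```sql
SELECT TOP (100) *
FROM [dbo].[YourReplicatedTable];
```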
+1. You can also query the target database by using Microsoft SQL Server Management Studio (SSMS) or other tools. Use the SQL dedicated endpoint for your workspace as the server name. This name is usually `<workspacename>.sql.azuresynapse.net`. Add `Database=databasename@poolname` as an extra connection string parameter when connecting via SSMS or other tools.
-1. You can also query the target database with SSMS (or other tools). Use the dedicated SQL endpoint for your workspace as the server name. This is typically `<workspacename>.sql.azuresynapse.net`. Add `Database=databasename@poolname` as an extra connection string parameter when connecting via SSMS (or other tools).
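For example, a connection from SSMS might use values like the following, where the workspace, database, and pool names are hypothetical:

```
Server=contosoworkspace.sql.azuresynapse.net;Database=salesdb@sqlpool01
```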
+## Add or remove a table in an existing Azure Synapse Link connection
-## Add/remove table in existing Azure Synapse Link connection
+To add or remove tables in Synapse Studio, do the following:
-You can add/remove tables on Synapse Studio as follows:
+1. In your Azure Synapse workspace, open the **Integrate** hub.
-1. Open the **Integrate Hub**.
+1. Select the link connection that you want to edit, and then open it.
-1. Select the **Link connection** you want to edit and open it.
+1. Do either of the following:
-1. Select **+New** table to add tables on Synapse Studio or select the trash can icon to the right of a table to remove an existing table. You can add or remove tables when the link connection is running.
+ * To add a table, select **New table**.
+ * To remove a table, select the trash can icon next to it.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-add-remove-tables.png" alt-text="Link connection add table.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-add-remove-tables.png" alt-text="Screenshot of the link connection pane for adding or removing tables.":::
> [!NOTE]
> You can directly add or remove tables when a link connection is running.

## Stop the Azure Synapse Link connection
-You can stop the Azure Synapse Link connection on Synapse Studio as follows:
+To stop the Azure Synapse Link connection in Synapse Studio, do the following:
-1. Open the **Integrate Hub** of your Synapse workspace.
+1. In your Azure Synapse workspace, open the **Integrate** hub.
-1. Select the **Link connection** you want to edit and open it.
+1. Select the link connection that you want to edit, and then open it.
1. Select **Stop** to stop the link connection, and it will stop replicating your data.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/stop-link-connection.png" alt-text="Link connection stop link.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/stop-link-connection.png" alt-text="Screenshot of the pane for stopping a link connection.":::
> [!NOTE]
- > If you restart a link connection after stopping it, it will start from a full initial load from your source database followed by incremental change feeds.
+ > If you restart a link connection after stopping it, it will start from a full initial load from your source database and incremental change feeds will follow.
-## Rotate the SAS token for landing zone
+## Rotate the shared access signature token for the landing zone
-A SAS token is required for SQL change feed to get access to the landing zone and push data there. It has an expiration date so you need to rotate the SAS token before the expiration date. Otherwise, Azure Synapse Link will fail to replicate the data from SQL Server to the Synapse dedicated SQL pool.
+A shared access signature token is required for the SQL change feed to get access to the landing zone and push data there. It has an expiration date, so you need to rotate the token before that date. Otherwise, Azure Synapse Link will fail to replicate the data from the SQL Server instance to the Azure Synapse SQL dedicated pool.
-1. Open the **Integrate Hub** of your Synapse workspace.
+1. In your Azure Synapse workspace, open the **Integrate** hub.
-1. Select the **Link connection** you want to edit and open it.
+1. Select the link connection that you want to edit, and then open it.
1. Select **Rotate token**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-locate-rotate-token.png" alt-text="Rotate S A S token.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/link-connection-locate-rotate-token.png" alt-text="Screenshot that shows where to rotate a shared access signature token.":::
-1. Select **Generate automatically** or **Input manually** to get the new SAS token, and then select **OK**.
+1. To get the new shared access signature token, select **Generate automatically** or **Input manually**, and then select **OK**.
- :::image type="content" source="../media/connect-synapse-link-sql-server-2022/landing-zone-rotate-sas-token.png" alt-text="Get the new S A S token.":::
+ :::image type="content" source="../media/connect-synapse-link-sql-server-2022/landing-zone-rotate-sas-token.png" alt-text="Screenshot that shows how to get a new shared access signature token.":::
## Next steps
-If you are using a different type of database, see how to:
+If you're using a database other than SQL Server 2022, see:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context)
* [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context)
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
Once you're connected to your remote app or desktop, you may be prompted for aut
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys when using the [Windows Desktop client](user-documentation/connect-windows-7-10.md). Passwordless authentication is enabled automatically when the session host and local PC are using the following operating systems:
+Azure Virtual Desktop supports in-session passwordless authentication (preview) using [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or security devices like FIDO keys when using the [Windows Desktop client](users/connect-windows.md). Passwordless authentication is enabled automatically when the session host and local PC are using the following operating systems:
- Windows 11 Enterprise single or multi-session with the [2022-09 Cumulative Updates for Windows 11 Preview (KB5017383)](https://support.microsoft.com/kb/KB5017383) or later installed.
- Windows 10 Enterprise single or multi-session, versions 20H2 or later with the [2022-09 Cumulative Updates for Windows 10 Preview (KB5017380)](https://support.microsoft.com/kb/KB5017380) or later installed.
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
+
+ Title: Compare the features of the Remote Desktop clients for Azure Virtual Desktop - Azure Virtual Desktop
+description: Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop.
++ Last updated : 09/26/2022+++
+# Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop
+
+There are some differences between the features of each of the Remote Desktop clients when connecting to Azure Virtual Desktop. The following sections describe these differences.
+
+> [!TIP]
+> Some clients and features differ when you use Azure Virtual Desktop compared to Remote Desktop Services. If you want to see the clients and features for Remote Desktop Services, see [Compare the clients: features](/windows-server/remote/remote-desktop-services/clients/remote-desktop-features) and [Compare the clients: redirections](/windows-server/remote/remote-desktop-services/clients/remote-desktop-app-compare).
+
+## Features comparison
+
+The following table compares the features of each Remote Desktop client when connecting to Azure Virtual Desktop.
+
+| Feature | Windows Desktop | Microsoft Store | Android or Chrome OS | iOS or iPadOS | macOS | Web | Description |
+|--|--|--|--|--|--|--|--|
+| Remote Desktop sessions | X | X | X | X | X | X | Desktop of a remote computer presented in a full screen or windowed mode. |
+| Integrated RemoteApp sessions | X | | | | X | | Individual remote apps integrated into the local desktop as if they are running locally. |
+| Immersive RemoteApp sessions | | X | X | X | | X | Individual remote apps presented in a window or maximized to a full screen. |
+| Multiple monitors | 16 monitor limit | | | | 16 monitor limit | | Lets the user run Remote Desktop or remote apps on all local monitors.<br /><br />Each monitor can have a maximum resolution of 8K, with the total resolution limited to 32K. These limits depend on factors such as session host specification and network connectivity. |
+| Dynamic resolution | X | X | | | X | X | Resolution and orientation of local monitors is dynamically reflected in the remote session. If the client is running in windowed mode, the remote desktop is resized dynamically to the size of the client window. |
+| Smart sizing | X | X | | | X | | Remote Desktop in Windowed mode is dynamically scaled to the window's size. |
+| Localization | X | X | English only | X | | X | Client user interface is available in multiple languages. |
+| Multi-factor authentication | X | X | X | X | X | X | Supports multi-factor authentication for remote connections. |
+| Teams optimization for Azure Virtual Desktop | X | | | | X | | Media optimizations for Microsoft Teams to provide high quality calls and screen sharing experiences. Learn more at [Use Microsoft Teams on Azure Virtual Desktop](/azure/virtual-desktop/teams-on-avd). |
+
+## Redirections comparison
+
+The following tables compare support for device and other redirections across the different Remote Desktop clients when connecting to Azure Virtual Desktop. Organizations can configure redirections centrally through Azure Virtual Desktop RDP properties or Group Policy.
+
+> [!IMPORTANT]
+> You can only enable redirections with binary settings that apply in both directions, to and from the remote machine. One-way blocking of redirections from only one side of the connection isn't supported.
+
+### Input redirection
+
+| Input | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+|--|--|--|--|--|--|--|
+| Keyboard | X | X | X | X | X | X |
+| Mouse | X | X | X | X | X | X |
+| Touch | X | X | X | X | | X |
+| Pen | X | | X (as touch) | X (as touch) | | |
+
+### Port redirection
+
+| Redirection | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+|--|--|--|--|--|--|--|
+| Serial port | X | | | | | |
+| USB | X | | | | | |
+
+When you enable USB port redirection, all USB devices attached to USB ports are automatically recognized in the remote session. For devices to work as expected, make sure that you install their required drivers on both the local device and the session host, and that the drivers are certified to run in remote scenarios. If you need more information about using your USB device in remote scenarios, contact the device manufacturer.
+
+### Other redirection (devices, etc.)
+
+| Redirection | Windows Desktop | Microsoft Store client | Android or Chrome OS | iOS or iPadOS | macOS | Web client |
+|--|--|--|--|--|--|--|
+| Cameras | X | | | X | X | |
+| Clipboard | X | X | Text | Text, images | X | Text |
+| Local drive/storage | X | | X | X | X | X\* |
+| Location | X | | | | | |
+| Microphones | X | X | X | X | X | X |
+| Printers | X | | | | X\*\* (CUPS only) | PDF print |
+| Scanners | X | | | | | |
+| Smart cards | X | | | | X (Windows sign-in not supported) | |
+| Speakers | X | X | X | X | X | X |
+| Third-party virtual channel plugins | X | | | | | |
+| WebAuthn | X | | | | | |
+
+\* Limited to uploading and downloading files through the Remote Desktop Web client.
+
+\*\* For printer redirection, the macOS app supports the Publisher Imagesetter printer driver by default. The app doesn't support the native printer drivers.
virtual-desktop Configure Adfs Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-adfs-sso.md
Before configuring AD FS single sign-on, you must have the following setup runni
The following Azure Virtual Desktop clients support this feature:
-* [Windows Desktop client](./user-documentation/connect-windows-7-10.md)
-* [Web client](./user-documentation/connect-web.md)
+* [Windows Desktop client](./users/connect-windows.md)
+* [Web client](./users/connect-web.md)
## Configure the certificate authority to issue certificates
UnConfigureWVDSSO.ps1 -WvdWebAppAppIDUri "<WVD Web App URI>" -WvdClientAppApplic
Now that you've configured single sign-on, you can sign in to a supported Azure Virtual Desktop client to test it as part of a user session. If you want to learn how to connect to a session using your new credentials, check out these articles:
-* [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md)
-* [Connect with the web client](./user-documentation/connect-web.md)
+* [Connect with the Windows Desktop client](./users/connect-windows.md)
+* [Connect with the web client](./users/connect-web.md)
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
To reassign a personal desktop in the Azure portal:
Now that you've configured the personal desktop assignment type, you can sign in to an Azure Virtual Desktop client to test it as part of a user session. These articles will show you how to connect to a session using the client of your choice:

-- [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md)
-- [Connect with the web client](./user-documentation/connect-web.md)
-- [Connect with the Android client](./user-documentation/connect-android.md)
-- [Connect with the iOS client](./user-documentation/connect-ios.md)
-- [Connect with the macOS client](./user-documentation/connect-macos.md)
+- [Connect with the Windows Desktop client](./users/connect-windows.md)
+- [Connect with the web client](./users/connect-web.md)
+- [Connect with the Android client](./users/connect-android-chrome-os.md)
+- [Connect with the iOS client](./users/connect-ios-ipados.md)
+- [Connect with the macOS client](./users/connect-macos.md)
virtual-desktop Configure Rdp Shortpath Limit Ports Public Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath-limit-ports-public-networks.md
When choosing the base and pool size, consider the number of ports you choose. T
## Prerequisites

-- A client device running the [Remote Desktop client for Windows](user-documentation/connect-windows-7-10.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
+- A client device running the [Remote Desktop client for Windows](users/connect-windows.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
- Internet access for both clients and session hosts. Session hosts require outbound UDP connectivity from your session hosts to the internet. For more information you can use to configure firewalls and Network Security Group, see [Network configurations for RDP Shortpath](rdp-shortpath.md#network-configuration).

## Enable a limited port range
virtual-desktop Configure Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-rdp-shortpath.md
Before you can enable RDP Shortpath, you'll need to meet the prerequisites. Sele
# [Managed networks](#tab/managed-networks)

-- A client device running the [Remote Desktop client for Windows](user-documentation/connect-windows-7-10.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
+- A client device running the [Remote Desktop client for Windows](users/connect-windows.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
- Direct line of sight connectivity between the client and the session host. Having direct line of sight connectivity means that the client can connect directly to the session host on port 3390 (default) without being blocked by firewalls (including the Windows Firewall) or Network Security Group, and using a managed network such as:
  - [ExpressRoute private peering](../expressroute/expressroute-circuit-peerings.md).
  - Site-to-site or Point-to-site VPN (IPsec), such as [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md).
Before you can enable RDP Shortpath, you'll need to meet the prerequisites. Sele
> The steps to configure RDP Shortpath for public networks are provided for session hosts and clients in case these defaults have been changed.

-- A client device running the [Remote Desktop client for Windows](user-documentation/connect-windows-7-10.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
+- A client device running the [Remote Desktop client for Windows](users/connect-windows.md), version 1.2.3488 or later. Currently, non-Windows clients aren't supported.
- Internet access for both clients and session hosts. Session hosts require outbound UDP connectivity from your session hosts to the internet. To reduce the number of ports required, you can [limit the port range used by clients for public networks](configure-rdp-shortpath-limit-ports-public-networks.md). For more information you can use to configure firewalls and Network Security Group, see [Network configurations for RDP Shortpath](rdp-shortpath.md#network-configuration).
- Check your client can connect to the STUN endpoints and verify that basic UDP functionality works by running the `Test-Shortpath.ps1` PowerShell script. For steps of how to do this, see [Verifying STUN server connectivity and NAT type](troubleshoot-rdp-shortpath.md#verifying-stun-server-connectivity-and-nat-type).
virtual-desktop Configure Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-single-sign-on.md
Single sign-on is available on session hosts using the following operating syste
You can enable SSO for connections to Azure Active Directory (AD)-joined VMs. You can also use SSO to access Hybrid Azure AD-joined VMs, but only after creating a Kerberos Server object. Azure Virtual Desktop doesn't support this solution with VMs joined to Azure AD Domain Services.
-You can use the [Windows Desktop client](user-documentation/connect-windows-7-10.md) on local PCs running Windows 10 or later. There's no requirement for the local PC to be joined to a domain or Azure AD. You can also have a single sign-on experience when using the [web client](user-documentation/connect-web.md).
+You can use the [Windows Desktop client](users/connect-windows.md) on local PCs running Windows 10 or later. There's no requirement for the local PC to be joined to a domain or Azure AD. You can also have a single sign-on experience when using the [web client](users/connect-web.md).
SSO is currently supported in the Azure Public cloud.
When enabling single sign-on, you'll currently be prompted to authenticate to Az
## Next steps

- Check out [In-session passwordless authentication (preview)](authentication.md#in-session-passwordless-authentication-preview) to learn how to enable passwordless authentication.
-- If you're accessing Azure Virtual Desktop from our Windows Desktop client, see [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md).
-- If you're accessing Azure Virtual Desktop from our web client, see [Connect with the web client](./user-documentation/connect-web.md).
+- If you're accessing Azure Virtual Desktop from our Windows Desktop client, see [Connect with the Windows Desktop client](./users/connect-windows.md).
+- If you're accessing Azure Virtual Desktop from our web client, see [Connect with the web client](./users/connect-web.md).
- If you encounter any issues, go to [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md).
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
You can change the display name for a published remote desktop by setting a frie
Now that you've customized the feed for users, you can sign in to an Azure Virtual Desktop client to test it out. To do so, continue to the Connect to Azure Virtual Desktop How-tos:
- * [Connect with Windows 10 or Windows 7](./user-documentation/connect-windows-7-10.md)
- * [Connect with the web client](./user-documentation/connect-web.md)
- * [Connect with the Android client](./user-documentation/connect-android.md)
- * [Connect with the iOS client](./user-documentation/connect-ios.md)
- * [Connect with the macOS client](./user-documentation/connect-macos.md)
+ * [Connect with Windows](./users/connect-windows.md)
+ * [Connect with the web client](./users/connect-web.md)
+ * [Connect with the Android client](./users/connect-android-chrome-os.md)
+ * [Connect with the iOS client](./users/connect-ios-ipados.md)
+ * [Connect with the macOS client](./users/connect-macos.md)
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
CustomRdpProperty : <CustomRDPpropertystring>
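For example, a custom RDP property string that enables audio capture, camera, and clipboard redirection might look like the following sketch, which uses documented RDP property syntax; adjust the properties to your scenario:

```
audiocapturemode:i:1;camerastoredirect:s:*;redirectclipboard:i:1;
```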
Now that you've customized the RDP properties for a given host pool, you can sign in to an Azure Virtual Desktop client to test them as part of a user session. These next how-to guides will tell you how to connect to a session using the client of your choice:

-- [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md)
-- [Connect with the web client](./user-documentation/connect-web.md)
-- [Connect with the Android client](./user-documentation/connect-android.md)
-- [Connect with the macOS client](./user-documentation/connect-macos.md)
-- [Connect with the iOS client](./user-documentation/connect-ios.md)
+- [Connect with the Windows Desktop client](./users/connect-windows.md)
+- [Connect with the web client](./users/connect-web.md)
+- [Connect with the Android client](./users/connect-android-chrome-os.md)
+- [Connect with the macOS client](./users/connect-macos.md)
+- [Connect with the iOS client](./users/connect-ios-ipados.md)
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
This section explains how to access Azure AD-joined VMs from different Azure Vir
### Connect using the Windows Desktop client
-The default configuration supports connections from Windows 11 or Windows 10 using the [Windows Desktop client](user-documentation/connect-windows-7-10.md). You can use your credentials, smart card, [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) or [Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) to sign in to the session host. However, to access the session host, your local PC must meet one of the following conditions:
+The default configuration supports connections from Windows 11 or Windows 10 using the [Windows Desktop client](users/connect-windows.md). You can use your credentials, smart card, [Windows Hello for Business certificate trust](/windows/security/identity-protection/hello-for-business/hello-hybrid-cert-trust) or [Windows Hello for Business key trust with certificates](/windows/security/identity-protection/hello-for-business/hello-deployment-rdp-certs) to sign in to the session host. However, to access the session host, your local PC must meet one of the following conditions:
- The local PC is Azure AD-joined to the same Azure AD tenant as the session host
- The local PC is hybrid Azure AD-joined to the same Azure AD tenant as the session host
Now that you've deployed some Azure AD joined VMs, we recommend enabling single sign-on.
- [Configure single sign-on](configure-single-sign-on.md)
- [Create a profile container with Azure Files and Azure AD](create-profile-container-azure-ad.md)
-- [Connect with the Windows Desktop client](user-documentation/connect-windows-7-10.md)
-- [Connect with the web client](user-documentation/connect-web.md)
+- [Connect with the Windows Desktop client](users/connect-windows.md)
+- [Connect with the web client](users/connect-web.md)
- [Troubleshoot connections to Azure AD-joined VMs](troubleshoot-azure-ad-connections.md)
virtual-desktop Disaster Recovery Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/disaster-recovery-concepts.md
Another option is an active-active deployment, where you use both sets of infrastructure at the same time. To reduce costs in this model, you can:
- Have extra session hosts in both active regions, but deallocate them when they aren't needed, which reduces costs.
- Only provision new infrastructure during disaster recovery and allow affected users to connect to the newly provisioned session hosts. This method requires regular testing with infrastructure-as-code tools so you can deploy the new infrastructure as quickly as possible during a disaster.
-## Recommended diaster recovery methods
+## Recommended disaster recovery methods
The disaster recovery methods we recommend are:
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
An app group can be one of two types:
Each host pool has a preferred app group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop app group named "Desktop Application Group" whenever you create a host pool and sets the host pool's preferred app group type to **Desktop**. You can remove the Desktop app group at any time. If you want your users to only see RemoteApps in their feed, you should set the **Preferred App Group Type** value to **RemoteApp**. You can't create another Desktop app group in the host pool while a Desktop app group exists.
-You must create a RemoteApp app group to publish RemoteApp apps. You can create multiple RemoteApp app groups to accommodate different worker scenarios. Different RemoteApp app groups can also contain overlapping RemoteApps. To publish resources to users, you must assign them to app groups.
-
-When assigning users to app groups, consider the following things:
--- Azure Virtual Desktop doesn't support assigning both RemoteApp and Desktop app groups in a single host pool to the same user. Doing so will cause that user to have two user sessions in a single host pool at the same time. Users aren't supposed to have two user sessions in a single host pool, as this can cause the following things to happen:
-
- - The session hosts become overloaded.
- - Users get stuck when trying to sign in.
- - Connections won't work.
- - The screen turns black.
- - The application crashes.
- - Other negative effects on end-user experience and session performance.
--- You can assign a user to multiple app groups within the same host pool. Their feed will show apps from all their assigned app groups. -- Personal host pools only allow and support RemoteApp app groups.
+To publish resources to users, you must assign them to app groups (see the example after this list). When assigning users to app groups, consider the following things:
+
+- We don't support assigning both the RemoteApp and Desktop app groups in a single host pool to the same user. Doing so will cause a single user to have two user sessions in a single host pool. Users aren't supposed to have two active user sessions at the same time, as this can cause the following things to happen:
+ - The session hosts become overloaded
+ - Users get stuck when trying to sign in
+ - Connections won't work
+ - The screen turns black
+ - The application crashes
+ - Other negative effects on end-user experience and session performance
+- A user can be assigned to multiple app groups within the same host pool, and their feed will show apps from all of their assigned app groups.
+- Personal host pools only allow and support RemoteApp app groups.
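As a hedged sketch of that assignment step (assuming the Az.Resources PowerShell module; all names below are hypothetical placeholders), you grant a user the **Desktop Virtualization User** role on an app group:

```powershell
# A minimal sketch; names are hypothetical placeholders. Assigning this RBAC role
# on the application group is what publishes its resources to the user's feed.
New-AzRoleAssignment -SignInName "user@contoso.com" `
                     -RoleDefinitionName "Desktop Virtualization User" `
                     -ResourceName "myRemoteAppGroup" `
                     -ResourceGroupName "myResourceGroup" `
                     -ResourceType "Microsoft.DesktopVirtualization/applicationGroups"
```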
>[!NOTE]
>If your host pool's Preferred App Group Type is set to **Undefined**, that means that you haven't set the value yet. You must finish configuring your host pool by setting its Preferred App Group Type before you start using it to prevent app incompatibility and session host overload issues.
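If you manage host pools with PowerShell, a minimal sketch of setting the preferred app group type follows, assuming the Az.DesktopVirtualization module; the names are hypothetical placeholders, and `RailApplications` is the value that corresponds to RemoteApp:

```powershell
# A minimal sketch; names are hypothetical placeholders.
# Use "Desktop" for Desktop app groups or "RailApplications" for RemoteApp.
Update-AzWvdHostPool -ResourceGroupName "myResourceGroup" `
                     -Name "myHostPool" `
                     -PreferredAppGroupType "RailApplications"
```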
To learn how to set up your Azure Virtual Desktop host pool, see [Create a host
To learn how to connect to Azure Virtual Desktop, see one of the following articles:
-- [Connect with Windows 10 or Windows 7](./user-documentation/connect-windows-7-10.md)
-- [Connect with a web browser](./user-documentation/connect-web.md)
-- [Connect with the Android client](./user-documentation/connect-android.md)
-- [Connect with the macOS client](./user-documentation/connect-macos.md)
-- [Connect with the iOS client](./user-documentation/connect-ios.md)
+- [Connect with Windows](./users/connect-windows.md)
+- [Connect with a web browser](./users/connect-web.md)
+- [Connect with the Android client](./users/connect-android-chrome-os.md)
+- [Connect with the macOS client](./users/connect-macos.md)
+- [Connect with the iOS client](./users/connect-ios-ipados.md)
virtual-desktop Expand Existing Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/expand-existing-host-pool.md
To expand your host pool by adding virtual machines:
Now that you've expanded your existing host pool, you can sign in to an Azure Virtual Desktop client to test them as part of a user session. You can connect to a session with any of the following clients:
-- [Connect with the Windows Desktop client](./user-documentation/connect-windows-7-10.md)
-- [Connect with the web client](./user-documentation/connect-web.md)
-- [Connect with the Android client](./user-documentation/connect-android.md)
-- [Connect with the macOS client](./user-documentation/connect-macos.md)
-- [Connect with the iOS client](./user-documentation/connect-ios.md)
+- [Connect with the Windows Desktop client](./users/connect-windows.md)
+- [Connect with the web client](./users/connect-web.md)
+- [Connect with the Android client](./users/connect-android-chrome-os.md)
+- [Connect with the macOS client](./users/connect-macos.md)
+- [Connect with the iOS client](./users/connect-ios-ipados.md)
virtual-desktop Getting Started Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/getting-started-feature.md
Here's how to deploy Azure Virtual Desktop using the getting started feature whe
## Connect to the desktop
-Once the deployment has completed successfully, if you created a test account or assigned an existing user during deployment, you can connect to it following the steps for one of the supported Remote Desktop clients. For example, you can follow the steps to [Connect with the Windows Desktop client](user-documentation/connect-windows-7-10.md).
+Once the deployment has completed successfully, if you created a test account or assigned an existing user during deployment, you can connect to it following the steps for one of the supported Remote Desktop clients. For example, you can follow the steps to [Connect with the Windows Desktop client](users/connect-windows.md).
If you didn't create a test account or assign an existing user during deployment, you'll need to add users to the **AVDValidationUsers** security group before you can connect.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
To learn more, see [Understanding Azure Virtual Desktop network connectivity](ne
Your users will need a [Remote Desktop client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients) to connect to virtual desktops and remote apps. The following clients support Azure Virtual Desktop:
-- [Windows Desktop client](./user-documentation/connect-windows-7-10.md)
-- [Web client](./user-documentation/connect-web.md)
-- [macOS client](./user-documentation/connect-macos.md)
-- [iOS client](./user-documentation/connect-ios.md)
-- [Android client](./user-documentation/connect-android.md)
-- [Microsoft Store client](./user-documentation/connect-microsoft-store.md)
+- [Windows Desktop client](./users/connect-windows.md)
+- [Web client](./users/connect-web.md)
+- [macOS client](./users/connect-macos.md)
+- [iOS and iPadOS client](./users/connect-ios-ipados.md)
+- [Android and Chrome OS client](./users/connect-android-chrome-os.md)
+- [Microsoft Store client](./users/connect-microsoft-store.md)
> [!IMPORTANT]
> Azure Virtual Desktop doesn't support connections from the RemoteApp and Desktop Connections (RADC) client or the Remote Desktop Connection (MSTSC) client.
virtual-desktop Proxy Server Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/proxy-server-support.md
The following table shows which Azure Virtual Desktop clients support proxy server connections:
| macOS | Yes |
| Windows Store | Yes |
-For more information about proxy support on Linux based thin clients, see [Thin client support](./user-documentation/linux-overview.md).
+For more information about proxy support on Linux-based thin clients, see [Thin client support](users/connect-thin-clients.md).
## Support limitations
virtual-desktop Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-properties.md
+
+ Title: Supported RDP properties with Azure Virtual Desktop - Azure Virtual Desktop
+description: Learn about the supported RDP properties you can use with Azure Virtual Desktop.
++ Last updated : 09/26/2022+++
+# Supported RDP properties with Azure Virtual Desktop
+
+Organizations can configure RDP properties centrally in Azure Virtual Desktop to determine how a connection to Azure Virtual Desktop should behave. There's a wide range of RDP properties that can be set, covering device redirection, display settings, session behavior, and more.
+
+Supported RDP properties differ when using Azure Virtual Desktop compared to Remote Desktop Services. Use the following tables to understand each setting and whether it applies when connecting to Azure Virtual Desktop, Remote Desktop Services, or both.
+
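Each RDP property in the tables below uses the syntax `name:type:value`, where type `i` means an integer value and type `s` means a string value. As a hedged illustration (the property choices here are arbitrary examples drawn from the tables below), multiple properties can be chained with semicolons into a single custom RDP property string for a host pool:

```powershell
# A hedged illustration of the name:type:value syntax ('i' = integer, 's' = string).
# Multiple properties are chained with semicolons into one custom RDP property string.
$customRdpProperty = "enablerdsaadauth:i:1;audiomode:i:0;redirectclipboard:i:1;"
```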
+## Connection information
+
+| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
+|--|--|:-:|:-:|--|--|:-:|
+| Azure AD authentication | enablerdsaadauth:i:*value* | ✔ | ✔ | Determines whether the client will use Azure AD to authenticate to the remote PC if it's available. | - 0: RDP won't use Azure AD authentication, even if the remote PC supports it.</br>- 1: RDP will use Azure AD authentication if the remote PC supports it. | 0 |
+| Credential Security Support Provider | enablecredsspsupport:i:*value* | ✔ | ✔ | Determines whether the client will use the Credential Security Support Provider (CredSSP) for authentication if it's available. | - 0: RDP won't use CredSSP, even if the operating system supports CredSSP.</br>- 1: RDP will use CredSSP if the operating system supports CredSSP. | 1 |
+| Alternate shell | alternate shell:s:*value* | ✔ | ✔ | Specifies a program to be started automatically in the remote session as the shell instead of explorer. | Valid path to an executable file, such as "C:\ProgramFiles\Office\word.exe". | None |
+| KDC proxy name | kdcproxyname:s:*value* | ✔ | ✗ | Specifies the fully qualified domain name of a KDC proxy. | Valid path to a KDC proxy server, such as `kdc.contoso.com`. | None |
+| Address | full address:s:*value* | ✗ | ✔ | This setting specifies the hostname or IP address of the remote computer that you want to connect to.</br></br>This is the only required setting in an RDP file. | A valid name, IPv4 address, or IPv6 address. | None |
+| Alternate address | alternate full address:s:*value* | ✗ | ✔ | Specifies an alternate name or IP address of the remote computer. | A valid name, IPv4 address, or IPv6 address. | None |
+| Username | username:s:*value* | ✗ | ✔ | Specifies the name of the user account that will be used to sign in to the remote computer. | Any valid username. | None |
+| Domain | domain:s:*value* | ✗ | ✔ | Specifies the name of the domain in which the user account that will be used to sign in to the remote computer is located. | A valid domain name, such as *CONTOSO*. | None |
+| RD Gateway hostname | gatewayhostname:s:*value* | ✗ | ✔ | Specifies the RD Gateway host name. | A valid name, IPv4 address, or IPv6 address. | None |
+| RD Gateway authentication | gatewaycredentialssource:i:*value* | ✗ | ✔ | Specifies the RD Gateway authentication method. | - 0: Ask for password (NTLM).</br>- 1: Use smart card.</br>- 2: Use the credentials for the currently signed in user.</br>- 3: Prompt the user for their credentials and use basic authentication.</br>- 4: Allow user to select later.</br>- 5: Use cookie-based authentication. | 0 |
+| Use RD Gateway | gatewayusagemethod:i:*value* | ✗ | ✔ | Specifies when to use an RD Gateway for the connection. | - 0: Don't use an RD Gateway.</br>- 1: Always use an RD Gateway.</br>- 2: Use an RD Gateway if a direct connection can't be made to the RD Session Host.</br>- 3: Use the default RD Gateway settings.</br>- 4: Don't use an RD Gateway, bypass gateway for local addresses.</br>Setting this property to 0 or 4 is effectively equivalent, but setting it to 4 enables the option to bypass local addresses. | 0 |
+| Save credentials | promptcredentialonce:i:*value* | ✗ | ✔ | Determines whether a user's credentials are saved and used for both the RD Gateway and the remote computer. | - 0: Remote session won't use the same credentials.</br>- 1: Remote session will use the same credentials. | 1 |
+| Server authentication | authentication level:i:*value* | ✗ | ✔ | Defines the server authentication level settings. | - 0: If server authentication fails, connect to the computer without warning (Connect and don't warn me).</br>- 1: If server authentication fails, don't establish a connection (Don't connect).</br>- 2: If server authentication fails, show a warning, and allow me to connect or refuse the connection (Warn me).</br>- 3: No authentication requirement specified. | 3 |
+| Connection sharing | disableconnectionsharing:i:*value* | ✗ | ✔ | Determines whether the client reconnects to any existing disconnected session or initiates a new connection when a new connection is launched. | - 0: Reconnect to any existing session.</br>- 1: Initiate a new connection. | 0 |
+
+## Session behavior
+
+| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
+|--|--|:-:|:-:|--|--|:-:|
+| Reconnection | autoreconnection enabled:i:*value* | ✔ | ✔ | Determines whether the client will automatically try to reconnect to the remote computer if the connection is dropped, such as when there's a network connectivity interruption. | - 0: Client doesn't automatically try to reconnect.</br>- 1: Client automatically tries to reconnect. | 1 |
+| Bandwidth auto detect | bandwidthautodetect:i:*value* | ✔ | ✔ | Determines whether or not to use automatic network bandwidth detection. Requires networkautodetect to be set to 1. | - 0: Don't use automatic network bandwidth detection.</br>- 1: Use automatic network bandwidth detection. | 1 |
+| Network auto detect | networkautodetect:i:*value* | ✔ | ✔ | Determines whether automatic network type detection is enabled. | - 0: Disable automatic network type detection.</br>- 1: Enable automatic network type detection. | 1 |
+| Compression | compression:i:*value* | ✔ | ✔ | Determines whether bulk compression is enabled when data is transmitted by RDP to the local computer. | - 0: Disable RDP bulk compression.</br>- 1: Enable RDP bulk compression. | 1 |
+| Video playback | videoplaybackmode:i:*value* | ✔ | ✔ | Determines if the connection will use RDP-efficient multimedia streaming for video playback. | - 0: Don't use RDP-efficient multimedia streaming for video playback.</br>- 1: Use RDP-efficient multimedia streaming for video playback when possible. | 1 |
+
+## Device redirection
+
+> [!IMPORTANT]
+> You can only enable redirections with binary settings that apply both to and from the remote machine. The service doesn't currently support one-way blocking of redirections from only one side of the connection.
+
+| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
+|--|--|:-:|:-:|--|--|:-:|
+| Microphone redirection | audiocapturemode:i:*value* | ✔ | ✔ | Indicates whether audio input redirection is enabled. | - 0: Disable audio capture from the local device.</br>- 1: Enable audio capture from the local device and redirection to an audio application in the remote session. | 0 |
+| Redirect video encoding | encode redirected video capture:i:*value* | ✔ | ✔ | Enables or disables encoding of redirected video. | - 0: Disable encoding of redirected video.</br>- 1: Enable encoding of redirected video. | 1 |
+| Encoded video quality | redirected video capture encoding quality:i:*value* | ✔ | ✔ | Controls the quality of encoded video. | - 0: High compression video. Quality may suffer when there's a lot of motion.</br>- 1: Medium compression.</br>- 2: Low compression video with high picture quality. | 0 |
+| Audio output location | audiomode:i:*value* | ✔ | ✔ | Determines whether the local or remote machine plays audio. | - 0: Play sounds on the local computer (Play on this computer).</br>- 1: Play sounds on the remote computer (Play on remote computer).</br>- 2: Don't play sounds (Do not play). | 0 |
+| Camera redirection | camerastoredirect:s:*value* | ✔ | ✔ | Configures which cameras to redirect. This setting uses a semicolon-delimited list of KSCATEGORY_VIDEO_CAMERA interfaces of cameras enabled for redirection. | - * : Redirect all cameras.</br>- List of cameras, such as `\\?\usb#vid_0bda&pid_58b0&mi`.</br>- You can exclude a specific camera by prepending the symbolic link string with "-". | Don't redirect any cameras |
+| Media Transfer Protocol (MTP) and Picture Transfer Protocol (PTP) | devicestoredirect:s:*value* | ✔ | ✔ | Determines which devices on the local computer will be redirected and available in the remote session. | - *: Redirect all supported devices, including ones that are connected later.</br>- Valid hardware ID for one or more devices.</br>- DynamicDevices: Redirect all supported devices that are connected later. | Don't redirect any devices |
+| Drive/storage redirection | drivestoredirect:s:*value* | ✔ | ✔ | Determines which disk drives on the local computer will be redirected and available in the remote session. | - No value specified: don't redirect any drives.</br>- * : Redirect all disk drives, including drives that are connected later.</br>- DynamicDrives: redirect any drives that are connected later.</br>- The drive and labels for one or more drives, such as `drivestoredirect:s:C\:;E\:;`, redirect the specified drive(s). | Don't redirect any drives |
+| Windows key combinations | keyboardhook:i:*value* | ✔ | ✔ | Determines when Windows key combinations (Windows key, Alt+Tab) are applied to the remote session for desktop and RemoteApp connections. | - 0: Windows key combinations are applied on the local computer.</br>- 1: (Desktop only) Windows key combinations are applied on the remote computer when in focus.</br>- 2: (Desktop only) Windows key combinations are applied on the remote computer in full screen mode only.</br>- 3: (RemoteApp only) Windows key combinations are applied on the RemoteApp when in focus. We recommend you use this value only when publishing the Remote Desktop Connection app (*mstsc.exe*) from the host pool on Azure Virtual Desktop. This value is only supported when using the [Windows Desktop client](users/connect-windows.md). | 2 |
+| Clipboard redirection | redirectclipboard:i:*value* | ✔ | ✔ | Determines whether clipboard redirection is enabled. | - 0: Clipboard on local computer isn't available in remote session.</br>- 1: Clipboard on local computer is available in remote session. | 1 |
+| COM ports redirection | redirectcomports:i:*value* | ✔ | ✔ | Determines whether COM (serial) ports on the local computer will be redirected and available in the remote session. | - 0: COM ports on the local computer aren't available in the remote session.</br>- 1: COM ports on the local computer are available in the remote session. | 0 |
+| Location service redirection | redirectlocation:i:*value* | ✔ | ✔ | Determines whether the location of the local device will be redirected and available in the remote session. | - 0: The remote session uses the location of the remote computer or virtual machine.</br>- 1: The remote session uses the location of the local device. | 0 |
+| Printer redirection | redirectprinters:i:*value* | ✔ | ✔ | Determines whether printers configured on the local computer will be redirected and available in the remote session. | - 0: The printers on the local computer aren't available in the remote session.</br>- 1: The printers on the local computer are available in the remote session. | 1 |
+| Smart card redirection | redirectsmartcards:i:*value* | ✔ | ✔ | Determines whether smart card devices on the local computer will be redirected and available in the remote session. | - 0: The smart card device on the local computer isn't available in the remote session.</br>- 1: The smart card device on the local computer is available in the remote session. | 1 |
+| WebAuthn redirection | redirectwebauthn:i:*value* | ✔ | ✔ | Determines whether WebAuthn requests on the remote computer will be redirected to the local computer allowing the use of local authenticators (such as Windows Hello for Business, security key, and so on). | - 0: WebAuthn requests from the remote session aren't sent to the local computer for authentication and must be completed in the remote session.</br>- 1: WebAuthn requests from the remote session are sent to the local computer for authentication. | 1 |
+| USB device redirection | usbdevicestoredirect:s:*value* | ✔ | ✔ | Determines which supported RemoteFX USB devices on the client computer will be redirected and available in the remote session when you connect to a remote session that supports RemoteFX USB redirection. | - \*: Redirect all USB devices that aren't already redirected by another high-level redirection.</br> - {*Device Setup Class GUID*}: Redirect all devices that are members of the specified [device setup class](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors/).</br> - *USBInstanceID*: Redirect a specific USB device identified by the instance ID. | Don't redirect any USB devices |
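As a hedged example drawn from the table above, the following sketch combines several redirection settings into one custom RDP property string; the specific values are illustrative only:

```powershell
# Enable microphone capture, redirect all cameras, and redirect the local C: and E: drives.
# Values are illustrative; see the table above for the full list of options.
$redirectionProperties = "audiocapturemode:i:1;camerastoredirect:s:*;drivestoredirect:s:C\:;E\:;"
```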
+
+## Display settings
+
+| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
+|--|--|:-:|:-:|--|--|:-:|
+| Multiple displays | use multimon:i:*value* | ✔ | ✔ | Determines whether the remote session will use one or multiple displays from the local computer. | - 0: Don't enable multiple display support.</br>- 1: Enable multiple display support. | 0 |
+| Selected monitors | selectedmonitors:s:*value* | ✔ | ✔ | Specifies which local displays to use from the remote session. The selected displays must be contiguous. Requires use multimon to be set to 1.</br></br>Only available on the Windows Inbox (MSTSC) and Windows Desktop (MSRDC) clients. | Comma separated list of machine-specific display IDs. You can retrieve IDs by calling `mstsc.exe /l`. The first ID listed will be set as the primary display in the session. | All displays |
+| Maximize to current displays | maximizetocurrentdisplays:i:*value* | ✔ | ✔ | Determines which display the remote session goes full screen on when maximizing. Requires use multimon to be set to 1.</br></br>Only available on the Windows Desktop (MSRDC) client. | - 0: Session goes full screen on the displays initially selected when maximizing.</br>- 1: Session dynamically goes full screen on the displays touched by the session window when maximizing. | 0 |
+| Multi to single display switch | singlemoninwindowedmode:i:*value* | ✔ | ✔ | Determines whether a multi-display remote session automatically switches to single display when exiting full screen. Requires use multimon to be set to 1.</br></br>Only available on the Windows Desktop (MSRDC) client. | - 0: Session retains all displays when exiting full screen.</br>- 1: Session switches to single display when exiting full screen. | 0 |
+| Screen mode | screen mode id:i:*value* | ✔ | ✔ | Determines whether the remote session window appears full screen when you launch the connection. | - 1: The remote session will appear in a window.</br>- 2: The remote session will appear full screen. | 2 |
+| Smart sizing | smart sizing:i:*value* | ✔ | ✔ | Determines whether or not the local computer scales the content of the remote session to fit the window size. | - 0: The local window content won't scale when resized.</br>- 1: The local window content will scale when resized. | 0 |
+| Dynamic resolution | dynamic resolution:i:*value* | ✔ | ✔ | Determines whether the resolution of the remote session is automatically updated when the local window is resized. | - 0: Session resolution remains static during the session.</br>- 1: Session resolution updates as the local window resizes. | 1 |
+| Desktop size | desktop size id:i:*value* | ✔ | ✔ | Specifies the dimensions of the remote session desktop from a set of predefined options. This setting is overridden if desktopheight and desktopwidth are specified. | - 0: 640×480</br>- 1: 800×600</br>- 2: 1024×768</br>- 3: 1280×1024</br>- 4: 1600×1200 | Match the local computer |
+| Desktop height | desktopheight:i:*value* | ✔ | ✔ | Specifies the resolution height (in pixels) of the remote session. | Numerical value between 200 and 8192. | Match the local computer |
+| Desktop width | desktopwidth:i:*value* | ✔ | ✔ | Specifies the resolution width (in pixels) of the remote session. | Numerical value between 200 and 8192. | Match the local computer |
+| Desktop scale factor | desktopscalefactor:i:*value* | ✔ | ✔ | Specifies the scale factor of the remote session to make the content appear larger. | Numerical value from the following list: 100, 125, 150, 175, 200, 250, 300, 400, 500. | Match the local computer |
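For example, here's a hedged sketch of a multi-display configuration built from the settings above; the display IDs are placeholders you'd retrieve with `mstsc.exe /l`:

```powershell
# Span the remote session across two specific local displays and let the session
# resolution update dynamically as the window is resized. Display IDs are placeholders.
$displayProperties = "use multimon:i:1;selectedmonitors:s:0,1;dynamic resolution:i:1;"
```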
+
+## RemoteApp
+
+| Display name | RDP setting | Azure Virtual Desktop | Remote Desktop Services | Description | Values | Default value |
+|--|--|:-:|:-:|--|--|:-:|
+| Command-line parameters | remoteapplicationcmdline:s:*value* | ✗ | ✔ | Optional command-line parameters for the RemoteApp. | Valid command-line parameters. | N/A |
+| Command-line variables | remoteapplicationexpandcmdline:i:*value* | ✗ | ✔ | Determines whether environment variables contained in the RemoteApp command-line parameter should be expanded locally or remotely. | - 0: Environment variables should be expanded to the values of the local computer.</br>- 1: Environment variables should be expanded to the values of the remote computer. | 1 |
+| Working directory variables | remoteapplicationexpandworkingdir:i:*value* | ✗ | ✔ | Determines whether environment variables contained in the RemoteApp working directory parameter should be expanded locally or remotely. | - 0: Environment variables should be expanded to the values of the local computer.</br> - 1: Environment variables should be expanded to the values of the remote computer.</br>The RemoteApp working directory is specified through the shell working directory parameter. | 1 |
+| Open file | remoteapplicationfile:s:*value* | ✗ | ✔ | Specifies a file to be opened on the remote computer by the RemoteApp.</br>For local files to be opened, you must also enable drive redirection for the source drive. | Valid file path. | N/A |
+| Icon file | remoteapplicationicon:s:*value* | ✗ | ✔ | Specifies the icon file to be displayed in the client UI while launching a RemoteApp. If no file name is specified, the client will use the standard Remote Desktop icon. Only ".ico" files are supported. | Valid file path. | N/A |
+| Application mode | remoteapplicationmode:i:*value* | ✗ | ✔ | Determines whether a connection is launched as a RemoteApp session. | - 0: Don't launch a RemoteApp session.</br>- 1: Launch a RemoteApp session. | 1 |
+| Application display name | remoteapplicationname:s:*value* | ✗ | ✔ | Specifies the name of the RemoteApp in the client interface while starting the RemoteApp. | App display name. For example, "Excel 2016". | N/A |
+| Alias/executable name | remoteapplicationprogram:s:*value* | ✗ | ✔ | Specifies the alias or executable name of the RemoteApp. | Valid alias or name. For example, "EXCEL". | N/A |
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
Azure Virtual Desktop currently doesn't have a list of IP address ranges that yo
## Remote Desktop clients
-Any [Remote Desktop clients](user-documentation/connect-windows-7-10.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) you use to connect to Azure Virtual Desktop must have access to the following URLs. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality.
+Any [Remote Desktop clients](users/connect-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) you use to connect to Azure Virtual Desktop must have access to the following URLs. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality.
# [Azure cloud](#tab/azure)
virtual-desktop Set Up Service Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-service-alerts.md
To configure service alerts:
In this tutorial, you learned how to set up and use Azure Service Health to monitor service issues and health advisories for Azure Virtual Desktop. To learn about how to sign in to Azure Virtual Desktop, continue to the Connect to Azure Virtual Desktop How-tos.
> [!div class="nextstepaction"]
-> [Connect to the Remote Desktop client on Windows 7 and Windows 10](./user-documentation/connect-windows-7-10.md)
+> [Connect to the Remote Desktop client on Windows 7 and Windows 10](./users/connect-windows.md)
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
To use Start VM on Connect, make sure you follow these guidelines:
- You can only configure Start VM on Connect on existing host pools. You can't enable it at the same time you create a new host pool.
- The following Remote Desktop clients support Start VM on Connect:
- - The [web client](./user-documentation/connect-web.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
- - The [Windows client](./user-documentation/connect-windows-7-10.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 1.2.2061 or later)
- - The [Android client](./user-documentation/connect-android.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.0.10 or later)
- - The [macOS client](./user-documentation/connect-macos.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.6.4 or later)
- - The [iOS client](./user-documentation/connect-ios.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.2.5 or later)
- - The [Microsoft Store client](./user-documentation/connect-microsoft-store.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.2.2005.0 or later)
- - Thin clients listed in [Thin client support](./user-documentation/linux-overview.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+ - The [Windows client](./users/connect-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 1.2.2061 or later)
+ - The [Web client](./users/connect-web.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
+ - The [macOS client](./users/connect-macos.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.6.4 or later)
+ - The [iOS and iPadOS client](./users/connect-ios-ipados.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.2.5 or later)
+ - The [Android and Chrome OS client](./users/connect-android-chrome-os.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.0.10 or later)
+ - The [Microsoft Store client](./users/connect-microsoft-store.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) (version 10.2.2005.0 or later)
+ - Thin clients listed in [Thin client support](./users/connect-thin-clients.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json)
- If you want to configure Start VM on Connect using PowerShell, you'll need to have [the Az.DesktopVirtualization PowerShell module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization) (version 2.1.0 or later) installed on the device you use to run the commands (see the sketch after this list).
- You must grant Azure Virtual Desktop access to power on session host VMs, check their status, and report diagnostic information. You must have the `Microsoft.Authorization/roleAssignments/write` permission on your subscriptions in order to assign the role-based access control (RBAC) role for the Azure Virtual Desktop service principal on those subscriptions. This is part of the **User Access Administrator** and **Owner** built-in roles.
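As a hedged sketch, assuming Az.DesktopVirtualization 2.1.0 or later and placeholder names, enabling the feature on an existing host pool looks like this:

```powershell
# A minimal sketch; names are hypothetical placeholders. Requires the RBAC role
# described above to already be granted to the Azure Virtual Desktop service principal.
Update-AzWvdHostPool -ResourceGroupName "myResourceGroup" `
                     -Name "myHostPool" `
                     -StartVMOnConnect:$true
```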
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
With media optimization for Microsoft Teams, the Remote Desktop client handles audio and video locally for Teams calls and meetings.
Before you can use Microsoft Teams on Azure Virtual Desktop, you'll need to do these things:
- [Prepare your network](/microsoftteams/prepare-network/) for Microsoft Teams.
-- Install the [Remote Desktop client](./user-documentation/connect-windows-7-10.md) on a Windows 10, Windows 10 IoT Enterprise, Windows 11, or macOS 10.14 or later device that meets the [hardware requirements for Microsoft Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
+- Install the [Remote Desktop client](./users/connect-windows.md) on a Windows 10, Windows 10 IoT Enterprise, Windows 11, or macOS 10.14 or later device that meets the [hardware requirements for Microsoft Teams](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/).
- Connect to an Azure Virtual Desktop session host running Windows 10 or 11 Multi-session or Windows 10 or 11 Enterprise.
- The latest version of the [Microsoft Visual C++ Redistributable](https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads).
virtual-desktop Connect Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-android.md
- Title: Connect to Azure Virtual Desktop with the Android client - Azure
-description: How to connect to Azure Virtual Desktop using the Android client.
-- Previously updated : 03/25/2020---
-# Connect to Azure Virtual Desktop with the Android client
-
-> Applies to: Android 4.1 and later, Chromebooks with ChromeOS 53 and later.
-
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](../virtual-desktop-fall-2019/connect-android-2019.md).
-
-You can access Azure Virtual Desktop resources from your Android device with our downloadable client. You can also use the Android client on Chromebook devices that support the Google Play Store. This guide will tell you how to set up the Android client.
-
-## Install the Android client
-
-To get started, [download](https://play.google.com/store/apps/details?id=com.microsoft.rdc.androidx) and install the client on your Android device.
-
-## Subscribe to a feed
-
-Subscribe to the feed provided by your admin to get the list of managed resources you can access on your Android device.
-
-To subscribe to a feed:
-
-1. In the Connection Center, tap **+**, and then tap **Workspaces**.
-2. Enter the feed URL into the **Feed URL** field. The feed URL can be either a URL or an email address.
- - If you use a URL, use the one your admin gave you, normally <https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery>.
- - To use email, enter your email address. The client will search for a URL associated with your email address if your admin configured the server that way.
- - To connect through the US Gov portal, use <https://rdweb.wvd.azure.us/api/arm/feeddiscovery>.
-3. Tap **NEXT**.
-4. Provide your credentials when prompted.
- - For **User name**, give the user name with permission to access resources.
- - For **Password**, give the password associated with the user name.
- - You may also be prompted to provide additional factors if your admin configured authentication that way.
-
-After subscribing, the Connection Center should display the remote resources.
-
-Once subscribed to a feed, the feed's content will update automatically on a regular basis. Resources may be added, changed, or removed based on changes made by your administrator.
-
-## Next steps
-
-To learn more about how to use the Android client, check out [Get started with the Android client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-android/).
virtual-desktop Connect Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-ios.md
- Title: Connect to Azure Virtual Desktop with the iOS client - Azure
-description: How to connect to Azure Virtual Desktop using the iOS client.
-- Previously updated : 02/08/2020---
-# Connect to Azure Virtual Desktop with the iOS client
-
-> Applies to: iOS 14.0 or later. Compatible with iPhone, iPad, and iPod touch.
-
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](../virtual-desktop-fall-2019/connect-ios-2019.md).
-
-You can access Azure Virtual Desktop resources from your iOS device with our downloadable client. This guide will tell you how to set up the iOS client.
-
-## Install the iOS client
-
-To get started, [download](https://aka.ms/rdios) and install the client on your iOS device.
-
-## Subscribe to a feed
-
-Subscribe to the feed provided by your admin to get the list of managed resources you can access on your iOS device.
-
-To subscribe to a feed:
-
-1. In the Connection Center, tap **+**, and then tap **Add Workspace**.
-2. Enter the feed URL into the **Feed URL** field. The feed URL can be either a URL or an email address.
- - If you use a URL, use the one your admin gave you. Normally, the URL is <https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery>.
- - To use email, enter your email address. This tells the client to search for a URL associated with your email address if your admin configured the server that way.
- - To connect through the US Gov portal, use <https://rdweb.wvd.azure.us/api/arm/feeddiscovery>.
-3. Tap **Next**.
-4. Provide your credentials when prompted.
- - For **User name**, give the user name with permission to access resources.
- - For **Password**, give the password associated with the user name.
- - You may also be prompted to provide additional factors if your admin configured authentication that way.
-5. Tap **Save**.
-
-After this, the Connection Center should display the remote resources.
-
-Once subscribed to a feed, the feed's content will update automatically on a regular basis. Resources may be added, changed, or removed based on changes made by your administrator.
-
-## Next steps
-
-To learn more about how to use the iOS client, check out the [Get started with the iOS client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-ios/) documentation.
virtual-desktop Connect Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-macos.md
- Title: Connect to Azure Virtual Desktop with the macOS client - Azure
-description: How to connect to Azure Virtual Desktop using the macOS client.
-- Previously updated : 04/08/2020---
-# Connect to Azure Virtual Desktop with the macOS client
-
-> Applies to: macOS 10.12 or later
-
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](../virtual-desktop-fall-2019/connect-macos-2019.md).
-
-You can access Azure Virtual Desktop resources from your macOS devices with our downloadable client. This guide will tell you how to set up the client.
-
-## Install the client
-
-To get started, [download](https://apps.apple.com/app/microsoft-remote-desktop/id1295203466?mt=12) and install the client on your macOS device.
-
-## Subscribe to a feed
-
-Subscribe to the feed your admin gave you to get the list of managed resources available to you on your macOS device.
-
-To subscribe to a feed:
-
-1. Select **Add Workspace** on the main page to connect to the service and retrieve your resources.
-2. Enter the Feed URL. This can be a URL or email address:
- - If you use a URL, use the one your admin gave you. Normally, the URL is <https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery>.
- - To use email, enter your email address. This tells the client to search for a URL associated with your email address if your admin configured the server that way.
- - To connect through the US Gov portal, use <https://rdweb.wvd.azure.us/api/arm/feeddiscovery>.
-3. Select **Add**.
-4. Sign in with your user account when prompted.
-
-After you've signed in, you should see a list of available resources.
-
-Once you've subscribed to a feed, the feed's content will update automatically on a regular basis. Resources may be added, changed, or removed based on changes made by your administrator.
-
-## Next steps
-
-To learn more about the macOS client, check out the [Get started with the macOS client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac/) documentation.
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-microsoft-store.md
- Title: Connect to Azure Virtual Desktop with the Microsoft Store client - Azure
-description: How to connect to Azure Virtual Desktop using the Microsoft Store client.
-- Previously updated : 10/05/2020---
-# Connect with the Microsoft Store client
-
->Applies to: Windows 10.
-
-You can access Azure Virtual Desktop resources on devices with Windows 10.
-
-## Install the Microsoft Store client
-
-You can install the client for the current user, which doesn't require admin rights. Alternatively, your admin can install and configure the client so that all users on the device can access it.
-
-Once installed, the client can be launched from the Start menu by searching for Remote Desktop.
-
-To get started, [download and install the client from the Microsoft Store](https://www.microsoft.com/store/productId/9WZDNCRFJ3PS).
-
-## Subscribe to a workspace
-
-Subscribe to the workspace provided by your admin to get the list of managed resources you can access on your PC.
-
-To subscribe to a workspace:
-
-1. In the Connection Center screen, tap **+Add**, then tap **Workspaces**.
-2. Enter the Workspace URL into the Workspace URL field provided by your admin. The workspace URL can be either a URL or an email address.
-
- - If you're using a Workspace URL, use the URL your admin gave you.
- - If you're connecting from Azure Virtual Desktop, use one of the following URLs depending on which version of the service you're using:
- - Azure Virtual Desktop (classic): `https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`.
- - Azure Virtual Desktop: `https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery`.
-
-3. Tap **Subscribe**.
-4. Provide your credentials when prompted.
-5. After subscribing, the workspaces should be displayed in the Connection Center.
-
-Workspaces may be added, changed, or removed based on changes made by your administrator.
-
-## Next steps
-
-To learn more about how to use the Microsoft Store client, check out [Get started with the Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows/).
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-web.md
- Title: Connect to Azure Virtual Desktop with the web client - Azure
-description: How to connect to Azure Virtual Desktop using the web client.
-- Previously updated : 03/21/2022---
-# Connect to Azure Virtual Desktop with the web client
-
->[!IMPORTANT]
->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](../virtual-desktop-fall-2019/connect-web-2019.md).
-
-The web client lets you access your Azure Virtual Desktop resources from a web browser without the lengthy installation process.
-
->[!NOTE]
->The web client doesn't currently have mobile OS support.
-
-## Supported operating systems and browsers
-
->[!IMPORTANT]
->As of September 30, 2021, the Azure Virtual Desktop web client no longer supports Internet Explorer. We recommend that you use Microsoft Edge to connect to the web client instead. For more information, see our [blog post](https://aka.ms/WVDSupportIE11).
-
-While any HTML5-capable browser should work, we officially support the following operating systems and browsers:
-
-| Browser | Supported OS | Notes |
-|-|-||
-| Microsoft Edge | Windows, macOS, Linux, Chrome OS | Version 79 or later |
-| Apple Safari | macOS | Version 11 or later |
-| Mozilla Firefox | Windows, macOS, Linux | Version 55 or later |
-| Google Chrome | Windows, macOS, Linux, Chrome OS | Version 57 or later |
-
-## Access remote resources feed
-
-In a browser, navigate to the Azure Resource Manager-integrated version of the [Azure Virtual Desktop web client](https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html) and sign in with your user account.
-
->[!IMPORTANT]
->We plan to start automatically redirecting to a new web client URL at `https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` as of April 18th, 2022. The URLs at `https://rdweb.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` and `https://www.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` will still be available, but we recommend you update your bookmarks to the new URL at `https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` as soon as possible.
-
->[!NOTE]
->If you're using Azure Virtual Desktop (classic) without Azure Resource Manager integration, connect to your resources at <https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> instead.
->
-> If you're using the US Gov portal, use <https://rdweb.wvd.azure.us/arm/webclient/https://docsupdatetracker.net/index.html>.
->
-> To connect to the Azure China portal, use `https://rdweb.wvd.azure.cn/arm/webclient/https://docsupdatetracker.net/index.html>.
-
->[!NOTE]
->If you've already signed in with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
-
-After signing in, you should now see a list of resources. You can launch resources by selecting them like you would a normal app in the **All Resources** tab.
-
-## Next steps
-
-To learn more about how to use the web client, check out [Get started with the Web client](/windows-server/remote/remote-desktop-services/clients/remote-desktop-web-client).
virtual-desktop Connect Windows 7 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-windows-7-10.md
- Title: Connect to Azure Virtual Desktop with the Windows Desktop client - Azure
-description: How to connect to Azure Virtual Desktop using the Windows Desktop client.
-- Previously updated : 08/08/2022---
-adobe-target: true
--
-# Connect with the Windows Desktop client
-
-You can access Azure Virtual Desktop resources on devices with Windows 11, Windows 10, Windows 10 IoT Enterprise, and Windows 7 using the Windows Desktop client.
-
-> [!IMPORTANT]
-> This method doesn't support Windows 8 or Windows 8.1.
->
-> This method only supports Azure Resource Manager objects. To support objects without Azure Resource Manager, see [Connect with Windows Desktop (classic) client](../virtual-desktop-fall-2019/connect-windows-7-10-2019.md).
->
-> This method also doesn't support the RemoteApp and Desktop Connections (RADC) client or the Remote Desktop Connection (MSTSC) client.
->
-> Extended support for using Windows 7 to connect to Azure Virtual Desktop ends on January 10, 2023.
-
-## Install the Windows Desktop client
-
-Download the client based on your Windows version:
--- [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2068602)-- [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2098960)-- [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2098961)-
-During installation to determine access, select either:
--- **Install just for you**-- **Install for all users of this machine** (requires admin rights)-
-To launch the client after installation, use the **Start** menu and search for **Remote Desktop**.
-
-## Subscribe to a Workspace
-
-To subscribe to a Workspace, choose to:
--- Use a work or school account and have the client discover the resources available for you.-- Use the specific URL of the resource.-
-To launch the resource once subscribed, go to the **Connection Center** and double-click the resource.
-
-> [!TIP]
-> To launch a resource from the **Start** menu, you can find the folder with the Workspace name or enter the resource name in the search bar.
-
-### Use a user account
-
-1. Select **Subscribe** from the main page.
-2. Sign in with your user account when prompted.
-
-The resources grouped by workspace appear in the **Connection Center**.
-
- > [!NOTE]
- > The Windows client automatically defaults to Azure Virtual Desktop (classic).
- >
- > However, if the client detects additional Azure Resource Manager resources, it adds them automatically or notifies the user that they're available.
-
-### Use a specific URL
-
-1. Select **Subscribe with URL** from the main page.
-2. In the **Email or Workspace URL** field:
- - For Workspace URL, use the URL provided by your admin.
-
- |Available Resources|URL|
- |-|-|
- |Azure Virtual Desktop (classic)|`https://rdweb.wvd.microsoft.com/api/feeddiscovery/webfeeddiscovery.aspx`|
- |Azure Virtual Desktop|`https://rdweb.wvd.microsoft.com/api/arm/feeddiscovery`|
- |Azure Virtual Desktop (US Gov)|`https://rdweb.wvd.azure.us/api/arm/feeddiscovery`|
- |Azure Virtual Desktop (China)|`https://rdweb.wvd.azure.cn/api/arm/feeddiscovery`|
-
- - For email, use your email address.
-
- The client finds the URL associated with your email, provided your admin has enabled [email discovery](/windows-server/remote/remote-desktop-services/rds-email-discovery).
-
-3. Select **Next**.
-4. Sign in with your user account when prompted.
-
-The resources grouped by workspace appear in the **Connection Center**.
-
-## Next steps
-
-To learn more about how to use the client, check out [Get started with the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop/).
-
-If you're an admin interested in learning more about the client's features, check out [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin).
virtual-desktop Linux Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/linux-overview.md
- Title: Azure Virtual Desktop thin client support - Azure
-description: A brief overview of thin client support for Azure Virtual Desktop.
-- Previously updated : 04/01/2021---
-# Thin client support
-
-You can access Azure Virtual Desktop resources from your thin client devices with the [web client](connect-web.md) or the following supported clients, provided by our partners. We're working with a number of partners to enable supported Azure Virtual Desktop clients on other platforms.
-
-## Connect with your thin client device
-
-The following partners have approved Azure Virtual Desktop clients.
-
-|Partner|Partner documentation|Partner support|
-|:|:--|:--|
-|10ZiG |[10ZiG client documentation](https://www.10zig.com/about/microsoft-windows-virtual-desktop)|[10ZiG support](https://www.10zig.com/resources/support_faq)|
-|Dell |[Dell client documentation](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/thin-clients/dell-thinos-9-for-microsoft-wvd.pdf)|[Dell support](https://www.dell.com/support)|
-|HP |[HP client documentation](https://h20195.www2.hp.com/v2/GetDocument.aspx?docname=c07051097)|[HP support](https://support.hp.com/us-en/products/workstations-thin-clients)|
-|IGEL |[IGEL client documentation](https://www.igel.com/igel-solution-family/)|[IGEL support](https://www.igel.com/support/)|
-|NComputing |[NComputing client documentation](https://www.ncomputing.com/microsoft)|[NComputing support](https://www.ncomputing.com/support/support-options)|
-|Stratodesk |[Stratodesk client documentation](https://kb.stratodesk.com/microsoft-windows-virtual-desktop-wvd)|[Stratodesk support](https://www.stratodesk.com/support/)|
-
-## Next steps
-
-Check out our documentation for the following clients:
--- [Windows Desktop client](connect-windows-7-10.md)-- [Web client](connect-web.md)-- [Android client](connect-android.md)-- [macOS client](connect-macos.md)-- [iOS client](connect-ios.md)
virtual-desktop Client Features Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-android-chrome-os.md
+
+ Title: Use features of the Remote Desktop client for Android and Chrome OS - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop client for Android and Chrome OS when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Remote Desktop client for Android and Chrome OS when connecting to Azure Virtual Desktop
+
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop client for Android and Chrome OS. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS](connect-android-chrome-os.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, contact your admin.
+
+## Edit, refresh, or delete a workspace
+
+To edit, refresh or delete a workspace:
+
+1. Open the **RD Client** app on your device, then tap **Workspaces**.
+
+1. Tap the three dots to the right-hand side of the name of a workspace, where you'll see a menu with options for **Edit**, **Refresh**, and **Delete**.
+
+ - **Edit** allows you to specify a user account to use each time you connect to the workspace without having to enter the account each time. To learn more, see [Manage user accounts](#manage-user-accounts).
+ - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
+ - **Delete** removes the workspace from the Remote Desktop client.
+
+## User accounts
+
+### Add user credentials to a workspace
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically.
+
+1. Open the **RD Client** app on your device, then tap **Workspaces**.
+
+1. Tap the three dots to the right-hand side of the name of a workspace, then select **Edit**.
+
+1. For **User account**, tap the drop-down menu, then select **Add User Account** to add a new account, or select an account you've previously added.
+
+1. If you selected **Add User Account**, enter a username and password, then tap **Save**.
+
+1. Tap **Save** again to return to Workspaces.
+
+### Manage user accounts
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically. You can also remove accounts you no longer want to use.
+
+To save a user account:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **User Accounts**.
+
+1. Tap the plus icon (**+**).
+
+1. Enter a username and password, then tap **Save**. You can then add this account to a workspace by following the steps in [Add user credentials to a workspace](#add-user-credentials-to-a-workspace).
+
+1. Tap the back arrow (**<**) to return to Workspaces.
+
+To remove an account you no longer want to use:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **User Accounts**.
+
+1. Tap and hold the account you want to remove.
+
+1. Tap delete (the bin icon). Confirm you want to delete the account.
+
+1. Tap the back arrow (**<**) to return to Workspaces.
+
+## Display preferences
+
+### Set orientation
+
+You can set the orientation of the Remote Desktop client to landscape, portrait, or auto-adjust, where it will match the orientation of your device. Auto-adjust is supported when your remote session is running Windows 10 and Windows Server 2012 R2 or later. The window will maintain the same scaling and update the resolution to match the new orientation. This setting applies to all workspaces.
+
+To set the orientation:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **Display**.
+
+1. For orientation, tap your preference from **Auto-adjust**, **Lock to landscape** or **Lock to portrait**.
+
+1. Tap the back arrow (**<**) to return to Workspaces.
+
+### Set display resolution
+
+You can choose the resolution for your remote session from a predefined list. This setting applies to all workspaces. If you change the resolution while connected, you'll need to reconnect to the remote session for the change to take effect.
+
+To set the resolution:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **Display**.
+
+1. You can tap **Default**, **Match this device**, or tap **+ Customized** for a drop-down list of predefined resolutions. If you choose a customized resolution, you can also choose the scaling percentage.
+
+1. Tap the back arrow (**<**) to return to Workspaces.
+
+## DeX
+
+You can use *Samsung DeX* with a remote session, which enables you to extend your Android or Chromebook device's display to a larger monitor or TV.
+
+## Connection bar and session overview menu
+
+When you've connected to Azure Virtual Desktop, you'll see a bar at the top, which is called the **connection bar**. This gives you quick access to a zoom control, represented by a magnifying glass icon, and the ability to toggle between showing and hiding the on-screen keyboard. You can move the connection bar around the top edge of the display by tapping and dragging it to where you want it.
+
+The middle icon in the connection bar is of the Remote Desktop logo. If you tap this, it shows the *session overview* screen. The *session overview* screen enables you to:
+
+- Go to the *Connection Center* using the **Home** icon.
+- Switch inputs between touch and the mouse pointer (when not using a separate mouse).
+- Switch between active desktops and apps.
+- Disconnect all active sessions.
+
+You can return to an active session from the Connection Center by using the **Return Arrow** button in its bottom right corner.
+
+## Input methods
+
+The Remote Desktop client supports native touch gestures, keyboard, mouse, and trackpad.
+
+### Use touch gestures and mouse modes in a remote session
+
+You can use touch gestures to replicate mouse actions in your remote session. Two mouse modes are available:
+
+- **Direct touch**: Where you tap on the screen is equivalent to clicking the mouse in that position. The mouse pointer isn't shown on screen.
+- **Mouse pointer**: The mouse pointer is shown on screen. When you tap the screen and move your finger, the mouse pointer will move.
+
+If you use Windows 10 or later with Azure Virtual Desktop, native Windows touch gestures are supported in direct touch mode.
+
+The following table shows which mouse operations map to which gestures in specific mouse modes:
+
+| Mouse mode | Mouse operation | Gesture |
+|:--|:--|:--|
+| Direct touch | Left-click | Tap with one finger |
+| Direct touch | Right-click | Tap and hold with one finger |
+| Mouse pointer | Left-click | Tap with one finger |
+| Mouse pointer | Left-click and drag | Double-tap and hold with one finger, then drag |
+| Mouse pointer | Right-click | Tap with two fingers, or tap and hold with one finger |
+| Mouse pointer | Right-click drag | Double-tap and hold with two fingers, then drag |
+| Mouse pointer | Mouse wheel | Tap and hold with two fingers, then drag up or down |
+| Mouse pointer | Zoom | With two fingers, pinch to zoom out and spread fingers apart to zoom in |
+
+#### Input Method Editor
+
+The Remote Desktop client supports Input Method Editor (IME) in a remote session for input sources. The local Android or Chrome OS IME experience will be accessible in the remote session.
+
+> [!IMPORTANT]
+> For an IME to work, the input mode needs to be in Unicode Mode. To learn more, see [Keyboard modes](#keyboard-modes).
+
+### Keyboard
+
+You can use some familiar keyboard shortcuts when using a keyboard with your Android or Chrome OS device and Azure Virtual Desktop, for example using <kbd>CTRL</kbd>+<kbd>C</kbd> for copy.
+
+Some Windows keyboard shortcuts are also used as shortcuts on Android and Chrome OS devices, for example using <kbd>ALT</kbd>+<kbd>TAB</kbd> to switch between open applications. By default, these shortcuts won't be passed through to a remote session. Depending on your Android or Chrome OS device, you may be able to disable certain shortcuts locally, so that they're passed through to a remote session instead.
+
+#### Keyboard modes
+
+There are two different modes you can use that control how keyboard input is interpreted in a remote session: *Scancode* and *Unicode*.
+
+With *Scancode*, user input is redirected by sending key press *up* and *down* information to the remote session. Each key is identified by its physical position on the keyboard and uses the keyboard layout of the remote session, not the keyboard of the local device. For example, scancode 31 is the key next to <kbd>Caps Lock</kbd>. On a US keyboard this key would produce the character "A", while on a French keyboard this key would produce the character "Q".
+
+With *Unicode*, user input is redirected by sending each character to the remote session. When a key is pressed, the locale of the user is used to translate this input to a character. This can be as simple as the character "a" by simply pressing the "a" key, but it can enable an Input Method Editor (IME), allowing you to input multiple keystrokes to create more complex characters, such as for Chinese and Japanese input sources. Below are some examples of when to use each mode.
+
+When to use *Scancode*:
+
+- Dealing with characters that aren't printable, such as <kbd>Arrow Up</kbd> or shortcut combinations.
+
+- Certain applications that don't accept Unicode character input, such as Hyper-V VMConnect (for example, there's no way to input a BitLocker password), VMware Remote Console, and applications written using the *Qt framework* (for example, R Studio, TortoiseHg, and QtCreator).
+
+- Applications that utilize scancode input for actions, such as <kbd>Space bar</kbd> to check or uncheck a checkbox, or individual keys as shortcuts, for example in browser-based applications.
+
+When to use *Unicode*:
+
+- To avoid a mismatch in expectations. A user who expects the keyboard to behave in a certain way can run into issues where there are differences for the same locale/region layout.
+
+- When the keyboard layout used on the client might not be available on the server.
+
+By default, the Remote Desktop client uses *Unicode*. To switch between keyboard modes:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **General**.
+
+1. Toggle **Use scancode input when available** to **On** to use *scancode*, or **Off** to use *Unicode*.
+
+## Redirections
+
+You can allow the remote session to access the clipboard on your local device. When you connect to a remote session, you'll be prompted whether you want to allow access to local resources. The Remote Desktop client supports copying and pasting text only.
+
+To use the clipboard between your local device and your remote session:
+
+1. Open the **RD Client** app on your device.
+
+1. Tap one of the icons to launch a session to Azure Virtual Desktop.
+
+1. For the prompt **Make sure you trust the remote PC before you connect**, check the box for **Clipboard**, then select **Connect**.
+
+## General app settings
+
+To set other general settings of the Remote Desktop app to use with Azure Virtual Desktop:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **General**.
+
+1. You can change the following settings:
+
+ | Setting | Value | Description |
+ |--|--|--|
+ | Show desktop previews | Toggle **On** or **Off** | Show thumbnails of remote sessions. |
+ | Use HTTP Proxy | Toggle **On** or **Off** | Use the HTTP proxy specified in Android or Chrome OS network settings. |
+ | Help improve Remote Desktop | Toggle **On** or **Off** | Send anonymous data to Microsoft. |
+ | Theme | Select from **Light**, **Dark**, or **System** | Set the appearance of the Remote Desktop client. |
+
+## Test the beta client
+
+If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available.
+
+> [!NOTE]
+> The beta client shouldn't be used in production environments.
+
+You can download the beta client for Android and Chrome OS from the [Google Play Store](https://play.google.com/apps/testing/com.microsoft.rdc.androidx). You'll need to give consent to access preview versions and download the client. You'll receive preview versions directly through the Google Play Store.
+
+## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop client for Android and Chrome OS, you can do so in the app:
+
+1. Open the **RD Client** app on your device.
+
+1. In the top left-hand corner, tap the menu icon (three horizontal lines), then tap **General**.
+
+1. Tap **Submit feedback**, which will open the feedback page in your browser.
+
+## Next steps
+
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Client Features Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-ios-ipados.md
+
+ Title: Use features of the Remote Desktop client for iOS and iPadOS - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop
+
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop client for iOS and iPadOS. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS](connect-ios-ipados.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+
+## Edit, refresh, or delete a workspace
+
+To edit, refresh or delete a workspace:
+
+1. Open the **RD Client** application on your device, then tap **Workspaces**.
+
+1. Tap and hold the name of a workspace and you'll see a menu with options for **Edit**, **Refresh**, and **Delete**. You can also pull down to refresh all workspaces.
+
+ - **Edit** allows you to specify a user account to use each time you connect to the workspace without having to enter the account each time. To learn more, see [Manage user accounts](#manage-user-accounts).
+ - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
+ - **Delete** removes the workspace from the Remote Desktop client.
+
+## User accounts
+
+Learn how to add user credentials to a workspace and manage them.
+
+### Add user credentials to a workspace
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically.
+
+1. Open the **RD Client** application on your device, then tap **Workspaces**.
+
+1. Tap and hold the name of a workspace, then select **Edit**.
+
+1. Tap **User account**, then select **Add User Account** to add a new account, or select an account you've previously added.
+
+1. If you selected **Add User Account**, enter a username, password, and optionally a friendly name, then tap the back arrow (**<**).
+
+1. Tap the **X** mark to return to Workspaces.
+
+### Manage user accounts
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically. You can also remove accounts you no longer want to use.
+
+To save a user account:
+
+1. Open the **RD Client** application on your device.
+
+1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**.
+
+1. Tap **User Accounts**, then tap **Add User Account**.
+
+1. Enter a username, password, and optionally a friendly name, then tap the back arrow (**<**). You can then add this account to a workspace by following the steps in [Add user credentials to a workspace](#add-user-credentials-to-a-workspace).
+
+1. Tap the back arrow (**<**), then tap the **X** mark.
+
+To remove an account you no longer want to use:
+
+1. Open the **RD Client** application on your device.
+
+1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**.
+
+1. Tap **User Accounts**, then select the account you want to remove.
+
+1. Tap **Delete**. The account will be removed immediately.
+
+1. Tap the back arrow (**<**), then tap the **X** mark.
+
+## Display preferences
+
+Learn how to set display preferences, such as orientation and resolution.
+
+### Set orientation
+
+You can set the orientation of the Remote Desktop client to landscape, portrait, or auto-adjust, where it will match the orientation of your device. Auto-adjust is supported when your remote session is running Windows 10 and Windows Server 2012 R2 or later. The window will maintain the same scaling and update the resolution to match the new orientation. This setting applies to all workspaces.
+
+To set the orientation:
+
+1. Open the **RD Client** application on your device.
+
+1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**.
+
+1. Tap **Display**, then tap **Orientation**.
+
+1. Tap your preference from **Auto-adjust**, **Lock to Landscape** or **Lock to Portrait**.
+
+1. You can also set **Use Home Indicator Area**. Toggling this on will show graphics from the remote session in the area at the bottom of the screen occupied by the Home indicator. This setting only applies in landscape orientation.
+
+1. Tap the back arrow (**<**), then tap the **X** mark.
+
+### Set display resolution
+
+You can choose the resolution for your remote session from a predefined list. This setting applies to all workspaces.
+
+To set the resolution:
+
+1. Open the **RD Client** application on your device.
+
+1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**.
+
+1. Tap **Display**.
+
+1. Tap a resolution from the list.
+
+1. You can also set **Use Home Indicator Area**. Toggling this on will show graphics from the remote session in the area at the bottom of the screen occupied by the Home indicator. This setting only applies in landscape orientation. For more information about display orientation, see [Set orientation](#set-orientation).
+
+1. Tap the back arrow (**<**), then tap the **X** mark.
+
+## Connection bar and session overview menu
+
+When you've connected to Azure Virtual Desktop, you'll see a bar at the top, which is called the **connection bar**. This gives you quick access to a zoom control, represented by a magnifying glass icon, and the ability to toggle between showing and hiding the on-screen keyboard. You can move the connection bar around the top and side edges of the display by tapping and dragging it to where you want it. If you tap and hold the zoom control, you can use the slider to choose the zoom percentage. If you use a keyboard, you can also show and hide the connection bar by pressing <kbd>Shift</kbd>+<kbd>CMD</kbd>+<kbd>Space bar</kbd>.
+
+The middle icon in the connection bar is of the Remote Desktop logo. If you tap this, it shows the *session overview* screen. The *session overview* screen enables you to:
+
+- Go to the *Connection Center* using the **Home** icon.
+- Switch inputs between touch and the mouse pointer (when not using a separate mouse).
+- Switch between active desktops and apps.
+- Disconnect all active sessions.
+
+Pressing <kbd>Tab</kbd> on a keyboard will switch between the PCs and Apps tab in the *session overview* menu. You can also use arrow keys to navigate and select an active session to open.
+
+You can return to an active session from the Connection Center by using the **Return Arrow** button in its bottom right corner.
+
+## Input methods
+
+The Remote Desktop client supports native touch gestures, keyboard, mouse, and trackpad.
+
+### Use touch gestures and mouse modes in a remote session
+
+You can use touch gestures to replicate mouse actions in your remote session. Two mouse modes are available:
+
+- **Direct touch**: Where you tap on the screen is equivalent to clicking the mouse in that position. The mouse pointer isn't shown on screen.
+- **Mouse pointer**: The mouse pointer is shown on screen. When you tap the screen and move your finger, the mouse pointer will move.
+
+If you use Windows 10 or later with Azure Virtual Desktop, native Windows touch gestures are supported in direct touch mode.
+
+The following table shows which mouse operations map to which gestures in specific mouse modes:
+
+| Mouse mode | Mouse operation | Gesture |
+|:--|:--|:--|
+| Direct touch | Left-click | Tap with one finger |
+| Direct touch | Right-click | Tap and hold with one finger |
+| Mouse pointer | Left-click | Tap with one finger |
+| Mouse pointer | Left-click and drag | Double-tap and hold with one finger, then drag |
+| Mouse pointer | Right-click | Tap with two fingers, or tap and hold with one finger |
+| Mouse pointer | Right-click drag | Double-tap and hold with two fingers, then drag |
+| Mouse pointer | Mouse wheel | Tap and hold with two fingers, then drag up or down |
+| Mouse pointer | Zoom | With two fingers, pinch to zoom out and spread fingers apart to zoom in |
+
+### Keyboard
+
+You can use familiar keyboard shortcuts when using a keyboard with your iPad or iPhone and Azure Virtual Desktop. Mac and Windows keyboard layouts differ slightly - for example, the <kbd>Command</kbd> key on a Mac keyboard equals the <kbd>Windows</kbd> key on a Windows keyboard. To help with these differences when using keyboard shortcuts, the Remote Desktop client automatically maps common shortcuts found in iOS and iPadOS so they'll work in Windows. These are:
+
+| Key combination | Function |
+|--||
+| <kbd>CMD</kbd>+<kbd>C</kbd> | Copy |
+| <kbd>CMD</kbd>+<kbd>X</kbd> | Cut |
+| <kbd>CMD</kbd>+<kbd>V</kbd> | Paste |
+| <kbd>CMD</kbd>+<kbd>A</kbd> | Select all |
+| <kbd>CMD</kbd>+<kbd>Z</kbd> | Undo |
+| <kbd>CMD</kbd>+<kbd>F</kbd> | Find |
+| <kbd>CMD</kbd>+<kbd>+</kbd> | Zoom in |
+| <kbd>CMD</kbd>+<kbd>-</kbd> | Zoom out |
+
+In addition, the <kbd>Alt</kbd> key to the right of the space bar on a Mac keyboard equals the <kbd>Alt Gr</kbd> key in Windows.
+
+### Mouse and trackpad
+
+You can use a mouse or trackpad with the Remote Desktop client. However, support for these devices depends on whether you're using iOS or iPadOS. iPadOS natively supports a mouse and trackpad as an input method, whereas support can only be enabled in iOS with *AssistiveTouch*. For more information, see [Connect a Bluetooth mouse or trackpad to your iPad](https://support.apple.com/HT211009) or [How to use a pointer device with AssistiveTouch on your iPhone, iPad, or iPod touch](https://support.apple.com/HT210546).
+
+## Redirections
+
+The Remote Desktop client enables you to make your local clipboard available in your remote session. By default, text you copy on your iOS or iPadOS device is available to paste in your remote session, and text you copy in your remote session is available to paste on your iOS or iPadOS device.
+
+## General app settings
+
+To set other general settings of the Remote Desktop app to use with Azure Virtual Desktop:
+
+1. Open the **RD Client** application on your device.
+
+1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**.
+
+1. You can change the following settings:
+
+ | Setting | Value | Description |
+ |--|--|--|
+ | Show PC Thumbnails | Toggle **On** or **Off** | Show thumbnails of remote sessions. |
+ | Allow Display Auto-Lock | Toggle **On** or **Off** | Allow your device to turn off its screen. |
+ | Use HTTP Proxy | Toggle **On** or **Off** | Use the HTTP proxy specified in iOS/iPadOS network settings. |
+ | Appearance | Select from **Light**, **Dark**, or **System** | Set the appearance of the Remote Desktop client. |
+ | Send Data to Microsoft | Toggle **On** or **Off** | Help improve the Remote Desktop client by sending anonymous data to Microsoft. |
+
+## Test the beta client
+
+If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available.
+
+> [!NOTE]
+> The beta client shouldn't be used in production.
+
+You can download the beta client for iOS and iPadOS from TestFlight. To get started, see [Microsoft Remote Desktop for iOS](https://testflight.apple.com/join/vkLIflUJ).
+
+## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop client for iOS and iPadOS, you can do so in the app:
+
+1. Open the **RD Client** application on your device.
+
+1. In the top left-hand corner, tap the menu icon (the circle with three dots inside), then tap **Settings**.
+
+1. Tap **Submit feedback**, which will open the feedback page in your browser.
+
+## Next steps
+
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Client Features Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-macos.md
+
+ Title: Use features of the Remote Desktop client for macOS - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop client for macOS when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Remote Desktop client for macOS when connecting to Azure Virtual Desktop
+
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop client for macOS. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for macOS](connect-macos.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Some of the settings in this article can be overridden by your admin, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+
+## Edit, refresh, or delete a workspace
+
+To edit, refresh or delete a workspace:
+
+1. Open the **Microsoft Remote Desktop** application on your device, then select **Workspaces**.
+
+1. Right-click the name of a workspace or hover your mouse cursor over it and you'll see a menu with options for **Edit**, **Refresh**, and **Delete**.
+
+ - **Edit** allows you to specify a user account to use each time you connect to the workspace without having to enter the account each time. To learn more, see [Manage user accounts](#manage-user-accounts).
+ - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
+ - **Delete** removes the workspace from the Remote Desktop client.
+
+## User accounts
+
+### Add user credentials to a workspace
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically.
+
+1. Open the **Microsoft Remote Desktop** application on your device, then select **Workspaces**.
+
+1. Right-click the name of a workspace, then select **Edit**.
+
+1. For **User account**, select **Add User Account...** to add a new account, or select an account you've previously added.
+
+1. If you selected **Add User Account...**, enter a username, password, and optionally a friendly name, then select **Add**.
+
+1. Select **Save**.
+
+### Manage user accounts
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically. You can also remove accounts you no longer want to use.
+
+To save a user account:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Preferences**.
+
+1. Select the **User Accounts** tab, then the **+** (plus) icon.
+
+1. Enter a username, password, and optionally a friendly name, then select **Add**. You can then add this account to a workspace by following the steps in [Add user credentials to a workspace](#add-user-credentials-to-a-workspace).
+
+1. Close Preferences.
+
+To remove an account you no longer want to use:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Preferences**.
+
+1. Select the **User Accounts** tab, then select the account you want to remove.
+
+1. Select the **-** (minus) icon, then confirm you want to delete the user account.
+
+1. Close Preferences.
+
+## Display preferences
+
+### Add, remove, or restore display resolutions
+
+To add, remove or restore display resolutions:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Preferences**.
+
+1. Select the **Resolutions** tab.
+
+1. To add a custom resolution, select the **+** (plus) icon and enter in the **width** and **height** in pixels, then select **Add**.
+
+1. To remove a resolution, select the resolution you want to remove, then select the **-** (minus) icon. Confirm you want to delete the resolution by selecting **Delete**.
+
+1. To restore default resolutions, select **Restore Defaults**.
+
+### Display settings for each remote desktop
+
+If you want to use different display settings from those specified by your admin, you can configure custom settings.
+
+1. Open the **Microsoft Remote Desktop** application on your device, then select **Workspaces**.
+
+1. Right-click the name of a desktop, for example **SessionDesktop**, then select **Edit**.
+
+1. Check the box for **Use custom settings**.
+
+1. On the **Display** tab, you can select from the following options:
+
+| Option | Description |
+|--|--|
+| Resolution | Select the resolution to use for the desktop. You can select from a predefined list, or add custom resolutions. |
+| Use all monitors | Automatically use all monitors for the desktop. If you have multiple monitors, all of them will be used.<br /><br />For information on limits, see [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md). |
+| Start session in full screen | The desktop will be displayed full screen, rather than windowed. |
+| Fit session to window | When you resize the window, the scaling of the desktop will automatically adjust to fit the new window size. The resolution will stay the same. |
+| Color quality | The quality and number of colors used. Higher quality will use more bandwidth. |
+| Optimize for Retina displays | Scale the desktop to match the scaling used on the Mac client. This will use four times more bandwidth. |
+| Update the session resolution on resize | When you resize the window, the resolution of the desktop will automatically change to match. |
+
+### Displays have separate spaces
+
+macOS allows you to create extra desktops, called *Spaces*, where only the windows in that space are visible. This is set in macOS **System Preferences** > **Mission Control** > **Displays have separate Spaces**. If this setting is disabled, macOS uses the same desktop across all monitors.
+
+When separate Spaces are disabled, if the Remote Desktop client has **Start session in full screen** enabled, but **Use all monitors** disabled, only one monitor will be used and the others will be blank. Either enable **Use all monitors** so the remote desktop is displayed on all monitors, or enable **Displays have separate spaces** in Mission Control so that the remote desktop will be displayed full screen on one monitor, but others will show the macOS desktop.
+
+### Sidecar
+
+You can use *Apple Sidecar* during a remote session, allowing you to extend a Mac desktop display using an iPad as an extra monitor.
+
+## Input methods
+
+You can use a built-in or external Mac keyboard, trackpad and mouse to control desktops or apps.
+
+### Keyboard
+
+Mac and Windows keyboard layouts differ slightly - for example, the <kbd>Command</kbd> key on a Mac keyboard equals the <kbd>Windows</kbd> key on a Windows keyboard. To help with these differences when using keyboard shortcuts, the Remote Desktop client automatically maps common shortcuts found in macOS so they'll work in Windows. These are:
+
+| Key combination | Function |
+|--||
+| <kbd>CMD</kbd>+<kbd>C</kbd> | Copy |
+| <kbd>CMD</kbd>+<kbd>X</kbd> | Cut |
+| <kbd>CMD</kbd>+<kbd>V</kbd> | Paste |
+| <kbd>CMD</kbd>+<kbd>A</kbd> | Select all |
+| <kbd>CMD</kbd>+<kbd>Z</kbd> | Undo |
+| <kbd>CMD</kbd>+<kbd>F</kbd> | Find |
+
+In addition, the <kbd>Alt</kbd> key to the right of the space bar on a Mac keyboard equals the <kbd>Alt Gr</kbd> key in Windows.
+
+### Keyboard language
+
+By default, remote desktops and apps will use the same keyboard language, also known as *locale*, as your Mac. For example, if your Mac uses **en-GB** for *English (United Kingdom)*, that will also be used by Windows in the remote session.
+
+There are some Mac-specific or custom keyboard layouts for which an exact match may not be available on the version of Windows you're connecting to. In that case, your Mac keyboard layout is matched to the best available layout in the remote session.
+
+If your keyboard layout is set to a variation of a language, such as *Canadian-French*, and if the remote session can't map you to that exact variation, it will map the closest available language instead. For example, if you chose the *Canadian-French* locale and it wasn't available, the closest language would be *French*. However, some of the Mac keyboard shortcuts you're used to using on your Mac may not work as expected in the remote session.
+
+There are some scenarios where characters in the remote session don't match the characters you typed on the Mac keyboard:
+
+- Using a keyboard that the remote session doesn't recognize. When Azure Virtual Desktop doesn't recognize the keyboard, it defaults to the language last used with the remote PC.
+- Connecting to a previously disconnected session from Azure Virtual Desktop where that session uses a different keyboard language than the language you're currently trying to use.
+- Needing to switch keyboard modes between Unicode and Scancode. To learn more, see [Keyboard modes](#keyboard-modes).
+
+You can manually set which keyboard language to use in the remote session by following the steps at [Managing display language settings in Windows](https://support.microsoft.com/windows/manage-display-language-settings-in-windows-219f28b0-9881-cd4c-75ca-dba919c52321). You might need to close and restart the application you're currently using for the keyboard changes to take effect.
+
+#### Keyboard modes
+
+There are two different modes you can use that control how keyboard input is interpreted in a remote session: *Scancode* and *Unicode*.
+
+With *Scancode*, user input is redirected by sending key press *up* and *down* information to the remote session. Each key is identified by its physical position on the keyboard and uses the keyboard layout of the remote session, not the keyboard of the local device. For example, scancode 31 is the key next to <kbd>Caps Lock</kbd>. On a US keyboard this key would produce the character "A", while on a French keyboard this key would produce the character "Q".
+
+With *Unicode*, user input is redirected by sending each character to the remote session. When a key is pressed, the locale of the user is used to translate this input to a character. This can be as simple as the character "a" by simply pressing the "a" key, but it can enable an Input Method Editor (IME), allowing you to input multiple keystrokes to create more complex characters, such as for Chinese and Japanese input sources. Below are some examples of when to use each mode.
+
+When to use *Scancode*:
+
+- Dealing with characters that aren't printable, such as <kbd>Arrow Up</kbd> or shortcut combinations.
+
+- Certain applications that don't accept Unicode character input, such as Hyper-V VMConnect (for example, there's no way to input a BitLocker password), VMware Remote Console, and applications written using the *Qt framework* (for example, R Studio, TortoiseHg, and QtCreator).
+
+- Applications that utilize scancode input for actions, such as <kbd>Space bar</kbd> to check or uncheck a checkbox, or individual keys as shortcuts, for example in browser-based applications.
+
+When to use *Unicode*:
+
+- To avoid a mismatch in expectations. A user who expects the keyboard to behave like a Mac keyboard and not like a PC keyboard can run into issues where Mac and PC have differences for the same locale/region layout.
+
+- When the keyboard layout used on the client might not be available on the server.
+
+To switch between keyboard modes:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Connections**, then select **Keyboard Mode**.
+
+1. Choose **Scancode** or **Unicode**.
+
+Alternatively, you can use the following keyboard shortcut to select each mode:
+
+- Scancode: <kbd>Ctrl</kbd>+<kbd>Command</kbd>+<kbd>K</kbd>
+- Unicode: <kbd>Ctrl</kbd>+<kbd>Command</kbd>+<kbd>U</kbd>
+
+#### Input Method Editor
+
+The Remote Desktop client supports Input Method Editor (IME) in a remote session for input sources. The local macOS IME experience will be accessible in the remote session.
+
+> [!IMPORTANT]
+> For an IME to work, the input mode needs to be in Unicode Mode. To learn more, see [Keyboard modes](#keyboard-modes).
+
+### Mouse and trackpad
+
+You can use a mouse or trackpad with the Remote Desktop client. To use right-click (secondary click), you may need to configure macOS to enable it, or you can plug in a standard PC two-button USB mouse. To enable right-click in macOS:
+
+1. Open **System Preferences**.
+
+1. For the Apple Magic Mouse, select **Mouse**, then check the box for **Secondary click**.
+
+1. For the Apple Magic Trackpad or a MacBook trackpad, select **Trackpad**, then check the box for **Secondary click**.
+
+## Redirections
+
+### Folder redirection
+
+The Remote Desktop client enables you to make local folders available in your remote session. This is known as *folder redirection*. This means you can open files from and save files to your Mac with your remote session. Folders can also be redirected as read-only. Redirected folders appear in the remote session as a network drive in Windows Explorer.
+
+#### All remote sessions
+
+To enable folder redirection for all remote desktops:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Preferences**.
+
+1. Select the **General** tab, then for **If folder redirection is enabled for RDP files or managed resources, redirect:**, select **Choose Folder...**.
+
+1. Navigate to the folder you want to be available in all your remote desktop sessions, then select **Choose**.
+
+1. Optionally, check the box if you want to make the folder available as read-only, then close the **Preferences** window.
+
+#### Each remote resource
+
+If you want to redirect different folders for individual remote desktops, rather than using the same folders for all of your remote sessions, you can configure custom settings. To enable folder redirection for a remote desktop individually:
+
+1. Open the **Microsoft Remote Desktop** application on your device, then select **Workspaces**.
+
+1. Right-click the name of a desktop, for example **SessionDesktop**, then select **Edit**.
+
+1. Check the box for **Use custom settings**.
+
+1. On the **Folders** tab, check the box **Redirect folders**, then select the **+** (plus) icon.
+
+1. Navigate to the folder you want to be available when accessing this remote resource, then select **Open**. You can add multiple folders by repeating the previous step and this step.
+
+1. Optionally, check the box if you want to make the folder available as read-only. Then select **Save**.
+
+### Redirect devices, audio, and clipboard
+
+The Remote Desktop client can make your local clipboard and local devices available in your remote desktop where you can copy and paste text, images, and files. You can also redirect the audio from the remote desktop to your local device. You can redirect:
+
+- Printers
+- Smart cards
+- Clipboard
+- Microphones
+- Cameras
+
+To enable redirection of devices, audio, and the clipboard:
+
+1. Open the **Microsoft Remote Desktop** application on your device, then select **Workspaces**.
+
+1. Right-click the name of a desktop, for example **SessionDesktop**, then select **Edit**.
+
+1. Check the box for **Use custom settings**.
+
+1. On the **Devices & Audio** tab, check the box for each device you want to use in the remote desktop.
+
+1. Select whether you want to play sound **On this computer**, **On the remote PC**, or **Never**.
+
+1. Select **Save**.
+
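+The redirections in this section correspond to standard RDP file properties, which your admin can also set centrally in Azure Virtual Desktop. The following is a minimal, hypothetical sketch of such properties, where `audiomode:i:0` plays sound on the local device, `audiocapturemode:i:1` redirects the microphone, and the remaining entries redirect the clipboard, printers, smart cards, and cameras; the exact set and values your admin uses may differ:
+
+```
+audiomode:i:0
+audiocapturemode:i:1
+redirectclipboard:i:1
+redirectprinters:i:1
+redirectsmartcards:i:1
+camerastoredirect:s:*
+```
+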
+## Microsoft Teams optimizations
+
+You can use Microsoft Teams on Azure Virtual Desktop to chat, collaborate, make calls, and join meetings. With media optimization, the Remote Desktop client handles audio and video locally for Teams calls and meetings. For more information, see [Use Microsoft Teams on Azure Virtual Desktop](../teams-on-avd.md).
+
+Starting with version 10.7.7 of the Remote Desktop client for macOS, optimizations for Teams are enabled by default. If you need to enable optimizations for Microsoft Teams manually:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Preferences**.
+
+1. Select the **General** tab, then check the box **Enable optimizations for Microsoft Teams**.
+
+## General app settings
+
+To set other general settings of the Remote Desktop app to use with Azure Virtual Desktop:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Preferences**.
+
+1. Select the **General** tab. You can change the following settings:
+
+ | Setting | Value | Description |
+ |--|--|--|
+ | Show PC thumbnails | Check **On** or **Off** | Show thumbnails of remote sessions. |
+ | Help improve Remote Desktop | Check **On** or **Off** | Send anonymous data to Microsoft. |
+ | Use Mac shortcuts for copy, cut, paste and select all, undo, and find | Check **On** or **Off** | Use these shortcuts in remote sessions. |
+ | Use system proxy configuration | Check **On** or **Off** | Use the proxy specified in macOS network settings. |
+ | Graphics interpolation level | Select from **Automatic**, **None**, **Low**, **Medium**, or **High** | As the interpolation level is increased, most text and graphics appear smoother, but rendering performance will decrease (if hardware acceleration is disabled). |
+ | Use hardware acceleration when possible | Check **On** or **Off** | Use graphics hardware to render graphics. |
+
+## Admin link to subscribe to a workspace
+
+The Remote Desktop client for macOS supports the *ms-rd* Uniform Resource Identifier (URI) scheme. This enables you to give users a link that automatically subscribes them to a workspace, rather than having them add the workspace manually in the Remote Desktop client. An example of the link format is shown after the following steps.
+
+To subscribe to a workspace with a link:
+
+1. Open the following link in a web browser: `ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com`.
+
+1. If you see the prompt **This site is trying to open Microsoft Remote Desktop.app**, select **Open**. The **Microsoft Remote Desktop** application should open and automatically show a sign-in prompt.
+
+1. Enter your user account, then select **Sign in**. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
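+The general shape of the link combines the *ms-rd* scheme, the `subscribe` action, and your workspace URL. A minimal sketch of the format, shown first as a template and then with the same workspace URL used in the steps above:
+
+```
+ms-rd:subscribe?url=<workspace URL>
+ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com
+```
+
+For example, an admin could embed this link in an email or intranet page so that selecting it opens the Remote Desktop client and starts the subscription automatically.
+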
+## Test the beta client
+
+If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available.
+
+> [!NOTE]
+> The beta client shouldn't be used in production.
+
+You can download the beta client for macOS from our [preview channel on AppCenter](https://aka.ms/rdmacbeta). You don't need to create an account or sign into AppCenter to download the beta client.
+
+If you already have the beta client, you can check for updates to ensure you have the latest version by following these steps:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Microsoft Remote Desktop**, then select **Check for updates**.
+
+## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop client for macOS, you can do so in the app:
+
+1. Open the **Microsoft Remote Desktop** application on your device.
+
+1. From the macOS menu bar, select **Help**, then select **Submit Feedback**.
+
+## Next steps
+
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Client Features Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-microsoft-store.md
+
+ Title: Use features of the Remote Desktop client for Windows (Microsoft Store) - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop
+
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop client for Windows (Microsoft Store). If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store)](connect-microsoft-store.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+
+## Refresh or unsubscribe from a workspace or see its details
+
+To refresh or unsubscribe from a workspace or see its details:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select the three dots to the right-hand side of the name of a workspace, where you'll see a menu with options for **Details**, **Refresh**, and **Unsubscribe**.
+
+ - **Details** shows you details about the workspace, such as:
+ - The name of the workspace.
+ - The URL and username used to subscribe.
+ - The number of desktops and apps.
+ - The date and time of the last refresh.
+ - The status of the last refresh.
+ - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
+ - **Unsubscribe** removes the workspace from the Remote Desktop client.
+
+## User accounts
+
+### Add user credentials to a workspace
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically.
+
+1. Open the **Remote Desktop** application on your device, then select **Workspaces**.
+
+1. Select one of the icons to launch a session to Azure Virtual Desktop.
+
+1. When prompted to choose an account, select **+** for *User Account* to add a new account, or select an account you've previously added.
+
+1. If you selected to add an account, enter a username, password, and optionally a friendly name, then select **Add**.
+
+1. Select **Save**, then select **Connect**.
+
+### Manage user accounts
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically. You can also edit a saved account or remove accounts you no longer want to use.
+
+To save a user account:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select **Settings**.
+
+1. Select the **+** (plus) icon next to **User account**.
+
+1. Enter a username, password, and optionally a display name, then select **Save**. You can then add this account to a workspace by following the steps in [Add user credentials to a workspace](#add-user-credentials-to-a-workspace).
+
+To remove an account you no longer want to use:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select **Settings**.
+
+1. From the drop-down list, select the user account you want to remove, then select **Edit** (the pencil icon).
+
+1. Select **Remove account**, then confirm you want to delete the user account.
+
+To change the user account a remote session is using, you'll need to remove the workspace and add it again.
+
+## Display preferences
+
+If you want to use different display settings from those specified by your admin, you can configure custom settings. Display settings apply to all workspaces.
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select **Settings**.
+
+1. You can configure the following settings:
+
+ | Setting | Value |
+ |--|--|
+ | Start connections in full screen | **On** or **off** |
+ | Start each connection in a new window | **On** or **off** |
+ | When resizing the app | - Stretch the content, preserving aspect ratio<br />- Stretch the content<br />- Show scroll bars |
+ | Prevent the screen from timing out | **On** or **off** |
+
+## Connection bar and command menu
+
+When you've connected to Azure Virtual Desktop, you'll see a bar at the top, which is called the **connection bar**. This gives you quick access to a zoom control, represented by a magnifying glass icon, and more options. You can move the connection bar around the top edge of the display by tapping and dragging it to where you want it.
+
+The icon with three dots in the connection bar shows the **command menu** that enables you to:
+
+- Disconnect the remote session.
+- Toggle between full screen and a window.
+- Toggle between direct touch and mouse input.
+
+## Input methods
+
+You can use touch input, or a built-in or external PC keyboard, trackpad and mouse to control desktops or apps.
+
+### Use touch gestures and mouse modes in a remote session
+
+You can use touch gestures to replicate mouse actions in your remote session. Two mouse modes are available:
+
+- **Direct touch**: Where you tap on the screen is equivalent to clicking the mouse in that position. The mouse pointer isn't shown on screen.
+- **Mouse pointer**: The mouse pointer is shown on screen. When you tap the screen and move your finger, the mouse pointer will move.
+
+If you use Windows 10 or later with Azure Virtual Desktop, native Windows touch gestures are supported in direct touch mode.
+
+The following table shows which mouse operations map to which gestures in specific mouse modes:
+
+| Mouse mode | Mouse operation | Gesture |
+|:--|:--|:--|
+| Direct touch | Left-click | Tap with one finger |
+| Direct touch | Right-click | Tap and hold with one finger |
+| Mouse pointer | Left-click | Tap with one finger |
+| Mouse pointer | Left-click and drag | Double-tap and hold with one finger, then drag |
+| Mouse pointer | Right-click | Tap with two fingers |
+| Mouse pointer | Right-click and drag | Double-tap and hold with two fingers, then drag |
+| Mouse pointer | Mouse wheel | Tap and hold with two fingers, then drag up or down |
+| Mouse pointer | Zoom | With two fingers, pinch to zoom out and move fingers apart to zoom in |
+
+### Keyboard
+
+There are several keyboard shortcuts you can use to help you use some of the features. Most common Windows keyboard shortcuts, such as <kbd>CTRL</kbd>+<kbd>C</kbd> for copy and <kbd>CTRL</kbd>+<kbd>Z</kbd> for undo, are the same when using Azure Virtual Desktop. Some keyboard shortcuts are different so that Windows knows whether to use them in Azure Virtual Desktop or on your local device. These are:
+
+| Windows shortcut | Azure Virtual Desktop shortcut | Description |
+|--|--|--|
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> | Shows the Windows Security dialog box. |
+
+You can configure whether the Remote Desktop client sends keyboard commands to the remote session:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select **Settings**.
+
+1. For **Use keyboard commands with**, select from one of the following:
+
+ - My local PC only.
+ - My remote session when it's in full screen (*default*).
+ - My remote session when it's in use.
+
+### Keyboard language
+
+By default, remote desktops and apps will use the same keyboard language, also known as *locale*, as your Windows PC. For example, if your Windows PC uses **en-GB** for *English (United Kingdom)*, that will also be used by Windows in the remote session.
+
+You can manually set which keyboard language to use in the remote session by following the steps at [Managing display language settings in Windows](https://support.microsoft.com/windows/manage-display-language-settings-in-windows-219f28b0-9881-cd4c-75ca-dba919c52321). You might need to close and restart the application you're currently using for the keyboard changes to take effect.
+
+## Redirections
+
+The Remote Desktop client can make your local clipboard and microphone available in your remote session, where you can copy and paste text, images, and files. The audio from the remote session can also be redirected to your local device. However, redirection can't be configured using the Remote Desktop client for Windows; this behavior is configured by your admin in Azure Virtual Desktop, as sketched below.
+
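+For reference, a minimal, hypothetical sketch of the host pool RDP properties that govern the redirections described above (your admin's actual configuration may differ): `redirectclipboard:i:1` makes the local clipboard available, `audiocapturemode:i:1` redirects the microphone, and `audiomode:i:0` plays remote audio on your local device.
+
+```
+redirectclipboard:i:1
+audiocapturemode:i:1
+audiomode:i:0
+```
+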
+## Update the client
+
+Updates for the Remote Desktop client are delivered through the Microsoft Store. Use the Microsoft Store to check for and download updates.
+
+## App display modes
+
+You can configure the Remote Desktop client to be displayed in light or dark mode, or match the mode of your system:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select **Settings**.
+
+1. Under **Theme preference**, select **Light**, **Dark**, or **Use system setting**. Restart the app to apply the change.
+
+## Pin to the Start menu
+
+You can pin your remote desktops to the Start menu on your local device to make them easier to launch:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Right-click a resource, then select **Pin to Start**.
+
+## Admin link to subscribe to a workspace
+
+The Remote Desktop client for Windows supports the *ms-rd* Uniform Resource Identifier (URI) scheme. This enables you to give users a link that automatically subscribes them to a workspace, rather than having them add the workspace manually in the Remote Desktop client. A scripted example is shown after the following steps.
+
+To subscribe to a workspace with a link:
+
+1. Open the following link in a web browser: `ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com`.
+
+1. If you see the prompt **This site is trying to open Remote Desktop**, select **Open**. The **Remote Desktop** application should open and automatically show a sign-in prompt.
+
+1. Enter your user account, then select **Sign in**. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
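+If you want to open the link programmatically, for example as part of a device setup script, you can shell-execute the same URI. A minimal sketch using PowerShell, with the same workspace URL used in the steps above:
+
+```powershell
+# Shell-executes the ms-rd URI, which opens the Remote Desktop client
+# and starts the workspace subscription flow.
+Start-Process 'ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com'
+```
+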
+## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop client for Windows, you can do so by selecting the smiley face emoji button in the client app. This opens the **Feedback Hub**.
+
+To best help you, give us as much detail as possible. Along with a detailed description, you can include screenshots, attach a file, or make a recording. For more tips about how to provide helpful feedback, see [Feedback](/windows-insider/feedback#add-new-feedback).
+
+## Next steps
+
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Client Features Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-web.md
+
+ Title: Use features of the Remote Desktop Web client - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop
+
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop Web client. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop Web client](connect-web.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+
+## Display preferences
+
+A remote desktop will automatically fit the size of the browser window. If you resize the browser window, the remote desktop will resize with it. You can also enter fullscreen by selecting **fullscreen** (the diagonal arrows icon) on the taskbar.
+
+If you use a high-DPI display, the Remote Desktop Web client supports using native display resolution during remote sessions. In sessions running on a high-DPI display, native resolution can provide higher-fidelity graphics and improved text clarity.
+
+> [!NOTE]
+> Enabling native display resolution with a high-DPI display may cause increased CPU or network usage.
+
+Native resolution is set to off by default. To turn on native resolution:
+
+1. Sign in to the Remote Desktop Web client, then select **Settings** on the taskbar.
+
+1. Set **Enable native display resolution** to **On**.
+
+## Input methods
+
+You can use a built-in or external PC keyboard, trackpad, and mouse to control desktops or apps.
+
+### Keyboard
+
+There are several keyboard shortcuts you can use with some of the features. Most common Windows keyboard shortcuts, such as <kbd>CTRL + C</kbd> for copy and <kbd>CTRL + Z</kbd> for undo, are the same when using Azure Virtual Desktop. Some keyboard shortcuts are different so Windows knows whether to apply them in Azure Virtual Desktop or on your local device. These are:
+
+| Windows shortcut | Azure Virtual Desktop shortcut | Description |
+|--|--|--|
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> (Windows) | Shows the Windows Security dialog box. |
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>FN</kbd>+<kbd>Control</kbd>+<kbd>Option</kbd>+<kbd>Delete</kbd> (macOS) | Shows the Windows Security dialog box. |
+| <kbd>Windows</kbd> | <kbd>ALT</kbd>+<kbd>F3</kbd> | Sends the *Windows* key to the remote session. |
+| <kbd>ALT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE UP</kbd> | Switches between programs from left to right. |
+| <kbd>ALT</kbd>+<kbd>SHIFT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE DOWN</kbd> | Switches between programs from right to left. |
+
+> [!NOTE]
+> You can copy and paste text only. Files can't be copied or pasted to and from the web client, and only <kbd>CTRL</kbd>+<kbd>C</kbd> and <kbd>CTRL</kbd>+<kbd>V</kbd> can be used to copy and paste text.
+
+#### Input Method Editor
+
+The web client supports using an Input Method Editor (IME) in the remote session. Before you can use an IME, the language pack for the keyboard you want to use in the remote session must be installed on your session host by your admin. To learn more about setting up language packs in the remote session, see [Add language packs to a Windows 10 multi-session image](/azure/virtual-desktop/language-packs).
+
+To enable IME input using the web client:
+
+1. Sign in to the Remote Desktop Web client, then select **Settings** on the taskbar.
+
+1. Set **Enable Input Method Editor** to **On**.
+
+1. In the drop-down menu, select the keyboard you want to use in a remote session.
+
+1. Connect to a remote session.
+
+The web client will suppress the local IME window when you're focused on the remote session. If you change the IME settings after you've already connected to the remote session, the setting changes won't have any effect.
+
+> [!NOTE]
+> The web client doesn't support IME input while using a private browsing window.
+>
+> If the language pack isn't installed on the session host, the keyboard in the remote session will default to English (United States).
+
+## Redirections
+
+You can allow the remote computer to access files, printers, and the clipboard on your local device. When you connect to a remote session, you'll be prompted to choose whether to allow access to these local resources.
+
+### Transfer files
+
+To transfer files between your local device and your remote session:
+
+1. Sign in to the Remote Desktop Web client and launch a remote session.
+
+1. For the prompt **Access local resources**, check the box for **File transfer**, then select **Allow**.
+
+1. Once your remote session has started, an extra icon will appear in the Remote Desktop Web client taskbar for **Upload new file** (the upwards arrow icon). Selecting this will open a file explorer window on your local device.
+
+1. Browse to and select files you want to upload to the remote session. You can select multiple files by holding down the <kbd>CTRL</kbd> key on your keyboard for Windows, or the <kbd>Command</kbd> key for macOS, then select **Open**.
+
+1. In your remote session, open **File Explorer**, then select **This PC**.
+
+1. You'll see a redirected drive called **Remote Desktop Virtual Drive on RDWebClient**. Inside this drive are two folders: **Uploads** and **Downloads**. **Uploads** contains the files you uploaded through the Remote Desktop Web client.
+
+1. To transfer files from your remote session to your local device, copy and paste files to the **Downloads** folder. Before the paste can complete, the Remote Desktop Web client will prompt you **Are you sure you want to download *N* file(s)?**. Select **Confirm**. Your browser will download the files in its normal way.
+
+   If you don't want to see this prompt every time you download files from the current browser, check the box for **Don't ask me again on this browser** before confirming.
+
+> [!IMPORTANT]
+> - We recommend using *Copy* rather than *Cut* when transferring files from your remote session to your local device as an issue with the network connection can cause the files to be lost.
+>
+> - Uploaded files are available in a remote session until you sign out of the Remote Desktop Web client.
+
+### Clipboard
+
+To use the clipboard between your local device and your remote session:
+
+1. Sign in to the Remote Desktop Web client and launch a remote session.
+
+1. For the prompt **Access local resources**, check the box for **Clipboard**, then select **Allow**.
+
+ The Remote Desktop Web client supports copying and pasting text only. Files can't be copied or pasted to and from the web client. To transfer files, see [Transfer files](#transfer-files).
+
+### Printer
+
+You can enable the *Remote Desktop Virtual Printer* in your remote session. When you print to this printer, a PDF file of your print job will be generated for you to download and print on your local device. To enable the *Remote Desktop Virtual Printer*:
+
+1. Sign in to the Remote Desktop Web client and launch a remote session.
+
+1. For the prompt **Access local resources**, check the box for **Printer**, then select **Allow**.
+
+1. Start the printing process as you would normally for the app you want to print from.
+
+1. When prompted to choose a printer, select **Remote Desktop Virtual Printer**.
+
+1. If you wish, you can set the orientation and paper size. When you're ready, select **Print**. A PDF file of your print job will be generated and your browser will download the file in its normal way. You can choose to either open the PDF and print its contents to your local printer or save it to your PC for later use.
+
+## Launch remote session with another Remote Desktop client
+
+If you have another Remote Desktop client installed, you can download an RDP file instead of using the browser window for a remote session. To configure the Remote Desktop Web client to download RDP files:
+
+1. Sign in to the Remote Desktop Web client, then select **Settings** on the taskbar.
+
+1. For **Resources Launch Method**, select **Download the RDP file**.
+
+1. Select the resource you want to open (for example, Excel). Your browser will download the RDP file in its normal way.
+
+1. Open the downloaded RDP file in your Remote Desktop client to launch a remote session.
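+
+If the other client you have installed is the built-in Remote Desktop Connection app (`mstsc.exe`), you can also launch the downloaded file from PowerShell. This is a minimal sketch that assumes the file was saved as `resource.rdp` in your Downloads folder; the actual file name will vary:
+
+```powershell
+# Open a downloaded RDP file with the built-in Remote Desktop Connection app
+Start-Process -FilePath "mstsc.exe" -ArgumentList "$env:USERPROFILE\Downloads\resource.rdp"
+```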
+
+## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop Web client, you can do so in the Web client:
+
+1. Sign in to the Remote Desktop Web client, then select the three dots (**...**) on the taskbar to show the menu.
+
+1. Select **Feedback** to open the Azure Virtual Desktop Feedback page.
+
+## Next steps
+
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Client Features Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-windows.md
+
+ Title: Use features of the Remote Desktop client for Windows - Azure Virtual Desktop
+description: Learn how to use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop
+
+Once you've connected to Azure Virtual Desktop using the Remote Desktop client, it's important to know how to use the features. This article shows you how to use the features available in the Remote Desktop client for Windows. If you want to learn how to connect to Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](connect-windows.md).
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md). For more information about the differences between the clients, see [Compare the Remote Desktop clients](../compare-remote-desktop-clients.md).
+
+> [!NOTE]
+> Your admin can choose to override some of these settings in Azure Virtual Desktop, such as being able to copy and paste between your local device and your remote session. If some of these settings are disabled, please contact your admin.
+
+## Refresh or unsubscribe from a workspace or see its details
+
+To refresh or unsubscribe from a workspace or see its details:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select the three dots to the right of a workspace's name to open a menu with options for **Details**, **Refresh**, and **Unsubscribe**.
+
+ - **Details** shows you details about the workspace, such as:
+ - The name of the workspace.
+ - The URL and username used to subscribe.
+ - The number of desktops and apps.
+ - The date and time of the last refresh.
+ - The status of the last refresh.
+ - **Refresh** makes sure you have the latest desktops and apps and their settings provided by your admin.
+ - **Unsubscribe** removes the workspace from the Remote Desktop client.
+
+## User accounts
+
+### Manage user accounts
+
+You can save a user account and associate it with workspaces to simplify the connection sequence, as the sign-in credentials will be used automatically. You can also edit a saved account or remove accounts you no longer want to use.
+
+User accounts are stored and managed in *Credential Manager* in Windows as a *generic credential*.
+
+To save a user account:
+
+1. Open the **Remote Desktop** app on your device.
+
+1. Double-click one of the icons to launch a session to Azure Virtual Desktop. If you're prompted to enter the password for your user account again, enter the password and check the box **Remember me**, then select **OK**.
+
+To edit or remove a saved user account:
+
+1. Open **Credential Manager** from the Control Panel. You can also open Credential Manager by searching the Start menu.
+
+1. Select **Windows Credentials**.
+
+1. Under **Generic Credentials**, find your saved user account and expand its details. It will begin with **RDPClient**.
+
+1. To edit the user account, select **Edit**. You can update the username and password. Once you're done, select **Save**.
+
+1. To remove the user account, select **Remove** and confirm that you want to delete it.
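+
+You can also inspect these saved credentials from the command line with the built-in `cmdkey` tool. This is a minimal sketch; the exact target names shown by `cmdkey /list` vary by device, so treat the name below as a placeholder:
+
+```powershell
+# List stored credentials and filter for the Remote Desktop client's entries
+cmdkey /list | Select-String "RDPClient"
+
+# Remove a saved credential; replace the placeholder with a target name from the list above
+cmdkey /delete:"RDPClient_<target-name>"
+```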
+
+## Display preferences
+
+### Display settings for each remote desktop
+
+If you want to use display settings that are different from those specified by your admin, you can configure custom settings:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Right-click the name of a desktop or app, for example **SessionDesktop**, then select **Settings**.
+
+1. Toggle **Use default settings** to off.
+
+1. On the **Display** tab, you can select from the following options:
+
+ | Display configuration | Description |
+ |--|--|
+ | All displays | Automatically use all displays for the desktop. If you have multiple displays, all of them will be used. <br /><br />For information on limits, see [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md).|
+ | Single display | Only a single display will be used for the remote desktop. |
+ | Select displays | Only select displays will be used for the remote desktop. |
+
+ Each display configuration in the table above has its own settings. Use the following table to understand each setting:
+
+ | Setting | Display configurations | Description |
+ |--|--|--|
   | Single display when in windowed mode | All displays<br />Select displays | Only use a single display when running in windowed mode, rather than full screen. |
+ | Start in full screen | Single display | The desktop will be displayed full screen. |
+ | Fit session to window | All displays<br />Single display<br />Select displays | When you resize the window, the scaling of the desktop will automatically adjust to fit the new window size. The resolution will stay the same. |
+ | Update the resolution on resize | Single display | When you resize the window, the resolution of the desktop will automatically change to match.<br /><br />If this is disabled, a new option for **Resolution** is displayed where you can select from a pre-defined list of resolutions. |
+ | Choose which display to use for this session | Select displays | Select which displays you want to use. All selected displays must be next to each other. |
   | Maximize to current displays | Select displays | The remote desktop will show full screen on the current display(s) the window is on, even if this isn't the display selected in the settings. If this is off, the remote desktop will show full screen on the same display(s) regardless of the current display the window is on. If your window overlaps multiple displays, those displays will be used when maximizing the remote desktop. |
+
+## Input methods
+
+You can use a built-in or external PC keyboard, trackpad, and mouse to control desktops or apps.
+
+### Keyboard
+
+There are several keyboard shortcuts you can use with some of the features. Some of these control how the Remote Desktop client displays the session. These are:
+
+| Key combination | Description |
+|--|--|
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>HOME</kbd> | Activates the connection bar when in full-screen mode and the connection bar isn't pinned. |
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>PAUSE</kbd> | Switches the client between full-screen mode and window mode. |
+
+Most common Windows keyboard shortcuts, such as <kbd>CTRL</kbd>+<kbd>C</kbd> for copy and <kbd>CTRL</kbd>+<kbd>Z</kbd> for undo, are the same when using Azure Virtual Desktop. Some keyboard shortcuts are different so Windows knows whether to apply them in Azure Virtual Desktop or on your local device. These are:
+
+| Windows shortcut | Azure Virtual Desktop shortcut | Description |
+|--|--|--|
+| <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>DELETE</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>END</kbd> | Shows the Windows Security dialog box. |
+| <kbd>ALT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE UP</kbd> | Switches between programs from left to right. |
+| <kbd>ALT</kbd>+<kbd>SHIFT</kbd>+<kbd>TAB</kbd> | <kbd>ALT</kbd>+<kbd>PAGE DOWN</kbd> | Switches between programs from right to left. |
+| <kbd>WINDOWS</kbd> key, or <br /><kbd>CTRL</kbd>+<kbd>ESC</kbd> | <kbd>ALT</kbd>+<kbd>HOME</kbd> | Shows the Start menu. |
+| <kbd>ALT</kbd>+<kbd>SPACE BAR</kbd> | <kbd>ALT</kbd>+<kbd>DELETE</kbd> | Shows the system menu. |
+| <kbd>PRINT SCREEN</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>+</kbd> (plus sign) | Takes a snapshot of the entire remote session, and places it in the clipboard. |
+| <kbd>ALT</kbd>+<kbd>PRINT SCREEN</kbd> | <kbd>CTRL</kbd>+<kbd>ALT</kbd>+<kbd>-</kbd> (minus sign) | Takes a snapshot of the active window in the remote session, and places it in the clipboard. |
+
+> [!NOTE]
+> Keyboard shortcuts won't work in nested Remote Desktop or RemoteApp sessions.
+
+### Keyboard language
+
+By default, remote desktops and apps will use the same keyboard language, also known as *locale*, as your Windows PC. For example, if your Windows PC uses **en-GB** for *English (United Kingdom)*, that will also be used by Windows in the remote session.
+
+You can manually set which keyboard language to use in the remote session by following the steps at [Managing display language settings in Windows](https://support.microsoft.com/windows/manage-display-language-settings-in-windows-219f28b0-9881-cd4c-75ca-dba919c52321). You might need to close and restart the application you're currently using for the keyboard changes to take effect.
+
+## Redirections
+
+### Folder redirection
+
+The Remote Desktop client can make local folders available in your remote session. This is known as *folder redirection*. This means you can open files from and save files to your Windows PC with your remote session. Redirected folders appear as a network drive in Windows Explorer.
+
+Folder redirection can't be configured using the Remote Desktop client for Windows. This behavior is configured by your admin in Azure Virtual Desktop. By default, all local drives are redirected to a remote session.
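+
+Inside the remote session, redirected drives can also be reached through the `\\tsclient` UNC path that Remote Desktop uses for drive redirection. This is a minimal sketch, assuming your admin has enabled drive redirection and your local `C:` drive is redirected:
+
+```powershell
+# Run inside the remote session: list the contents of the redirected local C: drive
+Get-ChildItem -Path "\\tsclient\C"
+```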
+
+### Redirect devices, audio, and clipboard
+
+The Remote Desktop client can make your local clipboard and local devices available in your remote session, where you can copy and paste text, images, and files. The audio from the remote session can also be redirected to your local device. However, redirection can't be configured using the Remote Desktop client for Windows. This behavior is configured by your admin in Azure Virtual Desktop. Here's a list of some of the devices and resources that can be redirected. For the full list, see [Compare the features of the Remote Desktop clients when connecting to Azure Virtual Desktop](../compare-remote-desktop-clients.md?toc=%2Fazure%2Fvirtual-desktop%2Fusers%2Ftoc.json#redirections-comparison).
+
+- Printers
+- USB devices
+- Audio output
+- Smart cards
+- Clipboard
+- Microphones
+- Cameras
+
+## Update the client
+
+By default, you'll be notified whenever a new version of the client is available as long as your admin hasn't disabled notifications. The notification will appear in the client and the Windows Action Center. To update your client, just select the notification.
+
+You can also manually search for new updates for the client:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select the three dots at the top right-hand corner to show the menu, then select **About**. The client will automatically search for updates.
+
+1. If there's an update available, tap **Install update** to update the client. If the client is already up to date, you'll see a green check box, and the message **You're up to date**.
+
+## App display modes
+
+You can configure the Remote Desktop client to be displayed in light or dark mode, or match the mode of your system:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. Select **Settings**.
+
+1. Under **App mode**, select **Light**, **Dark**, or **Use System Mode**. The change is applied instantly.
+
+## Views
+
+You can view your remote desktops and apps as either a tile view (default) or list view:
+
+1. Open the **Remote Desktop** application on your device.
+
+1. If you want to switch to List view, select **Tile**, then select **List view**.
+
+1. If you want to switch to Tile view, select **List**, then select **Tile view**.
+
+## Enable Windows Insider releases
+
+If you want to help us test new builds before they're released, you should download our Insider releases. Organizations can use the Insider releases to validate new versions for their users before they're generally available.
+
+> [!NOTE]
+> Insider releases shouldn't be used in production.
+
+Insider releases are made available in the Remote Desktop client once you've configured the client to use Insider releases. To configure the client to use Insider releases:
+
+1. Add the following registry key and value:
+
+ - **Key**: HKLM\\Software\\Microsoft\\MSRDC\\Policies
+ - **Type**: REG_SZ
+ - **Name**: ReleaseRing
+ - **Data**: insider
+
+ You can do this with PowerShell. On your local device, open PowerShell as an administrator and run the following commands:
+
+ ```powershell
+ New-Item -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Force
+ New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Name ReleaseRing -PropertyType String -Value insider -Force
+ ```
+
+1. Restart your local device.
+
+1. Open the Remote Desktop client. The title in the top left-hand corner should be **Remote Desktop (Insider)**:
+
+ :::image type="content" source="../media/remote-desktop-client-windows-insider.png" alt-text="A screenshot of the Remote Desktop client with Insider features enabled. The title is highlighted in a red box.":::
+
+If you've already configured the Remote Desktop client to use Insider releases, you can check for updates in the normal way to make sure you have the latest Insider release. For more information, see [Update the client](#update-the-client).
+
+## Admin management
+
+### Enterprise deployment
+
+To deploy the Remote Desktop client in an enterprise, you can use `msiexec` to install the MSI file. You can install the client per-device or per-user by running the relevant command from Command Prompt as an administrator:
+
+- Per-device installation:
+
+ ```cmd
+ msiexec /i <path to the MSI> /qn ALLUSERS=1
+ ```
+
+- Per-user installation:
+
+ ```cmd
+ msiexec /i <path to the MSI> /qn ALLUSERS=2 MSIINSTALLPERUSER=1
+ ```
+
+### Update behavior
+
+You can control notifications about updates and when updates are installed. The update behavior of the client depends on two factors:
+
+- Whether the app is installed for only the current user or for all users on the machine
+- The value of the following registry key:
+
+ - **Key:** HKLM\\Software\\Microsoft\\MSRDC\\Policies
+ - **Type:** REG_DWORD
+ - **Name:** AutomaticUpdates
+
+The Remote Desktop client offers three ways to update:
+
+- Notification-based updates, where the client shows the user a notification in the client UI or a pop-up message in the taskbar. The user can choose to update the client by selecting the notification.
+- Silent on-close updates, where the client automatically updates after the user has closed the Remote Desktop client.
+- Silent background updates, where a background process checks for updates a few times a day and will update the client if a new update is available.
+
+To avoid interrupting users, silent updates won't happen while users have the client open or a remote connection active, or if you've disabled automatic updates. If the client is running while a silent background update occurs, the client will show a notification to let users know an update is available.
+
+You can set the *AutomaticUpdates* registry key to one of the following values:
+
+| Value | Update behavior (per user installation) | Update behavior (per machine installation) |
+||||
+| 0 | Disable notifications and turn off auto-update. | Disable notifications and turn off auto-update. |
+| 1 | Notification-based updates. | Notification-based updates. |
+| 2 (default) | Notification-based updates when the app is running. Otherwise, silent on-close and background updates. | Notification-based updates. No support for silent update mechanisms, as users may not have administrator access rights on the client device. |
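+
+For example, to switch to notification-based updates only, you could set the registry value with PowerShell, in the same way as the Insider release sketch earlier in this article. This is a minimal sketch and requires running PowerShell as an administrator:
+
+```powershell
+# Create the policy key if it doesn't exist, then set AutomaticUpdates to 1 (notification-based updates)
+New-Item -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Force
+New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MSRDC\Policies" -Name AutomaticUpdates -PropertyType DWord -Value 1 -Force
+```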
+
+### URI to subscribe to a workspace
+
+The Remote Desktop client for Windows supports the *ms-rd* Uniform Resource Identifier (URI) scheme. This enables you to provide a link that users can use to automatically subscribe to a workspace, rather than having to add the workspace manually in the Remote Desktop client.
+
+To subscribe to a workspace with a link:
+
+1. Open the following link in a web browser: `ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com`.
+
+1. If you see the prompt **This site is trying to open Microsoft Remote Desktop Connection Center**, select **Open**. The **Remote Desktop** application should open and automatically show a sign-in prompt.
+
+1. Enter your user account, then select **Sign in**. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
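+
+As with any custom URI scheme, you can test the link from PowerShell before distributing it. This is a minimal sketch rather than an official deployment method:
+
+```powershell
+# Hand the ms-rd URI to its registered handler (the Remote Desktop client)
+Start-Process "ms-rd:subscribe?url=https://rdweb.wvd.microsoft.com"
+```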
+
+## Provide feedback
+
+If you want to provide feedback to us on the Remote Desktop client for Windows, you can do so by selecting the button that looks like a smiley face emoji in the client app, as shown in the following image. This will open the **Feedback Hub**.
++
+To best help you, give us as much detail as possible. Along with a detailed description, you can include screenshots, attach a file, or make a recording. For more tips about how to provide helpful feedback, see [Feedback](/windows-insider/feedback#add-new-feedback).
+
+## Next steps
+
+If you're having trouble with the Remote Desktop client, see [Troubleshoot the Remote Desktop client](../troubleshoot-client.md).
virtual-desktop Connect Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-android-chrome-os.md
+
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS- Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for Android and Chrome OS.
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS
+
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS.
+
+You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for Android and Chrome OS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-android).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the following prerequisites:
+
+- Internet access
+
+- One of the following:
+ - Smartphone or tablet running Android 9 or later.
+ - Chromebook running Chrome OS 53 or later. Learn more about [Android applications running in Chrome OS](https://sites.google.com/a/chromium.org/dev/chromium-os/chrome-os-systems-supporting-android-apps).
+
+- Download and install the Remote Desktop client from [Google Play](https://play.google.com/store/apps/details?id=com.microsoft.rdc.androidx).
+
+> [!IMPORTANT]
+> The Android client is not available on platforms built on the Android Open Source Project (AOSP) that do not include Google Mobile Services (GMS). The client is only available through the Google Play Store.
+
+## Subscribe to a workspace
+
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
+
+1. Open the **RD Client** app on your device.
+
+1. In the Connection Center, tap **+**, then tap **Add Workspace**.
+
+1. In the **Email or Workspace URL** box, either enter your user account, for example `user@contoso.com`, or the relevant URL from the following table. After a few seconds, the message **A workspace is associated with this URL** should be displayed.
+
+ > [!TIP]
+ > If you see the message **No workspace is associated with this email address**, your admin might not have set up email discovery. Use one of the following workspace URLs instead.
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
   | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+
+1. Tap **Next**.
+
+1. Sign in with your user account. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+Once you've subscribed to a workspace, its content will refresh automatically on a regular basis. Resources may be added, changed, or removed based on changes made by your admin.
+
+## Connect to your desktops and applications
+
+1. Open the **RD Client** app on your device.
+
+1. Tap one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, and to make sure you trust the remote PC before you connect, depending on how your admin has configured Azure Virtual Desktop.
+
+## Beta client
+
+If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available. For more information, see [Test the beta client](client-features-android-chrome-os.md#test-the-beta-client).
+
+## Next steps
+
+To learn more about the features of the Remote Desktop client for Android and Chrome OS, check out [Use features of the Remote Desktop client for Android and Chrome OS when connecting to Azure Virtual Desktop](client-features-android-chrome-os.md).
virtual-desktop Connect Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-ios-ipados.md
+
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for iOS and iPadOS.
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS
+
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS.
+
+You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for iOS and iPadOS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-ios).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the following prerequisites:
+
+- Internet access.
+
+- iPhone running iOS 14 or later, or iPad running iPadOS 14 or later.
+
+- Download and install the Remote Desktop client from the [App Store](https://apps.apple.com/app/microsoft-remote-desktop/id714464092).
+
+## Subscribe to a workspace
+
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
+
+1. Open the **RD Client** app on your device.
+
+1. In the Connection Center, tap **+**, then tap **Add Workspace**.
+
+1. In the **Email or Workspace URL** box, either enter your user account, for example `user@contoso.com`, or the relevant URL from the following table. After a few seconds, the message **A workspace is associated with this URL** should be displayed.
+
+ > [!TIP]
+ > If you see the message **No workspace is associated with this email address**, your admin might not have set up email discovery. Use one of the following workspace URLs instead.
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+
+1. Tap **Next**.
+
+1. Sign in with your user account. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+Once you've subscribed to a workspace, its content will refresh automatically on a regular basis. Resources may be added, changed, or removed based on changes made by your admin.
+
+## Connect to your desktops and applications
+
+1. Open the **RD Client** app on your device.
+
+1. Tap one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
+
+## Beta client
+
+If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available. For more information, see [Test the beta client](client-features-ios-ipados.md#test-the-beta-client).
+
+## Next steps
+
+To learn more about the features of the Remote Desktop client for iOS and iPadOS, check out [Use features of the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop](client-features-ios-ipados.md).
virtual-desktop Connect Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-macos.md
+
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop client for macOS - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for macOS.
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with the Remote Desktop client for macOS
+
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop client for macOS.
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for macOS](/windows-server/remote/remote-desktop-services/clients/remote-desktop-mac).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the following prerequisites:
+
+- Internet access.
+
+- A device running macOS 10.14 or later.
+
+- Download and install the Remote Desktop client from the [Mac App Store](https://apps.apple.com/app/microsoft-remote-desktop/id1295203466?mt=12).
+
+## Subscribe to a workspace
+
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
+
+1. Open the **Microsoft Remote Desktop** app on your device.
+
+1. In the Connection Center, select **+**, then select **Add Workspace**.
+
+1. In the **Email or Workspace URL** box, either enter your user account, for example `user@contoso.com`, or the relevant URL from the following table. After a few seconds, the message **A workspace is associated with this URL** should be displayed.
+
+ > [!TIP]
+ > If you see the message **No workspace is associated with this email address**, your admin might not have set up email discovery. Use one of the following workspace URLs instead.
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+
+1. Select **Add**.
+
+1. Sign in with your user account. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+Once you've subscribed to a workspace, its content will update automatically every six hours and each time you start the client. Resources may be added, changed, or removed based on changes made by your admin.
+
+## Connect to your desktops and applications
+
+1. Open the **Microsoft Remote Desktop** app on your device.
+
+1. Double-click one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
+
+## Beta client
+
+If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available. For more information, see [Test the beta client](client-features-macos.md#test-the-beta-client).
+
+## Next steps
+
+To learn more about the features of the Remote Desktop client for macOS, check out [Use features of the Remote Desktop client for macOS when connecting to Azure Virtual Desktop](client-features-macos.md).
virtual-desktop Connect Microsoft Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-microsoft-store.md
+
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store) - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for Windows (Microsoft Store).
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store)
+
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop client for Windows from the Microsoft Store.
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for Windows (Microsoft Store)](/windows-server/remote/remote-desktop-services/clients/windows).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the following prerequisites:
+
+- Internet access.
+
+- A device running Windows 11 or Windows 10.
+
+- Download and install the Remote Desktop client from the [Microsoft Store](https://go.microsoft.com/fwlink/?LinkID=616709).
+
+## Subscribe to a workspace
+
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
+
+1. Open the **Remote Desktop** app on your device.
+
+1. In the Connection Center, select **+ Add**, then select **Workspaces**.
+
+1. In the **Email or Workspace URL** box, either enter your user account, for example `user@contoso.com`, or the relevant URL from the following table. After a few seconds, the message **We found Workspaces at the following URLs** should be displayed.
+
+ > [!TIP]
+ > If you see the message **We couldn't find any Workspaces associated with this email address. Try providing a URL instead**, your admin might not have set up email discovery. Use one of the following workspace URLs instead.
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+
+1. Select **Subscribe**.
+
+1. Sign in with your user account. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+Once you've subscribed to a workspace, its content will refresh automatically on a regular basis. Resources may be added, changed, or removed based on changes made by your admin.
+
+## Connect to your desktops and applications
+
+1. Open the **Remote Desktop** app on your device.
+
+1. Select one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
+
+## Next steps
+
+To learn more about the features of the Remote Desktop client for Windows from the Microsoft Store, check out [Use features of the Remote Desktop client for Windows (Microsoft Store) when connecting to Azure Virtual Desktop](client-features-microsoft-store.md).
virtual-desktop Connect Thin Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-thin-clients.md
+
+ Title: Connect to Azure Virtual Desktop with thin clients - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using thin clients.
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with thin clients
+
+Several partners offer thin clients that you can use to connect to Azure Virtual Desktop to access your desktops and applications. This article provides links to those partners' documentation where you can read more about connecting to Azure Virtual Desktop. You can also use a web browser on a thin client to access Azure Virtual Desktop using the web client.
+
+You can find a list of all the Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+## Partner thin client devices
+
+The following partners have thin client devices that are approved for use with Azure Virtual Desktop. Visit their documentation to learn how to connect to Azure Virtual Desktop with thin clients.
+
+| Partner | Partner documentation | Partner support |
+|:-|:-|:-|
+| 10ZiG | [10ZiG client documentation](https://www.10zig.com/about/microsoft-windows-virtual-desktop) | [10ZiG support](https://www.10zig.com/resources/support_faq) |
+| Dell | [Dell client documentation](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/thin-clients/dell-thinos-9-for-microsoft-wvd.pdf) | [Dell support](https://www.dell.com/support) |
+| HP | [HP client documentation](https://h20195.www2.hp.com/v2/GetDocument.aspx?docname=c07051097) | [HP support](https://support.hp.com/us-en/products/workstations-thin-clients) |
+| IGEL | [IGEL client documentation](https://www.igel.com/igel-solution-family/) | [IGEL support](https://www.igel.com/support/) |
+| NComputing | [NComputing client documentation](https://www.ncomputing.com/microsoft) | [NComputing support](https://www.ncomputing.com/support/support-options) |
+| Stratodesk | [Stratodesk client documentation](https://kb.stratodesk.com/microsoft-windows-virtual-desktop-wvd) | [Stratodesk support](https://www.stratodesk.com/support/) |
+
+## Next steps
+
+Learn more about Remote Desktop clients at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-web.md
+
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop Web client - Azure
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop web client.
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with the Remote Desktop Web client
+
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop Web client. The web client lets you access your Azure Virtual Desktop resources directly from a web browser without needing to install a separate client.
+
+You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the following prerequisites:
+
+- Internet access.
+
+- A supported web browser. While any HTML5-capable web browser should work, we officially support the following web browsers and operating systems:
+
+ | Web browser | Supported operating system | Notes |
+ |-|-||
+ | Microsoft Edge | Windows, macOS, Linux, Chrome OS | Version 79 or later |
+ | Google Chrome | Windows, macOS, Linux, Chrome OS | Version 57 or later |
+ | Apple Safari | macOS | Version 11 or later |
+ | Mozilla Firefox | Windows, macOS, Linux | Version 55 or later |
+
+> [!NOTE]
+> The Remote Desktop Web client doesn't support mobile web browsers.
+>
+> As of September 30, 2021, the Remote Desktop Web client no longer supports Internet Explorer. We recommend that you use Microsoft Edge with the Remote Desktop Web client instead. For more information, see our [blog post](https://aka.ms/WVDSupportIE11).
+
+## Access your resources
+
+When you sign in to the Remote Desktop Web client, you'll see your workspaces. A workspace combines all the desktops and applications that have been made available to you by your admin. You sign in by following these steps:
+
+1. Open your web browser.
+
+1. Go to one of the following URLs:
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://client.wvd.microsoft.com/arm/webclient/` |
   | Azure cloud (classic) | `https://client.wvd.microsoft.com/webclient/index.html` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/arm/webclient/` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/arm/webclient/` |
+
+1. Sign in with your user account. Once you've signed in successfully, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+1. Select one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
+
+1. A prompt for **Access local resources** may be displayed, asking you to confirm which local resources you want to be available in the remote session. Make your selection, then select **Allow**.
+
+>[!TIP]
+>If you've already signed in to the web browser with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
+
+## Next steps
+
+To learn more about the features of the Remote Desktop Web client, check out [Use features of the Remote Desktop Web client when connecting to Azure Virtual Desktop](client-features-web.md).
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
+
+ Title: Connect to Azure Virtual Desktop with the Remote Desktop client for Windows - Azure Virtual Desktop
+description: Learn how to connect to Azure Virtual Desktop using the Remote Desktop client for Windows.
++ Last updated : 10/04/2022+++
+# Connect to Azure Virtual Desktop with the Remote Desktop client for Windows
+
+The Microsoft Remote Desktop client is used to connect to Azure Virtual Desktop to access your desktops and applications. This article shows you how to connect to Azure Virtual Desktop with the Remote Desktop client for Windows.
+
+You can find a list of all the Remote Desktop clients you can use to connect to Azure Virtual Desktop at [Remote Desktop clients overview](remote-desktop-clients-overview.md).
+
+If you want to connect to Remote Desktop Services instead of Azure Virtual Desktop or a local PC, see [Connect to Remote Desktop Services with the Remote Desktop client for Windows](/windows-server/remote/remote-desktop-services/clients/windowsdesktop).
+
+## Prerequisites
+
+Before you can access your resources, you'll need to meet the following prerequisites:
+
+- Internet access.
+
+- A device running one of the following versions of Windows:
+ - Windows 11
+ - Windows 10
+ - Windows 10 IoT Enterprise
+ - Windows 7
+
+- Download the Remote Desktop client installer, choosing the correct version for your device:
+ - [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2068602) *(most common)*
+ - [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2098960)
+ - [Windows on Arm](https://go.microsoft.com/fwlink/?linkid=2098961)
+
+> [!IMPORTANT]
+> Extended support for using Windows 7 to connect to Azure Virtual Desktop ends on January 10, 2023.
+
+## Install the Remote Desktop client
+
+Once you've downloaded the Remote Desktop client, you'll need to install it by following these steps:
+
+1. Run the installer by double-clicking the file you downloaded.
+
+1. On the welcome screen, select **Next**.
+
+1. To accept the end-user license agreement, check the box for **I accept the terms in the License Agreement**, then select **Next**.
+
+1. For the Installation Scope, select one of the following options:
+
+ - **Install just for you**: Remote Desktop will be installed in a per-user folder and be available just for your user account. You don't need local Administrator privileges.
   - **Install for all users of this machine**: Remote Desktop will be installed in a per-machine folder and be available for all users. You must have local Administrator privileges.
+
+1. Select **Install**.
+
+1. Once installation has completed, select **Finish**.
+
+1. If you left the box for **Launch Remote Desktop when setup exits** selected, the Remote Desktop client will automatically open. Alternatively, to launch the client after installation, use the Start menu to search for and select **Remote Desktop**.
+
+## Subscribe to a workspace
+
+A workspace combines all the desktops and applications that have been made available to you by your admin. To be able to see these in the Remote Desktop client, you need to subscribe to the workspace by following these steps:
+
+1. Open the **Remote Desktop** app on your device.
+
+2. The first time you subscribe to a workspace, from the **Let's get started** screen, select **Subscribe** or **Subscribe with URL**. Use the tabs below for your scenario.
+
+# [Subscribe](#tab/subscribe)
+
+3. If you selected **Subscribe**, sign in with your user account when prompted, for example `user@contoso.com`. After a few seconds, your workspaces should show the desktops and applications that have been made available to you by your admin.
+
+ > [!TIP]
+ > If you see the message **No workspace is associated with this email address**, your admin might not have set up email discovery. Try the steps in the **Subscribe with URL** tab instead.
+
+# [Subscribe with URL](#tab/subscribe-with-url)
+
+3. If you selected **Subscribe with URL**, in the **Email or Workspace URL** box, enter the relevant URL from the following table. After a few seconds, the message **We found Workspaces at the following URLs** should be displayed.
+
+ | Azure environment | Workspace URL |
+ |--|--|
+ | Azure cloud *(most common)* | `https://rdweb.wvd.microsoft.com` |
+ | Azure US Gov | `https://rdweb.wvd.azure.us/api/arm/feeddiscovery` |
+ | Azure China 21Vianet | `https://rdweb.wvd.azure.cn/api/arm/feeddiscovery` |
+
+4. Select **Next**.
+
+5. Sign in with your user account when prompted. After a few seconds, the workspace should show the desktops and applications that have been made available to you by your admin.
+
+Once you've subscribed to a workspace, its content will refresh automatically on a regular basis and each time you start the client. Resources may be added, changed, or removed based on changes made by your admin.
+++
+## Connect to your desktops and applications
+
+1. Open the **Remote Desktop** app on your device.
+
+1. Double-click one of the icons to launch a session to Azure Virtual Desktop. You may be prompted to enter the password for your user account again, depending on how your admin has configured Azure Virtual Desktop.
+
+## Windows Insider
+
+If you want to help us test new builds before they're released, you should download our Insider releases. Organizations can use the Insider releases to validate new versions for their users before they're generally available. For more information, see [Enable Windows Insider releases](client-features-windows.md#enable-windows-insider-releases).
+
+## Next steps
+
+To learn more about the features of the Remote Desktop client for Windows, check out [Use features of the Remote Desktop client for Windows when connecting to Azure Virtual Desktop](client-features-windows.md).
virtual-desktop Remote Desktop Clients Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/remote-desktop-clients-overview.md
+
+ Title: Remote Desktop clients for Azure Virtual Desktop - Azure Virtual Desktop
+description: Overview of the Remote Desktop clients you can use to connect to Azure Virtual Desktop.
++ Last updated : 10/04/2022+++
+# Remote Desktop clients for Azure Virtual Desktop
+
+With Microsoft Remote Desktop clients, you can connect to Azure Virtual Desktop and use and control desktops and apps that your admin has made available to you. There are clients available for many different types of devices on different platforms and form factors, such as desktops and laptops, tablets, smartphones, and through a web browser. Using your web browser on desktops and laptops, you can connect without having to download and install any software.
+
+There are many features you can use to enhance your remote experience, such as:
+
+- Multiple monitor support.
+- Custom display resolutions.
+- Dynamic display resolutions and scaling.
+- Device redirection, such as webcams, storage devices, and printers.
+- Microsoft Teams optimizations.
+
+Some features are only available with certain clients, so it's important to check [Compare the features of the Remote Desktop clients](../compare-remote-desktop-clients.md) to understand the differences when connecting to Azure Virtual Desktop.
+
+If you want information on Remote Desktop Services instead, see [Remote Desktop clients for Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
+
+Here's a list of the Remote Desktop client apps and our documentation for connecting to Azure Virtual Desktop, where you can find download links, see what's new, and learn how to install and use each client.
+
+| Remote Desktop client | Documentation and download links | Version information |
+|--|--|--|
+| Windows Desktop | [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](connect-windows.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew?context=/azure/virtual-desktop/context/context) |
+| Web | [Connect to Azure Virtual Desktop with the Remote Desktop client for Web](connect-web.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/web-client-whatsnew?context=/azure/virtual-desktop/context/context) |
+| macOS | [Connect to Azure Virtual Desktop with the Remote Desktop client for macOS](connect-macos.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/mac-whatsnew?context=/azure/virtual-desktop/context/context) |
+| iOS/iPadOS | [Connect to Azure Virtual Desktop with the Remote Desktop client for iOS and iPadOS](connect-ios-ipados.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/ios-whatsnew?context=/azure/virtual-desktop/context/context) |
+| Android/Chrome OS | [Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS](connect-android-chrome-os.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/android-whatsnew?context=/azure/virtual-desktop/context/context) |
+| Microsoft Store | [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows (Microsoft Store)](connect-microsoft-store.md) | [What's new](/windows-server/remote/remote-desktop-services/clients/windows-whatsnew?context=/azure/virtual-desktop/context/context) |
virtual-desktop Connect Android 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-android-2019.md
> Applies to: Android 4.1 and later, Chromebooks with ChromeOS 53 and later. >[!IMPORTANT]
->This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../user-documentation/connect-android.md).
+>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../users/connect-android-chrome-os.md).
You can access Azure Virtual Desktop resources from your Android device with our downloadable client. You can also use the Android client on Chromebook devices that support the Google Play Store. This guide will tell you how to set up the Android client.
virtual-desktop Connect Ios 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-ios-2019.md
> Applies to: iOS 13.0 or later. Compatible with iPhone, iPad, and iPod touch. >[!IMPORTANT]
->This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../user-documentation/connect-ios.md).
+>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../users/connect-ios-ipados.md).
You can access Azure Virtual Desktop resources from your iOS device with our downloadable client. This guide will tell you how to set up the iOS client.
virtual-desktop Connect Macos 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-macos-2019.md
> Applies to: macOS 10.12 or later >[!IMPORTANT]
->This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../user-documentation/connect-macos.md).
+>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../users/connect-macos.md).
You can access Azure Virtual Desktop resources from your macOS devices with our downloadable client. This guide will tell you how to set up the client.
virtual-desktop Connect Web 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-web-2019.md
# Connect to Azure Virtual Desktop (classic) with the web client >[!IMPORTANT]
->This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../user-documentation/connect-web.md).
+>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../users/connect-web.md).
The web client lets you access your Azure Virtual Desktop resources from a web browser without the lengthy installation process.
virtual-desktop Connect Windows 7 10 2019 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/connect-windows-7-10-2019.md
> Applies to: Windows 7, Windows 10, and Windows 10 IoT Enterprise >[!IMPORTANT]
->This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../user-documentation/connect-windows-7-10.md).
+>This content applies to Azure Virtual Desktop (classic), which doesn't support Azure Resource Manager Azure Virtual Desktop objects. If you're trying to manage Azure Resource Manager Azure Virtual Desktop objects, see [this article](../users/connect-windows.md).
You can access Azure Virtual Desktop resources on devices with Windows 7, Windows 10, and Windows 10 IoT Enterprise using the Windows Desktop client. The client doesn't support Windows 8 or Windows 8.1.
virtual-machines Dsc Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-credentials.md
This article covers the Desired State Configuration (DSC) extension for Azure. For an overview of the DSC extension handler, see [Introduction to the Azure Desired State Configuration extension handler](dsc-overview.md). -
+> [!NOTE]
+> Before you enable the DSC extension, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Automanage named [machine configuration](../../governance/machine-configuration/overview.md). The machine configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
## Pass in credentials
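+
+As a hedged illustration of the pattern this section describes, the following sketch passes a `PSCredential` into a configuration through the DSC extension by using `Set-AzVMDscExtension`. It isn't the article's exact example: all names, the archive, and the version number are placeholders, and it assumes a configuration archive already published to a storage account with a configuration that declares a `Credential` parameter.
+
+```powershell-interactive
+# A minimal sketch, assuming an existing VM, a published configuration archive,
+# and a configuration with a [PSCredential] parameter named Credential.
+# All names and the extension version are placeholders.
+$cred = Get-Credential -Message 'Credential to pass into the configuration'
+
+Set-AzVMDscExtension -ResourceGroupName 'rgname' `
+    -VMName 'vmname' `
+    -ArchiveStorageAccountName 'mystorageaccount' `
+    -ArchiveBlobName 'MyConfiguration.ps1.zip' `
+    -ConfigurationName 'MyConfiguration' `
+    -ConfigurationArgument @{ Credential = $cred } `
+    -Version '2.77'
+```
+
+Credentials passed this way are encrypted by the extension handler rather than stored in plain text on the VM.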
virtual-machines Dsc Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-linux.md
ms.devlang: azurecli
Desired State Configuration (DSC) is a management platform that you can use to manage your IT and development infrastructure with configuration as code.
+> [!IMPORTANT]
+> The Desired State Configuration (DSC) VM extension for Linux will be [retired on **September 30, 2023**](https://aka.ms/dscext4linuxretirement). If you're currently using the extension, you should start planning your migration to the machine configuration feature of Azure Automanage by using the information in this article.
+ > [!NOTE] > The DSC extension for Linux and the [Log Analytics virtual machine extension for Linux](./oms-linux.md) currently present a conflict > and aren't supported in a side-by-side configuration. Don't use the two solutions together on the same VM.
->
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../../governance/machine-configuration/overview.md). The guest configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
The DSCForLinux extension is published and supported by Microsoft. The extension installs the OMI and DSC agent on Azure virtual machines. The DSC extension can also do the following actions:
virtual-machines Dsc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-overview.md
ms.devlang: azurecli
# Introduction to the Azure Desired State Configuration extension handler
-The Azure VM Agent and associated extensions are part of Microsoft Azure infrastructure services. VM extensions are software components that extend VM functionality and simplify various VM management operations.
- > [!NOTE]
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../../governance/machine-configuration/overview.md). The guest configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+> Before you enable the DSC extension, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Automanage named [machine configuration](../../governance/machine-configuration/overview.md). The machine configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+
+The Azure VM Agent and associated extensions are part of Microsoft Azure infrastructure services. VM extensions are software components that extend VM functionality and simplify various VM management operations.
The primary use case for the Azure Desired State Configuration (DSC) extension is to bootstrap a VM to the [Azure Automation State Configuration (DSC) service](../../automation/automation-dsc-overview.md).
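+
+As a hedged sketch of that bootstrapping flow (resource group, Automation account, VM, and node configuration names are all placeholders, and it assumes an existing Automation account with a compiled node configuration):
+
+```powershell-interactive
+# A minimal sketch: register an existing Azure VM with the Azure Automation
+# State Configuration service. All names are placeholders; the cmdlet targets
+# Azure Windows VMs.
+Register-AzAutomationDscNode -ResourceGroupName 'rgname' `
+    -AutomationAccountName 'myAutomationAccount' `
+    -AzureVMName 'vmname' `
+    -NodeConfigurationName 'MyConfiguration.localhost' `
+    -ConfigurationMode 'ApplyAndMonitor'
+```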
virtual-machines Dsc Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-template.md
# Desired State Configuration extension with Azure Resource Manager templates
+> [!NOTE]
+> Before you enable the DSC extension, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Automanage named [machine configuration](../../governance/machine-configuration/overview.md). The machine configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+ This article describes the Azure Resource Manager template for the [Desired State Configuration (DSC) extension handler](dsc-overview.md). Many of the examples use **RegistrationURL** (provided as a String) and **RegistrationKey** (provided as a PSCredential) to onboard with Azure Automation. For details about obtaining those values, see [Use DSC metaconfiguration to register hybrid machines](../../automation/automation-dsc-onboarding.md#use-dsc-metaconfiguration-to-register-hybrid-machines).
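+
+If you use Azure Automation, one way to retrieve those two values is `Get-AzAutomationRegistrationInfo`; a minimal sketch follows, where the resource group and account names are placeholders:
+
+```powershell-interactive
+# A minimal sketch: retrieve the registration endpoint and key for an
+# Automation account. Names are placeholders.
+$reg = Get-AzAutomationRegistrationInfo -ResourceGroupName 'rgname' `
+    -AutomationAccountName 'myAutomationAccount'
+
+$registrationUrl = $reg.Endpoint    # value for RegistrationUrl
+$registrationKey = $reg.PrimaryKey  # value for RegistrationKey
+```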
+> [!NOTE]
+> Before you enable the DSC extension, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Automanage named [machine configuration](../../governance/machine-configuration/overview.md). The machine configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+ > [!NOTE] > You might encounter slightly different schema examples. The change in schema occurred in the October 2016 release. For details, see [Update from a previous format](#update-from-a-previous-format).
->
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../../governance/machine-configuration/overview.md). The guest configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
## Template example for a Windows VM
virtual-machines Dsc Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/dsc-windows.md
Last updated 03/26/2018
# PowerShell DSC Extension
+> [!NOTE]
+> Before you enable the DSC extension, be aware that a newer version of DSC is now generally available, managed by a feature of Azure Automanage named [machine configuration](../../governance/machine-configuration/overview.md). The machine configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Machine configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
+ ## Overview The PowerShell DSC Extension for Windows is published and supported by Microsoft. The extension uploads and applies a PowerShell DSC Configuration on an Azure VM. The DSC Extension calls into PowerShell DSC to enact the received DSC configuration on the VM. This document details the supported platforms, configurations, and deployment options for the DSC virtual machine extension for Windows.
-> [!NOTE]
-> Before you enable the DSC extension, we would like you to know that a newer version of DSC is now available in preview, managed by a feature of Azure Policy named [guest configuration](../../governance/machine-configuration/overview.md). The guest configuration feature combines features of the Desired State Configuration (DSC) extension handler, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Guest configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
- ## Prerequisites ### Operating system
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
The following table provides a mapping of the version of the Log Analytics VM ex
| Log Analytics Linux VM extension version | Log Analytics Agent bundle version | |--|--|
+| 1.14.20 | [1.14.20](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.20-0) |
| 1.14.19 | [1.14.19](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.19-0) | | 1.14.16 | [1.14.16](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.16-0) | | 1.14.13 | [1.14.13](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.13-0) |
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command.md
Title: Run scripts in a Linux VM in Azure using action Run Commands
-description: This topic describes how to run scripts within an Azure Linux virtual machine by using the Run Command feature
+description: This article describes how to run scripts within an Azure Linux virtual machine by using the Run Command feature
Previously updated : 09/08/2022 Last updated : 10/25/2022
The following restrictions apply when you're using Run Command:
* The minimum time to run a script is about 20 seconds. * Scripts run by default as an elevated user on Linux. * You can run one script at a time.
-* Scripts that prompt for information (interactive mode) are not supported.
+* Scripts that prompt for information (interactive mode) aren't supported.
* You can't cancel a running script. * The maximum time a script can run is 90 minutes. After that, the script will time out. * Outbound connectivity from the VM is required to return the results of the script.
The following restrictions apply when you're using Run Command:
## Available commands
-This table shows the list of commands available for Linux VMs. You can use the **RunShellScript** command to run any custom script that you want. When you're using the Azure CLI or PowerShell to run a command, the value that you provide for the `--command-id` or `-CommandId` parameter must be one of the following listed values. When you specify a value that is not an available command, you receive this error:
+This table shows the list of commands available for Linux VMs. You can use the **RunShellScript** command to run any custom script that you want. When you're using the Azure CLI or PowerShell to run a command, the value that you provide for the `--command-id` or `-CommandId` parameter must be one of the following listed values. When you specify a value that isn't an available command, you receive this error:
```error The entity was not found in this Azure location
Running a command requires the `Microsoft.Compute/virtualMachines/runCommand/act
You can use one of the [built-in roles](../../role-based-access-control/built-in-roles.md) or create a [custom role](../../role-based-access-control/custom-roles.md) to use Run Command.
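+
+For example, once you have the required permissions, a minimal sketch of running a custom script with the **RunShellScript** command (the resource group, VM name, and script path are placeholders):
+
+```powershell-interactive
+# A minimal sketch: run a local shell script on an Azure Linux VM by using
+# the RunShellScript command. Names and the script path are placeholders.
+Invoke-AzVMRunCommand -ResourceGroupName 'rgname' `
+    -VMName 'vmname' `
+    -CommandId 'RunShellScript' `
+    -ScriptPath './myscript.sh'
+```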
+## Action Run Command Linux troubleshooting
+
+When troubleshooting action run command in Linux environments, refer to the *handler* log file, typically located at `/var/log/azure/run-command/handler.log`, for further details.
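+
+For a quick check of recent activity, you can view the last entries of that file on the VM itself, for example (assuming PowerShell is installed on the VM; `tail` works equally well):
+
+```powershell-interactive
+# Show the 50 most recent handler log entries on the VM.
+Get-Content -Path '/var/log/azure/run-command/handler.log' -Tail 50
+```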
+
+### Known issues
+The Linux action run command logs have a few notable differences compared to the action run command Windows logs:
+
+- The sequence number is reported with each line of the log as `seq=#`.
+- There won't be a line that contains `Awaiting completion...`, because that line appears only in the action run command Windows logs.
+- The line `Command exited with code: #` is also present only in the action run command Windows logs.
+
+### Action Run Command removal
+
+If you need to remove the action run command Linux extension, use the following Azure PowerShell or Azure CLI examples. Replace *rgname* and *vmname* with your resource group name and virtual machine name.
++
+```powershell-interactive
+Invoke-AzVMRunCommand -ResourceGroupName 'rgname' -VMName 'vmname' -CommandId 'RemoveRunCommandLinuxExtension'
+```
+
+```azurecli-interactive
+az vm run-command invoke --command-id RemoveRunCommandLinuxExtension --name vmname -g rgname
+```
+ ## Next steps To learn about other ways to run scripts and commands remotely in your VM, see [Run scripts in your Linux VM](run-scripts-in-vm.md).
virtual-machines Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/run-command.md
Title: Run scripts in a Windows VM in Azure using action Run Commands
-description: This topic describes how to run PowerShell scripts within an Azure Windows virtual machine by using the Run Command feature
+description: This article describes how to run PowerShell scripts within an Azure Windows virtual machine by using the Run Command feature
Previously updated : 09/07/2022 Last updated : 10/25/2022
The following restrictions apply when you're using Run Command:
* The minimum time to run a script is about 20 seconds. * Scripts run as System on Windows. * One script at a time can run.
-* Scripts that prompt for information (interactive mode) are not supported.
+* Scripts that prompt for information (interactive mode) aren't supported.
* You can't cancel a running script. * The maximum time a script can run is 90 minutes. After that, it will time out. * Outbound connectivity from the VM is required to return the results of the script.
-* It is not recommended to run a script that will cause a stop or update of the VM Agent. This can let the extension in a Transitioning state, leading to a timeout.
+* It isn't recommended to run a script that will stop or update the VM Agent. This can leave the extension in a Transitioning state, leading to a timeout.
> [!NOTE] > To function correctly, Run Command requires connectivity (port 443) to Azure public IP addresses. If the extension doesn't have access to these endpoints, the scripts might run successfully but not return the results. If you're blocking traffic on the virtual machine, you can use [service tags](../../virtual-network/network-security-groups-overview.md#service-tags) to allow traffic to Azure public IP addresses by using the `AzureCloud` tag.
The following restrictions apply when you're using Run Command:
## Available commands
-This table shows the list of commands available for Windows VMs. You can use the **RunPowerShellScript** command to run any custom script that you want. When you're using the Azure CLI or PowerShell to run a command, the value that you provide for the `--command-id` or `-CommandId` parameter must be one of the following listed values. When you specify a value that is not an available command, you receive this error:
+This table shows the list of commands available for Windows VMs. You can use the **RunPowerShellScript** command to run any custom script that you want. When you're using the Azure CLI or PowerShell to run a command, the value that you provide for the `--command-id` or `-CommandId` parameter must be one of the following listed values. When you specify a value that isn't an available command, you receive this error:
```error The entity was not found in this Azure location
Running a command requires the `Microsoft.Compute/virtualMachines/runCommand/act
You can use one of the [built-in roles](../../role-based-access-control/built-in-roles.md) or create a [custom role](../../role-based-access-control/custom-roles.md) to use Run Command. +
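+
+For example, once you have the required permissions, a minimal sketch of running a custom script with the **RunPowerShellScript** command (the resource group, VM name, and script path are placeholders):
+
+```powershell-interactive
+# A minimal sketch: run a local PowerShell script on an Azure Windows VM by
+# using the RunPowerShellScript command. Names and the script path are placeholders.
+Invoke-AzVMRunCommand -ResourceGroupName 'rgname' `
+    -VMName 'vmname' `
+    -CommandId 'RunPowerShellScript' `
+    -ScriptPath '.\myscript.ps1'
+```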
+## Action Run Command Windows troubleshooting
+
+When troubleshooting action run command in Windows environments, refer to the *RunCommandExtension* log file, typically located at `C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.RunCommandWindows\<version>\RunCommandExtension.log`, for further details.
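+
+For a quick check on the VM itself, a hedged sketch that finds the most recently written handler version folder and shows the latest entries:
+
+```powershell-interactive
+# Show the 50 most recent RunCommandExtension log entries. Run on the VM itself.
+$logRoot = 'C:\WindowsAzure\Logs\Plugins\Microsoft.CPlat.Core.RunCommandWindows'
+$latest = Get-ChildItem -Path $logRoot -Directory |
+    Sort-Object LastWriteTime | Select-Object -Last 1
+Get-Content -Path (Join-Path $latest.FullName 'RunCommandExtension.log') -Tail 50
+```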
+
+### Known issues
+
+Your action run command extension might fail to execute in your Windows environment if the command contains reserved characters. For example:
+
+If the `&` symbol is passed in a parameter of your command, as in the following PowerShell script, the command might fail.
+
+```powershell-interactive
+$paramm='abc&jj'
+Invoke-AzVMRunCommand -ResourceGroupName AzureCloudService1 -Name test -CommandId 'RunPowerShellScript' -ScriptPath C:\data\228332902\PostAppConfig.ps1 -Parameter @{"Prefix" = $paramm}
+```
+
+Use the `^` character to escape the `&` in the argument, such as `$paramm='abc^&jj'`.
+
+The Run Command extension might also fail to execute if the command to be executed contains `\n` in the path, because it's treated as a newline. For example, the path `C:\Windows\notepad.exe` contains `\n` in `\notepad.exe`. Consider replacing `\n` with `\N` in your path.
+
+### Action Run Command removal
+
+If you need to remove the action run command Windows extension, use the following Azure PowerShell or Azure CLI examples. Replace *rgname* and *vmname* with your resource group name and virtual machine name.
++
+```powershell-interactive
+Invoke-AzVMRunCommand -ResourceGroupName 'rgname' -VMName 'vmname' -CommandId 'RemoveRunCommandWindowsExtension'
+```
+
+```azurecli-interactive
+az vm run-command invoke --command-id RemoveRunCommandWindowsExtension --name vmname -g rgname
+```
+ ## Next steps To learn about other ways to run scripts and commands remotely in your VM, see [Run scripts in your Windows VM](run-scripts-in-vm.md).
virtual-network Public Ip Upgrade Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-portal.md
Previously updated : 10/21/2022 Last updated : 10/25/2022
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
In this section, you'll sign in to the Azure portal and upgrade your static Basic SKU public IP to the Standard SKU.
-In order to upgrade a public IP, it must not be associated with any resource. For more information, see [View, modify settings for, or delete a public IP address](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address).
+In order to upgrade a public IP, it must not be associated with any resource. To learn how to disassociate a public IP, see [View, modify settings for, or delete a public IP address](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address).
>[!IMPORTANT] >Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP - Azure portal](./create-public-ip-portal.md)
virtual-network Public Ip Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md
Title: Upgrade a public IP address - Azure PowerShell
-description: In this article, learn how to upgrade a basic SKU public IP address using Azure PowerShell.
+ Title: 'Upgrade a public IP address - Azure PowerShell'
+description: In this article, you learn how to upgrade a basic SKU public IP address using Azure PowerShell.
Previously updated : 05/20/2021 Last updated : 10/25/2022
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
## Prerequisites
-* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-* A **static** basic SKU public IP address in your subscription. For more information, see [Create public IP address - Azure portal](./create-public-ip-portal.md#create-a-basic-sku-public-ip-address).
+* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* A **static** basic SKU public IP address in your subscription. For more information, see [Create a basic public IP address using PowerShell](./create-public-ip-powershell.md?tabs=create-public-ip-basic%2Ccreate-public-ip-non-zonal%2Crouting-preference#create-public-ip).
* Azure PowerShell installed locally or Azure Cloud Shell If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
If you choose to install and use PowerShell locally, this article requires the A
In this section, you'll use Azure PowerShell to upgrade your static Basic SKU public IP to the Standard SKU.
-In order to upgrade a public IP, it must not be associated with any resource (see [this page](/azure/virtual-network/virtual-network-public-ip-address#view-modify-settings-for-or-delete-a-public-ip-address) for more information about how to disassociate public IPs).
+In order to upgrade a public IP, it must not be associated with any resource. To learn how to disassociate a public IP, see [View, modify settings for, or delete a public IP address](./virtual-network-public-ip-address.md#view-modify-settings-for-or-delete-a-public-ip-address).
>[!IMPORTANT] >Public IPs upgraded from Basic to Standard SKU continue to have no [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). This means they cannot be associated with an Azure resource that is either zone-redundant or tied to a pre-specified zone in regions where this is offered.
Set-AzPublicIpAddress -PublicIpAddress $pubIP
``` > [!NOTE]
-> The basic public IP you are upgrading must have the static allocation type. You'll receive a warning that the IP can't be upgraded if you try to upgrade a dynamically allocated IP address.
+> The basic public IP you are upgrading must have static assignment. You'll receive a warning that the IP can't be upgraded if you try to upgrade a dynamically allocated IP address. Change the IP address assignment to static before upgrading.
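+
+As a hedged sketch, you can switch a disassociated basic public IP to static assignment before you upgrade (resource names are placeholders):
+
+```powershell-interactive
+# A minimal sketch: change a disassociated basic public IP from dynamic to
+# static assignment before upgrading. Names are placeholders.
+$pubIP = Get-AzPublicIpAddress -Name 'myBasicPublicIP' -ResourceGroupName 'rgname'
+$pubIP.PublicIpAllocationMethod = 'Static'
+Set-AzPublicIpAddress -PublicIpAddress $pubIP
+```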
> [!WARNING]
-> Upgrading a basic public IP to standard SKU can't be reversed. Public IPs upgraded from basic to standard SKU continue to have no guaranteed [availability zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones).
+> Upgrading a basic public IP to standard SKU can't be reversed. Public IPs upgraded from basic to standard SKU continue to have no guaranteed [availability zones](../../availability-zones/az-overview.md#availability-zones).
## Verify upgrade
In this article, you upgraded a basic SKU public IP address to standard SKU.
For more information on public IP addresses in Azure, see: - [Public IP addresses in Azure](public-ip-addresses.md)-- [Create a public IP - Azure portal](./create-public-ip-portal.md)
+- [Create a public IP address using PowerShell](./create-public-ip-powershell.md)
virtual-network Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-bicep.md
Title: 'Quickstart: Create a virtual network using Bicep'
description: Learn how to use Bicep to create an Azure virtual network. -+ Last updated 06/24/2022-+
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
-
+
Title: Azure service tags overview description: Learn about service tags. Service tags help minimize the complexity of security rule creation.
By default, service tags reflect the ranges for the entire cloud. Some service t
| **ApiManagement** | Management traffic for Azure API Management-dedicated deployments. <br/><br/>**Note**: This tag represents the Azure API Management service endpoint for control plane per region. The tag enables customers to perform management operations on the APIs, Operations, Policies, NamedValues configured on the API Management service. | Inbound | Yes | Yes | | **ApplicationInsightsAvailability** | Application Insights Availability. | Inbound | No | No | | **AppConfiguration** | App Configuration. | Outbound | No | No |
-| **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and Function apps.<br/><br/>**Note**: This tag does not include IP addresses assigned when using IP-based SSL (App-assigned address). | Outbound | Yes | Yes |
+| **AppService** | Azure App Service. This tag is recommended for outbound security rules to web apps and Function apps.<br/><br/>**Note**: This tag doesn't include IP addresses assigned when using IP-based SSL (App-assigned address). | Outbound | Yes | Yes |
| **AppServiceManagement** | Management traffic for deployments dedicated to App Service Environment. | Both | No | Yes |
+| **AutonomousDevelopmentPlatform** | Autonomous Development Platform. | Both | Yes | Yes |
| **AzureActiveDirectory** | Azure Active Directory. | Outbound | No | Yes | | **AzureActiveDirectoryDomainServices** | Management traffic for deployments dedicated to Azure Active Directory Domain Services. | Both | No | Yes | | **AzureAdvancedThreatProtection** | Azure Advanced Threat Protection. | Outbound | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). | Both | Yes | Yes | | **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | No | | **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes |
+| **AzureContainerAppsService** | Azure Container Apps Service. | Both | Yes | Yes |
| **AzureContainerRegistry** | Azure Container Registry. | Outbound | Yes | Yes | | **AzureCosmosDB** | Azure Cosmos DB. | Outbound | Yes | Yes | | **AzureDatabricks** | Azure Databricks. | Both | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzurePlatformIMDS** | Azure Instance Metadata Service (IMDS), which is a basic infrastructure service.<br/><br/>You can use this tag to disable the default IMDS. Be cautious when you use this tag. We recommend that you read [Azure platform considerations](./network-security-groups-overview.md#azure-platform-considerations). We also recommend that you perform testing before you use this tag. | Outbound | No | No | | **AzurePlatformLKM** | Windows licensing or key management service.<br/><br/>You can use this tag to disable the defaults for licensing. Be cautious when you use this tag. We recommend that you read [Azure platform considerations](./network-security-groups-overview.md#azure-platform-considerations). We also recommend that you perform testing before you use this tag. | Outbound | No | No | | **AzureResourceManager** | Azure Resource Manager. | Outbound | No | No |
+| **AzureSentinel** | Microsoft Sentinel. | Inbound | Yes | Yes |
| **AzureSignalR** | Azure SignalR. | Outbound | No | No | | **AzureSiteRecovery** | Azure Site Recovery.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory**, **AzureKeyVault**, **EventHub**,**GuestAndHybridManagement** and **Storage** tags. | Outbound | No | No | | **AzureSphere** | This tag or the IP addresses covered by this tag can be used to restrict access to Azure Sphere Security Services. | Both | No | Yes |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureTrafficManager** | Azure Traffic Manager probe IP addresses.<br/><br/>For more information on Traffic Manager probe IP addresses, see [Azure Traffic Manager FAQ](../traffic-manager/traffic-manager-faqs.md). | Inbound | No | Yes | | **AzureUpdateDelivery** | For accessing Windows Updates. <br/><br/>**Note**: This tag provides access to Windows Update metadata services. To successfully download updates, you must also enable the **AzureFrontDoor.FirstParty** service tag and configure outbound security rules with the protocol and port defined as follows: <ul><li>AzureUpdateDelivery: TCP, port 443</li><li>AzureFrontDoor.FirstParty: TCP, port 80</li></ul> | Outbound | No | No | | **BatchNodeManagement** | Management traffic for deployments dedicated to Azure Batch. | Both | No | Yes |
+| **ChaosStudio** | Azure Chaos Studio. <br/><br/>**Note**: If you have enabled Application Insights integration on the Chaos Agent, the AzureMonitor tag is also required. | Both | Yes | Yes |
| **CognitiveServicesManagement** | The address ranges for traffic for Azure Cognitive Services. | Both | No | No | | **DataFactory** | Azure Data Factory | Both | No | No | | **DataFactoryManagement** | Management traffic for Azure Data Factory. | Outbound | No | No |
web-application-firewall Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/quick-create-bicep.md
Title: 'Quickstart: Create an Azure WAF v2 on Application Gateway - Bicep'
description: Learn how to use Bicep to create a Web Application Firewall v2 on Azure Application Gateway. -+ Last updated 06/22/2022-+